| Column | Type | Range / Values |
| --- | --- | --- |
| audioVersionDurationSec | float64 | 0 – 3.27k |
| codeBlock | string | lengths 3 – 77.5k |
| codeBlockCount | float64 | 0 – 389 |
| collectionId | string | lengths 9 – 12 |
| createdDate | string | 741 distinct values |
| createdDatetime | string | lengths 19 – 19 |
| firstPublishedDate | string | 610 distinct values |
| firstPublishedDatetime | string | lengths 19 – 19 |
| imageCount | float64 | 0 – 263 |
| isSubscriptionLocked | bool | 2 classes |
| language | string | 52 distinct values |
| latestPublishedDate | string | 577 distinct values |
| latestPublishedDatetime | string | lengths 19 – 19 |
| linksCount | float64 | 0 – 1.18k |
| postId | string | lengths 8 – 12 |
| readingTime | float64 | 0 – 99.6 |
| recommends | float64 | 0 – 42.3k |
| responsesCreatedCount | float64 | 0 – 3.08k |
| socialRecommendsCount | float64 | 0 – 3 |
| subTitle | string | lengths 1 – 141 |
| tagsCount | float64 | 1 – 6 |
| text | string | lengths 1 – 145k |
| title | string | lengths 1 – 200 |
| totalClapCount | float64 | 0 – 292k |
| uniqueSlug | string | lengths 12 – 119 |
| updatedDate | string | 431 distinct values |
| updatedDatetime | string | lengths 19 – 19 |
| url | string | lengths 32 – 829 |
| vote | bool | 2 classes |
| wordCount | float64 | 0 – 25k |
| publicationdescription | string | lengths 1 – 280 |
| publicationdomain | string | lengths 6 – 35 |
| publicationfacebookPageName | string | lengths 2 – 46 |
| publicationfollowerCount | float64 | (no range listed) |
| publicationname | string | lengths 4 – 139 |
| publicationpublicEmail | string | lengths 8 – 47 |
| publicationslug | string | lengths 3 – 50 |
| publicationtags | string | lengths 2 – 116 |
| publicationtwitterUsername | string | lengths 1 – 15 |
| tag_name | string | lengths 1 – 25 |
| slug | string | lengths 1 – 25 |
| name | string | lengths 1 – 25 |
| postCount | float64 | 0 – 332k |
| author | string | lengths 1 – 50 |
| bio | string | lengths 1 – 185 |
| userId | string | lengths 8 – 12 |
| userName | string | lengths 2 – 30 |
| usersFollowedByCount | float64 | 0 – 334k |
| usersFollowedCount | float64 | 0 – 85.9k |
| scrappedDate | float64 | 20.2M – 20.2M |
| claps | string | 163 distinct values |
| reading_time | float64 | 2 – 31 |
| link | string | 230 distinct values |
| authors | string | lengths 2 – 392 |
| timestamp | string | lengths 19 – 32 |
| tags | string | lengths 6 – 263 |
0
null
0
null
2017-10-17
2017-10-17 16:20:09
2017-10-17
2017-10-17 17:18:50
0
false
th
2017-10-17
2017-10-17 17:18:50
1
1832631c59e8
1.181132
2
0
0
Long Short-Term Memory networks, or LSTMs, are the machine learning models with the strangest name I have ever come across
3
[ML] Long Short-Term Memory — Your Memory Is Short.. but Mine Is Long(er) Long Short-Term Memory networks, or LSTMs, are the machine learning models with the strangest name I have ever come across. What is so strange about it, you ask? Well, think about it: who would name an ML model something that is both short and long at the same time? Read the name back and forth a few times and you start to wonder whether it is supposed to be short-term memory or long-term memory after all?! Before we get to which kind of memory it actually is, let us first understand roughly how it works. What follows is a summary of a summary, because this is an extremely hard machine learning model to understand!! Recurrent Neural Networks (RNNs) To talk about LSTMs, we have to talk about RNNs first. RNNs are ML models that are extremely useful for sequential data, that is, data where earlier events influence the events that come next. A simple example: suppose we say "I drank a lot of water, and now my .. hurts" and ask you to guess the missing word. You have to use the preceding sentence to work out what the next word could be. Will it be "neck"? Can drinking a lot of water give you a toothache? Or is the real answer "I need to pee"?! The principle of RNNs is that an input is fed into each node, and each node performs some internal computation and emits an answer. But that is not all! (cue George from the TV Direct infomercial) The computed value is also passed on to the next node, to be used in the following rounds of computation. But.. what makes LSTMs superior to RNNs is the ability to remember the past, through what is called the memory cell state. RNNs pass their computed values on to the next node without caring to remember them, and this causes serious problems for back-propagation, the backward computation of the error after each node finishes its work, because BP has to step backwards through many stages and many nodes, giving rise to the Vanishing Gradient Problem, in which the gradient keeps shrinking (note: the gradient tells us how a change in the initial values affects the resulting output), and that strongly affects the learning of the whole system. What problem follows from this? The answer is that RNNs start to perform poorly once the data spans a long range, because RNNs only remember well over short distances. Put simply, in the example above, "my neck hurts" might accidentally end up being accepted as the correct answer. So what should we do? LSTMs to the rescue.. Long Short-Term Memory (LSTMs) Simply put, LSTMs are RNNs with a memory cell state. That is really all there is to the principle; the rest consists of extra activation functions added to make the LSTM's memory more effective. An LSTM can perform four duties: forget things, fire in new values, give outputs, and update the cell. My, that rhymes better than the Bangkok city motto. Many activation functions can assist an LSTM, but the most widely used is surely the Sigmoid. The Sigmoid acts as a gating function that creates three gates: an input gate, an output gate, and a forget gate. Its output lies only between 0 and 1, where 0 means no information may pass through that gate, while 1 means all information is let through without being held back at all. And to fix the Vanishing Gradient Problem that occurs in RNNs, the Tanh function was proposed as a challenger, because when we differentiate Tanh, the value can be sustained for a long time before it finally decays to 0. But Tanh is not the only contender: competing in the same arena is the Rectified Linear Unit (ReLU), which has the advantage when it comes to differentiating the function, and which has been highly successful in Computer Vision (CV) work. Today LSTMs are used widely in a great deal of research, especially in attempts to predict stock prices (prices, mind you, not just the direction of the stock), and the results have been satisfying in many shops, with very low Root Mean Squared Error (RMSE), which likewise means very high accuracy. After all this long-winded talk, the core point of LSTMs is, as stated from the start, that they are RNNs with memory. As for the meaning of the name "Long Short-Term", it splits into two parts: Long Term refers to learning the weights of each node, and Short Term refers to the cell gates that keep changing at each node across time steps. If anyone wants to study this in more detail, in an easy-to-understand way (but entirely in English), I recommend Colah's article; his explanation is quite good and should give you a better grasp of the fundamentals of LSTMs. And believe me, LSTMs are easy to talk about.. but truly hard to understand.
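For readers who want to see the three sigmoid gates and the tanh candidate in code, here is a minimal NumPy sketch of a single LSTM step; the stacked parameter layout and the names W, U, b are illustrative assumptions, not something from the article:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    # Assumed layout: W (4H, D), U (4H, H), b (4H,) hold the forget,
    # input, and output gates plus the cell candidate, stacked in that order.
    z = W @ x + U @ h_prev + b
    H = h_prev.size
    f = sigmoid(z[0:H])          # forget gate: 0 = erase memory, 1 = keep it
    i = sigmoid(z[H:2*H])        # input gate: how much new information enters
    o = sigmoid(z[2*H:3*H])      # output gate: how much of the cell leaks out
    g = np.tanh(z[3*H:4*H])      # candidate values for the memory cell
    c = f * c_prev + i * g       # update the memory cell state
    h = o * np.tanh(c)           # hidden state passed to the next time step
    return h, c
```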
[ML] Long Short-Term Memory — Your Memory Is Short.. but Mine Is Long(er)
3
ml-long-short-term-memory-ความจำเธอสั้น-แต่ฉันยาว-กว่า-1832631c59e8
2018-04-17
2018-04-17 14:28:20
https://medium.com/s/story/ml-long-short-term-memory-ความจำเธอสั้น-แต่ฉันยาว-กว่า-1832631c59e8
false
313
null
null
null
null
null
null
null
null
null
Lstm
lstm
Lstm
191
Doratong24
Writer / Photographer / Youtuber / Board Gamer / Creator / Developer / Software Engineer
2a51293fdb9f
tongkornkitt
85
73
20,181,104
null
null
null
null
null
null
0
t1 = ()                       # an empty tuple
t2 = tuple()                  # another way to create an empty tuple
t1 = (4, 5, 9, 7, "dog", 4.5)
l = [4, 5, 6, 7, 9, 2, 34, 8.212, "cat"]
t2 = tuple(l)                 # convert a list into a tuple
t2.count(5)                   # returns the number of times 5 appears in the tuple
t2.index(9)                   # returns the position of the 9
4
290efb9ac4f6
2018-08-04
2018-08-04 18:38:13
2018-08-04
2018-08-04 18:30:48
1
false
en
2018-08-04
2018-08-04 18:41:18
1
1833b823e98d
1.449057
0
0
0
“Always do what you’re afraid to do.” Ralph Waldo Emerson
5
LEARN PYTHON NOW! Book: The Pillars of Python. 10 — Variables II: Tuples “Always do what you’re afraid to do.” Ralph Waldo Emerson Tuples Tuples are ordered sequences that store different objects (up to here they are like lists). However, unlike lists, tuples are immutable, meaning that you cannot modify them (you can neither add nor remove elements, nor sort them; you cannot change their shape). Then what are they for? They are faster and safer (the positive side of not being modifiable). I usually visualize tuples as already-packed boxes whose inner content we cannot modify. Tuples are initialized in two ways: to add something to a tuple, you either do it at the time of creation or you convert a list into a tuple. Unlike lists and dictionaries, which have plenty of methods or “things to do with them”, tuples have only two: One of the most commonly used little tricks is to create a list, modify its contents or sort it, and then convert it to a tuple to make it work faster, as sketched below. — — — — — — — — — — — — — — The End — — — — — — — — — — — — — If you like this small and free magazine you can help us by simply sharing it or subscribing to the publication. My name is Rubén Ruiz and I work in Artificial Intelligence in the financial industry, and as a personal project I run this little magazine where we experiment with Artificial Intelligence… until the computer explodes :) You can follow me on: Instagram (personal life, it’s fun) => @rubenruiz_t Youtube (channel about AI, I try to make it fun) => Rubén Ruiz A.I. Github (where I upload my code, this is not so much fun anymore) => RubenRuizT
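A minimal sketch of that list-then-tuple trick, with illustrative variable names:

```python
# Build and sort a mutable list, then freeze it as a tuple.
scores = [42, 7, 19, 3]
scores.sort()                  # in-place sort works because lists are mutable
frozen = tuple(scores)         # (3, 7, 19, 42) -- immutable from here on
print(frozen.count(7))         # 1: one of the two tuple methods
print(frozen.index(19))        # 2: the other tuple method
```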
LEARN PYTHON NOW! Book: The Pillars of Python. 10 — Variables II: Tuples
0
learn-python-now-book-the-pillars-of-python-10-variables-ii-tuples-1833b823e98d
2018-08-04
2018-08-04 18:41:25
https://medium.com/s/story/learn-python-now-book-the-pillars-of-python-10-variables-ii-tuples-1833b823e98d
false
331
Experiments with Artificial Intelligence. If it doesn't blow up, it's fine.
null
null
null
AI experiments
rubenruiz90@gmail.com
ai-experiments
ARTIFICIAL INTELLIGENCE,PYTHON,DEEP LEARNING,PROGRAMMING,R
null
Python
python
Python
20,142
Ruben Ruiz
null
2db774b0464f
rubenruiz_26771
24
22
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-20
2018-02-20 13:22:08
2018-02-20
2018-02-20 13:29:51
1
false
en
2018-02-20
2018-02-20 13:31:37
2
1834e6873889
1.150943
7
0
0
The Arcona developers team is working hard on the Arcona platform, the key algorithmic solutions of ours are presented on Github in the…
4
The ArconaCore updates The Arcona developers team is working hard on the Arcona platform; our key algorithmic solutions are presented on Github in the ArconaCore library, which was enlarged still further last week. We should remind you that this part of our project is the most innovative and labor-intensive one, as it is where the system for automatic remote generation of AR scene spatial markers is being developed. It is aimed at the proper positioning of AR content, and thus at the creation of the worldwide Digital land layer. What is so innovative about our program conception? Positioning digital content of any kind is a global problem that has so far gone unsolved, owing to the incompleteness and noise pollution of the input information. This applies both to 3D sampled data and to images. Currently existing software solutions cannot provide the required degree of robustness, stability and precision. Our team is betting on its own purpose-designed paradigm extension of the artificial neural network. It already shows quite satisfactory results, so we cannot help but boast: on the specific tasks set before the system, the software developed by the Arcona team is superior to everything else currently available. This is already the second publicly presented version of the ArconaCore. It improves on the previous one in usability and in the range of system opportunities. The beta version is available for a check-up and feedback here. You are more than welcome with questions and suggestions on our TG.
The ArconaCore updates
295
the-arconacore-updates-1834e6873889
2018-03-25
2018-03-25 20:08:19
https://medium.com/s/story/the-arconacore-updates-1834e6873889
false
252
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Arcona AR Ecosystem
A platform for remote positioning and management of augmented reality objects, powered by blockchain; more at www.arcona.io.
8404e4a19f59
arconaico
229
6
20,181,104
null
null
null
null
null
null
0
null
0
cb0f49791c0e
2018-08-29
2018-08-29 18:57:22
2018-08-29
2018-08-29 19:45:35
4
false
en
2018-08-29
2018-08-29 19:45:35
7
183586693842
1.918868
0
0
0
This summer Pykids partnered with ASDRP. Aspiring Scholars Directed Research Program (ASDRP) is a 501(c)(3) nonprofit summer program that…
5
Pykids partners with ASDRP This summer Pykids partnered with ASDRP. Aspiring Scholars Directed Research Program (ASDRP) is a 501(c)(3) nonprofit summer program that provides opportunities for high school students throughout the Bay Area, especially those who are underrepresented in STEM or who are socioeconomically disadvantaged, to conduct high-level novel scientific research. Students participate in research projects across various subjects in STEM, including chemistry, biology, computational modeling, computer science, and much more. The Pykids curriculum was used on two HS research projects; the links below display the posters they presented at the ASDRP Expo held on Aug 26th in Fremont. Electronics with CircuitPython and microcontrollers Data Science fundamentals with spreadsheet analysis and Python-based tools. Electronics with CircuitPython and microcontrollers With this “project based learning” approach, students learnt hands-on Python application to create an electronic product, a blingy Christmas tree, with the Adafruit microcontroller Gemma M0. Students learnt how to code, build an electronic circuit, cut, strip and solder wires to create connections, create a 3D print design, and finally print the model and assemble the final product. Whew! This blog post by Les Pounder is the inspiration behind this. Data Science fundamentals with spreadsheet analysis and Python-based tools. With this project, students learned tools and techniques to analyze a “real world” dataset very quickly and easily. They learnt to use Google spreadsheets and built-in functions first. Then they used Python and Numpy to create small functions that could be repeated over different datasets to calculate descriptive-statistics concepts such as “measures of center” and “standard deviation”, and to plot histograms (a sketch of this kind of helper follows below). This was a fun project that sets them up nicely for the AP Statistics course they are going to start in the fall. Student Feedback I collected student feedback via Google Forms. Quite interesting results, which led me to think that the projects were a success and that some improvements can still be made! Feedback results: Question 1, Question 2, Question 3.
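A rough sketch of the kind of NumPy descriptive-statistics helper the students built; the dataset here is synthetic and purely illustrative:

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.random.normal(loc=70, scale=10, size=500)  # stand-in for a real dataset

# Measures of center and spread, repeatable over different datasets.
print("mean:  ", np.mean(data))
print("median:", np.median(data))
print("stdev: ", np.std(data, ddof=1))               # sample standard deviation

plt.hist(data, bins=20)                              # histogram of the sample
plt.savefig("histogram.png")
```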
Pykids partners with ASDRP
0
pykids-partners-with-asdrp-183586693842
2018-08-29
2018-08-29 19:45:51
https://medium.com/s/story/pykids-partners-with-asdrp-183586693842
false
323
pykids is a voluntary effort to bring Python to elementary/middle/high school kids.
null
pyslice
null
pykids
meenalpant@gmail.com
pykids
PYTHON PROGRAMMING,STEM,KIDS AND TECH,EDUCATION
mpant
Data Science
data-science
Data Science
33,617
Meenal Pant
null
a1d89c672169
mpant
12
14
20,181,104
null
null
null
null
null
null
0
null
0
33f9a1e914d0
2018-07-18
2018-07-18 11:45:57
2018-03-12
2018-03-12 08:57:50
0
false
en
2018-07-18
2018-07-18 11:47:07
2
1836723d7e93
2.430189
0
0
0
null
5
Is AI turning all data computing into, essentially, HPC? I remember my first stint with HPC in 2001, when I was working as a scientist at Bhabha Atomic Research Centre. Lots and lots of machines lined up and configured to work as a unified cluster, accessing data over NFS. Scientists from many disciplines used MPI and similar libraries to write code that would scan through, correlate, and process data from various experiments, trying to build models that solve scientific problems. I used to watch scientists from fields like molecular biology, theoretical physics, and genetics struggle to make sense of data, often from their own experiments, and spend much of their valuable energy just getting the code to do what it was supposed to do. The actual jobs took days or weeks; there was always demand for more compute to scale out, and always demand for faster access to data, to minimize the time spent reading and writing it. Fast forward to 2018, and I think those two basic demands have stayed. What we have now are more informed scientists, more qualified researchers, faster compute, and faster storage media. But we also have a lot more data, in many more formats, and these elite folks are tackling far more complex problems. So the need to streamline data architecture and compute scheduling for their jobs remains, and it is more relevant today than ever. What we now also have is a set of new users who were traditionally enterprise application developers. They used to develop code in Java/C++ with two- or three-tier architectures and predominantly addressed data manipulation through structured queries, say with SQL. Many of these folks are now trying to scan through data that lives in their databases but also outside them: in logs, in warehouses, in files, even externally on social media, trying to correlate it and find value. The use cases may seem as diverse as can be, but to the infrastructure it all looks similar. Does any of the following scenarios resonate with you? 1. Are you trying to enable multi-protocol access to your data (NFS, CIFS, Object, HDFS)? 2. Are you struggling to manage workload scheduling, including GPU scheduling and scaling out into the cloud for bursty workloads? 3. Are you a researcher or analyst who spends more time getting your model to run and complete successfully than analysing the results and arriving at inferences? 4. Are you spending weeks or months and sleepless nights trying to keep your cluster of large distributed systems up and running? 5. Are you the sysadmin whose users seem forever unhappy about slow data access and long-running job times? 6. Do you have analytics/AI cluster sprawl because of multiple ongoing pilot projects, and are you exhausted from keeping track of data, its copies, and the single version of truth across them all? If any of these resonates with you and you happen to live in Singapore, you just got lucky. :) IBM is hosting a User Group for Spectrum Scale on 26th March, and a User Group for Spectrum Computing on 29th March in Singapore, where you can learn more about how to address the major challenges explained above. Kindly note that registration fees are waived (you will have to progress to the last “pay now” page to see the waiver, though).
Join developers and thought leaders from the HPC data storage space to understand how to resolve some of the data and computing challenges in traditional HPC or in the new AI/analytics workload space. Interacting with the experts and developers presenting at the event will give you an opportunity to discuss any specific challenge you have right there. Thank you, folks.. and help us help you find value from your data.
Is AI turning all data computing to essentially HPC?
0
is-ai-turning-all-data-computing-to-essentially-hpc-1836723d7e93
2018-07-18
2018-07-18 11:47:07
https://medium.com/s/story/is-ai-turning-all-data-computing-to-essentially-hpc-1836723d7e93
false
644
This blog simplifies various concepts in technology and provides insights into real world usage for the same
null
null
null
Concepts Simplified
null
concepts-simplified
STORAGE,TECHNOLOGY,FUTURE TECHNOLOGY,QUANTUM COMPUTING,BLOCKCHAIN
null
Concepts Simplified
concepts-simplified
Concepts Simplified
0
shalaka verma
null
5974e61f4815
shalakaverma
2
10
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-05
2017-12-05 00:09:52
2018-01-09
2018-01-09 01:41:57
0
false
en
2018-01-09
2018-01-09 02:15:40
8
1839714dcce
9.301887
38
0
0
With all of the literature out in the world about time series forecasting, it is easy to feel overwhelmed. The blog post that follows is…
5
2,000+ Words on Time Series Forecasting With all of the literature out in the world about time series forecasting, it is easy to feel overwhelmed. The blog post that follows is intended to be a synthesis of the oodles of articles that I have read (and am still reading) on the subject, and my hope is that it is helpful to fellow time series debutant(e)s. STL Decomposition STL (seasonal and trend decomposition using LOESS) is a very popular method used in time series forecasting that effectively decomposes a time series into three components — seasonal (systematic/periodic structure), trend (long-term behavior), and residual (the leftovers). Advantages: An STL decomposition is quick/flexible for anomaly detection, as one can often identify additive outliers directly from the residual terms (provided that STL is indeed good for your time series, of course). One can examine the three (seasonal, trend, residual) components separately, which will often shed a great deal of insight into the nature of your time series. That said, an STL decomposition can be useful to run even during exploratory data analysis. Disadvantages: This approach is rather inflexible “out-of-the-box”. If you have multiple trends/seasonalities, for instance, the lesser ones will still be present after running an STL decomposition. Thus, one would need to be careful about using the residuals to identify anomalies if there are, in fact, other significant seasonalities/trends in the time series. If you are interested in forecasting, you will likely have to rely on one of the other methods (ARIMA, for instance) to forecast the components after running an STL decomposition. Steps: Decompose the time series into seasonal, trend, and residual components. If interested in forecasting, use any method of choice to forecast each component and recombine them. If interested in anomaly detection, use any method of choice (e.g., the generalized ESD test) to identify outliers from residual components. Ideal For: The obvious ones — time series containing single strong trend/seasonal components without significant structural changes. ARIMA ARIMA (autoregressive integrated moving average) models are dependent on lagged observations (AR) and error terms (MA) after differencing (I). Standard notation for an ARIMA model is ARIMA(p,d,q), where p is the lag order, d is the degree of differencing, and q is the order of the moving average. (If extended to include seasonality, you’ll see an additional (P,D,Q)s term, where (P,D,Q) mean the same thing for the seasonal component and s is the number of periods associated with the seasonal behavior.) Advantages: The model is (relatively) simple to interpret and is only dependent on the observed time series. Additionally, selecting the model parameters (p,d,q) (also (P,D,Q)s if incorporating seasonality) is relatively simple to do with the ACF/PACF/various tests of stationarity. (R users need look no further than the auto.arima() function.) Disadvantages: The time series must be stationary after (d orders of) differencing. If differencing does not induce stationarity, one must reach into his/her bag of tools to make it stationary prior to using ARIMA (in which case, you’ll just use an ARMA model). There is no underlying model that attaches physical significance to the moving parts in an ARIMA model. If one is comfortable with using the past to inform values of the future without any sort of interpretation, this is a nonissue!
If you want to glean something more from your analysis, however, you may be better served by an alternative method. ARIMA models are rather inflexible in the sense that one needs to fit a new model each and every time one introduces new data. Steps (if not using automated procedure): Make time series stationary (via differencing, log transforms, etc.). This will provide the user with the parameter d. Look to the ACF/PACF to determine p and q (use the ACF for MA(q), PACF for AR(p); if you have a model with nonzero p and q, it often makes sense to do a bit of experimentation after examining the ACF/PACF). If you suspect seasonality may be present, difference your time series by a lag equal to the expected period. For example, if you have daily data with expected weekly seasonality, you would calculate the seventh differences. From there, examine the ACF/PACF. Seasonal behavior will be evident at multiples of the expected period, whereas non-seasonal behavior will crop up at early lags (if any). If interested in anomaly detection, forecast future values (ideally, with prediction intervals) and identify outliers with any method of choice. Note that these intervals will naturally increase the further we look out into the future, potentially limiting the realm of applicability for anomaly detection. Ideal For: Broadly applicable. Exponential Smoothing The term “exponential smoothing” refers to a class of models that use past observations and exponential weights to smooth a time series. There is an entire taxonomy of exponential smoothing models (here is an excellent reference) that enables a user to inject as much or as little sophistication into his/her model as necessary. It is also worth pointing out that there are equivalences/overlaps between ARIMA and exponential smoothing that make the advantages/disadvantages of each fairly similar. Advantages: The basic idea behind exponential smoothing is perhaps more intuitive than ARIMA — we assign more weight to past values that occur closer to the present, making our predictions less sensitive to older observations. The number of smoothing parameters is small and can be optimized efficiently, allowing for quick, yet trustworthy, forecasts. Disadvantages: When forecasting with exponential smoothing, one is only able to obtain point forecasts. That said, the inability to generate a prediction interval (or some semblance of one) would make this approach less viable (if usable at all) for anomaly detection. (However, ETS models, which fold in error terms, would be appropriate here.) The taxonomy of exponential smoothing models requires the user to think a bit about what model is appropriate for his/her time series before diving right into forecasting — do we need to account for trends? Seasonality? Are the two additive or multiplicative components? Although fast/flexible, this could be a bit of an extra wrinkle that may cause users to seek out an alternative, “hands-off” method. Steps: Explore time series to determine appropriate exponential smoothing model (seasonality/trend/additive/multiplicative?). Optimize smoothing parameters. Generate forecasts (the way in which you do so depends on the specific model). If interested in anomaly detection, ETS models would be more appropriate here, as you can generate prediction intervals that provide a systematic way of identifying anomalies. Ideal For: Broadly applicable. 
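To make the three approaches above concrete, here is a minimal statsmodels sketch covering STL, (seasonal) ARIMA, and Holt-Winters exponential smoothing; the synthetic monthly series and all parameter choices are illustrative assumptions, not recommendations:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly series: linear trend + yearly seasonality + noise.
idx = pd.date_range("2015-01-01", periods=96, freq="MS")
t = np.arange(96)
y = pd.Series(0.5 * t + 10 * np.sin(2 * np.pi * t / 12)
              + np.random.normal(0, 1, 96), index=idx)

# STL: split into trend / seasonal / residual components.
stl = STL(y, period=12).fit()
residuals = stl.resid                        # inspect these for anomalies

# ARIMA(p,d,q) with a seasonal (P,D,Q)s term.
arima = ARIMA(y, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12)).fit()
print(arima.forecast(steps=12))              # 12-month point forecast

# Holt-Winters exponential smoothing with additive trend and seasonality.
hw = ExponentialSmoothing(y, trend="add", seasonal="add",
                          seasonal_periods=12).fit()
print(hw.forecast(12))
```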
Bayesian Structural Time Series (BSTS) Models Bayesian structural time series (BSTS) models are just as you might imagine from their name — a happy marriage of Bayesian statistics with structural time series models. In essence, structural time series models are characterized by two overarching equations: (1) the observation equation, which relates the observed data to a set of latent state variables, and (2) the transition equation, which encodes the time dependence of the state variables. Through the use of priors on all model parameters and a conditional distribution on the parameters that govern the initial state, we put the “B” in BSTS. Advantages: Within a BSTS framework, one is able to turn all of the knobs and levers, if you will. By literally writing down your model explicitly, you can insert any/all of the moving parts that you would like under the hood (suitable priors, seasonal/trend terms, etc.). This offers an added layer of interpretability often missing from more traditional approaches. Tying into the last point, two particular strengths unique to BSTS are (1) the control that one has over uncertainty and (2) the ability to incorporate feature selection via spike-and-slab priors. For anomaly detection, (1) is particularly relevant, as what is classified as an anomaly is often dependent upon how far predicted values stray from the observed values. All of the components of the underlying model are modeled simultaneously, and the user has the ability to explore these components independent of each other by appropriately marginalizing over the posterior distribution. Disadvantages: Arguably, the only downside to this approach is the bit of extra thought/effort that crafting an appropriate model takes. Other popular approaches can be “blackboxed” and used to generate predictions fairly quickly; this, however, requires one to (1) specify an appropriate model, (2) sample the posterior distribution, and (3) use the posterior samples to get what you need to tell a story. Steps: Write down your statistical model (likely motivated by something previously discussed, such as one from the taxonomy of exponential smoothing models) with suitable priors. Use MCMC to sample the posterior distribution. If using BSTS for forecasting, make forecasts via the posterior predictive distribution. If using BSTS for anomaly detection, use either the mean/median of or quantiles from the posterior predictive distribution to get “predicted” values/intervals. If using intervals, one might flag observed values that fall outside the prediction intervals as anomalous. Ideal For: Broadly applicable, but probably the most practical when forecasting/future comes with actual risk and/or the user wants more control over uncertainty. Tree-Based Methods (CART, RF) Tree-based methods (classification and regression trees, random forests, etc.) are also relevant for time series data. The literature on these methods is extensive, but the basic idea is that we can train a tree/trees using features constructed from our time series (lagged terms, Fourier terms from seasonal components, other exogenous predictors, etc.) to predict future values. Advantages: The user has the ability to introduce additional features outside of the observed time series values. Trees are relatively simple to construct/train (vanilla scikit-learn and/or similar R packages will do the trick) with a small number of parameters that can be tuned without much trouble.
These packages often handle missing data internally as well (other approaches may barf if the data is not evenly spaced and/or require imputation). They are also interpretable. One can examine the splits within trees and/or check out feature importance to provide context for predictions. Disadvantages: One of the biggest criticisms of tree-based methods is that they cannot predict values that fall outside the range of values contained in the training set. For volatile time series, this would result in poor performance if one is solely interested in forecasting. For anomaly detection, however, this could, in theory, be advantageous, as it would be impossible to get close to anomalous values (provided that you haven’t included anomalous values during training, of course) while performing reasonably well on the rest of the dataset. Steps: Construct set of features from training set. As mentioned earlier, this could include lagged values of the response variable, features associated with seasonal/trend components, or other exogenous features. Use trained tree to make predictions for test set. If one is interested in forecasting using lagged values, one could either use predictions or observed values of the target variable as lagged values if we are looking to predict beyond the first time point after our training set. Ideal For: Stable, bounded series (as one would not need to worry about any potentially funky behavior near the boundaries of the training set). Neural Networks (RNNs, LSTMs) (Author’s Note: I am currently trying to learn deep learning deeply (pun intended?), and that said, this bit is a continual work in progress.) RNNs (recurrent neural networks)/LSTMs (long short-term memory networks) are neural networks that contain connections between nodes that allow them to “remember” things about values that have been fed into them previously, rendering them relevant for any task in which we want to make use of any underlying temporal structure. The difference between RNNs and LSTMs is that the latter is better suited for learning long-term dependencies. Advantages: With RNNs/LSTMs, we are (potentially) able to capture all of the complex structure (trends, seasonality, deviations from stationarity, nonlinear behavior, etc.) that we work hard to find/remove using more traditional approaches without having to do it ourselves. Disadvantages: One will likely sacrifice model interpretability for accuracy, and in many situations, the former will outrank the latter. To really have an edge over more traditional methods, one would need a fairly long (i.e., well sampled) time series with a good amount of complex structure. Otherwise, you may only see marginal improvement. Neural networks are particularly prone to overfitting and require the hand of an experienced user. This can be mitigated, of course, if one exercises caution while training the network (proper regularization, decent amount of data, etc.), but other methods don’t carry quite as many traps. In the context of anomaly detection, this is particularly relevant, as one would not want to have a model that identifies anomalous behavior as “normal”. Steps: Prior to feeding your data into an LSTM, you’ll want to normalize/standardize your input features — the choice of how you do this is yours, but it is necessary to remove different scales to optimize the gradient calculations during backpropagation. After splitting your time series up into training/test sets, you’ll also want to further divide it into samples to feed into your LSTM. 
Let’s say you have hourly data for a year’s worth of some time series. Rather than stick it into your LSTM in one chunk (you wouldn’t be making use of the “M” component of your LSTM), you’ll need to slice it into smaller pieces. For instance, you might try doing this in one-day pieces (features are twenty-three hours’ worth of data, predict the value of the target variable at the twenty-fourth hour). I’m sure that there are some rules of thumb that folks use to determine the optimal size for samples, but it seems reasonable to arbitrarily choose one that makes sense or do some trial-and-error. When it comes to properly defining a neural network architecture (number of hidden layers/nodes, regularization parameters, etc.), I admit that I still have a lot to learn. I have used Keras/TensorFlow in my travels, and when I have used LSTMs, I have just tinkered around with the architecture until I got something reasonable. Forecast uncertainty with LSTMs also looks to still be an active area of research (check out this sweet article from Uber). This added wrinkle may be problematic when it comes to using LSTMs for anomaly detection, but it may not be as much of an issue if one is purely interested in forecasting. Ideal For: Long time series with a fair amount of complex structure (perhaps if/when all of the other approaches have proved to be futile, too). Conclusion While these are some of the most trendy statistical methods used for working with time series data, it is important to remember to think carefully about the problem that you are trying to solve before choosing which one (if any) to use. This list is also not meant to be exhaustive — this area is an active field of research, so keep an eye out for the latest and greatest approaches to these types of problems!
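As a rough illustration of the windowing just described (twenty-three hours in, the twenty-fourth hour out), here is a minimal Keras sketch; the synthetic series, window length, and tiny architecture are all illustrative assumptions rather than a tuned model:

```python
import numpy as np
from tensorflow import keras

def make_windows(series, window=24):
    # Slice a 1-D series into (window - 1)-step inputs and a 1-step target.
    X, y = [], []
    for i in range(len(series) - window + 1):
        X.append(series[i:i + window - 1])
        y.append(series[i + window - 1])
    return np.array(X)[..., None], np.array(y)   # (samples, steps, features=1)

series = np.sin(np.linspace(0, 100, 2000))       # stand-in for normalized hourly data
X, y = make_windows(series, window=24)           # 23 hours in, 24th hour out

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(23, 1)),  # small, arbitrary architecture
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```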
2,000+ Words on Time Series Forecasting
214
2-000-words-on-time-series-forecasting-1839714dcce
2018-06-18
2018-06-18 13:03:35
https://medium.com/s/story/2-000-words-on-time-series-forecasting-1839714dcce
false
2,465
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Charlie Bonfield
null
97a1a7a4e4f8
chbonfield
34
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-04
2018-02-04 18:48:00
2018-02-04
2018-02-04 19:00:30
6
false
en
2018-02-13
2018-02-13 05:17:55
7
183a2d5f00d1
7.225472
2
0
0
In 1995, law enforcement officials convinced a judge that Kevin Mitnick had the ability to start a nuclear war by whistling into a pay…
5
The Social Engineering Attack Cycle: a framework used by the world’s most dangerous human hacker In 1995, law enforcement officials convinced a judge that Kevin Mitnick had the ability to start a nuclear war by whistling into a pay phone. Mitnick had become a high-profile hacker on the FBI’s Most Wanted list after a two-year cat-and-mouse chase where he eluded the FBI under various false identities. One time when he learned that they were on their way to his apartment, he kindly left them a box of donuts in the fridge before disappearing. His crime? Mitnick had hacked into over 40 major corporations. He had the uncanny ability to weasel his way into nearly any network’s core — from a military computer to FBI and DMV records. He had intercepted and stolen computer passwords, altered computer networks, broken into private e-mails, and sweet-talked his way into getting privileged access to proprietary information. And he did this mainly for the thrills — he was a hacker for the hell of it. In Mitnick’s eyes, everything on the web is transparent. If there’s a will, there’s a way for anything you write in email, every conversation you have in chat, every link you’ve visited, to be read — unless it’s been heavily encrypted. Mitnick had access to information that could do terrible damage — social security numbers, credit card numbers, proprietary software, email logins. He could have coerced and blackmailed people for lots of money. But he didn’t. His central addiction was curiosity. It was a game for him. In 1995, Kevin Mitnick’s games would come to an end — he was sentenced to 5 years in a federal prison and spent 8 months of that time in solitary confinement. Today, the potential to do damage is exponentially greater in the age of big data, big tech, and information oversharing. Central databases store what we like, where we’ve been, who our friends are, and whom we’ve slept with. The list of known data breaches since 2010 has been staggering — it includes LinkedIn, Uber, Equifax, JPMorgan, Sony, Anthem, Citigroup, Dropbox, eBay, Evernote, and many more. And since many people reuse passwords across similar sites, any password hack can lead to vulnerabilities in other sites. Meanwhile company employees have been spying on user activity…for purposes related to work, of course. Recently, it was reported that a whistleblower working at Lyft expressed concern that several employees looked up user data. The whistleblower used anonymous workplace app Blind to report user privacy concerns (see screenshots above). Allegations include employees using data to stalk ex-lovers, checking where significant others were riding, stalking people they found attractive who shared a Lyft with them, and looking up phone numbers of celebrities (such as Mark Zuckerberg). Government employees do it too. The 2013 Edward Snowden files gave a glimpse of the extent of surveillance at the National Security Agency. The NSA had wide-reaching surveillance tools for searching nearly everything a user does on the Internet. They had legal authority to request user data from companies including tech giants like Google, Facebook, and Apple. They were also intercepting 200 million text messages every day. In some cases, NSA employees had been caught spying on love interests, a practice now referred to as “LOVEINT”. And it’s not going away — on January 11, 2018, the U.S. House of Representatives voted 256–164 in favor of extending the NSA’s surveillance program for another six years with minimal changes.
Ultimately, these organizations are composed of humans, none of whom are infallible, and each with their own motivations, biases, and triggers. And they have all sorts of personal information at their fingertips — information they can use to satisfy whatever desires are pulling at them. Now consider the implications of wider business AI adoption. As we increasingly rely on complex algorithms to help us make high-stakes decisions, a not-so-nice version of Mitnick can engineer his way to accessing the code base. A large attack can disrupt the power grid, shut down hospitals, compromise a national security system, or as Elon Musk fears, start World War III. Projects like SingularityNet aim to create a decentralized marketplace for AI to enable anyone, not just the big tech companies, to buy and sell AI at scale. But this will open another can of worms. More actors participating will create more and easier opportunities for social engineering. How Mitnick Did It Today, Kevin Mitnick runs a consulting firm that helps corporations protect themselves against the methods he knows intimately. Here’s what’s interesting — by almost every account, Mitnick was technically dull. He accomplished most of his conquests through superb social engineering. Mitnick explains that social engineering is using manipulation, influence and deception to get a person, such as a trusted insider within an organization, to comply with a request. That request is usually to release information or to perform some sort of action item that benefits the attacker. Sometimes, they don’t even have to try — just being at the right place at the right time will suffice. Social engineers understand an important truth: When it comes to security, people are the weakest link, not the technology. A company can spend millions of dollars on the best data security measures, but a Mitnick just needs to persuade one human willing to cooperate and he’s in. Mitnick would achieve this by imitating a lineman’s jargon, impersonating a superior, conning unsuspecting employees, and exploiting his knowledge of a phone company’s organizational chart. Today, his attacks are much easier. He can research LinkedIn for employee information and take advantage of the blurred boundaries between professional and private social networks. To protect yourself and your company against manipulation, it helps to understand Mitnick’s Social Engineering Attack Cycle from his book, The Art of Deception: Controlling the Human Element of Security. If you become a target, here are the four steps an attacker will use. Step 1: Research In the Research phase, Mitnick would gather as much information about a target as possible in order to develop a strategy for building rapport. This can even come purely from publicly available sources like company websites, social networking sites, personal blogs and forums. Guys like Mitnick love it when you overshare and ignore your privacy settings. Step 2: Develop rapport and trust Next, Mitnick would determine the proper pretext. A pretext is a devised scenario that explains to the target why the attacker is engaging. A good pretext must be believable and withstand scrutiny. You’re more likely to divulge information to an attacker if you perceive a relationship exists. So this step is not trivial and can be time-consuming, but a good pretext makes it easier. One helpful framework for thinking about influence is Matthew Kohut’s matrix below that considers two axes: (1) the level of stakes and (2) the context of the interaction.
First, how high are the stakes of doing what the attacker asks? You’re more likely to do something if it requires little effort or perceived risk on your part. Think about how easy it is to click on a friend’s Facebook or Twitter link. Second, how strong is the relationship? An attacker might establish an effective pretext and build instant rapport, but if he asks for a favor too soon, the relationship will feel transactional. And rapport quickly disappears. Social engineers sometimes have to build the relationship over multiple interactions, all of which must have a proper pretext without giving the appearance of manipulation. If the perceived relationship is strong, then the target may even enjoy doing high-stakes favors. So the best human hackers must exercise patience when cultivating relationships. Step 3: Exploit Trust Next, Mitnick would exploit trust in order to elicit information. He might start by priming you — putting you in a desired emotional state, such as feeling sad or happy — that then leads you to divulge information. For example, he might relate a sad story to evoke a memory of a sad incident of your own, and subsequently make you feel sad. And then the information elicitation can begin. He might be looking for something as sensitive as a password, or as casual as knowing another person’s whereabouts at a particular time. After he gets what he needs, he will bring you back to a normal emotional state in order to avoid further consequences. He wants you feeling good, and not guilty, about what you just shared. Step 4: Utilise information All the steps above will have achieved nothing if the information is not utilised to achieve a goal. Here, Mitnick might discover that information was incorrect or insufficient to execute his goal. Whatever the roadblock, he would find a way. He’d repeat the cycle on another target, sometimes many more, until he got what he needed. Then, he moved on to the next attack. Mitnick’s Social Engineering Attack Cycle is not just limited to breaching company security systems. As long as you hold some piece of information that someone else wants, you can be targeted. Private investigators and spies have used the honeytrap strategy. The prototypical honeypot is an attractive female who baits a target into letting his guard down and divulging secrets. Ordinary honeytraps are common too. Lonely people who feel misunderstood are particularly susceptible to attention from attractive strangers with less-than-pure motives. Media mogul Harvey Weinstein employed former spies to social engineer journalists and actresses. Opportunistic employees will use social engineering to get embarrassing information about their coworkers from the company gossip. And you can use it to infiltrate a social tribe, or make friends with a potential employer, or gather intelligence on a competitor. The applications are endless. Mitnick’s framework for social engineering is just one of many models, none of which are difficult to understand. But here’s the hard part. When social engineering is done well in practice, you won’t even know you were a target. It can feel like you just made a strong connection with a new friend. Or the attack was so subtle that you won’t even remember it. Unless you develop your social intelligence and become attuned to the nuances and subtleties of human behavior, you’ll likely be like everyone else: too stuck in your own head to realize you’re being manipulated. If you’re seeking help protecting yourself against manipulators, consider a consultation.
And to learn about other social engineering tactics, download a free copy of The Social Intelligence Blueprint. Inside, you’ll learn how attackers decode social cues and encode their desired pretext. Originally published at www.hz.agency.
The Social Engineering Attack Cycle: a framework used by the world’s most dangerous human hacker
47
the-worlds-most-dangerous-human-hacker-how-he-did-it-and-why-it-matters-even-more-today-183a2d5f00d1
2018-05-23
2018-05-23 12:42:28
https://medium.com/s/story/the-worlds-most-dangerous-human-hacker-how-he-did-it-and-why-it-matters-even-more-today-183a2d5f00d1
false
1,663
null
null
null
null
null
null
null
null
null
Social Engineering
social-engineering
Social Engineering
327
HZ Agency
Promoting Social Intelligence | http://www.hz.agency
be2265b5d10c
thehzagency
131
126
20,181,104
null
null
null
null
null
null
0
[犬、猫、鼠] → [1, 3, 2]
[犬、猫、鼠] → [[1,0,0], [0,0,1], [0,1,0]]
2
666edce44658
2017-12-05
2017-12-05 14:08:49
2017-12-05
2017-12-05 14:41:01
0
false
ja
2017-12-05
2017-12-05 14:41:01
2
183b3d62cb
1.23
2
0
0
I wasn't sure which one to use for data preprocessing, so I looked into it.
3
One Hot Encoder VS Label Encoder I wasn't sure which one to use for data preprocessing, so I looked into it. Whenever I search for something in Japanese and come up empty, I intend to actively write it up as an article.... smly's presentation slides are extremely easy to follow; pages 8–9 explain One Hot Encoding and the LabelEncoder. At a glance, both are ways of transforming categorical variables, but I wasn't sure which to use, so I investigated. There was a clear answer on stackexchange¹, so I'll include a translation of it here. LabelEncoder: encodes categorical variables as scalar values. Drawback: the average of the categorical values dog and cat becomes mouse (treating categorical variables as if their order carried meaning, i.e., as ordinals, is not acceptable). It is an encoding well suited to decision trees and random forests. Advantage: since it simply converts to scalar values, it also keeps disk usage down. One Hot Encoder: encodes categorical variables as orthogonal binary vectors. Advantage: the encoded category variables are orthogonal to one another. Drawback: when there are many categories, the dimensionality explodes. Conclusion: in any case, start with the One Hot Encoder! When the number of categories is large, a technique called "Entity Embeddings of Categorical Variables" is said to be useful.²
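A concrete scikit-learn version of the dog/cat/mouse example above, as a minimal sketch; note that the sparse_output flag assumes scikit-learn ≥ 1.2 (older releases spell it sparse):

```python
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

animals = [["dog"], ["cat"], ["mouse"]]

# LabelEncoder: one scalar per category (an implied, meaningless order).
le = LabelEncoder()
print(le.fit_transform([a[0] for a in animals]))   # [1 0 2] (classes sorted)

# OneHotEncoder: one orthogonal binary vector per category.
ohe = OneHotEncoder(sparse_output=False)
print(ohe.fit_transform(animals))
# [[0. 1. 0.]
#  [1. 0. 0.]
#  [0. 0. 1.]]
```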
One Hot Encoder VS Label Encoder
3
onehotencoder-vs-labelencoder-183b3d62cb
2018-05-12
2018-05-12 18:51:54
https://medium.com/s/story/onehotencoder-vs-labelencoder-183b3d62cb
false
47
🤖 < Computer Vision, Machine Learning Tech Blog. Love Python 🐍
null
shunyaueta
null
Moonshot 🚀
null
moonshot
COMPUTER VISION,MACHINE LEARNING,PYTHON,PROGRAMMING
hurutoriya
日本語
日本語
日本語
18,705
Shunya Ueta
Machine Learning Engineer 🤖 Tech Blog→ https://medium.com/moonshot
1f96d0a59fd4
hurutoriya
223
590
20,181,104
null
null
null
null
null
null
0
import os.path
import pandas as pd
from sklearn import preprocessing
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, ParameterGrid
from sklearn.metrics import log_loss

class models:
    def __init__(self, tournament=70):
        parentdir = os.path.join(os.path.abspath(os.getcwd()), os.pardir)
        path = os.path.join(parentdir, "T{}/".format(tournament))
        self.training_data = pd.read_csv(path + "numerai_training_data.csv", header=0)
        self.test_data = pd.read_csv(path + "numerai_tournament_data.csv", header=0)
        # id / era columns, kept for joining predictions and era-wise work
        self.ids_train = self.training_data['id']
        self.ids_test = self.test_data[['id']]
        self.era = self.training_data['era']
        # training data
        self.X_train = self.training_data[[f for f in list(self.training_data) if "feature" in f]]
        self.y_train = self.training_data['target']
        # test data (the labelled part of the prediction data)
        self.X_test = self.test_data[[f for f in list(self.training_data) if "feature" in f]][:16686]
        self.y_test = self.test_data['target'][:16686]
        # prediction data
        self.X_prediction = self.test_data[[f for f in list(self.training_data) if "feature" in f]]

    def DTC(self):
        model = DecisionTreeClassifier()
        p = [{'min_samples_split': [[2]], 'max_features': [['log2'], ['auto']], 'max_depth': [[5]]}]
        return model, p

    def RFC(self):
        model = RandomForestClassifier()
        p = [{'n_estimators': [[10, 50, 100, 300]], 'min_samples_split': [[2]],
              'max_features': [['log2'], ['auto']], 'max_depth': [[2, 3, 4]]}]
        return model, p

    def LR(self):
        model = LogisticRegression()
        p = [{'max_iter': [[1000]], 'tol': [[0.00001]]}]
        return model, p

    def kernel(self, model, p):
        parameter = ParameterGrid(p)
        clf = GridSearchCV(model, parameter, cv=3, scoring='neg_log_loss', n_jobs=2)
        clf.fit(self.X_train, self.y_train)
        # proba is the prediction on the final prediction data
        prediction = clf.predict_proba(self.X_prediction)[:, 1]
        # part of the data is used to calculate the logloss for measuring performance
        error = log_loss(self.y_test, prediction[:16686], normalize=True)
        print(error)
        print(clf.best_params_)
        result = self.ids_test.join(pd.DataFrame(data={'probability': prediction}))
        result.to_csv('%.4f_submission.csv' % (error), index=False)

    def StandardScaler(self, x):
        # scale to unit variance
        model = preprocessing.StandardScaler(copy=False)
        return model.fit_transform(x)

    def PolyFeature(self, x):
        model = preprocessing.PolynomialFeatures(interaction_only=False)
        return pd.DataFrame(model.fit_transform(x))

    def KernelCenterer(self, x):
        # only demean
        model = preprocessing.KernelCenterer()
        return pd.DataFrame(model.fit_transform(x))

    def advisory_screen(self, num=1, samplesize=10000):
        # adversarial validation: train a classifier to tell training rows
        # from prediction rows, then keep the most test-like training rows
        model = RandomForestClassifier(n_estimators=50)
        X_train = self.training_data.drop(columns=['id', 'era', 'data_type', 'target'])
        X_test = self.X_prediction[16686:]
        X_data = pd.concat([X_train, X_test])
        Y_data = [0] * len(X_train) + [1] * len(X_test)   # 1 = prediction-like
        model.fit(X_data, Y_data)
        pre_train = pd.DataFrame(data={'wrong-score': model.predict_proba(X_train)[:, 1]})
        pre_test = pd.DataFrame(data={'right-score': model.predict_proba(X_test)[:, 1]})
        test_alike_data = pd.DataFrame(self.ids_train).join(pre_train) \
            .sort_values(by='wrong-score', ascending=False)[:samplesize]
        test_class = self.ids_test[16686:].reset_index().join(pre_test) \
            .sort_values(by='right-score', ascending=False)
        Y_train = self.training_data[['id', 'target']]
        # just for control
        print('out of {0} training samples and {1} testing samples'.format(len(X_train), len(X_test)))
        print('correct for training: {}'.format(sum([1 for i in model.predict_proba(X_train)[:, 1] if i < 0.5])))
        print('correct for validation: {}'.format(sum([1 for i in model.predict_proba(X_test)[:, 1] if i > 0.5])))
        print(pd.concat([test_alike_data.head(n=5), test_alike_data.tail(n=5)]))
        print(pd.concat([test_class.head(n=5), test_class.tail(n=5)]))
        ids = test_alike_data.join(Y_train.set_index('id'), on='id')['id']
        return (self.X_train[self.training_data['id'].isin(ids)],
                self.y_train[self.training_data['id'].isin(ids)])

    def eraStandardize(self, model):
        placeholder = pd.DataFrame()
        for i in range(1, 97):
            data = self.training_data[self.era == 'era{}'.format(i)][
                [f for f in list(self.training_data) if "feature" in f]]
            placeholder = pd.concat([placeholder, pd.DataFrame(model(data))])
        return placeholder
7
null
2017-09-01
2017-09-01 10:19:47
2017-09-01
2017-09-01 10:23:31
1
false
en
2017-09-01
2017-09-01 10:28:25
5
183b44e98f15
3.777358
2
0
0
Numerai Competition is an online machine learning tournament which is operated by Numerai, a hedge fund. You may refer to this article for…
3
Numerai Tutorial — I — Vanilla Algorithms and Adversarial Validation Numerai Competition is an online machine learning tournament operated by Numerai, a hedge fund. You may refer to this article for an introduction. In this article, I want to describe how I approach this problem and how I built up a set of tools that helps me rapidly iterate over different algorithms for testing, feature preprocessing and engineering. (Embedded gist: https://gist.github.com/chrisckwong821/0ea85216a9b9b158334e24ded809a881.js) The data contains 21 features ranging from 0 to 1, with a binary target (0,1). Numerai claims that they encrypted financial data into the dataset, so it is more than simple time-series data. Each row contains a unique id, an era which labels its type, and whether it belongs to the train/test data. I initialize the training, test and prediction data under the class init for reference. Note that the test data is part of the prediction data here. This measures the error in a way more relevant to the final output. Still, cross validation is used during training, simply because it makes the training more robust. To manage each model and its parameters efficiently, each model is wrapped under a function. In this format, I can call any model from scikit-learn efficiently and tune the parameters as I want. The model and its parameters are then fed into a kernel function for training. Prediction is the model's predicted probability, ranging from 0 to 1. It is joined with the ids to form a standardized table ready for submission at Numerai. The performance is measured by the logloss against the ground truth. In my experience this is usually close to your performance as measured by the test score. The above forms the basic skeleton of the workflow. Obviously, plugging vanilla algorithms into the dataset is not going to get you far. Depending on which model you feed, it will rarely get you farther than 0.6923. RandomForest and Logistic Regression are among the best, but still only marginally better than the 0.6931 (that is, -log(0.5)) you get by guessing 0.5 for every input, in contrast to the level above 0.6880 for people in the top 100. To incorporate feature preprocessing and engineering into the workflow, we can initialize some preprocessing functions: similar to what is done for the models, define the API under a function and return the transformed dataframe. Note that model.fit_transform returns a numpy array, which therefore has to be wrapped as a DataFrame for later processing. Inspired by this article on adversarial validation, I have implemented that method as well. Basically, it trains a classifier to tell training data from prediction data, then uses the training data that most resembles the prediction data. The function outputs the X and Y that most resemble the prediction data, over a custom range. I use a default sample size of 10000, which others report to be most efficient; but this is heuristic and can be tested case by case. After this adversarial screening, the model actually does not improve significantly. My best score is only 0.6920. To take further advantage of the data given, I want to do some averaging on the "era" label. Within the training data there are 96 eras of varying sizes, which may range from one hundred to one thousand rows. To use this, I define a new function that preprocesses the data era by era: model is the function with which we modify the data era by era.
We can apply demean, unit variance, MinMaxScaler or Binarizer via the model. I am still working on ways to improve the performance and on code that allows faster iteration and testing. I have not incorporated model ensembling, mainly because my score is far from good, and ensembling a bunch of bad models would not make a good one. So I will release my ensembling code after I manage to make some improvement on the logloss. Let me know any way to improve; any feedback is welcome! My blog is at https://chrisckwong821.github.io
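The adversarial-validation idea described above reduces to a few generic lines; this is a minimal sketch under assumed inputs (plain feature DataFrames and a hypothetical keep size), not the article's exact Numerai schema:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def adversarial_screen(train_X, test_X, keep=10000):
    # Label rows by origin: 0 = training set, 1 = prediction/test set.
    X = pd.concat([train_X, test_X], ignore_index=True)
    y = np.r_[np.zeros(len(train_X)), np.ones(len(test_X))]
    clf = RandomForestClassifier(n_estimators=50).fit(X, y)
    # Probability that each *training* row looks like it came from the test set.
    score = clf.predict_proba(X[:len(train_X)])[:, 1]
    # Keep the most test-like training rows for model fitting.
    return train_X.iloc[np.argsort(-score)[:keep]]
```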
Numerai Tutorial — I — Vanilla Algorithms and Adversarial Validation
24
numerai-tutorial-i-vanilla-algorithms-and-adversarial-validation-183b44e98f15
2018-05-21
2018-05-21 03:40:27
https://medium.com/s/story/numerai-tutorial-i-vanilla-algorithms-and-adversarial-validation-183b44e98f15
false
948
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Chris Wong
calisthenics lover. Python developer. financial trading and meditation
cc85c70ce5f
chris_whirlwind
52
67
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-29
2018-01-29 06:47:01
2018-01-29
2018-01-29 07:03:01
1
false
en
2018-01-29
2018-01-29 07:03:01
4
183b640b43b0
3.50566
0
0
0
In sales? Afraid of losing your job to some AI powered virtual assistant (bot, chatbots)? Fear not, as AI won’t be stealing away your job…
5
Salespeople worry not, AI won't steal your jobs! In sales? Afraid of losing your job to some AI-powered virtual assistant (bot, chatbot)? Fear not, as AI won't be stealing away your job. In fact, it will greatly assist the bottom line for salespeople, which is more sales. Though bots and AI curb mundane chores and free up time for better relationship building, it is the 'human touch' that will keep the flame alive for salespeople. Here's how AI, rather than stealing your job, will make it better: The rise of AI and what it means for salespeople AI has been doing the rounds and taking the world by storm for a year or so, as tech giants gear up to invest more and more in AI research and its applications to make our lives easier. From driving to manufacturing, AI has stirred a wave of celebration as well as dismay, garnering mixed reactions from the masses. In fact, robotics and machines are set to take over a huge chunk of professions, and sales is no exception. But before we jump to any conclusion, let's clear the air a bit. We all know that not all sales are the same. Certain transactional sales will give way to AI, but sales professions that involve an intricate sales process will not be replaced; in fact, they will be enhanced by the introduction of AI. Here's the catch: sales reps on average spend around 70-80% of their time on lead qualification, and the remaining 20-30% is spent on closing the leads. Qualifying a lead requires advanced research followed by making numerous phone calls and drafting emails to convert the lead. It's highly unlikely that robots could take up this task fully, but what if they could help speed up this tiring process, all in a human-like manner? Yes, that's what AI will do. AI will save time for salespeople Automation is taking the world by storm and is already replacing many routine business tasks and processes, which in a way will benefit salespeople. For instance, imagine an AI-powered sales tool that can automatically send out meeting invites and schedule meetings, so that you don't have to do all this manually. This would free up a great deal of time for salespeople, which can be dedicated to other critical tasks such as drafting customized emails or conversing with prospective buyers in person. Long story short, AI would help salespeople focus on what they need to do the most, i.e. sell. Ask any salesperson about his or her biggest hurdle. The answer will almost certainly be time prioritization. Instead of relying on the old-school practice of guessing whether it is the right time to reach out, or keeping a tab on all the communication with multiple prospects, a sales rep could simply rely on AI to help determine the right time to reach out or decide the next best action. Here comes the most exciting impact of AI on sales: AI has the potential to churn through huge volumes of data. Soon, it will start providing smart suggestions based on the data analysed. For example, it can prompt a sales rep to follow up with a prospective lead after a phone call is made. Yes, an AI-powered CRM solution can analyse and machine-learn from customer data, emails, social, calendar, etc. before coming up with smarter recommendations and suggestions based on the goals you set. Here, smart sales reps can connect the dots to come up with next steps. Machine Learning: A Quantum Leap towards a Smarter CRM Human selling will never go obsolete!
Because sales rests on emotions like empathy, trust and, not to forget, emotional intelligence, this is one of the biggest and strongest reasons that salespeople will never lose their jobs to AI or machines. Whilst AI's ability to understand emotions and language will improve over time, it is highly unlikely that it will fully replace the human touch when it comes to building trust, which is the quintessential part of sales. Though machines and technology have accomplished wonders, humans were and will continue to be the most complex beings, having evolved over five million years. In other words, machines are still in their infancy, and this shouldn't bother salespeople who manage complex deals. This is one reason we are witnessing high demand for salespeople who can sell in a complex environment. Long story short, AI-powered sales tools will help salespeople automate mundane tasks such as paperwork, cold calls and scheduling meetings, saving them time and effort. Selling requires human connection and emotion, which is why it isn't that easy to replace! To stay updated with all the upcoming tech trends in the world of enterprise technology and IT, visit our blog here or simply SMS SAGE to 56767 or shoot us a mail at sales@sagesoftware.co.in Disclaimer: All the information, views and opinions expressed in this write-up are those of the authors and their respective sources (web) and in no way reflect the principles, views or objectives of Sage Software Solutions (P) Ltd. Sources: Inc., Entrepreneur INDIA, Sales Readiness Group (SRG) and The Salesman Podcast Originally published at www.sagesoftware.co.in.
Salespeople worry not, AI won’t steal your jobs!
0
salespeople-worry-not-ai-wont-steal-your-jobs-183b640b43b0
2018-01-29
2018-01-29 07:03:02
https://medium.com/s/story/salespeople-worry-not-ai-wont-steal-your-jobs-183b640b43b0
false
876
null
null
null
null
null
null
null
null
null
Sales
sales
Sales
30,953
Sage Software Solution
World's leading software company in ERP, CRM and Payroll solution
e8132d98f9fe
sagesoft
702
1,790
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-15
2018-05-15 05:00:27
2018-05-15
2018-05-15 05:04:18
2
false
en
2018-05-15
2018-05-15 05:04:18
14
183bf4a7f7c5
3.802201
0
0
0
By Craig McDonald
5
Cybercriminals vs The World: together we can beat them By Craig McDonald Why do the people working on the frontlines of cybersecurity feel that collaboration is so important right now? I think the answer lies in the staggering statistics on cybercrime growth appearing in every corporate survey released recently, and in stories like the Facebook and Equifax data breaches that are constantly in the headlines. Cybercrime is growing almost exponentially at the moment, and that sort of threat needs to be met by a proportionately massive response. But what single organisation can confront a threat like cybercrime that is on such a huge scale and is constantly shifting in terms of tactics and targets? At the CxO Cybersecurity Forum in March, the presentations of all the speakers — including my own — were firmly focussed on collaborative strategies. I shared the stage at the forum with Alastair MacGibbon, Anita Sood and Steve Ingram: cybersecurity experts from government and industry who all agree on one important point: winning the cybersecurity battle means working together more closely than ever before. Companies, governments, law enforcement, researchers and even consumers need to share intel, coordinate their responses and treat cybercrime as a common enemy. You can read more about the CxO cybersecurity forum on the MailGuard Blog. There's also a great interview with Alastair MacGibbon — pictured above — talking about cybersecurity, here. Make or break technology Cambridge University put out a fascinating report recently focussed on 'existential threats': issues that are most likely to cause fundamental breakdowns in society. Artificial Intelligence is included in the report, alongside threats like climate change and pandemic disease outbreaks. Why does the Cambridge report consider AI to be such a major peril? Because technologists can now see the potential for AI to be exploited in cyber-attacks to devastating effect — both as a crime tool and as part of future nation-state-level cyber-warfare. We've already seen the way cyber-attacks like Wannacry have been able to disrupt vital public infrastructure and services, and these were fairly primitive cyber-attacks compared with the sort of thing we might be seeing in the near future, according to Cambridge University's report. Wannacry was a destructive and concerning incident, but comparing it to the AI-powered cyber-attacks of the near future is like comparing a rifle to a stealth fighter jet; these are generationally different technologies. Technology is fundamental to our contemporary way of life; it's woven right into every daily activity now, so in thinking about cyber-attacks that have the potential to take down tech infrastructure, we're really considering a threat to society as a whole. That sort of threat is an issue not just for businesspeople, not just for governments, but for all of us to think about. Cybersecurity calls for a grassroots-level response; it's a problem we all have to understand and defend ourselves from because, at the most fundamental level, cyber-attacks are actually targeted at individual people. Big crimes — small targets Cybercriminals go for big scores. They can reap millions of dollars from their attacks and take down huge companies, but their weapon of choice — email — is pointed directly at individual people. Statistics consistently show that 90% of cyber-attacks are initiated via email. That seems strange to some people, but think about it: everyone uses email.
Billions of messages are flying around the globe at any given moment, and they can be used by cybercriminals the way terrorists used to send letter-bombs in the pre-internet era. To devastate a huge company, all a criminal needs to do is attach a malware file, like a virus, to an email and send it off to unsuspecting strangers. A cyber-attack email can hit millions of inboxes in seconds, and if just one of those recipients clicks on the message, it can release a virus into their company's computer system that is capable of shutting down the entire organisation. In the cyber-attack ecosystem we live in now, every person who uses the internet needs to think of themselves as a cybercrime fighter, because the unfortunate reality is, we are all potential cybercrime targets. How can we all work together? That's the question I started asking myself a couple of years ago, and it led to the inception of my new cybersecurity project: GlobalGuard. GlobalGuard will be a mechanism for real-time collaboration to correct the imbalance in cybercrime prevention; it's going to leverage the past seventeen years of threat data that MailGuard has collected, in combination with an AI system that's trained to detect cyber-attacks. The GlobalGuard technology and intel will be constantly available to, and curated by, the people who use it. It's a hub to bring people together to confront the common enemy: cybercriminals. GlobalGuard will use Blockchain technology to create an unprecedented community of cybersecurity stakeholders: businesspeople, IT specialists, white-hat hackers and universities. If you've heard the term Blockchain before but you're confused about what it really is and what it can do, then read my article about it, here. I'll be launching GlobalGuard soon, so stay tuned! Hi, I'm Craig McDonald: MailGuard CEO, founder of GlobalGuard and cybersecurity blogger. Follow me on social media to keep up with the latest developments in cybersecurity; I'm active on LinkedIn and Twitter. I'd really value your input, so please join the conversation. Originally published at www.mailguard.com.au.
Cybercriminals vs The World: together we can beat them
0
cybercriminals-vs-the-world-together-we-can-beat-them-183bf4a7f7c5
2018-05-15
2018-05-15 05:04:20
https://medium.com/s/story/cybercriminals-vs-the-world-together-we-can-beat-them-183bf4a7f7c5
false
906
null
null
null
null
null
null
null
null
null
Cybersecurity
cybersecurity
Cybersecurity
24,500
MailGuard
Cybersecurity news and advice from Australia's leading web and email security provider.
f156d8cd0d50
MailGuard
9
5
20,181,104
null
null
null
null
null
null
0
null
0
2141d97ccc50
2018-02-02
2018-02-02 23:02:37
2018-02-02
2018-02-02 23:07:15
1
false
en
2018-02-13
2018-02-13 13:17:50
3
183e41c33fc6
3.535849
0
0
0
Acquiring Data from Acquired Data
5
Spelunking v. Thunking Acquiring Data from Acquired Data While working on his Ph.D. thesis at Yale's Artificial Intelligence Laboratory in the mid-1970s, James Meehan developed a LISP program capable of generating Aesop's-fable-style stories from a database of facts and character interaction rules. The program, called TALE-SPIN, was set to the task of creating the fable of the Fox and the Crow, wherein a smooth-talking fox is able to swindle a piece of cheese away from a vain crow. During an initial run, TALE-SPIN produced the following fable: "Once upon a time there was a dishonest fox and a vain crow. One day the crow was sitting in his tree holding a piece of cheese in its beak. The crow became hungry and swallowed the cheese. The End." Though considered an inappropriate output at the time, this result is an example of artificial intelligence at its best — the construction of an unexpected, but otherwise correct, pattern or natural conclusion gleaned from a database of known values. Photo by Louise55 on Pixabay Thunking is a term used to describe the down-conversion of 32-bit data into a 16-bit representation suitable for submission to legacy 16-bit functions. It can also describe the function of analog-to-digital converters (ADCs), which is to "thunk" an analog signal's infinite resolution down to a finite-bit digital representation. The prolific deployment of sensors and ADCs throughout instruments, processes, and entire enterprises has resulted in the collection, storage, and management of literally mountains of data. The data acquisition (DAQ) community is currently smothering under the weight of its own success. Developments in data access technology during the 1980s, including relational database management systems (RDBMS), structured query language (SQL), and open database connectivity (ODBC), facilitated orderly data storage and retrieval from large databases. These necessary and valuable tools permit a customer to ask an attendant behind a terminal at the local Home Depot a question such as, "In which aisle can I find 1-inch paintbrushes and how much do they cost?" The 1990s brought advances in data warehousing, decision support, and online analytic processing (OLAP), whose mission, according to the industry's OLAP Council, is to "slice, dice, or rotate" data into any view requested by the analyst. This affords the Home Depot purchasing manager the ability to plot historic paintbrush inventory as a function of time and geographic store location, to assist in determining next month's order from the supplier. The next step in this evolution is the development of algorithms capable of autonomously searching or "mining" the databases for patterns and trends that have not been considered by the analyst. For example, an intelligent algorithm may inform the Home Depot manager that "73% of paintbrush purchases were accompanied by the purchase of masking tape. Consider displaying this item in close proximity to the paintbrushes." Data mining is the "killer application" long sought by an artificial intelligence (AI) community that has been quietly developing techniques that heretofore have enjoyed little popular fanfare in the business, manufacturing, and scientific communities. Data mining integrates seamlessly with RDBMS and OLAP servers to produce the desired analyses. Initial mining algorithms are based on AI techniques such as decision trees, clustering, neural networks, and genetic algorithms.
Decision trees are branched structures representing sets of decisions used to generate rules for the classification of new, unclassified data. Clustering is an expectation method that uses iterative refinement to group data into neighborhoods exhibiting similar, predictable characteristics. Neural networks utilize non-linear predictive models that learn through training and resemble biological neural networks in structure. Genetic algorithms are optimization techniques based on the concepts of evolution that utilize the processes of genetic combination, mutation, and natural selection. As the demand and funding for data mining algorithm development increase, the strengths of each technique may hybridize or lead to the exploration of lesser-known AI research. The Human Genome Project (HGP) has, with deservedly great fanfare, completed a "working draft" of the approximately 30,000 genes in human DNA and has set to the task of sequencing the 3 billion chemical base pairs it contains. The instrumental analysis and DAQ technology utilized by the HGP have made this goal a reality; however, the burden of leveraging these data into useful information, including the causes of and cures for cancer and genetic disease, sits squarely on the shoulders of nascent data mining technology. On a less grandiose scale, it is easy to envision the AI component of a not-so-distant HPLC system greeting me with "Bill, I have detected a 1.4% increase in peak tailing in the week leading up to the restocking of solvent over the past quarter. You may wish to verify the purity of the solvent supply or consider on-site purification." This material originally appeared as a Contributed Editorial in Scientific Computing and Instrumentation 18:6 May 2001, pg. 16. William L. Weaver is an Associate Professor in the Department of Integrated Science, Business, and Technology at La Salle University in Philadelphia, PA USA. He holds a B.S. degree with double majors in Chemistry and Physics and earned his Ph.D. in Analytical Chemistry with expertise in ultrafast LASER spectroscopy. He teaches, writes, and speaks on the application of Systems Thinking to the development of new products and innovation.
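The masking-tape example above is essentially an association rule, and the quoted "73%" is that rule's confidence. A toy sketch of the calculation, on invented data:

import pandas as pd

# One row per transaction; 1 means the item was purchased.
baskets = pd.DataFrame({
    "paintbrush":   [1, 1, 1, 1, 0, 1],
    "masking_tape": [1, 1, 0, 1, 0, 1],
    "lightbulb":    [0, 1, 0, 0, 1, 0],
})

def confidence(df, antecedent, consequent):
    # P(consequent | antecedent): the share of baskets containing the
    # antecedent that also contain the consequent.
    with_antecedent = df[df[antecedent] == 1]
    return (with_antecedent[consequent] == 1).mean()

print(confidence(baskets, "paintbrush", "masking_tape"))  # 0.8 on this toy data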
Spelunking v. Thunking
0
spelunking-v-thunking-183e41c33fc6
2018-02-13
2018-02-13 13:17:51
https://medium.com/s/story/spelunking-v-thunking-183e41c33fc6
false
884
Innovation is Elegance. Complex Explanations are Not. Innovation reduces system complexity. This publication seeks to reduce confusion.
null
williamlweaverphd
null
TL;DR Innovation
williamlweaver@gmail.com
tl-dr-innovation
TECHNOLOGY,SCIENCE,INNOVATION,SYSTEMS THINKING,INTELLIGENCE
williamlweaver
Machine Learning
machine-learning
Machine Learning
51,320
William L. Weaver
Explorer. Scouting the Adjacent Possible. Associate Professor of Integrated Science, Business, and Technology La Salle University, Philadelphia, PA, USA
286537bc098c
williamlweaver
183
189
20,181,104
null
null
null
null
null
null
0
null
0
d162644efe2a
2018-09-04
2018-09-04 14:34:21
2018-09-04
2018-09-04 14:42:13
1
false
en
2018-09-04
2018-09-04 14:42:13
2
183ede0072c5
1.950943
25
0
0
It is time to take a look at the results of August. There is still much work left for presenting fully-performing OLPORTAL ecosystem, but…
5
August: Wrapping Up the Month's Work It is time to take a look at the results of August. There is still much work left before we can present a fully performing OLPORTAL ecosystem, but the project team is doing its best to finish the work ahead of schedule. Over the last month, we completed a lot of tasks and solved many difficult issues in order to provide our audience with a really competitive product that meets all the requirements of modern users. The logjam was broken after we developed the first test version of OLAI, called Jessica, which is quite a simple bot. It was created to be tested by developers and users so that further improvements can be made. Jessica was launched on August 5 and is still available for testing in two languages — English and Russian. However, this test OLAI neurobot was available only on Android until the end of the month; Jessica then became available for iOS devices as well. Moreover, the marketplace feature of OLPORTAL has also been implemented recently. There you can see the list of the bots that we are going to launch in the near future. OLAIs by OLPORTAL are still available for free in order to continue testing. The second important announcement was about our partnership with the DLT platform Hedera Hashgraph. Hedera Hashgraph is a new-generation distributed ledger platform that allows app developers to produce high-performing products thanks to its speed and security. OLPORTAL will be powered by the Hedera Hashgraph platform, and soon we will be able to move to its testnet, which will be the first step towards the decentralization of our big ecosystem. One of the brightest events of the last month was the livestream with Gossip Guy, where the OLPORTAL team talked about our idea, future neurobots, the reasons for decentralizing on Hedera Hashgraph, and many other important technical and marketing issues. If you missed the stream, you can watch it here: https://www.youtube.com/watch?v=115lXqK9pec&feature=youtu.be More good news from August: Ajay Prakash, managing partner at Qubit Capital, has joined our team to provide the project with highly experienced assistance. At the moment, we are negotiating with several other influencers and experts in crypto and new technologies to elaborate a better strategy. August was an eventful month for OLPORTAL, but the team still needs to handle many other issues and continue working hard on the development and promotion of our ecosystem. Many challenging tasks to solve and dozens of negotiations to conduct lie ahead of us. Still, to give you a little hint, September will be marked by the first male OLAI release. Wait for the announcements and continue to follow our updates! If you have any questions, you can ask them at any time in our Telegram chat https://t.me/olportal_ai. Welcome!
August: Wrapping-Up the Month’s Work
670
august-wrapping-up-the-months-work-183ede0072c5
2018-09-04
2018-09-04 14:42:13
https://medium.com/s/story/august-wrapping-up-the-months-work-183ede0072c5
false
464
The world's first decentralized messenger on neural networks with the Artificial Intelligence dialogue function.
null
OLPortal23
null
OLPORTAL
smm@olportal.ai
ol-portal-steps-forward-to-the-future-communicatio
MOBILE APP DEVELOPMENT,SOCIAL NETWORK,ARTIFICIAL INTELLIGENCE,NEURAL NETWORKS,BLOCKCHAIN
olportal
Blockchain
blockchain
Blockchain
265,164
OLPORTAL
The world's first decentralized messenger on neural networks with the Artificial Intelligence dialogue function.
b4a225970600
olportal.ai
960
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-19
2017-10-19 01:19:54
2017-10-19
2017-10-19 01:52:41
2
false
en
2017-10-19
2017-10-19 01:52:41
1
183f43caf1f6
1.455031
0
0
0
TheRecruiter as I call myself, reading below article not only put me in bliss for technology advancement and less workload but alarmed me…
5
Artificial Intelligence: Blessing or a Curse for a Recruiter? TheRecruiter, as I call myself: reading the article below not only put me in bliss over technological advancement and a lighter workload, but also alarmed me about what I envision becoming TheRobotRecruiter. The article talks about AI "assistant" tools that cater to the candidate experience. Music to a recruiter's ears? A tool called "Mya" would automate communication with the candidate during the application phase: answering questions about job requirements in the initial phase, and running an automated questionnaire with the candidate, saving recruiters time and the redundancy of asking the same questions. IBM Watson is in the limelight for improving HR inefficiencies and building knowledge; for recruiters, that means prioritization of the requirements to be filled. If this hasn't got your attention yet, the article also covers the candidate interview process (the video experience), judging emotional intelligence and truthfulness. Reading this, I pondered whether "Skype Corporate" accounts would face a decline if all the video interviews moved to such tools. This technology gets rid of my every struggle to identify a true resume, a true profile, a genuine candidacy. Video interviews: reading emotions! I was thinking about what other problems I have in my work day — and there it is, in the next part of the article: handling passive candidates. That old pipeline all recruiters have. How do those candidates become an asset, and how do I gauge their attention? AI helps in engaging those candidates — see the tool EngageTalent. Now, let's get back to the "now". AI software is driven by data, so retrieving good (historical) data is crucial — "good" meaning non-biased and reflecting best practices. I wonder whether AI is far from overcoming this danger of software learning only what it is given. The article below was an enlightening read — it makes you wonder what is next! Website: https://www.shrm.org/resourcesandtools/hr-topics/technology/pages/recruiting-gets-smart-thanks-to-artificial-intelligence.aspx
Artificial Intelligence: Blessing or a Curse for a Recruiter?
0
artificial-intelligence-blessing-or-a-curse-for-a-recruiter-183f43caf1f6
2018-01-18
2018-01-18 02:16:46
https://medium.com/s/story/artificial-intelligence-blessing-or-a-curse-for-a-recruiter-183f43caf1f6
false
284
null
null
null
null
null
null
null
null
null
Recruiting
recruiting
Recruiting
15,454
Anchal Verma
The Recuiter
ac5471c6ad5f
anchalverma
15
13
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-15
2018-07-15 03:42:51
2018-06-26
2018-06-26 14:00:23
3
false
en
2018-07-15
2018-07-15 03:45:21
1
183fa3852713
5.078302
0
0
0
What is RankBrain?
5
RankBrain and Future of SEO: How to Win in the Age of Machine Learning What is RankBrain? Simply put, RankBrain is a machine learning artificial intelligence system. It provides a modern way for Google to understand search queries and search intent. RankBrain's goal is to evaluate what people are searching for and identify the best content on the web. Google's goal is to return results that are perfectly relevant to a given search query. Ultimately, the search engine wants to give searchers what they are looking for. That's why Google needs to understand what you actually mean when you're searching for something — search intent. In this post, you'll learn how RankBrain affects SEO, plus 4 simple yet effective ways to optimise your content for RankBrain. How RankBrain affects SEO In the past, a query for notebook may not have returned content referencing its plural version, notebooks. Let's say we've got a blog article about the best bread in Sydney. The old method of SEO would look at the keywords with the highest search volume — the keywords that were most frequently used. Do we target "best bread in Sydney" or "best buns in Sydney"? We'd then fiddle with our content to make sure we used "bread" versus "bun", and included necessary modifiers such as "where" or "what", etc. But if you think about it, "bread" and "bun" are the same thing. A search for "best bread in sydney" vs "top 10 buns in sydney" is really asking the same question. Today, RankBrain actually tries to figure out what you mean, just like a human would. RankBrain connects the dots and knows that regardless of the difference in word pairing and the ordering of the search query, the same content will work. In other words, RankBrain is reducing the importance of exact-match single keywords and prioritising the importance of matching content to fit a user's query. 4 Ways to Optimise Content for RankBrain Move away from exact-match single keyword targeting Use LSI keywords to add context Begin mapping content to concepts Match content to user intent First, move away from single keyword targeting SEO is no longer about matching single keywords to single pages. Gone are the days when you had to create one page for best bread, and another page for best buns. Instead, optimise your content by condensing it into keyword groups. Create a keyword group by gathering synonyms and closely related ideas that best match your main topic. Second, use LSI keywords to add context LSI keywords are Latent Semantic Indexing keywords: words and phrases related to your content's main topic. This is the secret ingredient that helps RankBrain understand a page's context. For example, let's say you're writing a guide about content optimisation. Your LSI keywords will be: title tag, meta description, headings, anchor text, internal linking. When RankBrain sees that your content includes these related terms, it'll know your page is about content optimisation, and you're more likely to rank for keywords related to that topic. Now, for every blog post you have, just sprinkle LSI keywords into your content. This shows RankBrain that your content is comprehensive and has contextual relevance. Third, begin mapping content to concepts and themes Earlier, we talked about moving away from single keywords to focus on keyword groups. Let's take this a step further. When you map keywords to concepts, you're essentially creating a theme. You can create a keyword theme by finding keywords that are conceptually related.
This helps you maximise the traffic potential of every article you publish. Think about it: if you only optimise for a single keyword, you're losing out on the majority of the potential traffic from the many other related keywords under the same topic. There are a few ways to find keyword concepts. Google Related Searches At the bottom of Google's search results is the section "searches related to…". This is a friggin' goldmine of conceptually related topics. Click on a few of those related searches, repeat the process of examining the bottom results, and you'll find dozens of long-tail keyword opportunities. This method usually generates more keyword ideas than Google Keyword Planner itself. Highly recommend it! Google Trends Look at the bottom of your search report for "related topics" and "related queries". This section is also great for finding conceptually related keyword queries. Fourth, match content with search intent Intent is probably the future of SEO in many ways. I find the best way to begin understanding search intent is by empathising with the user. Immerse yourself in the searches and find what people are telling you, because there are a lot of patterns in keywords, and that's where you can find the intent of the searcher. When you start thinking about user intent, you're actually thinking about how to best meet the needs of a potential user. You can do this by grouping your target keywords into 3 intent categories: informational, navigational, transactional. Informational intent queries People searching informational-intent queries are looking for more information. They're probably typing queries that look like: "how to…", "compare X to Y", "why…". They are researching their needs and potential solutions, but are not yet ready to buy. As a content creator, you might want to serve up informational content describing broad topics. This could be an article describing common problems your target audience might face. For example, if you're a business selling makeup and skincare products, your target audience's informational-intent queries are likely to be: how to get rid of acne scars, how to maintain dewy skin, what are the causes of acne-prone skin. These types of queries are problem-focused, and the searcher's intent is to understand his/her own situation better. Your content can satisfy this need by providing more clarity on the problem. Content types for informational queries: blog posts, checklists, infographics, videos, images. Navigational intent queries These are branded queries, meaning people are looking for specific products or services from a particular brand. For example, "Nike shoes", "Estee Lauder mascara". They are likely further down the sales funnel than people searching for broad informational content. At this point, your audience might be comparing different brands and is in the solution evaluation stage. Content types for navigational intent queries: surveys/quizzes, webinars, free downloads. Navigational intent queries tell you that these users are potential leads. Transactional intent queries Transactional or commercial intent queries have buying intent. Users have made up their minds to purchase, and probably have a credit card ready in hand. They're looking for offers that can best meet their expectations. The most commonly used commercial keywords: "best", "top", "affordable", "cheap", "review", "discount". Transactional intent keywords can be the most valuable keywords for a business.
Simply because they have the highest chance of converting visitors into paying customers. Content types for commercial intent queries: webinars/events, testimonials, customer stories, sales "squeeze pages", promotional/discount offers. As you can see, there are different content types for different search intents. Serving a promotional sales page that targets informational-intent keywords will likely not do well. How did these strategies work for you? Leave your comments below! Originally published at www.leannewong.co on June 26, 2018.
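To make the three intent buckets concrete, here is a toy rule-based sketch in Python; the cue words and brand list are my own illustrative assumptions, not an official taxonomy:

INTENT_CUES = {
    # Checked in order; transactional cues take priority.
    "transactional": ("best", "top", "affordable", "cheap", "review", "discount"),
    "informational": ("how to", "why", "what", "compare"),
}

def classify_intent(query, brands=("nike", "estee lauder")):
    q = query.lower()
    if any(brand in q for brand in brands):
        return "navigational"  # branded queries
    for intent, cues in INTENT_CUES.items():
        if any(cue in q for cue in cues):
            return intent
    return "informational"  # default bucket for broad research queries

for q in ("how to get rid of acne scars", "Nike shoes", "best bread in sydney"):
    print(q, "->", classify_intent(q))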
RankBrain and Future of SEO: How to Win in the Age of Machine Learning
0
rankbrain-and-future-of-seo-how-to-win-in-the-age-of-machine-learning-183fa3852713
2018-07-15
2018-07-15 03:45:22
https://medium.com/s/story/rankbrain-and-future-of-seo-how-to-win-in-the-age-of-machine-learning-183fa3852713
false
1,200
null
null
null
null
null
null
null
null
null
Content Marketing
content-marketing
Content Marketing
34,905
Leanne Wong | SEO + Blogging Tips
Helping time starved bloggers and solopreneurs scale with digital marketing.
297413e3ebf9
leannewongco
1
2
20,181,104
null
null
null
null
null
null
0
Ridge(alpha=1, normalize=True)
RandomForestRegressor(n_estimators=200, min_samples_leaf=5, min_samples_split=6)
2
e5cb038d0698
2018-08-28
2018-08-28 22:14:52
2018-08-28
2018-08-28 22:18:47
1
false
en
2018-08-30
2018-08-30 20:45:59
7
1841b9550aed
2.611321
1
0
0
Santander
4
Banking on Ensembling in the Santander Kaggle Competition Santander Santander Bank (formerly known as Sovereign Bank, until the Santander Group bought it in 2008) set a challenge on Kaggle to personalize banking for its clients. To achieve this, Santander wants to predict the value of transactions so that it can be ready to provide the services its clients might need. Data Description The data was anonymized, meaning that the features provided were masked with strings so that the actual features they collect remain confidential. The good thing is that all the features were numerical. Quick Dirty Linear Regression The first thing we did was just run the raw data through a linear regression to set the bar for the results. We applied K-fold cross-validation to see what the accuracy and variance would be. The RMSLE for the dirty linear regression is 1.7E15 with a standard deviation of 2.7E15. (A LOT of room for improvement.) Data Preprocessing We knew that the RMSLE would be bad, but had no idea it would be THAT bad. Looking at the numeric values, the first thing to do was to scale the features. Scaling the features will definitely help the algorithm, for two reasons. The first reason is that numerical values in the magnitude of 100s will not take precedence over those with values of 1 or smaller; scaling puts every feature in a similar range, making it an even playing field. The second reason is that it makes gradient descent a lot more efficient in timing and in calculating the gradients. Also, there are almost 5000 features. This is a lot of features to deal with, so our first reaction was to reduce them using principal component analysis (PCA). The benefit of performing PCA is that we can select the components that produce the most variance and disregard those that barely have any. Components with high variance typically provide a lot of information; components with low variance do not contain much information. Therefore, if we disregard the low-variance components, we can reduce the dimensions without losing too much information. Before running PCA, we had to remove highly correlated features (highly correlated features double the variance). Running PCA with n_components set to None gives this graph. The cumulative explained variance is shown by the line; the individual explained variance is shown by the bars. The bars are extremely small (you can see some near the zeroth tick). The graph shows the cumulative variance starting from the highest-variance component. As you can see, it starts to plateau around 1000-1200 components, so we decided to continue with 1000. (This decision is the art part of data science.) Model Linear Regression Running a linear regression after scaling the features and transforming the data with PCA, we got an RMSLE of 3.8 and a standard deviation of 0.13. The results seem more reasonable now. Ridge Regression To continue on this regression problem, we thought of shrinking some parameters using a ridge regression. We tuned the alpha parameter to 1. Running a ridge gave us an RMSLE of 1.7 and a standard deviation of 0.0426. Random Forest Next up, we thought of running a random forest. Using grid search, we were able to get the best combination of parameters. The best parameters we got were min_samples_leaf: 5, min_samples_split: 6, n_estimators: 200. This gave us an RMSLE of 1.5 and a standard deviation of 0.0391. Ensemble To finish it off, we ensembled the two models.
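A minimal sketch of this kind of prediction-averaging ensemble; the hyperparameters are the ones quoted above, the data is a random stand-in, and scaling is assumed to have been done beforehand (newer scikit-learn versions removed Ridge's normalize flag):

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train, y_train = rng.random((200, 10)), rng.random(200)  # stand-in data
X_test = rng.random((50, 10))

ridge = Ridge(alpha=1).fit(X_train, y_train)
rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5,
                           min_samples_split=6).fit(X_train, y_train)

# Simple unweighted average of the two models' predictions.
y_pred = np.mean([ridge.predict(X_test), rf.predict(X_test)], axis=0)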
Our RMSLE is 1.57, with a standard deviation of 0.0434. Next Steps The next step we want to take is to run a deep neural network and see if there is an improvement in the results. There were only 4000 examples, so we thought deep learning was not necessary, but it will still be worth a try. For the full code: https://github.com/tqrahman/Kaggle_NYC/blob/master/Santa_Final.ipynb Credits: Kaggle NYC, Taraqur Rahman, Harry Moreno, Jacob Peters, Janet Deng
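For reference, the preprocessing chain described above (drop one of each highly correlated pair, scale, then PCA down to 1000 components) can be sketched like this; the correlation threshold and names are illustrative assumptions, not the notebook's exact code:

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def drop_correlated(df, threshold=0.98):
    # Drop one column from each highly correlated pair before PCA.
    corr = df.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [c for c in upper.columns if (upper[c] > threshold).any()]
    return df.drop(columns=to_drop)

model = Pipeline([
    ("scale", StandardScaler()),       # put every feature on a similar range
    ("pca", PCA(n_components=1000)),   # keep the first 1000 components
    ("ridge", Ridge(alpha=1)),
])
# model.fit(drop_correlated(X_train), np.log1p(y_train))  # hypothetical data;
# fitting on log1p(target) makes plain RMSE behave like RMSLE.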
Banking on Ensembling in the Santander Kaggle Competition
50
banking-on-ensembling-in-the-santander-kaggle-competition-1841b9550aed
2018-08-30
2018-08-30 20:45:59
https://medium.com/s/story/banking-on-ensembling-in-the-santander-kaggle-competition-1841b9550aed
false
639
Kaggling together, one model at a time.
null
null
null
Kaggle NYC
nyckaggle@gmail.com
kaggle-nyc
DATA SCIENCE,MACHINE LEARNING,KAGGLE COMPETITION,SANTANDER,DEEP LEARNING
null
Data Science
data-science
Data Science
33,617
Taraqur Rahman
Avid learner.
1512c63c44f4
taraqur
13
148
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-21
2018-08-21 23:25:46
2018-08-22
2018-08-22 01:05:02
30
false
en
2018-08-22
2018-08-22 01:05:49
2
1845625791aa
2.800943
2
0
0
WWDC 2018
5
What's New in Core ML, Part 1 — WWDC 2018 session notes:
Simplified integration: Vision (VNCoreMLRequest), Natural Language (NLModel)
Models on device — the problem: model size. Reducing model size means a smaller bundle, smaller/faster downloads and reduced runtime memory usage (e.g. Core ML app size: Resnet50 with 8-bit quantization)
Obtaining quantized models: post-training quantization, or train quantized (from scratch or by re-training), then convert the quantized models to Core ML
Accuracy tradeoff: check your quantized model's accuracy with test data and a metric relevant to your app — it is model dependent, use-case dependent, and an active area of research
Demo: sample link
One flexible model: combine sizes using flexible image sizes — one model, no redundant code, faster model-switching times
Flexibility options — which models are flexible? Core ML Tools can check for you! Fully convolutional neural networks (image processing, object detection)
Performance and customization: for loop vs. batch ("Any horses?" demo), custom layers, custom models, customization in Xcode, customization options
Summary
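The post-training weight quantization these notes mention is exposed in Python through coremltools; a minimal sketch, with a placeholder model path and the API as it appears in coremltools 2.x/3.x (treat the exact call as an assumption to verify against your version):

import coremltools
from coremltools.models.neural_network import quantization_utils

model = coremltools.models.MLModel("Resnet50.mlmodel")  # placeholder path
model_8bit = quantization_utils.quantize_weights(model, nbits=8)  # linear mode by default
model_8bit.save("Resnet50_8bit.mlmodel")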
What’s New in Core ML, Part 1
40
whats-new-in-core-ml-part-1-1845625791aa
2018-11-01
2018-11-01 06:24:44
https://medium.com/s/story/whats-new-in-core-ml-part-1-1845625791aa
false
146
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Den Jo
iOS Developer
cfbacbb5764e
nilotic2
56
47
20,181,104
null
null
null
null
null
null
0
null
0
59039de05b80
2017-11-02
2017-11-02 11:58:30
2017-11-03
2017-11-03 12:41:23
1
false
en
2017-11-03
2017-11-03 12:41:23
1
1845f805de21
3.818868
0
0
0
null
4
5 Ways B2B Marketers Can Leverage Intent Data Big data and analytics have become the zeitgeist of contemporary marketing practice. Customer data is the new marketing overlord: no marketing decision can be made without consulting the available data. Consumers today have grown to expect a lot more from brands than simply being offered the ability to make a purchase from them. They are looking for better engagement, superior products, and faster services. Given this reality, as marketers, we can no longer market to consumers based on our own timelines and strategies. Instead, we must anticipate their needs and expectations and deliver a seamless customer journey. In short, we need to listen to our consumers and learn what they care about before we engage. This is where the concept of intent-based marketing steps in. Intent-based marketing leverages data to gain rich insights into the interests of potential consumers and their purchase behavior, to create personalized buyer journeys. Decoding Intent Data Broadly speaking, there are two types of intent data. There's data that's generated through people's engagement with your owned digital content, such as the pages on your website that a lead visits, the emails they open, the e-books/whitepapers they download and the links they click through on your social media posts. But only a few marketers have access to this kind of data on a large scale. Then, there's data generated by people's engagement with content elsewhere on the internet (third-party). These actions include reading articles on industry publications (e.g. CMO.com, martechadvisor.com), watching webinars, and discussing key industry topics on social media or following influencers and brands in the social space. Five ways marketers can leverage intent data By tapping into third-party intent data sources, marketers can easily identify prospects and gain insights into what their prospects care about. Here are five ways marketers can leverage intent data: Discover new prospects: Although it is incredibly easy to segment your target audience based on parameters such as job titles, company size, and their existing tech stack, this approach will not help you determine whether your prospects are actively looking to purchase a solution that you offer. This is not actionable data. However, with intent data, you could easily identify prospects who have shown an interest in a solution that you offer, based on intent signals. For instance, if you offer a marketing automation solution, you can approach intent data providers to help you identify marketing technology professionals who recently joined a marketing automation community on LinkedIn or shared articles on innovations in marketing automation. Lead and account scoring: Marketing automation platforms (MAPs) have transformed the B2B marketing game. MAPs not only allow marketers to automate a sequence of communications with prospects, but also track whether — and to what extent — those prospects consume the content sent to them. These behavioral patterns can then be scored automatically to gauge a prospect's interest. However, there are certain limitations when you score consumers' engagement with your own content. For instance, it's wrong to assume that just because someone has consumed a certain amount of content on your website, the person must be interested in your solution. This person could be a researcher or a competitor, or someone simply looking to educate themselves on your space.
What you'd rather know is whether a purchase decision is being made. This is where intent data comes into play. Intent data monitoring links the digital behaviors of multiple consumers with the companies they represent, across all the channels on which they conduct their search. This multipronged approach provides stronger evidence of organizational intent to purchase than methods that treat the behavior of one individual as the intent of the whole organization. Sales intelligence: Sales teams understand the importance of doing research on their target prospects and accounts before reaching out. By providing your sales team with contextual insights about their target accounts and prospects from third-party data sources, such as social media behavior, sales teams can reach out to prospects at the right time and drive engagement. For instance, before a sales rep reaches out to a lead, they could quickly check the lead's social activity feed and glean the insights necessary to craft a personalized and relevant message. This information also helps sales reps revive cold leads. Interest-based segmentation: Traditional lead nurturing emails that target prospects based on job titles and industry have lost their efficacy. To enjoy greater engagement rates, you need to segment your leads based on their interests and needs. With third-party intent data, you can identify who you should target with a certain email or ad campaign based on the topics your leads have shown an interest in. Content personalization: Figuring out what content will resonate with prospects and customers is a major challenge for most B2B marketing organizations. The current crop of content analytics tools only provides insights about audiences that have already viewed your content. Third-party intent data can help you craft more relevant and personalized content. For instance, if you knew that a certain section of your leads follows a particular influencer on Twitter, you could decide to develop joint content with that influencer. The bottom line Marketers can further deploy their intent data in strategies like ABM, KAM and omni-channel content marketing programs. Are we soon going to arrive at a point where consumers are delighted to see every marketing message they come across? Probably not. However, with intent data, we are arriving at a point where every interaction with the consumer can be personalized according to their specific needs and interests. Intent data gleaned from online behavior makes marketing more relevant. That's great news for marketers and consumers alike.
5 Ways B2B Marketers Can Leverage Intent Data
0
5-ways-b2b-marketers-can-leverage-intent-data-1845f805de21
2017-11-03
2017-11-03 12:41:24
https://medium.com/s/story/5-ways-b2b-marketers-can-leverage-intent-data-1845f805de21
false
959
The World’s Leading Source for Marketing Technology News, Research, Product Comparisons & Expert Views
null
martechadvisor
null
MarTech Advisor
shabana.arora@martechadvisor.com
martech-advisor
MARKETING,BUSINESS,TECHNOLOGY,TECH,NEWS
MarTechAdvisor
Marketing
marketing
Marketing
170,910
MarTech Advisor
Helping Marketers Succeed
4a1305dc5a36
martechadvisor
172
448
20,181,104
null
null
null
null
null
null
0
null
0
cc02b7244ed9
2018-04-18
2018-04-18 06:46:36
2018-04-18
2018-04-18 06:48:40
0
false
en
2018-04-18
2018-04-18 06:48:40
11
18462920dfd2
2.030189
0
0
0
PRODUCTS & SERVICES
5
Tech & Telecom news — Apr 18, 2018 PRODUCTS & SERVICES Applications Apple keeps on track to become more of a "service company" amid the deceleration of the global smartphone market, which limits its growth prospects in the core business. Yesterday, rumours were published about a project to launch a premium (paid) news service that would capitalise on the current "fake news" crisis (Story) Artificial Reality Initial signs that the Augmented Reality market could start to heat up. AntVR, a Chinese startup, plans to launch a Kickstarter campaign to build a cheap AR headset that would compete against Microsoft's HoloLens at a sixth of the price ($500 vs. $3,000). And (meanwhile) we keep waiting for Magic Leap's launch… (Story) Enterprise IBM presented its 1Q18 results yesterday, and even though it announced its second consecutive quarter of revenue growth (+5% yoy to $19.07bn/Q) after 6 years of declines, the results have not been received very well, as the shift to new revenue streams is accompanied by a squeeze in margins (net profit fell 4% to $1.68bn/Q) (Story) Early analyses actually question IBM's ability to exploit customers' shift to the cloud (an area that Amazon, Microsoft and, to a lesser extent, Google dominate). They also criticise an increasing "jargon addiction" in IBM's top management, with e.g. the CFO using the word "blockchain" 10 times during the call (Story) Regulation Cyber warfare is becoming the new warfare, so several tech leaders, including Facebook, Cisco and Microsoft, have signed a "Cybersecurity Tech Accord", in which they commit not to collaborate with governments to build cyber attacks on the population. This is already being compared with the Geneva Convention (Story) HARDWARE ENABLERS Networks After the US government's moves against Chinese network equipment, limiting Chinese vendors' opportunities in US 5G deployments, Huawei is now pivoting in its 5G position, and said yesterday (at its analyst day) that the new technology won't bring tangible benefits for consumers or new revenues for operators (Story) SOFTWARE ENABLERS Artificial Intelligence A new ethics battlefront is appearing for Google, over the potential use of AI to build new weapons. A group of employees had published a letter demanding that the company end this research, but now ex-CEO E Schmidt is endorsing the Pentagon's proposal for a new AI centre in which it will work with tech companies (Story) New search startup Node is trying to use AI to improve the search customer experience, by better selecting the information that matters most to users, including personalised recommendations that take advantage of "interconnections between people and things". It is not clear that this cannot be replicated by Google… (Story) Privacy The battle of the Russian government against Telegram, after the messaging company's reluctance to reveal its users' data triggered an official ban, seems like a mess at the moment. Russian authorities are struggling to block the app's traffic, but according to Telegram they have not had any significant impact yet (Story) In an apparently more benign manifestation of the same trend, a new proposal for electronic data regulation in Europe requires tech companies to deliver the private data of terror suspects to European law enforcement agencies in as little as six hours, to avoid the current slowdown of investigations due to problems accessing data (Story) Subscribe at https://www.getrevue.co/profile/winwood66
Tech & Telecom news — Apr 18, 2018
0
tech-telecom-news-apr-18-2018-18462920dfd2
2018-04-18
2018-04-18 06:48:42
https://medium.com/s/story/tech-telecom-news-apr-18-2018-18462920dfd2
false
538
The most interesting news in technology and telecoms, every day
null
null
null
Tech / Telecom News
ripkirby65@gmail.com
tech-telecom-news
TECHNOLOGY,TELECOM,VIDEO,CLOUD,ARTIFICIAL INTELLIGENCE
winwood66
Augmented Reality
augmented-reality
Augmented Reality
13,305
C Gavilanes
food, football and tech / ripkirby65@gmail.com
a1bb7d576c0f
winwood66
605
92
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-27
2018-05-27 21:32:23
2018-05-27
2018-05-27 21:48:42
3
false
en
2018-05-27
2018-05-27 21:48:42
0
18464d5c699
8.757547
294
4
1
Inspired by recent talk at O’reilly Data Show Podcast “What machine learning engineers need to know” I have decided to shed some light at…
4
But what is this "machine learning engineer" actually doing? Inspired by a recent talk on the O'Reilly Data Show Podcast, "What machine learning engineers need to know", I have decided to shed some light on this traction-gaining subject. While I am not that much of an expert, I think that my year and a half of experience in this exact position might be sufficient to write a short summary of what this is all about. Machine learning, artificial intelligence and similar buzzwords are currently at the top of VC interests. Startups are being established left and right, promising the world an astounding, smart future. The better ones get acquired by the Big Five to further increase their revenue. With the growth of this IT branch also comes rising market demand for developers who can actually deliver the promised artificial intelligence applications. So, the natural choice is to hire a team of data scientists to develop your idea, right? Well, not exactly. Data scientists usually come from a different background than software engineers. They do not necessarily make great programmers. In fact, they never intended to be — for a data scientist, coding is merely a means to solve the current puzzle. And nothing more. Unlike software developers, they do not treat the code as a form of art. Of course, their wisdom is invaluable, but the range of skills required to be a successful data scientist is already broad (especially when the field frequently evolves with new discoveries, making a significant portion of hard-earned knowledge obsolete on a daily basis). Too broad. You cannot expect a person who is highly specialized in computer vision or prescriptive analytics to also be a bread-and-butter programmer, productionizing the models and putting them into a heavily scalable cloud environment. While also maintaining high-quality, reusable code. Using functional programming. Or reactive. Or both. On the other hand, software engineers are quite reserved when it comes to machine learning. The whole concept is rather weird from their perspective, especially when the majority of the so-called models their data science team creates are short, hacky scripts with strange method calls and unreadable code in an unfamiliar language. Where are all the design patterns? Where is the clean code? Where is the logging or monitoring? Why is the code not reusable? Shouldn't the code solving such a complex problem be more than two hundred lines long? It is a very ugly script that only one person can understand! Is it even programming anymore? The merger With this conflict arising, a need was born. A need for a person who would reunite the two warring parties. One fluent just enough in both fields to get the product up and running. Somebody taking the data scientists' code and making it more effective and scalable. Introducing to them various programming rules and good practices. Abstracting away parts of the code that might be used in the future. Joining the results from potentially unrelated tasks to enhance the models' performance even more. Explaining the reasons behind architectural ideas to the devops team. Sparing software developers from learning concepts way beyond their scope of interest. That need has been met with the emergence of the machine learning engineer role. What is always missing from all the articles, tutorials and books concerning ML is the production environment. It literally does not exist.
Data is loaded from CSVs, models are created in Jupyter, ROC curves are drawn and voilà — your machine learning product is up and running. Time for another round of seed funding! Hold on. In reality, the majority of your code is not tied to machine learning. In fact, the code regarding it usually takes up just a few percent of your entire codebase! Your pretrained black box gives only a tiny JSON answer — there are thousands of lines of code required to act on that prediction. Or maybe all you get is a generated database table with insights. Again, an entire system needs to be built on top of it to make it useful! You have to get the data, transform and munge it, automate your jobs, and present the insights somewhere to the end user. No matter how small the problem is, the amount of work to be done around the machine learning itself is tremendous, even if you bootstrap your project with technologies such as Apache Airflow or NiFi. That was not on Coursera, was it? Yet, somebody has to glue all the "data science" and "software" parts together. Take the trained model and make it work in a quality production environment. Schedule batch jobs recalculating insight tables. Serve the model in real time and monitor its performance in the wild. And this is the exact area in which the machine learning engineer shines. When creating software, developers naturally look for all the possible outcomes in every part of the application. What you get from a data scientist is just a happy path that leads to model creation for particular data at a particular moment in time. Unless it is a one-time specific analysis, the model will live for a long time after it gets productionized. And as time flies, the bugs and all the edge cases start popping up (many of them were not even possible when the code was written). Suddenly a new unknown value shows up in one of the columns and the entire model starts to perform far worse. As a machine learning engineer, you prepare your applications for such events. You provide the logging and monitoring pipelines not only around machine learning tasks but also inside them. You try to preserve all the information, so it is possible to answer the very important questions: What is the cause of the model's bad performance? Since when has it been happening? It is just another API Because you do not treat ML as magic, you are aware of all the other typical programming dangers that may arise when a machine learning job is executed. The database might refuse a connection. A GroupBy may blow up for a large dataset. Memory or disk can fill up. A combination of parameters specified by the user might be illegal for a certain algorithm. An external service could respond with a timeout exception instead of credentials. A column may not exist anymore. While nobody blinks an eye when such events take place in a safe lab environment on a daily basis, it is your responsibility to ensure they won't happen when the end product is actually delivered. [Image: machine learning project roles] Your data science team is always full of ideas. You have to make sure that no technology is limiting them. As good and customizable as the current ML frameworks are, sooner or later your teammates will have an intriguing use case that is not achievable with any of them. Well, not with the standard APIs. But when you dig into their internals, tweak them a little and mix in another library or two, you make it possible. You abuse the frameworks and use them to their full potential.
That requires both extensive programming and machine learning knowledge, something that is quite unique to your role in the team. And even when a framework provides all you need programming-wise, there still might be issues with a lack of computation power. Large neural networks take a large amount of time to train. This precious time could be reduced by an order of magnitude if you used GPU frameworks running on powerful machines. You are the one to scout the possibilities, see the pros and cons of the various cloud options and choose the most suitable one. You may also be responsible for picking other tools and ecosystems, always taking into consideration the whole project lifecycle (not just the reckless research part) — e.g. Azure ML Workbench or IBM Watson might be great tools for bootstrapping the project and conducting research but not necessarily meet all the requirements of the final version of your product when it comes to custom scheduling and monitoring. You must stay up to date with state-of-the-art technologies and constantly look for the places in which the overall product performance could be improved. Be it a battle-tested programming language, a new technology in the cloud, a smart scheduling or monitoring system — by seeing your product in the bigger picture and knowing it well from the engineering, business and science sides, you are often the only person with the opportunity to spot a potential area of improvement. This frequently means taking working code and rewriting it entirely in another technology and language. Thankfully, as soon as you “get the grip” of what this fuss is actually about and what steps are always taken in the process of learning and productionizing the models, you realize that most of these APIs do not differ at all. When you juggle various frameworks, the vast majority of the whole process stays the same. You bring all the best software craftsmanship practices and quickly begin to build an abstraction over the many repetitive tasks that the data science team fails to automate and the software development team is afraid to look at. A strong bridge between two worlds. A solid, robust foundation for working software. Untold cons You can freely commune with all the hottest technologies in the field. Keras, PyTorch, TensorFlow, H2O, scikit-learn, Apache Spark — pick a name, you will probably use it. Apache Kafka! Pulsar! AWS! Every conference you attend speaks out loudly about your technology stack, as if it were The Chosen One. People look at you jealously, knowing that you are the guy using all the coolest things. What is always conveniently omitted is the fact that those cool things also happen to be not widely used things. And when the technology is new, all you are left with is poor documentation and a bunch of blog posts. What you see at conferences and tech talks are just the happy green paths (similar to those Jupyter notebooks you get from your DS team). You know that is not how software works. Many times, after hours of debugging Apache Spark internals, I questioned my will to pursue my programming career in machine learning. Wouldn’t I be happier without all this? Was web development really that boring? You are expected to know a lot of concepts, both in software development and data science. Most importantly, people want you to gain new knowledge very quickly. I learn a lot by taking somebody else’s snippets, changing and breaking them and seeing what happens. Well, what if there are no snippets?
What if the stacktrace you get is pretty meaningless and googling the exception name leads only to the code on GitHub throwing it? not even a quarter of all the buzzwords available is shown The learning and understanding curve is quite steep in some areas, especially when it comes to implementing ideas written in whitepapers. As cool (and sometimes exotic) as these tend to be, their form is always pretty scientific and just getting to understand them takes you a while. Then comes the coding part, where you are totally on your own. Even though your application compiles fine and does not throw Runtime Exceptions all over the place, it is often unclear how to ensure that your implementation actually works properly. And when it doesn’t, you ponder whether your code has a bug, the data is skewed or maybe the whole idea is just not applicable to your use case. It is already challenging to keep up with data science alone. When you throw in classical software development and cloud technologies as well, your brain might quickly become overwhelmed. You must organize your learning resources well and have a constant stream of news flowing in. And also accept the fact that being a jack of all trades has its cons, because you will probably never have very deep knowledge in any of your areas. The imposter syndrome kicks in very hard. And often. But for some this is truly a dream job. Merging two so similar yet so different worlds. A full stack developer of data science. Summing up A person called a machine learning engineer: asserts that all production tasks are working properly in terms of actual execution and scheduling abuses machine learning libraries to their extremes, often adding new functionalities ensures that data science code is maintainable, scalable and debuggable automates and abstracts away the different repeatable routines that are present in most machine learning tasks brings the best software development practices to the data science team and helps them speed up their work chooses the best operational architecture together with the devops team looks constantly for performance improvements and decides which ML technologies will be used in the production environment This also happens to be my first blog post ever. I must say I really enjoyed writing it and meanwhile had over a dozen ideas for more content. I will keep digging deeper into how a software developer can transition into a full-blown machine learning engineer. There are way too many pitfalls on the way. And too many resources to learn from! Lastly, I promised my friend we would both start blogging around 2018. Well, here it is, Joanna! It is your turn now.
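To make the “it is just another API” point above concrete, here is a minimal, hypothetical sketch of the defensive style the article describes. None of these names come from the original post — scoring_job, load_batch and model.predict are stand-ins for whatever your pipeline actually uses.

```python
import logging
import time

logger = logging.getLogger("ml_job")

def scoring_job(model, repo, max_retries: int = 3):
    """Score one batch, treating the model like any other fallible service."""
    for attempt in range(1, max_retries + 1):
        try:
            batch = repo.load_batch()  # may raise if the database refuses the connection
            break
        except ConnectionError:
            logger.warning("db connection refused (attempt %d)", attempt)
            time.sleep(2 ** attempt)   # back off before retrying
    else:
        raise RuntimeError("could not load data after %d retries" % max_retries)

    preds = model.predict(batch)       # the ML call is just one line among many
    logger.info("scored %d rows", len(preds))  # leave a trail for later debugging
    return preds
```

The point is not the retry loop itself but the habit: every step around the model call gets the same logging, error handling and monitoring as ordinary production code.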
But what is this “machine learning engineer” actually doing?
2,070
but-what-is-this-machine-learning-engineer-actually-doing-18464d5c699
2018-06-20
2018-06-20 01:46:18
https://medium.com/s/story/but-what-is-this-machine-learning-engineer-actually-doing-18464d5c699
false
2,175
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Tomasz Dudek
Machine Learning Engineer at upsaily.com
3ea5f803d503
tomaszdudek
123
1
20,181,104
null
null
null
null
null
null
0
null
0
c7445110c214
2018-05-16
2018-05-16 14:34:22
2018-05-16
2018-05-16 16:30:52
1
false
en
2018-05-17
2018-05-17 13:48:07
4
1847ce5d0c65
1.796226
2
0
0
AI, or Artificial Intelligence, catches up with restaurants.
5
Robot or Human: Does It Matter? AI, or Artificial Intelligence, caught up with restaurants last week when Google announced Duplex, a bot that sounds very much like a human when making a reservation. Google engineers — well, their robot actually — successfully did so. You can listen to the booking here. Google engineers Daniel Yaniv Leviathan and Matan Kalman, sitting in the restaurant they booked through a call from Duplex. (Source: Google) The announcement occasioned many comments, a substantial amount on the Google blog we link to above. Much of that discussion comes from people who appear to understand the technology at work here. In case you don’t, here’s an explanation of what makes Duplex sound human: At the core of Duplex is a recurrent neural network (RNN) designed to cope with these challenges, built using TensorFlow Extended (TFX). To obtain its high precision, we trained Duplex’s RNN on a corpus of anonymized phone conversation data. The network uses the output of Google’s automatic speech recognition (ASR) technology, as well as features from the audio, the history of the conversation, the parameters of the conversation (e.g. the desired service for an appointment, or the current time of day) and more. Training is everything, right? In any event, some comments on the blog raised ethical concerns about not disclosing that the call is from a robot. One commenter, for instance, griped: My sense is that humans in general don’t mind talking to machines so long as they know that they’re doing so. I anticipate significant negative reactions by many persons who ultimately discover that they’ve been essentially conned into thinking they’re talking to a human, when they actually were not. It’s basic human nature — an area where Google seems to have a continuing blind spot. Another problem of course is whether this technology will ultimately be leveraged by robocallers (criminal or not) to make all of our lives even more miserable while enriching their own coffers. My question is: Does it really matter when it comes to something as simple as a dinner reservation taken over the phone? I’d imagine what’s running through the minds of most restaurant managers and owners is this: Thank you. And please, please show up. Yet should owners advise employees who answer the phone that they might be talking to a robot from time to time? Yes, it’s probably a good idea. Even better might be to begin a conversation with staff about technology’s impact on restaurants — and how it can help make their jobs easier. In fact, a useful way to launch that discussion is to ask whether or not they’d use Duplex to book a table.
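Since the quoted passage centers on an RNN trained on conversation data, here is a minimal, illustrative sketch of that idea — emphatically not Duplex’s actual architecture, just a tiny Keras recurrent model mapping a token sequence (an utterance) to one of a few hypothetical intents such as “book_table”. All dimensions are made up.

```python
import tensorflow as tf

# Toy sequence classifier in the spirit of the quoted description; every
# number here (vocabulary size, layer widths, 8 intents) is a hypothetical choice.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10_000, output_dim=64),  # token ids -> vectors
    tf.keras.layers.LSTM(128),                                   # summarize the utterance
    tf.keras.layers.Dense(8, activation="softmax"),              # 8 hypothetical intents
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

The real system, as the quote notes, also conditions on ASR output, audio features and conversation history — far beyond this sketch.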
Robot or Human: Does It Matter?
50
robot-or-human-does-it-matter-1847ce5d0c65
2018-05-17
2018-05-17 13:48:08
https://medium.com/s/story/robot-or-human-does-it-matter-1847ce5d0c65
false
423
We Unleash Potential
null
ResultsThruStrategy
null
Results PDQ
info@resultsthrustrategy.com
results-pdq
BUSINESS DEVELOPMENT,BUSINESS STRATEGY,BUSINESS INTELLIGENCE
resultspdq
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
ResultsThruStrategy
A group of creative consultants who help restaurants unleash potential by growing their sales, profits, and people.
fd8b000fbdf5
resultspdq
331
201
20,181,104
null
null
null
null
null
null
0
null
0
31f4f88d6548
2017-11-29
2017-11-29 14:58:28
2017-11-29
2017-11-29 16:22:09
1
false
en
2017-11-29
2017-11-29 16:22:09
0
184a5e211163
1.279245
6
0
0
Who are our PhD students? Where do they come from, what are they studying now, and where do they hope to go in the future? Find out more…
5
PhD Candidate Profile: Vladimir Kobzar Who are our PhD students? Where do they come from, what are they studying now, and where do they hope to go in the future? Find out more about one of our PhD candidates, Vladimir Kobzar! Vladimir Kobzar I am a Ph.D. candidate at the NYU Center for Data Science (CDS), where I am a member of the Math and Data Group. I earned my MS in Mathematics at the Courant Institute under the advisement of Professor Afonso Bandeira, as well as my JD and LLM from the NYU School of Law. Before joining the CDS, I worked as a researcher at Argonne National Laboratory, operated by the University of Chicago, on the development of machine learning models for the analysis of time-resolved X-ray scattering data. Before that, I served for several years as an Executive Director at Goldman Sachs, where I advised on legal and regulatory aspects of transactions and other initiatives involving data and technology in growth markets. In my current research, I use methods of optimization and probability to establish that recovering the position and orientation of an object from noisy observations can be done in a robust and efficient manner. This problem is central to various areas of science and technology, including cryo-electron microscopy, robotics, and computer and human vision. At the CDS, I plan to continue working on the development of provably robust and computationally efficient algorithms at the intersection of mathematics and data science. I am also interested in designing a framework and methods for interpreting deep learning and other “black box” algorithms. These frameworks and methods can then be used to analyze, among other things, the legal and regulatory ramifications of such algorithms.
PhD Candidate Profile: Vladimir Kobzar
24
phd-candidate-profile-vladimir-kobzar-184a5e211163
2018-03-18
2018-03-18 19:03:48
https://medium.com/s/story/phd-candidate-profile-vladimir-kobzar-184a5e211163
false
286
This is the official research blog of the NYU Center for Data Science (CDS). Established in 2013, we are a leading data science training and research facility, offering a MS in Data Science and, as of 2017, one of the nation’s first universities to offer a Ph.D. in Data Science.
null
nyudatascience
null
Center for Data Science
ab4829@nyu.edu
center-for-data-science
DATA SCIENCE,DATA MINING,TECHNOLOGY,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING
NYUDatascience
Data Science
data-science
Data Science
33,617
NYU Center for Data Science
Official account of the Center for Data Science at NYU, home of the Master’s and Ph.D. in Data Science.
880781a85c2
NYUDataScience
3,530
9
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-08
2018-08-08 18:37:42
2018-08-08
2018-08-08 18:38:52
2
false
tr
2018-08-09
2018-08-09 19:27:36
7
184bb0a38af1
1.481447
0
0
0
TOKEN STRUCTURE
5
2 — Ingot Coin: A Conversion Bridge for Currencies TOKEN STRUCTURE Every project, from the moment it is conceived, goes looking for resources to become reality. Those resources sometimes help the project launch and sometimes help it grow. The Ingot Coin project likewise needs resources, i.e. investment, which is why they have created their token, named Ingot Coin (IC). Now I will describe this token and its structure. First, the detail that caught my attention the most: 1 IC is equal to 1 US Dollar. This is not something you see often. Tokens usually have values expressed in cents, but for Ingot things are a bit different. 120 million dollars’ worth of Ingot Coin will be offered to the market, and the proceeds from these sales will be used in various areas. Tokens offered for sale but left unsold will be burned via smart contracts, which will protect the token’s value. The soft cap set for the project is 37 million dollars; the hard cap, the target once that is passed, is set at 90 million dollars. Looking at the token distribution: 75% will be offered to the public. 7% is allocated to early supporters, 5% to the bounty program, 5% to the team, 4% to advisors and 4% to the founders. The collected funds, in turn, will be distributed as follows: 23% will go to the digital bank and 22% to the liquidity fund. Of the remainder, 18% of the total investment goes to commissions and 10% to the exchange wallet. The remaining 11% will be held in reserve for later use, which gives the project financial muscle. In other words, we can say we are dealing with a team that aims to preserve its financial strength. As I noted above, with a token price of 1 Dollar, Ingot Coin stands apart from most projects, and we can say its future looks bright. Still, this is not investment advice. You can reach the official sources and obtain more detailed information through the links below. Website: https://www.ingotcoin.io/ Bitcointalk ANN: https://bitcointalk.org/index.php?topic=3581009 Whitepaper: https://www.ingotcoin.io/documents/en/white-paper.pdf Facebook: https://www.facebook.com/ICOINGOT/ Twitter: https://twitter.com/ICOINGOT Telegram: https://t.me/INGOTCoin My BitcoinTalk Profile: https://bitcointalk.org/index.php?action=profile;u=1780407
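To make the token split above concrete, here is a tiny sketch that computes the per-group allocation from the figures quoted in the article (120 million IC at the stated 1 USD peg, and the six distribution percentages). The numbers come from the post; the code itself is just an illustrative check.

```python
# Token split as quoted in the article; 1 IC = 1 USD is the stated peg.
TOTAL_SUPPLY = 120_000_000  # IC offered to the market

distribution = {
    "public sale":      0.75,
    "early supporters": 0.07,
    "bounty program":   0.05,
    "team":             0.05,
    "advisors":         0.04,
    "founders":         0.04,
}

# Sanity check: the quoted shares should account for the whole supply.
assert abs(sum(distribution.values()) - 1.0) < 1e-9

for group, share in distribution.items():
    print(f"{group:>16}: {share * TOTAL_SUPPLY:>12,.0f} IC")
```

Note that the quoted fund-allocation percentages (23 + 22 + 18 + 10 + 11) sum to only 84%, so the article presumably omits some categories; the whitepaper would be the place to check.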
2 — Ingot Coin: A Conversion Bridge for Currencies
0
2-ingot-coin-para-birimleri-i̇çin-dönüşüm-köprüsü-184bb0a38af1
2018-08-09
2018-08-09 19:27:36
https://medium.com/s/story/2-ingot-coin-para-birimleri-i̇çin-dönüşüm-köprüsü-184bb0a38af1
false
291
null
null
null
null
null
null
null
null
null
Money
money
Money
35,618
Burak Koçyiğit
Industrial Engineer / Cryptocurrency Enthusiast
34eedb2284dc
burakkocyigit1
927
1,203
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-21
2017-10-21 11:22:35
2017-10-21
2017-10-21 12:29:50
3
false
en
2017-11-20
2017-11-20 12:41:35
7
184bf563e591
1.580189
6
2
0
“We all are, in a sense, poets.” Pale Fire
5
Your Perception Is Your Reality “We all are, in a sense, poets.” Pale Fire Source Because neither ‘memory banks’ nor ‘representations’ of stimuli exist in the brain, and because all that is required for us to function in the world is for the brain to change in an orderly way as a result of our experiences, there is no reason to believe that any two of us are changed the same way by the same experience. If you and I attend the same concert, the changes that occur in my brain when I listen to Beethoven’s 5th will almost certainly be completely different from the changes that occur in your brain. Those changes, whatever they are, are built on the unique neural structure that already exists, each structure having developed over a lifetime of unique experiences. The empty brain One person’s caviar is another person’s junk food. Listen You open the box of Cracker Jack as the universe folds in on itself. Again. You visit the white hotel. You meet Sigmund Freud over schnapps. An orchestra plays… You open the box of Cracker Jack as the universe folds in on itself. Again. the More you Eat-the More you Want! How many shapes can you make with your hands? For one hundred dollars, ‘What is schnapps?’ What is The Threepenny Opera? Listen… Crickets… I hear crickets! Do you hear crickets? Do you want to hear crickets? Do you want to see crickets while you hear crickets? No? Too bad. “One gets the sense that an alien civilization has dropped a cryptic guidebook in our midst.” The crack appears You open the box of Cracker Jack as the universe folds in on itself. Again.
Your Perception Is Your Reality
103
your-perception-is-your-reality-184bf563e591
2018-04-17
2018-04-17 11:56:49
https://medium.com/s/story/your-perception-is-your-reality-184bf563e591
false
273
null
null
null
null
null
null
null
null
null
Poetry
poetry
Poetry
217,749
Jeffrey Field
It ain't what you think. Former newsman, car salesman, teacher. Everything is Thou, if you so allow it. You can find some of it at https://youtu.be/w6RtVjMDHzE
4f99c49ff347
FarkleUp
3,414
9,394
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-14
2018-04-14 10:01:11
2018-04-14
2018-04-14 10:01:12
1
false
en
2018-04-14
2018-04-14 10:01:12
1
184cbbf1efde
2.690566
2
0
0
null
5
4 Data Lake Solution Patterns for Big Data Use Cases When I took wood shop back in eighth grade, my shop teacher taught us to create a design for our project before we started building it. The way we captured the design was in what was called a working drawing. In those days it was neatly hand-sketched, showing shapes and dimensions from different perspectives, and it provided enough information to cut and assemble the wood project. The big data solutions we work with today are much more complex and built with layers of technology and collections of services, but we still need something like working drawings to see how the pieces fit together. Solution patterns (sometimes called architecture patterns) are a form of working drawing that helps us see the components of a system and where they integrate, but without some of the detail that can keep us from seeing the forest for the trees. That detail is still important, but it can be captured in other architecture diagrams. In this blog I want to introduce some solution patterns for data lakes. (If you want to learn more about what data lakes are, read “What Is a Data Lake?”) Data lakes have many uses and play a key role in providing solutions to many different business problems. The solution patterns described here show some of the different ways data lakes are used in combination with other technologies to address some of the most common big data use cases. I’m going to focus on cloud-based solutions using Oracle’s platform (PaaS) cloud services. These are the patterns: Let’s start with the Data Science Lab use case. We call it a lab because it’s a place for discovery and experimentation using the tools of data science. Data Science Labs are important for working with new data, for working with existing data in new ways, and for combining data from different sources that are in different formats. The lab is the place to try out machine learning and determine the value in data. Before describing the pattern, let me provide a few tips on how to interpret the diagrams. Each blue box represents an Oracle cloud service. A smaller box attached under a larger box represents a required supporting service that is usually transparent to the user. Arrows show the direction of data flow but don’t necessarily indicate how the data flow is initiated. The data science lab contains a data lake and a data visualization platform. The data lake is a combination of object storage plus the Apache Spark™ execution engine and related tools contained in Oracle Big Data Cloud. Oracle Analytics Cloud provides data visualization and other valuable capabilities like data flows for data preparation and blending relational data with data in the data lake. It also uses an instance of the Oracle Database Cloud Service to manage metadata. The data lake object store can be populated by the data scientist using an OpenStack Swift client or the Oracle Software Appliance. If automated bulk upload of data is required, Oracle has data integration capabilities for any need, as described in the other solution patterns. The object storage used by the lab could be dedicated to the lab or it can be shared with other services, depending on your data governance practices. Data warehouses are an important tool for enterprises to manage their most important business data as a source for business intelligence. Data warehouses, being built on relational databases, are highly structured.
Data therefore must often be transformed into the desired structure before it is loaded into the data warehouse. In some cases this transformation processing can become a significant load on the data warehouse, driving up the cost of operation. Depending on the level of transformation needed, offloading that transformation processing to other platforms can both reduce operational costs and free up data warehouse resources to focus on their primary role of serving data. Posted on 7wData.be.
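The offloading idea in the last paragraph is essentially ELT done on the lake. Below is a minimal sketch of what that looks like with Spark, which the post names as the lake’s execution engine. The bucket scheme, paths and column names are hypothetical, not from Oracle’s reference architecture.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("offload-transform").getOrCreate()

# Raw events landed in cheap object storage (path and schema are made up).
raw = spark.read.json("oci://raw-zone/events/")

# Do the heavy cleansing/aggregation on the lake's Spark engine,
# not inside the data warehouse.
curated = (
    raw.filter(F.col("event_type") == "purchase")
       .groupBy("customer_id")
       .agg(F.sum("amount").alias("total_spend"))
)

# Write a warehouse-ready extract back to object storage for bulk load.
curated.write.mode("overwrite").parquet("oci://curated-zone/customer_spend/")
```

The warehouse then ingests the small, already-structured extract, keeping its resources focused on serving queries.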
4 Data Lake Solution Patterns for Big Data Use Cases
5
4-data-lake-solution-patterns-for-big-data-use-cases-184cbbf1efde
2018-04-15
2018-04-15 03:16:53
https://medium.com/s/story/4-data-lake-solution-patterns-for-big-data-use-cases-184cbbf1efde
false
660
null
null
null
null
null
null
null
null
null
Big Data
big-data
Big Data
24,602
Yves Mulkers
BI And Data Architect enjoying Family, Social Influencer , love Music and DJ-ing, founder @7wData, content marketing and influencer marketing in the Data world
1335786e6357
YvesMulkers
17,594
8,294
20,181,104
null
null
null
null
null
null
0
null
0
ccfede98ebb4
2018-07-12
2018-07-12 20:31:46
2018-07-12
2018-07-12 20:49:26
1
false
en
2018-09-18
2018-09-18 16:16:10
6
185047b7fb7d
1.773585
3
0
0
Kurt Cagle’s perceptive analysis of data science titled “Why You Don’t Need Data Scientists” explains the many reasons that data science…
5
What Data Scientists Really Need Kurt Cagle’s perceptive analysis of data science titled “Why You Don’t Need Data Scientists” explains the many reasons that data science falls short of the high expectations usually placed on it: Fancy dashboards are pretty but are only as valuable as the data behind them. Data quality often… stinks. Data sets are quirky and difficult to work with. Users/stakeholders know their business domain but little about what data can do for them. A multi-million dollar initiative to rebuild the data pipeline from the ground up is generally off the table. The people who own the databases won’t give data scientists access. Everyone agrees that integrating data from disparate databases is really, really hard, but in reality, it’s much harder than people think. These are all excellent points, and often the conversation ends here — in exasperation. We can tell you that we have been there and have the PTSD to prove it. Fortunately, a few years ago, we found a way out of what may seem at times like a no-win situation. We believe that the secret to successful data science is a little about tools and a lot about people and processes. Don’t Boil the Ocean Use Agile methods to create new analytics. Leverage infrastructure that enables teams to work together in an Agile way. Start small and simple. Create something quickly that adds value. Get feedback from your stakeholders. Repeat iteratively. Tests are Best Implement automated process controls that monitor data at every stage of your data analytics pipeline (a minimal sketch of such a check follows this post). Think of your data analytics as a lean manufacturing pipeline where the quality of data cannot drift outside statistical and logical bounds. Let your tools work 24x7 so data scientists can stay focused on creating analytics that add value. Automate and Orchestrate Data scientists spend 75% of their time doing data engineering. It’s about time that data professionals took a page from DevOps. Automate workflow and the deployment of new analytics. Orchestrate the end-to-end data pipeline so we stop sucking the life out of data scientists. A single data engineer should be able to support ten data analysts and scientists, who in turn should be supporting 100 business professionals. An automated pipeline can get you there. Learn More We call this approach DataOps, and above we boiled it down — probably oversimplified it — to its core. There are actually seven steps to implementing DataOps. We’ve written extensively about it. See our white paper for more information.
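Here is a minimal sketch of the kind of automated, in-pipeline data test described under “Tests are Best”. It assumes pandas; the column names and bounds are hypothetical stand-ins for whatever your pipeline actually monitors.

```python
import pandas as pd

def check_orders_batch(df: pd.DataFrame) -> list:
    """Return the rule violations found in one batch of incoming data."""
    failures = []
    if df["order_total"].isna().mean() > 0.01:          # logical bound: <= 1% missing
        failures.append("too many missing order totals")
    if not df["order_total"].dropna().between(0, 50_000).all():  # statistical bound
        failures.append("order totals outside expected range")
    if df["order_id"].duplicated().any():               # uniqueness constraint
        failures.append("duplicate order ids")
    return failures

# Run on every batch, 24x7; alert (or halt the pipeline) on any failure.
batch = pd.DataFrame({"order_id": [1, 2, 3], "order_total": [10.0, 99.5, 250.0]})
assert check_orders_batch(batch) == []
```

In a lean-manufacturing pipeline these checks run at every stage, so drift is caught where it enters rather than in the dashboard.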
What Data Scientists Really Need
102
what-data-scientists-really-need-185047b7fb7d
2018-09-18
2018-09-18 16:16:10
https://medium.com/s/story/what-data-scientists-really-need-185047b7fb7d
false
417
The DataOps Blog
null
datakitchen.io
null
data-ops
info@datakitchen.io
data-ops
DATA SCIENCE,DEVOPS,ANALYTICS,BIG DATA,DATAOPS
datakitchen_io
Data Science
data-science
Data Science
33,617
DataKitchen
null
d1a05acf3f79
datakitchen_io
292
9
20,181,104
null
null
null
null
null
null
0
null
0
4a4418e36c33
2017-09-20
2017-09-20 12:30:17
2017-10-04
2017-10-04 15:46:51
4
false
en
2018-09-21
2018-09-21 21:04:09
34
1851cb57c308
10.677358
112
7
0
The last decade has marked a profound change in how we perceive and talk about Artificial Intelligence. The concept of learning, once…
5
Why Continual Learning is the key towards Machine Intelligence Brain circuits in Brainbow mice. Neurons randomly choose combinations of red, yellow and cyan fluorescent proteins, so that they each glow a particular color. This provides a way to distinguish neighboring neurons and visualize brain circuits. 2014. HM Dr. Katie Matho. The last decade has marked a profound change in how we perceive and talk about Artificial Intelligence. The concept of learning, once confined to a corner of AI, has now become so important that some people came up with the new term “Machine Intelligence” [1][2][3] so as to make clear the fundamental role of Machine Learning in it and to further depart from older symbolic approaches. Recent Deep Learning (DL) techniques have literally swept away previous AI approaches and have shown how beautiful, end-to-end differentiable functions can be learned to solve incredibly complex tasks involving high-level perception abilities. Yet, since DL techniques have been proven to shine only with a large number of labeled examples, the research community has now shifted its attention towards Unsupervised and Reinforcement Learning, both aiming to solve equally complex tasks but with as little explicit supervision as possible (or none at all). However, most of the DL research today is still carried out by solving specific, isolated tasks, which would hardly lead to a more long-term vision of Machine Intelligence endowed with common sense and versatility. In this story I’d like to throw some light on the paradigm of Continual/Lifelong Learning and why I think it is at least as important as the Unsupervised and Reinforcement Learning paradigms. What is Continual Learning? Continual Learning (CL) is built on the idea of learning continuously and adaptively about the external world, enabling the autonomous incremental development of ever more complex skills and knowledge. In the context of Machine Learning it means being able to smoothly update the prediction model to take into account different tasks and data distributions while still being able to re-use and retain useful knowledge and skills over time. Hence, CL is the only paradigm which forces us to deal with a higher and more realistic time-scale where data (and tasks) become available only over time, where we have no access to previous perception data and where it is imperative to build on top of previously learned knowledge. On the terminology What I’ve described under the name of Continual Learning is a fast-emerging topic in AI which has often been branded as Lifelong Learning or Continuous Learning, and the terminology is not well consolidated yet. The term “Lifelong Learning” has been around for years in the AI community, but prevalently used in areas far away from the field of Deep Learning. This is why more people would go for a modern term like “Continuous” or “Continual Learning”, targeting specifically Deep Learning algorithms. I personally love (and used in my papers) “Continuous Learning”, since it focuses on and makes explicit the idea of a smooth and continuous adaptation process that never stops. The distinction from Continual is subtle but important, as beautifully put in Oxford Dictionaries: Both can mean roughly “without interruption” […] however, Continuous is much more prominent in this sense and, unlike Continual, can be used to refer to space as well as time […]. Continual, on the other hand, typically means ‘happening frequently, with intervals between’ […].
Even though current research focuses on rigid task-sequence problems where we actually stop learning at the end of each task [4], I find that Continuous Learning would be much more appropriate in the long term, with the development of algorithms which can deal with a continuous stream of perception data like the real world. On the other hand, the term “Continuous” may be too confusing in many contexts (especially in Reinforcement Learning), as it is often used as the opposite of “Discrete”. That’s why the DL community seems to be converging on the term “Continual” instead. Wait a minute, but what’s wrong with the terms “Online” and “Incremental Learning”? Like many other researchers, I see the term “Online”, opposed to “Batch Learning”, as the technical way of processing data in an algorithm rather than a paradigm of learning [5]. The term “Incremental Learning” instead, while still focused on the idea of building knowledge incrementally, doesn’t really express the idea of adaptation, which sometimes also means tempering or erasing what has been previously learned [6]. Why Continual Learning? Let’s step back for a moment and look at some definitions of intelligence given in the past by some prominent researchers in the fields of Psychology and Learning. This quote is from Lloyd Humphreys in “The construct of general intelligence”: “The resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills.” And this is Reuven Feuerstein in “Dynamic assessments of cognitive modifiability”: “The unique propensity of human beings to change or modify the structure of their cognitive functioning to adapt to the changing demands of a life situation.” Let’s have a look at the last one, from Sternberg & Salter in the “Handbook of Human Intelligence”: Goal-directed adaptive behavior. Wow! I really liked the last one, very concise and to the point. Now, can you see what connects all the definitions? It’s the idea of adaptation, the ability to mold our cognitive system to deal with ever-changing, demanding circumstances. Yet, very little of this can be found in the current Deep Learning literature, where much of researchers’ focus has been devoted to solving more and more complicated problems in narrow and closed task domains. Adaptation, while at the core of the definition of intelligence, has currently been left out of the game. In the next paragraph we will talk more about adaptation, and why it’s an essential quality of any AI system facing the real world and not unnatural benchmarking settings. The second and most significant notion behind Continual Learning is scalability. Scalability is one of the most important concepts in Computer Science and, once again, at the core of Intelligence. As we’ll see in the next paragraphs, in CL this idea forces us to think about Intelligence and develop algorithms which can already deal with real-world computational and memory constraints. If we want machines endowed with versatility and common sense, we had better make sure they are scalable in terms of intelligence and stay sustainable in terms of resources (computation/memory). On Adaptability Let’s now focus again on adaptation and why it is important for a Strong AI system. Nowadays, no matter if you are working on Unsupervised or Reinforcement Learning, on Vision or NLP, you would go for a fixed, well-confined task and pick a function which can be trained to solve it.
This is amazing if you have an industrial/routine problem which involves perception (high-dimensional) data, but it suddenly becomes less interesting when you want to tackle open-world problems where things keep changing over time. Unless you assume that the universe can be constrained to a finite number of variables you can process deterministically, there’s no escape: you need to keep adapting. CL for continual improvements The simplest application of CL is in scenarios where the data distributions stay the same but the data keeps coming. This is the classical scenario for an Incremental Learning system. You can think of a lot of applications, like Recommendation or Anomaly Detection systems, where data keeps flowing and continually learning from it is really important to refine the prediction model and, in the end, improve the service offered. If you think about it, very few problems (even very constrained ones, well defined a priori) cannot benefit from a bunch of new data which comes only later in time. CL for ever-changing scenarios However, nowadays, for most commercial DL applications it’s OK to re-train the model from scratch with the accumulated data. The game becomes really interesting when the scenario keeps changing over time. This is where Continual Learning really shines and other techniques are unable to solve the problem. Most of the time it’s very hard to collect a large and representative dataset a priori, and it can even be wrong to do so when the semantics of the data keep changing over time (i.e. we are actually solving a different task). For example, you can think of a Reinforcement Learning system in a complex environment in which the reward keeps changing based on a hidden variable we do not control (welcome to real life LoL). On Scalability Now, how can we ensure that our cognitive system scales in terms of intelligence (while processing more and more data) while keeping computation/memory fixed, or at least sustainable? The core trick is to process data once and then get rid of it (a minimal sketch of this idea follows below). As in biological systems, storing perception data (given its high dimensionality and noise rate) would be impossible to maintain and process cumulatively on a long time scale! So, you can imagine the AI system as an actual brain which filters perception data and retains only the most important information (Edge Computing people on fire here LoL). At this point, some of you may think: “Mmmh, Moore’s law is not over yet, and maybe it will never be… so, who cares about Continual Learning if computational power still doubles every year?!” Well, IDC published a white paper this year arguing that by 2025 (less than 8 years away) the data generation rate will grow from 16 ZB per year (zettabytes, or a trillion gigabytes) today, to 160 ZB, and we will be able to store only between 3% and 12% of it. You read it right. Data has to be processed on the fly or it will be lost forever, because storage tech can’t keep up with data production, which is the result of many exponentials combined together. Data creation by type each year. Since the early 2000s we’ve talked a lot about Big Data; well, let’s put that in perspective! Hence, in the end, CL is not only about drastically reducing the computational burden (to avoid retraining our model from scratch each time new data becomes available): it is the only way of learning, since most of the time we won’t even be able to store the data!
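Here is a minimal sketch of the “process once, then get rid of it” idea using scikit-learn’s partial_fit — a simple online learner standing in for the continual-learning algorithms discussed here, with a synthetic stream in place of real perception data.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # must be declared before the first partial_fit

def stream_of_batches(n_batches=100, batch_size=32, dim=20):
    """Synthetic stand-in for a stream of sensor readings, frames, logs, ..."""
    rng = np.random.default_rng(0)
    for _ in range(n_batches):
        X = rng.normal(size=(batch_size, dim))
        y = (X[:, 0] + 0.1 * rng.normal(size=batch_size) > 0).astype(int)
        yield X, y

for X, y in stream_of_batches():
    model.partial_fit(X, y, classes=classes)  # update once, then drop the batch
```

Each batch is seen exactly once and never stored, so memory stays constant no matter how long the stream runs — the property the IDC numbers above make non-negotiable.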
CL is ideal for Unsupervised Streaming Perception data With high-dimensional streaming (real-time) data (~25% of the Global Datasphere in 2025 [7]) the problem appears even clearer, since it would be simply impossible to keep the data in memory and re-train the entire model from scratch as soon as a new piece of data becomes available. CL is ideal for streaming perception data, since it embeds the idea of continually updating the model with the newly available data. Of course, in a supervised setting it could be very hard to couple real-time perception data with labels (yet feasible with temporally coherent data, as Neurala showed), but what if we are in an Unsupervised/Reinforcement setting? Well, CL becomes the perfect buddy to pair with! CL enables Multimodal-Multitask Learning Now, what if we don’t have a single stream of perception data but many of them, coming from different sensors (with different input modalities), and at the same time we want to solve multiple tasks (welcome to the real world again)? Łukasz Kaiser et al. from Google Brain this year [8] came up with a single model which has been able to learn very different tasks in very different domains and with many input modalities (with a huge static training set). However, this beautiful prediction model would be really impossible to use in a real-world context with current DL techniques, since updating it would require re-training the entire model from scratch (good luck with that) as soon as a new piece of data is available from one of the many streaming input sources. Yet, Multimodal and Multitask Learning are essential towards strong Machine Intelligence since, in my view, they are the only way of endowing machines with common sense and basic, implicit, “reasoning” skills. Let’s make an example. In the picture below you can see a very famous and funny error made by an Automatic Image Captioning system [9] based on DL techniques: One of the mistakes made by the Multimodal Recurrent Neural Network proposed in 2015 by Karpathy & Fei-Fei [4] So, in this case the Multimodal RNN, based on a training set of <image, caption> pairs, has wrongly identified the toothbrush as a baseball bat. But why do we, as humans, laugh at this error? Because it’s obvious that a child of that age would not be able to hold a baseball bat, and that, as a matter of perspective, a baseball bat can’t be that small. All these inferences, which can be seen as a simple form of reasoning, are also what we call common sense. But what if the same system, rather than just captioning images, were also trained to estimate more precisely the age of a person in the picture and the weight/size of each particular object in a scene? Well, in that case, disambiguating the toothbrush from the baseball bat would have become much easier, right? Because the co-occurrences of a very young boy holding that weight are much less frequent! Of course, for more complex tasks, multiple input modalities are also needed: like disambiguating types of birds based on visual but also auditory cues. So, in the end, Multimodal/Multitask Learning can really be what makes our AI agents smarter, but only through Continual Learning, which essentially enables asynchronous, alternating training of such tasks, updating the model only on the real-time data available from one or more streaming sources at a particular moment (a minimal sketch of this update scheme follows below)!
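Below is a minimal PyTorch sketch of that asynchronous, alternating update scheme — not the model from [8], just a shared trunk with one small head per task, updated with whichever stream happens to deliver a batch. The tasks, shapes and feature sizes are hypothetical.

```python
import torch
import torch.nn as nn

trunk = nn.Sequential(nn.Linear(64, 128), nn.ReLU())  # shared representation
heads = nn.ModuleDict({
    "caption": nn.Linear(128, 10),  # hypothetical 10-way word-class logits
    "age":     nn.Linear(128, 1),   # hypothetical age regression
})
losses = {"caption": nn.CrossEntropyLoss(), "age": nn.MSELoss()}
opt = torch.optim.Adam(list(trunk.parameters()) + list(heads.parameters()))

def update(task, x, y):
    """One continual-learning step on whichever task just produced data."""
    opt.zero_grad()
    loss = losses[task](heads[task](trunk(x)), y)
    loss.backward()
    opt.step()

# Interleaved streams: the model is updated in place, never retrained from scratch.
update("caption", torch.randn(8, 64), torch.randint(0, 10, (8,)))
update("age",     torch.randn(8, 64), torch.randn(8, 1))
```

This naive interleaving says nothing about preventing catastrophic forgetting — that is exactly the open problem the next section turns to.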
State-of-the-art & Future Challenges While it has not yet exploded, Continual Learning has been getting more and more attention in the Deep Learning community, and in the last two years very good contributions have come out ([10][11][12][13]). I plan to cover a good part of them in a series on CL here on Medium, but let’s summarize what we know so far. Pros: Counteracting Catastrophic Forgetting is possible in many ways, and not only through careful hyper-parametrization or regularization techniques. We have already proved that CL can be used on complex problems in Vision, in supervised settings and with Reinforcement Learning. Accuracy results (on some toy benchmarks) are impressively high, almost in line with other techniques which have access to previously encountered data. Cons: It’s not completely clear how to evaluate CL techniques, and a more formalized framework may be needed. We have pretty much focused on solving rigid sequences of (simple) tasks (on the scale of dozens), not on streaming perception data, nor on multimodal/multitask problems. It’s not clear how to behave after the capacity of the model saturates, nor how to selectively forget. In our latest work [14], which has recently been accepted at CoRL2017, we tackle the second problem, providing CORe50, a dataset and benchmark specifically designed for Continual Learning, where temporally coherent visual perception data becomes available in (a lot of) small batches. CORe50 official Homepage. If you are intrigued by the latest research on CL, take a look at continualai.com, the collaborative wiki and open community I’m currently maintaining, or join us on Slack! :-) Even though we are still very far from solving the problem, I’m confident that in a few years Deep Learning techniques will be able to smoothly and continually learn from streaming multimodal perception data, leading to a new generation of AI agents which will unlock thousands of new applications and services, opening the path towards Strong Machine Intelligence. I hope you enjoyed this post, and I’m looking forward to hearing from you in the comment section! Thank you for your attention, and remember to like it or share it! :-) I’d also like to give a special thanks to my fellow PhD student Francesco Gavazzo for the useful discussions and his great suggestions which led to this story! If you’d like to see more posts about my work on AI and Continual/Lifelong Deep Learning, follow me on Medium and on my social media: Linkedin, Twitter and Facebook! If you want to get in touch, or you just want to know more about me and my research, visit my website vincenzolomonaco.com or leave a comment below! :-)
Why Continual Learning is the key towards Machine Intelligence
531
why-continuous-learning-is-the-key-towards-machine-intelligence-1851cb57c308
2018-09-21
2018-09-21 21:04:09
https://medium.com/s/story/why-continuous-learning-is-the-key-towards-machine-intelligence-1851cb57c308
false
2,644
An Open Community and Collaborative Wiki on Continual/Lifelong Deep Learning | Join us @ ContinualAI.org or on slack: https://continualai.herokuapp.com/
null
ContinualAI
null
Continual AI
contact@continualai.org
continual-ai
CONTINUAL LEARNING,DEEP LEARNING,ARTIFICIAL INTELLIGENCE,COMMUNITY,RESEARCH
ContinualAI
Machine Learning
machine-learning
Machine Learning
51,320
Vincenzo Lomonaco
AI and Deep Learning PhD Student @ University of Bologna | Working on Continual Learning with Deep Architectures | http://vincenzolomonaco.com
bf2760649b14
vlomonaco
178
26
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-27
2017-09-27 20:30:11
2017-10-10
2017-10-10 16:10:14
3
false
en
2018-04-13
2018-04-13 16:28:28
2
1852d0e3bcc5
7.825472
6
0
0
Todd Sieling is the co-founder of Denim & Steel, an interactive technology studio on Main St. Vancouver. He co-founded the agency with…
5
Picture from Denim & Steel. The Future of Technology — A Conversation with Todd Sieling Todd Sieling is the co-founder of Denim & Steel, an interactive technology studio on Main St. Vancouver. He co-founded the agency with friend Tylor Sherman in 2011, after years of experience as a consultant. Todd’s commentary on technology has always been very interesting to me. It challenges the dominant narrative and brings to light aspects of our experience with technology that are too often ignored or forgotten by both the tech community and the multitudes of tech enthusiasts who follow its evolution. Here’s our short conversation on the future of tech. Maryam Khezrzadeh: Good morning Todd! Thanks for sitting here with me today to talk about the future of tech. So before we do that, let’s look at what the tech industry looks like today. It seems to me that like any other industry, the tech industry has a certain landscape. We hear a lot of talk about Virtual Reality, AI, Internet of Things, Automation. So my question for you is: what does tech look like to you? How do you see that landscape? Todd Sieling: That’s a very good question. I think that technology as a topic is always situating itself as talking about the future and talking about what’s coming soon, what’s going to happen soon, and what things will be possible that might not be possible right now. But it also is a business and a real thing in the present, and so is always following its ordinary course of things. And right now we have a lot of talk about, like you say, AI, VR, Internet of Things. Those are the future things that we might have some small pieces of today. But we are also starting to talk a lot more about the effects of technology. I think that technology is becoming less of a topic about itself — which is pretty much the only thing that it talks about — until you start thinking about things like employment and sustainability and economics and markets and so forth. I think that there are large changes going on in the topic of technology itself; and that its walls, which separate it from the rest of the world, are dissolving very rapidly; and people are starting to look at it more as something that is happening within the world, and has to be dealt with by the world, rather than a thing that functions on its own. In a lot of ways it’s the same thing that it’s always been — at least for the past 150 years or so — which is this idea of tremendous possibility and change and innovation and so forth. But there is also a shift happening in the topic, one affecting a lot of people who have enjoyed — I don’t want to say worship — but the admiration of people; like if you say that you’re in technology, they admire you: “you must be smart!” “that must be exciting!” Now the sentiment is shifting, and it’s more one of: “ah that’s interesting! but I’m a little suspicious”, or “I’m not as excited about it, I’m a little bit more worried”. So I think that there’s going to have to be some reconciliation between the way that we are used to thinking about technology and accepting the changing reality of what it actually is and how we administer that. MK: Like a little healthy dose of pessimism towards technology.
TS: I think so, but because there has been so little pessimism, there is going to be an overcompensation for a little while, where there’s probably going to be a backlash. MK: So if everything goes right, then what is the possibility? What is the future of technology? TS: Well, what is right depends on who you are and what you want to see. Everybody who makes technology and uses technology has some kind of motivation or agenda that they are looking to fulfil. Setting that aside, I’d rather just say what I think the future might be. And by future I mean the near future. I think that if you try to predict the far future, it’s just useless. It really is. You can talk about what might happen, but to say “this is the way things are going”, there is an authority there that is undeserved; and I find that particularly funny on one hand — because it’s almost certain to be wrong — but I also find it very offensive; in that it’s like saying, I demand that there be no other possibility other than the one that I imagine right now, which is probably one of the most existentially show-business tech things that a person can do. So in the near future, though, I think that the biggest topic around the discussion of technology is going to be the ideas of truth and trust. We are coming out of, or are right in the middle of, an epidemic, or a crisis, of what is true: what can we actually believe, what should we believe? Who is telling the truth and how do we know? The role that technology is playing in the distortion of how we accept things as true or not — in the discourse that we have — is both acting as the facilitator, and it’s being used against the idea of actual discourse or actual truth making. I think there is going to be a very serious reckoning with that kind of change. We are already seeing the prospect of being able to simulate someone on video and sound saying something that they did not say at all. You know, what is the prospect of having a public or a society when that kind of mechanism is in play, and it doesn’t have any kind of control over it whatsoever? So I think that how we determine what is true and what is not, rather than people just screaming at each other, and who screams the most, who has the most bots, those things are unsustainable as ways of establishing truth. That’s just a way of forcing it to happen that won’t last. So I don’t know what the outcome is there. In 2017, Lyrebird, a Canadian tech startup, announced they have achieved the technology to generate digital voices that could sound like anyone. — Photo from Lyrebird Youtube channel The other thing is about trust, and these kind of go hand in hand. How do you forge trust across a networked commons that covers the whole globe? That has many different agendas and has many different contexts and cultures and motivations and constraints. I think that trust always has to be there; like I said, they go hand in hand. But the way that we make trust happen between different parties. There is a lot of talk right now about bitcoin and extended blockchain technologies and how these can be used as a way of creating trust. I find these quite interesting. I think that there’s something really interesting there. Just think about the way that we reconcile remoteness, the remoteness that technology makes possible, with the actual idea of trust.
You and I sitting here: you know, I trust that the people walking around here are not going to attack us or try to steal my bike; my back is turned to my bike but my bike is not locked up. I trust that people walking by won’t try and take it. In a different situation, if there were some more distance, I wouldn’t trust it as much. So I would lock it up, and there I’m trusting the lock to kind of mediate between the way that I’m able to leave things in the world, and my feeling of comfort in doing so. So in a networked world, or a digitally immersed world, what mechanisms do we use to create trust? And I think that a lot of those that are being overlooked right now, but will become very necessary, are the ideas of regulation and policy. I think that you can’t really have a society without regulation and policy. A lot of people think that those things give them away, but the moment that you change the word regulation into protection, suddenly things are very different. So the restaurant that you go into — I always like restaurants because they are one of the best examples of social trust — you go into a restaurant, you sit down in a place that you might have never been, with people you have never met and you can’t even see them working, you just take what they put in front of you, and you put it in your mouth, and you just, like, eat it. And it’s fine! And in most cases it works out just fine, unless there is food poisoning or something like that. But those things are very very very rare examples compared to the number of meals that are served. But that would not be the case unless you had both a basis of trust, that as people we are going to behave like this, but also something governing the way that that trust is formed. Todd Sieling and me in Riley Park, Vancouver. MK: Yeah, it sounds very complicated. Like we have a system — a model of trust — in our brains which we are trying to decode and understand so that we can incorporate it into technology. TS: That’s a really interesting way to put it, to decode it; because I think we often just enact it. We do things that indicate trust, or we receive actions or messages that indicate trust or build trust, and then we form a kind of pattern that we use to make predictions, and the reliability of those predictions is what we call trust. And so for the idea of placing that within technology, we have to be able to understand how it works with us naturally in order to teach machines how to do it. If you design a machine not to trust anybody, then it will act as if it doesn’t trust anybody. If you design it where it requires trust, then it has to be able to create trust with people. But the point is that we have to actually say to the design process, the machine is going to do it this way. A lot of people really think that machines are neutral. That they are just like, “oh, whatever you wanna use me for is fine”, and we are realizing more and more that that’s not the case. So I think that is going to become a major factor in the design of technology and how we roll it out into the world. And how we adjust to things when it goes wrong, or when it goes right too.
The Future of Technology — A Conversation with Todd Sieling
12
the-future-of-technology-a-conversation-with-todd-sieling-1852d0e3bcc5
2018-04-13
2018-04-13 16:28:30
https://medium.com/s/story/the-future-of-technology-a-conversation-with-todd-sieling-1852d0e3bcc5
false
1,928
null
null
null
null
null
null
null
null
null
Future Of Technology
future-of-technology
Future Of Technology
205
Maryam Khezrzadeh
I build software, I write data stories and in my mind, philosophy, religion and science do not contradict. Ask me why; I love conversations. http://mkhezr.com/
d88cf97bab22
mkhezr
122
55
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-28
2018-02-28 12:08:46
2018-02-28
2018-02-28 20:21:00
0
true
en
2018-02-28
2018-02-28 20:21:00
0
18561755a83f
4.871698
0
0
0
— Why, why do I need to change myself instead of expecting the others to change? I complained as a spoiled child. - Because it is easier to…
5
Brain uploading — Why, why do I need to change myself instead of expecting the others to change? I complained as a spoiled child. - Because it is easier to wear a pair of comfortable slippers than to cover the whole earth’s surface with a fluffy carpet, my Master answered. (Unknown) * This blog is becoming closer and closer to a scientific and future-technology blog. That’s fine; maybe tomorrow I will write a short story to balance it a bit. Today’s subject will be brain uploading. Whole brain emulation (WBE) or mind uploading (sometimes called “mind copying” or “mind transfer”) is the hypothetical process of copying mental content (including long-term memory and “self”) from a particular brain substrate and copying it to a computational device, such as a digital, analog, quantum-based or software-based artificial neural network. The computational device could then run a simulation model of the brain’s information processing, such that it responds in essentially the same way as the original brain (i.e., indistinguishable from the brain for all relevant purposes) and experiences having a conscious mind. Mind uploading may potentially be accomplished by either of two methods: Copy-and-Transfer or Gradual Replacement of neurons. In the case of the former method, mind uploading would be achieved by scanning and mapping the salient features of a biological brain, and then by copying, transferring, and storing that information state into a computer system or another computational device. The simulated mind could be within a virtual reality or simulated world, supported by an anatomic 3D body simulation model. Alternatively, the simulated mind could reside in a computer that’s inside (or connected to) a humanoid robot or a biological body. How science fiction is this! But if you really want to know, let’s talk about this in sequential steps. First, we are going to need a data capture device. We are almost there. Let me tell you another story. Everything felt possible at Transhuman Visions 2014, a conference in February 2014 billed as a forum for visionaries to “describe our fast-approaching, brilliant, and bizarre future.” Inside an old waterfront military depot in San Francisco’s Fort Mason Center, young entrepreneurs hawked experimental smart drugs and coffee made with a special kind of butter they said provided cognitive enhancements. None of this seemed particularly ambitious, however, compared with the claim soon to follow. In the back of the audience, carefully reviewing his notes, sat Randal Koene, a bespectacled neuroscientist wearing black cargo pants, a black T-shirt showing a brain on a laptop screen, and a pair of black, shiny boots. Koene had come to explain to the assembled crowd how to live forever. “As a species, we really only inhabit a small sliver of time and space,” Koene said when he took the stage. “We want a species that can be effective and influential and creative in a much larger sphere.” Koene’s solution was straightforward: He planned to upload his brain to a computer. By mapping the brain, reducing its activity to computations, and reproducing those computations in code, Koene argued, humans could live indefinitely, emulated by silicon. “When I say emulation, you should think of it, for example, in the same sense as emulating a Macintosh on a PC,” he said. “It’s kind of like platform-independent code.” The audience sat silent, possibly awed, possibly confused, as Koene led them through a complex tour of recent advances in neuroscience supplemented with charts and graphs.
Koene has always had a complicated relationship with transhumanists, who likewise believe in elevating humanity to another plane. A Dutch-born neuroscientist and neuro-engineer, he has spent decades collecting the credentials necessary to bring his fringe ideas in line with mainstream science. Now, that science is coming to him. Researchers around the globe have made deciphering the brain a central objective. In 2013, both the U.S. and the EU announced initiatives that promise to accelerate brain science in much the same way that the Human Genome Project advanced genomics. The minutiae may have been lost on the crowd, but as Koene departed the stage, the significance of what they had just witnessed was not: The knowledge necessary to achieve what Koene calls "substrate-independent minds" seems tantalizingly within reach. The concept of brain emulation has a long, colorful history in science fiction, but it's also deeply rooted in computer science. An entire subfield known as neural networking is based on the physical architecture and biological rules that underpin neuroscience. Roughly 85 billion individual neurons make up the human brain, each one connected to as many as 10,000 others via branches called axons and dendrites. Every time a neuron fires, an electrochemical signal jumps from the axon of one neuron to the dendrite of another, across a synapse between them. It's the sum of those signals that encodes information and enables the brain to process input, form associations, and execute commands. Many neuroscientists believe the essence of who we are — our memories, emotions, personalities, predilections, even our consciousness — lies in those patterns. The next problem to solve is that even if we get the best data capture devices, another difficulty arises: processing. We do not have enough computing power and storage space. Some estimate our brain's capacity at 2.5 petabytes (1 petabyte = 1,000 terabytes). Some say it is less, some think of much bigger numbers, but the fact is that we do not know for sure (a back-of-the-envelope version of this arithmetic is sketched at the end of this post). Anyone watching Koene at events like these has witnessed his job at its best. "This is what I do," he says. "You have got tons of labs and researchers who are motivated by their own personal interests." The trick, he says, is to identify the goals that could benefit brain uploading and try to push them forward — whether the researchers have asked for the help or not. Certainly, it seems, many scientists have proven willing to consult and even collaborate with Koene. That was clear last spring, when scientists from institutions as varied as MIT, Harvard University, Duke University, and the University of Southern California descended on New York City's Lincoln Center to speak at a two-day congress that Koene organized with the Russian mogul Itskov. Called Global Future 2045, the conference's objective was to explore the requirements and implications of transferring minds into virtual bodies by the year 2045. We are searching for immortality through technology. But is this the real answer? Is this the way? How do we know? These are some of the questions we need to answer. ** I tried to do the entire 8-hour, 8-module Japanese course on one of my days off. It was a success and a failure at the same time. I could do only 3 modules of approximately one hour each, with 15–20 minute breaks, before I reached my limit. What did I learn? First, it is still easy for me to learn, but eventually my focus disappears and I reach a limit (how I can push this limit further will be one of my next tasks).
Second, lately I am easily distracted; maybe it is time to reassess my yoga and meditation techniques (in other words, to start doing them again, as I was a bit lazy and did not do them much in the last months). Third, I observed that I was slacking in almost every area of my life in the last months, so in a way this was like a wake-up signal, and that is good. I was feeling like I am not at my peak, so it is time to go back there, where everything is sunny and bright (easy to say, but hard to achieve, as it is raining a lot lately. But it can't rain all the time, right Brandon?)
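A back-of-the-envelope sketch in Python of where petabyte-scale capacity estimates like the 2.5 PB figure above can come from. The neuron and synapse counts are the ones quoted in this post; the storage cost per synapse is a purely hypothetical assumption, since nobody knows the real figure:

neurons = 85e9                # ~85 billion neurons, the figure quoted above
synapses_per_neuron = 10_000  # upper end of the connectivity quoted above
bytes_per_synapse = 3         # hypothetical assumption; the real cost is unknown

total_synapses = neurons * synapses_per_neuron   # 8.5e14 connections
total_bytes = total_synapses * bytes_per_synapse
print(f"{total_bytes / 1e15:.2f} petabytes")     # ~2.55 PB, the same order of
                                                 # magnitude as the 2.5 PB estimate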
Brain uploading
0
brain-uploading-18561755a83f
2018-02-28
2018-02-28 20:21:00
https://medium.com/s/story/brain-uploading-18561755a83f
false
1,291
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Mihalache Catalin (aspiring polymath)
All my books are self-published on Amazon. I have written all my life, mostly poetry and short fiction. I care about me. I care about others. I care as a job.
c5653346b76d
mihalachecatalin
29
198
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-10
2018-09-10 18:25:01
2018-09-18
2018-09-18 19:31:30
4
false
en
2018-09-18
2018-09-18 19:31:30
3
1856ac0b2cab
6.673585
1
1
0
In the past few months, I took a class in Data Science through General Assembly, a coding academy. We primarily coded in Python, with…
5
Applying Data Science to Sports Betting In the past few months, I took a class in Data Science through General Assembly, a coding academy. We primarily coded in Python, with extensive use of Python libraries designed for mathematical computation & statistical analysis (Numpy, MatPlotLib, Pandas, and others). I wanted to write this article to detail the motivations, methods, & findings of a data science project I created as part of this class, in a way that was more accessible to a non-technical audience. Introduction For our final project, we were directed to use the tools we developed over the course to come up with a problem which could be addressed using a machine learning model. I will go through my workflow throughout this project, starting with stating the problem I set out to address, looking at my method of acquiring the data, moving through how I structured my dataset, and ending with an explanation of the models I created and an interpretation of their results. I'll conclude with some lessons I learned in the process. The Problem I'm a big fan of NBA basketball. The idea for this project occurred to me when trying to think of a way to use basketball statistics in a machine-learning context. I initially thought of using box score statistics from previous games to make a prediction as to whether a particular team would win or lose. I am also a big fan of sports betting (in theory, less so in practice), so I thought it would be more interesting to try to make a prediction using box score statistics, with the idea that I could use that prediction to inform a bet. My decision for the ultimate direction of the project was to use NBA box score statistics from previous games to train a logistic regression model. This model would return a prediction as to whether a game's total score would be Over | Under a point total set by a bookmaker. The hope was I could use the predictions over the course of an NBA season to inform a long-term betting strategy. Context For every NBA game, a sports book will set a point total, which is the book's prediction of the final score. The sports book challenges bettors to bet either an OVER, if the bettor believes the total final score of the game will be higher than the total set by the book, or UNDER, if the bettor believes the total final score of the game will be lower than the total set by the book. If the game's total final score is equal to the point total set by the book, that's called a PUSH, and the book returns the bet. (For a more detailed explanation of the sports betting context behind my project, please see my Technical Report) Acquiring Data After figuring out the outline of my project, my next step was finding a reliable source of data I could work from. After some research, I found two websites that could give me the data I needed: basketball-reference.com, for data on NBA games and statistics, and sportsbookreview.com, for data on the corresponding betting lines. To get the information I needed, I scraped the above websites for box scores for every game for the past 5 NBA seasons, as well as the corresponding point totals set for every game in that period. Structuring the Data My goal was to return an Over | Under prediction for an NBA game, based on the box score statistics of the previous games. To do this, I structured my dataset so that each individual game was represented by the box scores of the 3 prior games for each team, for a total of 6 prior games.
I made a small example below for reference:
A: Team 1
B: Team 2
O: Opponent
#: represents the number of games back from the current game
Current Game: A vs B
Represented As: 1 A vs O, 2 A vs O, 3 A vs O, 1 B vs O, 2 B vs O, 3 B vs O
By setting my data up in this format, I hoped to create a model which would pick up on a relationship between the prior 3 games of box score statistics and the total score of an NBA game. In addition, I thought there may be bias on the part of the bookmakers in terms of adjusting the point totals based on the performance of the teams' previous games; bias that my model could detect. Modeling The process of taking this dataset and drawing insight from it involves creating a model. (See Glossary) The specifics of the model and how I created it are fairly technical, so I'm going to refer people who are interested in those details to the Modeling section of my Technical Report. At the end of the process I had created 2 models, one which returned a prediction on whether an NBA game would be OVER a set line or not, and one which returned a prediction on whether an NBA game would be UNDER a set line or not. Results With the predictions from my models, I could analyze how often my models returned correct predictions. My models, making a prediction on all the NBA games in a season, were not significantly better than average at determining whether a game would be Over | Under. Therefore, I set up what I termed a confidence threshold: my models returned probabilities for the chance of a game being Over | Under, and if the probability of a prediction was above the threshold I set, the prediction was "confident." Here is a breakdown of my results, with a confidence threshold of 62%:
OVER MODEL Predicted confidently: 88 games Predicted correctly: 52 games Accuracy: 59.09%
UNDER MODEL Predicted confidently: 96 games Predicted correctly: 52 games Accuracy: 54.16%
I set the threshold at 62% because that point optimized both prediction accuracy and the number of games that were predicted "confidently." My model did better at accurately predicting games that were Over as opposed to those that were Under for the 2018 NBA season. I have a guess for this phenomenon: basketball point totals have been increasing over seasons. Looking at the past 5 years of NBA games, the average score has increased by ~5 points per team. Source: basketball-reference.com My Over model may be picking up on some bias present in the way bookmakers are setting the lines. Perhaps books are seeing teams score high numbers of points and are adjusting the point totals downward to reflect a belief that the teams will not score as highly in future games. Simulation I created a basic simulation, combining the predictions for each game in the 2018 NBA season. Starting with $10,000, and making a bet each time my models predicted a confident bet (as above, set at the threshold of > 62% probability), I show how my model performs when informing a betting strategy over the course of the season (a minimal sketch of this pipeline appears after the glossary below): The final total for the betting strategy informed by my model's predictions was $11,880, with an accuracy of 56.52% on the bets that were predicted confidently. Conclusions Working with time series data was more difficult than I expected. Because I chose to represent an NBA game as 6 different sets of statistics, I had to coordinate a large amount of data along a time dimension. One of our instructors said 90% of the work in data science is cleaning and arranging data, and that was certainly true for this project.
Although my models were successful in returning enough accurate predictions to return a profit, I would not use these models alone to inform a long-term betting strategy; at most, I would use the OVER model I created in conjunction with other strategies. My reluctance stems from the limited number of games the models were able to predict confidently. Also, the fact that I had to manually tune the threshold at which the models were maximally effective is a caveat to my results. I had a good time working with sports betting data because the results were visible, and I could show a clear benefit from the work I did. Glossary I recognize the concepts present in the Data Science portions of my project can be difficult to grasp, so I'm breaking down some of these concepts and how they're used in my project here. Data Science — The interdisciplinary practice of using scientific methods, algorithms, and systems to draw insights from data. Combines elements of computer programming, statistics, mathematics, and machine learning, as well as domain-specific knowledge. Web Scraping — The process of writing code to pull data off of a website. For a look at the Python code I used to scrape the data, see my Web Scraping Notebook. Over | Under — The bar between the words over and under can be read as "Over or Under". Model — A model is a system for making a determination about the value of an unknowable data point, using information you have available about the characteristics of other data points. Logistic Regression — The process of determining a relationship between a set of data points, where each data point falls into one of two categories. This relationship can then be used to make predictions on the category of additional data points. Can be thought of as predicting classification into one of two categories (in this case, classifying each game as either OVER or UNDER)
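To make the approach concrete, here is a minimal sketch (not the author's actual code, which lives in the Technical Report) of the pipeline described above: fit a logistic regression on prior-game features, keep only predictions above the 62% confidence threshold, and simulate a flat-stake bankroll. The random features, outcomes, stake size, and assumed -110 odds are all hypothetical placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))     # stand-in for features from the 6 prior games
y = rng.integers(0, 2, size=500)  # 1 = game went OVER the line, 0 = it did not

model = LogisticRegression().fit(X[:400], y[:400])  # train on past games
probs = model.predict_proba(X[400:])[:, 1]          # P(OVER) for held-out games

THRESHOLD = 0.62                    # the confidence threshold from the article
confident = probs > THRESHOLD       # bet only on "confident" OVER predictions

bankroll, stake = 10_000.0, 100.0   # hypothetical starting bankroll and flat stake
for outcome in y[400:][confident]:
    if outcome == 1:                             # our OVER bet won
        bankroll += stake * (100 / 110)          # assume standard -110 odds
    else:                                        # our OVER bet lost
        bankroll -= stake
print(f"{confident.sum()} confident bets, final bankroll: ${bankroll:,.2f}")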
Applying Data Science to Sports Betting
20
applying-data-science-to-sports-betting-1856ac0b2cab
2018-09-18
2018-09-18 19:31:30
https://medium.com/s/story/applying-data-science-to-sports-betting-1856ac0b2cab
false
1,583
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Jordan Bailey
null
dc87dc38d17c
jxbailey23
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-14
2018-09-14 16:41:35
2018-09-14
2018-09-14 16:43:17
2
false
en
2018-09-14
2018-09-14 16:47:54
0
185722fbf3a6
1.907862
0
0
0
The picture above contains two key features:
3
ML/ Analytics application as Service in Software Testing The picture above contains two key features: > Something is being searched/investigated > A word cloud with 'text' based jargon from the testing world I found it to be one of the best pictures to summarize the application of data analytics/ML in software testing. A word cloud is one of the popular uses of text analytics; text data (in the form of requirements/test cases) is the most suitable 'data' in testing for such analysis; and an investigator/tester is probably looking for bugs through it. Many existing commercial options in the market are generic PRODUCTS, which claim to aid software testing with the help of AI/ML and are seen as some advanced type of automation. Their success or failure as a whole is still a moot point… So here, with my humble knowledge of testing and data analytics/ML, I would like to explore one possible related application, as a Service instead. A few key aspects of software testing projects/teams: The most common data type available is in 'text' form, namely requirements (User Stories) and Test Conditions/Cases. For test automation, the best candidate is something being repeated, like regression scenarios. Similarly, a candidate for data science is something intuitively expected to have inherent patterns in its data. For robotics, it is a task which can be put in a workflow. Most IT development projects are enhancements using a generic framework for a certain business usage, rather than coded from scratch, like CRM/ERP/HRMS/SCM based solutions. Hence, there are certain aspects broadly common or repeated across implementations, at least for a particular client + domain. Hence the problem statement here becomes: using historical data of requirements + test cases, create a framework to suggest (prescriptive analytics) a set of generic, reusable test cases for the requirements of a new project. The core underlying assumption is that each test case or each requirement can be identified by a group of words occurring together. Say, for example, 'bonus offer setup' for a customer loyalty program system. So eventually I am proposing a concept here leveraging text analytics for semi-automatic generation of test cases (a minimal sketch follows at the end of this note): Hence this may possibly provide an edge to testing teams, in terms of reduced test creation time and better coverage. Please share your thoughts :) #Analytics #Testing #MachineLearning #TextMining #NLP Disclaimer: This is a theoretical concept note only. Article penned based on my individual understanding of text analytics applications and testing in general.
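As a minimal sketch of how such a framework could look (assuming scikit-learn; the requirement texts and test case IDs below are hypothetical placeholders), one could rank historical requirements by TF-IDF similarity to a new requirement and surface their linked test cases:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical historical data: requirement text -> linked reusable test cases
history = [
    ("bonus offer setup for customer loyalty program", ["TC-101", "TC-102"]),
    ("customer enrollment in loyalty program", ["TC-201"]),
    ("monthly invoice generation and dispatch", ["TC-301", "TC-302"]),
]

corpus = [requirement for requirement, _ in history]
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(corpus)

def suggest_test_cases(new_requirement, top_n=2):
    """Return test cases linked to the most similar historical requirements."""
    query = vectorizer.transform([new_requirement])
    scores = cosine_similarity(query, matrix).ravel()
    suggestions = []
    for i in scores.argsort()[::-1][:top_n]:
        if scores[i] > 0:  # ignore requirements with no word overlap
            suggestions.extend(history[i][1])
    return suggestions

print(suggest_test_cases("set up a new bonus offer"))  # e.g. ['TC-101', 'TC-102']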
ML/ Analytics application as Service in Software Testing
0
ml-analytics-application-as-service-in-software-testing-185722fbf3a6
2018-09-14
2018-09-14 16:47:54
https://medium.com/s/story/ml-analytics-application-as-service-in-software-testing-185722fbf3a6
false
404
null
null
null
null
null
null
null
null
null
Software Testing
software-testing
Software Testing
4,905
piyush gupta
null
c11f08b6eb6f
piyushgupta.jaipur
0
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-05
2017-12-05 09:34:54
2017-12-05
2017-12-05 09:40:48
1
false
en
2017-12-05
2017-12-05 09:45:54
0
1858d4797004
1.509434
3
0
0
Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed.
3
Machine learning Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed. Arthur Samuel, an American pioneer in the field of computer gaming and artificial intelligence, coined the term "Machine Learning" in 1959 while at IBM. Evolved from the study of pattern recognition and computational learning theory in artificial intelligence, machine learning explores the study and construction of algorithms that can learn from and make predictions on data — such algorithms overcome following strictly static program instructions by making data-driven predictions or decisions, through building a model from sample inputs. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms with good performance is difficult or infeasible; example applications include email filtering, detection of network intruders or malicious insiders working towards a data breach, optical character recognition (OCR), learning to rank, and computer vision. Machine learning is closely related to (and often overlaps with) computational statistics, which also focuses on prediction-making through the use of computers. It has strong ties to mathematical optimization, which delivers methods, theory and application domains to the field. Machine learning is sometimes conflated with data mining, where the latter subfield focuses more on exploratory data analysis and is known as unsupervised learning. Machine learning can also be unsupervised and be used to learn and establish baseline behavioural profiles for various entities and then used to find meaningful anomalies. Within the field of data analytics, machine learning is a method used to devise complex models and algorithms that lend themselves to prediction; in commercial use, this is known as predictive analytics. These analytical models allow researchers, data scientists, engineers, and analysts to "produce reliable, repeatable decisions and results" and uncover "hidden insights" through learning from historical relationships and trends in the data. According to the Gartner hype cycle of 2016, machine learning is at its peak of inflated expectations. Effective machine learning is difficult because finding patterns is hard and often not enough training data is available; as a result, machine-learning programs often fail to deliver.
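As a minimal illustration of "learning without being explicitly programmed" (scikit-learn is assumed here purely as an example library; the article names no specific tools), a model can infer the rule y = 2x from sample inputs instead of having that rule hard-coded:

from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4]]  # sample inputs
y = [2, 4, 6, 8]          # observed outputs following the unstated rule y = 2x

model = LinearRegression()
model.fit(X, y)                # build a model from sample inputs
print(model.predict([[10]]))   # ~[20.], a data-driven prediction, not a coded rule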
Machine learning
3
machine-learning-1858d4797004
2018-05-08
2018-05-08 08:57:10
https://medium.com/s/story/machine-learning-1858d4797004
false
347
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
abhi ramani
null
4861fd5f8942
abhiramani3
4
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-12
2018-02-12 18:22:55
2018-02-13
2018-02-13 19:26:00
1
false
en
2018-02-13
2018-02-13 22:35:37
7
185bcdae01ac
2.709434
4
0
0
What do you think about a truck the height of a three-story building being driven by a person a thousand kilometers from…
5
Industry 4.0: autonomous trucks in the Mining Projects What do you think about a truck the height of a three-story building being driven by a person a thousand kilometers away? Off-road trucks represent one of the major costs in mining operations. "Truckless projects" are becoming a reality, but we will still see the big trucks operating for a long time, with no human operators! "Since the 1930s, science fiction writers dreamed of a future with self-driving cars, and building them has been a challenge for the AI community since the 1960s. By the 2000s, the dream of autonomous vehicles became a reality in the sea and sky, and even on Mars, but self-driving cars existed only as research prototypes in labs." — says the Stanford Report 2016. The future of mining projects using high technology is now. If you notice what is happening with the use of innovation and new systems by the principal players, like Caterpillar and Komatsu, it's amazing! Technologies like Artificial Intelligence (AI), Machine Learning (ML), Big Data analysis, and others are revolutionizing this sector of the economy around the world (for example in South Africa, Australia, and Brazil, all important mining markets). It is a consequence of the Fourth Industrial Revolution (or Industry 4.0) predicted by technology analysts. The Cat MineStar Command autonomous hauler control system is currently only offered with the Cat 793F, but more Cat units, including the 797F, as well as the Komatsu 930E, should soon be able to use the system. Source: ConstructionWeekonline.com. Driving an autonomous truck is not quite like using an autonomous car on a public road. The major variables to control are: sensors, safety of the route, intrusion of strange and unexpected events or objects (a person, an animal, another vehicle, etc.), telemetry, weather, and the condition of the road, to note some requirements. But for a big truck like a mining machine you have another (and important) variable: the weight of the load! We are talking about 150 or more tons of iron. In the video below it is possible to understand how the system works. Note that a strong telecommunications infrastructure is necessary to support the commands and telemetry. In Brazil, at the Brucutu site (a Vale iron ore mining project), Caterpillar is testing the autonomous trucks. The idea is that, in the future, the next equipment purchases will be for autonomous trucks only. It will be necessary to train the maintenance team for the new technology and to prepare control and telemetry centers. When you analyze the advantages of using this system, it is possible to make a list of some important points: a) Fewer accidents: in this job, collisions between mining trucks and other vehicles are common, and operators who fall asleep cause big crashes. b) Lower consumption of fuel and tires: if you set a route, the autonomous AI system is smarter about respecting and following a routine, and consequently it will complete more work per day. c) More data for planning the pits and excavation: Big Data analysis provides information for the best decisions and for managing the mining site using good routes. Finally, this is the start of new opportunities in managing sites and maintaining mechanical equipment and complete electronic systems. The transition will not be easy, but it is necessary to reach lower costs in mining operations.
References: "Caterpillar to develop autonomy for Komatsu trucks," John Bambridge, Jan 31, 2017, www.constructionweekonline.com; Cat MineStar Command for hauling, https://www.cat.com/en_US/by-industry/mining/surface-mining/surface-technology/command/command-for-hauling.html; Komatsu Autonomous Haul System (AHS), www.komatsu.com.au; "Self-Driving Mining Trucks," www.asme.org; Peter Stone, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller, "Artificial Intelligence and Life in 2030," One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel, Stanford University, Stanford, CA, September 2016, http://ai100.stanford.edu/2016-report (accessed September 6, 2016); https://im-mining.com/2014/12/11/sandvik-fully-mobile-ipcc-rigs-in-place-at-s11d/
Industry 4.0: autonomous trucks in the Mining Projects
4
industry-4-0-autonomous-trucks-in-the-mining-projects-185bcdae01ac
2018-03-21
2018-03-21 06:30:21
https://medium.com/s/story/industry-4-0-autonomous-trucks-in-the-mining-projects-185bcdae01ac
false
665
null
null
null
null
null
null
null
null
null
Autonomous Trucks
autonomous-trucks
Autonomous Trucks
2
Italo Coutinho
Getting results on challenging projects since 1994. And I do not intend to stop so soon! Engineering Teacher and Consultant for Industrial Projects.
99de39ad3c90
italonaweb
90
93
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-13
2017-09-13 00:24:18
2017-09-13
2017-09-13 00:59:26
1
false
en
2017-09-14
2017-09-14 23:21:44
5
185bee8c6daa
6.064151
676
24
0
Artificial intelligence, AI, has grabbed headlines, hype, and even consternation at the beast we are unleashing. Every powerful technology…
5
AI: Scary for the Right Reasons Artificial intelligence, AI, has grabbed headlines, hype, and even consternation at the beast we are unleashing. Every powerful technology can be used for good and bad, be it nuclear or biotechnology, and the same is true for AI. While much of the public discourse from the likes of Elon Musk and Stephen Hawking reflects on sci-fi-like dystopian visions of overlord AIs gone wrong (a scenario certainly worth discussing), there is a much more immediate threat when it comes to AI. Long before AI goes uncontrollable or takes over jobs, there lurks a much larger danger: AI in the hands of governments and/or bad actors used to push self-interested agendas against the greater good. For background, as a technology optimist and unapologetic supporter of further development, in 2014 I wrote about the massive dislocation in society AI may cause, and while our economic metrics like GDP, growth, and productivity may look awesome as a result, it may worsen the less visible but, in my opinion, far more critical metrics around income disparity and social mobility. More importantly, I argued why this time might be different than the usual economists' refrain that productivity tools always increase employment. With AI, the vast majority of current jobs may be dislocated regardless of skill or education level. In the previous industrial revolution, we saw this in agriculture between 1900–2000, when it went from a majority of US employment to less than 2%, and in industrial jobs, which today are under 20% of US employment. This time, the displacement may not happen to just lower-skill jobs: truck drivers, farm workers and restaurant food preparers may be less at risk than radiologists and oncologists. If skilled jobs like doctors and mechanical engineers are displaced, education may not be a solution for employment growth (it is good for many other reasons), as is often proposed by simplistic economists who extrapolate the past without a causal understanding of the reasons why. In this revolution, machines will be able to duplicate the tasks they previously could not: those that require intellectual reasoning and fine-grained motor skills. Because of this, it is possible that emotional labor will remain the last bastion of skills that machines cannot replicate at a human level, which is one of the reasons I have argued that medical schools should transition to emphasizing and teaching interpersonal and emotional skills instead of Hippocratic reasoning. We worry about nuclear war, as we should, but we have an economic war going on between nations that is more threatening. Pundits like Goldman Sachs advocate internationalism because it serves their interests well and is the right thing if played fairly by all. And though the wrong answer, in my view, is economic nationalism, the right answer goes far beyond just a level playing field. While Trump-mania may somewhat correctly stem from feelings of unlevel playing fields in China, the problem is likely to get exponentially worse when AI is a factor in these economic wars. This problem of economic wars will likely get exponentially amplified by AI. The capability to wage this economic war is very unequal among nation states like China, the USA, Brazil, Rwanda or Jordan, based on who has the capital and the drive to invest in this technology.
At its mildest implications, left to its own devices, AI technology will further concentrate global wealth in a few nations and "cause" the need for very different international approaches to development, wealth and disparity. I wrote about the need to address this issue of disparity, especially since this transformation will result in enormous profits for the companies that develop AI, and labor will be devalued relative to capital. Fortunately, with this great abundance, we will have the means to address disparity and other social issues. Unfortunately, we will not be able to address every social issue, like human motivation, that will surely result. Capitalism is by permission of democracy, and democracy should have the tools to correct for disparity. Watch out Tea Party, you haven't seen the developing hurricane heading your way. I suspect this AI-driven income disparity effect has a decade or more to become material, giving us time to prepare for it. So while this necessary dialogue has begun and led to the ideation of solutions such as robot taxes and universal basic income, which may become valuable tools, disparity is far from the worst problem AI might cause, and we need to discuss these more immediate threats. In the last year alone, the world has seen some of the underpinnings of modern society shaken by the interference of bad actors using technology. We've directly seen the integrity of our political system threatened by Russian interference and our global financial system threatened by incidents like the Equifax hack and the Bangladesh Bank heist (where criminals stole $100m). AI will dramatically escalate these incidents of cyberwarfare as rogue nations and criminal organizations use it to press their agendas, especially when it is outside our ability to assess or verify. This transition will resemble what we see when wind becomes a hurricane or a wave becomes a tsunami in terms of destructive power. Imagine an AI agent trained on something like OpenAI's Universe platform, learning to navigate thousands of online web environments, and being tuned to press an agenda. This could unleash a locust swarm of intelligent bot trolls onto the web in a way that could destroy the very notion of public opinion. Alternatively, imagine a bot army of phone calls from the next evolution of Lyrebird.ai with unique voices harassing the phone lines of congressmen and senators with requests for harmful policy changes. This danger, unlike the idea of robots taking over, has a strong chance of becoming a reality in the next decade. This technology is already on the radar of the authoritarian countries of today. For example, Putin has talked about how AI leaders will rule the world. Additionally, China, as a nation, has focused on very pointedly acquiring this powerful new AI technology. The accumulation of expertise beyond normal business competition, and the very large funding directed here, is a major concern. This is potentially equivalent to or worse than the US being the only nation with nuclear capabilities when the Hiroshima attack was conducted. There was very little for our Japanese opponents to respond with. It is hard to say if this economic war weapon will be as binary as the nuclear bomb was, but it will be large and concentrated in a few hands and subject to little verifiability. Surreptitious efforts, given their great amplification potential, could create large power inequality.
Matters get worse if one realizes that major actors in AI development in the West, like Google, Facebook, and universities, have adopted a generally open policy, publishing their technology approaches and results in scientific journals in order to share this technology more broadly. If individual state actors don't do that, and I doubt they will, we will have a one-way flow of technology from the US. AI development in certain parts of the world will additionally have huge advantages because of policies against/for data. As Andrew Ng (a Stanford professor hired by the massive Chinese company Baidu to lead its AI efforts until he left to incubate his own ideas) has said, "Data is the rocket fuel for the AI engine". So while AI progress has been frenetic recently, it will be much faster when data privacy and occasional accidents are less important in the interest of "national security." This disregard for data privacy and one-way transfer of technology will lend nationalistic countries like China and Russia a huge advantage in this generation's space race. AI will be much more than the economic, business, or competition issue it is talked about as today. We will need to rethink capitalism as a tool for economic efficiency because efficiency will matter less, or at the very least, disparity will matter more, but that consideration may be many decades away. The biggest concern in the next decade is that AI will dramatically worsen today's cyber security issues and be less verifiable than nuclear technology. Nationalistic nations like China and thuggish dictators like Putin will have massively amplified clandestine power. I don't believe we, as a society, would be willing to give up the safeguards in our society like open progress and privacy to "keep up" with other nations. I have some thoughts as to what we can do here, but this is a complex problem without obvious solutions. Maybe we limit funding of non-NATO investors in US AI companies? Maybe have the US government or NATO invest in their own AI technologies for national security? An AI white-hat force? Increased efforts in Black Swan developments like quantum computing? Less risk aversion, more patience, and less backlash from society and government to the risks, biases and shortcomings of new AI technology as it grows up? Regardless of what we do, what's clear is we need much more dialog, debate, and increased countermeasure funding; instead of generating hysteria about some far-off dystopian possibility mired in uncertainties and what-ifs, we need to focus on the immediate wave of danger before it hits. Not taking risks here might inadvertently be the largest risk we take.
AI: Scary for the Right Reasons
3,908
ai-scary-for-the-right-reasons-185bee8c6daa
2018-06-08
2018-06-08 22:11:50
https://medium.com/s/story/ai-scary-for-the-right-reasons-185bee8c6daa
false
1,554
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Vinod Khosla
entrepreneurship zealot, grounded technology optimist, believer in the power of ideas
3dc350c75361
vkhosla
54,129
177
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-26
2018-02-26 14:37:51
2018-02-26
2018-02-26 14:39:48
8
false
en
2018-02-26
2018-02-26 14:41:19
22
185c3c3330ad
7.250314
2
0
0
On 19th February 2018, Dr. Bastian Halecker, CEO of Nestim and Startup Tour Berlin, held a session devoted to innovative startups focusing…
5
Startup Tour Berlin presents the most innovative startups from its ecosystem at DTIM conference On 19th February 2018, Dr. Bastian Halecker, CEO of Nestim and Startup Tour Berlin, held a session devoted to innovative startups focusing on disruptive technologies at the DTIM conference in Berlin. The companies taking part in the conference applied Data Science, Artificial Intelligence, Internet of Things, Blockchain and new technologies in Consulting, Food Tech, HR, and other industries. In the picture (from left to right): Jan Schnitker (is it fresh GmbH), Falco Schuett (Next Big Thing AG), Sven Forgber (How I Like), Martin Michenfelder (How I Like), Dr. Bastian Halecker (Nestim GmbH, Startup Tour Berlin), Christin Schäfer (acs plus AG), Martin Micko (SearchInk), Tim Vogelsang (Octorank), Franka Birke (METR). The startups' representatives pitched their projects at a very dynamic pace and received a lot of feedback and questions on the interactive platform PigeonHole. Because of the lack of time for a proper Q&A session, we would like to publish the comments and questions here, answered by the participants online. acs plus UG, founded by Christin Schäfer in 2016, supports its clients in creating data solutions from the design and the first prototype through to production, offering services from data advisory to data analytics and machine learning. – Go!!! acs plus go!!! – What's special, what's your USP? acs plus: A focus on solving the problem at hand in a pragmatic and adequate way. It might be that creating a data model does the trick, or that a data visualization already answers the question, but it could also be the case that we need to train a neural net with a deep learning architecture. acs plus' solution space is not restricted to pure data modelling, data analytics, or machine learning. We love and use them all. – Is machine learning just hype? acs plus: Machine learning is a technique for inductive inference from data that currently contributes substantially to the progress in narrow AI. It will still be relevant once the current hype is over, perhaps under a new name. METR Building Management Systems, founded by Franka Birke and Maité Zubeldia, provides physical and virtual infrastructures that enable new business models and applications in the field of smart building automation and smart home. – A very clear case. – Go ahead, stick to open standards, no Nest, no Alexa, crowdsource! – An app store for the housing industry sounds great! – Finally a concrete product. I understood it. I like it. – Good pitch. Any MVP / proof of concept, or a city using it yet? METR: Proof of concept: successful pilots at one housing company and one measurement service provider. Delivery of the first gateways on the 20th of February. – METR seems solid! Who pays for the extra costs? Is it an extra cost on top of rent? METR: The housing company pays for the gateways (= the IoT infrastructure of the building). As soon as we have introduced the app store for the tenants, the tenant pays a small monthly fee for using the apps. The METR App Store allows tenants to browse and download apps and services developed by METR and in partnership with third party suppliers. – Which radio standards are supported? METR: Currently OMS, M-Bus, wireless M-Bus, QundisAMR, and wifi. In the future also LoRa, zigbee, and z-wave. Next Big Thing AG, founded by Harald Zapp in 2016, offers a complete framework for the acceleration of IoT ventures.
NBT is an operational VC, technology provider, and innovation partner made up of a strong team of business architects, technology experts, and experienced founders. – IoT and SMEs: a good approach, but isn't it also contradictory? Keyword: German development speed. – What's your product? As an IoT company builder, we create and accelerate ventures. Essentially, NBT offers two 'products': an Entrepreneur in Residence Program and Corporate Partnerships. Through our unique venture design process, we combine these two product elements with in-house IoT technological expertise, enabling disruptive innovation across industries. We currently have ventures in healthcare, energy, prop-tech, logistics, agriculture, and blockchain technology — with weeve as a primary example of our venture empowering token economies. Our prop-tech venture, METR, brings smart metering to housing complexes, as presented at DTIM. – What is the USP? NBT combines the best aspects of an incubator, accelerator, and operational VC. We provide seed funding, mentorship, business expertise, and technological competence in IoT and blockchain. As Germany's IoT Hub, we form the center of a strategic European ecosystem connecting industries, policy-makers, and research institutions. By utilizing market reach and industry knowledge from corporates, entrepreneurial drive, startup agility, and in-house tech teams, we improve services around the data from IoT devices. Our core pillars are security, price, zero configuration (ease of use), and emotion. – IoT is everywhere — at least as a concept. Considering how much damage is done over the Internet already (cyber crime), how are we going to handle increased threats in an interconnected world? Cybersecurity will gain increasing importance in the future because things or devices will generate more value and have to become the points of trust — carrying valuable information such as private keys, IDs, or hardware wallets. Consequently, we work on strengthening all the technological layers of the IoT world — starting from the hardware (the physical layer), through the boot and operating systems, and across the protocols and cloud architectures, while complying with the latest security and data privacy standards. is it fresh GmbH, founded by Jan Schnitker and Alexey Yakushenko in 2017, digitalizes food packaging and introduces freshness monitoring into every single package using advanced printed electronics technologies and functional nanomaterials. – Are the chips manufactured in-house? Full control of ID? How do you address serialization? is it fresh: Yes, the chips are manufactured in-house, and that gives us full control. Each chip features a unique ID. – Are textile applications also possible? is it fresh: Yes, textile applications would definitely be possible. – How exactly is the quality control executed? is it fresh: The tags measure in-situ freshness parameters (product specific), which are aggregated and sent back to the user. – What's your advantage over the big fish in the market? is it fresh: Our NFC-enabled chip platforms offer a very wide range of sensing capabilities for which there is no known competitor on the market, to the best of our knowledge. SearchInk, founded in 2015, offers AI-based smart process automation for all kinds of businesses. Thanks to the most advanced developments in Deep Learning, the SearchInk solution extracts and semantically connects the relevant data contained in highly diversified document streams.
Employees & organizations are able to jump straight to the decision-making steps instead of spending time on manual data entry & administrative tasks. – Sounds like very big potential! – Good use case; with AI, can you ensure 100% quality and not only 99%? – Problem clearly described, sounds like very big potential: an application example? SearchInk: In the insurance sector, the application examples are claims management or policy extraction for portfolio re-covering. Beyond that, there are use cases in order management (i.e., purchase inquiries and orders from a wide variety of customers) or for the processing of logistics documents. – Reference customers? SearchInk: These can gladly be named on request. – OCR + AI? What's new? Your USP? SearchInk: OCR merely extracts text without saying anything about the semantic content. SearchInk "understands" the content and thus delivers, during extraction, both the label (e.g., name, contract number, etc.) and the extracted value, from a wealth of documents that have similar structures but are not identical. how I like, founded by Sven Forgber and Martin Michenfelder, offers companies an instant catering service providing their employees with fresh food right within their office. Fresh meal combos, salads, sandwiches, wraps, desserts and drinks made by professional chefs, nationally and internationally experienced producers and innovative startups. An intelligent refrigerator gives customers a chance to buy everything fresh, cashless, 24/7 and 365 days a year. How I Like is currently being tested by companies located in Berlin and Brandenburg. – My employees want greasy, unhealthy standard food from cardboard boxes. They are already tormented with healthy food at home. – What are you looking for? how I like: As a young startup, we are especially looking for experienced partners/cooperations in the areas of smart hardware (white goods manufacturing), logistics and smart software solutions. – Microwave food instead of a delivery service? how I like: We try to offer employees and companies a broad range of options for eating well within the company, from microwaveable convenience products such as soups and combinable meals to fresh sandwiches, wraps, salads and snacks. We see ourselves as a platform for smart, healthy nutrition right at the workplace. Octorank, founded by Tim Vogelsang in 2016, allows its users to announce calls for submissions and manage their dealflow. Octorank uses peer rating, data analysis and cloud experts. – What is your USP? Octorank: "Simple. Smart. Rated." No other innovation software makes complex evaluation procedures so effortless to use and so valid in their results. – Why should the startups share their know-how with Vattenfall? Octorank: Startups apply for cooperations with our clients (e.g., Vattenfall) with basic key data, and there are real advantages for them, e.g., market access, a corporate reference customer or financing. – Do you have an ecosystem? Octorank: Yes, we have an ecosystem of multipliers (organizations, media, ambassadors), as well as a network of entrepreneurs and creatives. – Doesn't this exist 40 times over already? Octorank: No. There are already some innovation platforms on the market that cover ideation processes. But no other innovation software makes complex evaluation procedures so effortless to use and so valid in their results.
Source: http://nestim.com/startup-tour-berlin-presents-the-most-innovative-startups-from-its-ecosystem-at-disruptive-technologies-innovation-foresight-minds-conference/
Startup Tour Berlin presents the most innovative startups from its ecosystem at DTIM conference
98
startup-tour-berlin-presents-the-most-innovative-startups-from-its-ecosystem-at-dtim-conference-185c3c3330ad
2018-03-19
2018-03-19 13:04:09
https://medium.com/s/story/startup-tour-berlin-presents-the-most-innovative-startups-from-its-ecosystem-at-dtim-conference-185c3c3330ad
false
1,621
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Nestim
Nestim is connecting people and technology to drive innovation in the digital age. Learn more at www.nestim.com
f16424712f43
nestim
12
71
20,181,104
null
null
null
null
null
null
0
# LAMBDA SCHOOL #
# MACHINE LEARNING #
# MIT LICENSE
code.sample('data')
# Great code, leading up to
plt.plot(x,y)
4
null
2018-04-20
2018-04-20 21:23:59
2018-04-25
2018-04-25 03:33:30
4
false
en
2018-04-25
2018-04-25 16:22:08
4
185cc74f197f
1.20566
12
0
0
Start by storytelling. Who are you and what are you doing? Why is what you are doing interesting?
1
Tutorial for Machine Learning Reporting Start by storytelling. Who are you and what are you doing? Why is what you are doing interesting? ML Research Reporting I learned/researched/discovered/implemented something new in my education/study/curiosity about Machine Learning this week. I found it interesting because of this and that. By Kenneth Jensen [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons I started implementing it this way. Here are some interesting anecdotes I encountered along the way: An example of what makes programming hard, a tool useful, or an implementation that worked the first time or had bugs. Another anecdote. Figure 1: Overlay of data and function fit (a minimal sketch of such a figure appears at the end of this post). Equation 1: g = x^{1/\sqrt{3}} Conclusion I love studying Machine Learning. I'm learning about X and Y and looking forward to changing the world in the domain of A, B, and C. References and Links Data Viz Catalogue datavisproject.com python-graph-gallery.com
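A minimal sketch of what the template's placeholder "Figure 1" could look like in practice, assuming matplotlib and hypothetical noisy samples around the fitted function g(x) = x^(1/sqrt(3)) from Equation 1:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.1, 10, 50)
g = x ** (1 / np.sqrt(3))                            # Equation 1 from the template
data = g + np.random.normal(0, 0.05, size=x.shape)   # hypothetical noisy samples

plt.scatter(x, data, s=12, label="data")
plt.plot(x, g, color="red", label="fit: $g = x^{1/\\sqrt{3}}$")
plt.legend()
plt.title("Figure 1: Overlay of data and function fit")
plt.show()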
Tutorial for Machine Learning Reporting
282
tutorial-for-machine-learning-reporting-185cc74f197f
2018-04-26
2018-04-26 03:17:48
https://medium.com/s/story/tutorial-for-machine-learning-reporting-185cc74f197f
false
134
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Thomson Comer
null
f68431a3f25a
thomcom
11
17
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-06
2018-04-06 10:57:31
2018-04-06
2018-04-06 11:00:37
1
false
tr
2018-04-06
2018-04-06 11:00:37
0
185ed37f151c
1.90566
1
0
0
Everyone now knows how artificial intelligence reduces costs in banking, how it builds customer loyalty, and how it increases security in financial…
3
What's the state of AI in banking? Everyone now knows how artificial intelligence reduces costs in banking, builds customer loyalty, and increases security in financial services. AI's current impact on banking revenues is in the 1–2% band. It is projected that within the next three years it could increase revenues by 3.4% and deliver savings of 3.9% in operating expenses. For this reason, many experts advise banks to invest not in their mobile applications but in AI applications. Moreover, they point out that banks should include AI applications in all the channels through which they serve customers: website, social media, mobile app, and WhatsApp! Accenture, one of the world's leading consulting firms, also states clearly in its report on the subject that financial companies should invest in AI. The report dwells on five main trends in particular: Increasing the customer experience, simplifying workflows, and reducing costs with new AI-based interfaces. With the shift of power in the ecosystem, companies' growing intelligence and the rise of a rich and robust ecosystem that goes beyond the platform. The labor market transforming traditional in-house hierarchies, with large numbers of freelancers and part-time workers preferred in the talent market. Through a human-centered design approach, using big data and advanced analytics not only to know where people are today but to foresee where they want to be in the future. With the concept of the "uncharted," banks entering new digital industries that have not yet been clearly defined. The report also includes the results of an AI survey conducted with banks. According to this survey, 79 percent of bank executives globally believe that AI will revolutionize the way banks gather information and interact with customers. Likewise, 79 percent expect AI to accelerate the adoption of technology across the company and to provide employees with tools and resources that will let them serve customers better. 78 percent of bankers believe AI will give banks simple user interfaces that create a customer experience resembling person-to-person communication. 76 percent believe that within the next three years customers will use AI as their primary way of interacting with their banks. In contrast to these high rates, only 29 percent think that delivering products and services through central platforms, virtual assistants, and chatbots is extremely important. BI Intelligence, Business Insider's premium research service, has published a report on a similar subject. Briefly, the report's highlights are as follows: Artificial intelligence, or technologies that simulate human intelligence, is a trending topic in banking and payments circles. It appears in many different forms and is praised by many CEOs, CTOs, and strategy teams for delivering savings in a rapidly changing financial ecosystem. Banks use AI on the front end to secure customer identities, mimic bank employees, deepen digital interactions, and engage customers across channels.
Banks also use AI on the back end to assist employees, automate processes, and prevent problems. In payments, AI is used for fraud prevention and detection, detection of money laundering attempts (AML), and growing the volume of conversational payments.
What's the state of AI in banking?
1
bankacılıkta-yapay-zeka-ne-durumda-185ed37f151c
2018-04-13
2018-04-13 07:19:51
https://medium.com/s/story/bankacılıkta-yapay-zeka-ne-durumda-185ed37f151c
false
452
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Jet Istanbul
Jet is a brand new digital centric creative agency.
50f6194b75d7
jetistanbul
26
2
20,181,104
null
null
null
null
null
null
0
null
0
ac0f9958baa5
2017-11-15
2017-11-15 12:09:20
2017-11-15
2017-11-15 12:47:08
1
false
en
2017-11-22
2017-11-22 12:55:16
3
1861c9bc6559
2.30566
0
0
0
I’m not someone who should ever do music reviews, and that isn’t my intention here. I am the opposite of the people who Jimmy Kimmel…
5
Light it Up I'm not someone who should ever do music reviews, and that isn't my intention here. I am the opposite of the people whom Jimmy Kimmel interviewed at Coachella, where interviewees were asked about trendy new bands (which happened to be fake) and many of them said they liked these new bands just to act like they were cutting edge. Give me mainstream songs and I'm ok. I'm not trendy. www.stanway.org When I gave one of my standard, untrendy commands to Alexa this morning, "Alexa, play country music," a new song came on from Luke Bryan called Light it Up. We have an Amazon Echo Show in our kitchen, and one of the neat features is that it displays the lyrics for most songs on the screen as the song plays. This can be a blessing and a curse. It is amazing how many lyrics I had wrong. I started to look at the Light it Up lyrics and I couldn't believe it: literally every line in the song had something to do with using our phones. It also summarized the state of modern relationships pretty well. Waiting for a text back can really weigh on us. There is texting etiquette. I actually have friends who are known to have better 'text game' than others. It may be time for more of us to use technology like the Moment App to reduce our screen time and reliance on phones when it becomes the foundation of relationships — and popular songs. Lyrics below… I open my eyes, reach for the phone Not a word from you baby It don't leave my sight since we had that fight Can't remember but maybe I blew you up In the middle of the night again You were drinking with your friends You ignored it but you got it I get so neurotic about it baby 'Cause I know you're reading your phone I can't help from going crazy Thinking you might not be all alone I wake up, I check it, I shower and I check it I feel the buzz in my truck And I almost wreck it I always got it on me Just in case you want me So, if you're looking for my love Then light it up Every time I unlock my screen I hope I see one of them red lipstick 'I miss you' pictures I'm on your clock, you're in control You want me now baby go figure My worlds at the tips of your fingers I get so neurotic about it baby 'Cause I know you're reading your phone I can't help from going crazy Thinking you might not be all alone I wake up, I check it, I shower and I check it I feel the buzz in my truck And I almost wreck it I always got it on me Just in case you want me So, if you're looking for my love Then light it up Yeah baby, then light it up I go to sleep, I check it In the middle of the night, I check it, I feel the buzz in my bed And I don't get no rest I always got it on me Just in case you want me So, if you're looking for my love Then light it up Yeah baby, light it up
Light it Up
0
light-it-up-1861c9bc6559
2017-11-22
2017-11-22 12:55:18
https://medium.com/s/story/light-it-up-1861c9bc6559
false
558
A collection of daily thoughts on Christianity and technology. Part of Stanway or WWJD AI — Healing brokenness by combining the 52,000,000,000+ words pastors preach each year with artificial intelligence — www.stanway.org
null
null
null
Stanway
jake.k.klinvex@stanway.org
stanway
RELIGION,CHRISTIANITY,ARTIFICIAL INTELLIGENCE,TECH,FAITH
stanway_ai
Amazon Echo
amazon-echo
Amazon Echo
3,511
Jake Klinvex
Co-founder of two companies that were sold to eMoney Advisor (which sold to Fidelity in 2015) and SessionM. Follower of Jesus. Villanova alumni. Pittsburgh Fan.
635ff16dc2d2
jake.k.klinvex
7
8
20,181,104
null
null
null
null
null
null
0
null
0
cdb7feb3ba60
2018-05-22
2018-05-22 08:35:56
2018-05-31
2018-05-31 04:01:30
3
false
en
2018-05-31
2018-05-31 04:01:30
4
186206e0a48c
4.500943
25
0
0
Quadrant protocol is supported by an excellent team of blockchain developers. The experience and focus they bring to the project is one of…
5
Meet Our Team of High-Calibre Blockchain Developers Quadrant Protocol is supported by an excellent team of blockchain developers. The experience and focus they bring to the project is one of the key advantages that we believe will make Quadrant a success. In this post, we introduce you to three of our engineers: Barkha Jasani, Sharique Azam, and Roger Ganga. Head of Research and Technical Development - Barkha Jasani Barkha Jasani is DataStreamX’s Head of Research & Technical Development. She is a seasoned team member who has been executing on our roadmaps and successful proofs of concept, and also led the successful commercialization of the DataStreamX platform. She is now in charge of commercializing the technology, meeting the needs of our partners, and ultimately transforming our vision for Quadrant Protocol into reality. Barkha is one of the core members of our team, working closely with Mike and DataStreamX since its launch in 2014. Barkha is currently focused on architecting our decentralized and scalable solution to provide data authenticity and integrity in our transaction platform. This includes core development of our blockchain ecosystem. Areas she is working on include set-up, configuration, and testing of our Testnet and Mainnet. She is also in charge of building out Quadrant’s stamping protocol and smart contract development. Additionally, she oversees the producer and consumer clients, coordinating the team’s efforts on this front. Barkha is highly familiar with dApp architecture, having studied it specifically and worked on several projects related to the field. She has led the successful implementation of several ERC 20 tokens, architecting smart contracts to underlie these projects. Her impressive record of achievements includes the development of an enterprise-quality healthcare solution that is currently used by nearly a dozen healthcare facilities in the US. This mission-critical system was needed acutely and has helped improve the lives of countless patients across the country. Separately, an online government portal she developed for the Rajasthan Public Service Commission, in India, led to her being awarded the National Award for e-Governance 2012–2013 (Gold Award). Previously, Barkha held senior roles at Silver Touch Technologies, a major developer of software platforms and applications. In this role, she was instrumental in architecting solutions, programming modules, and communicating with clients, earning a reputation for multi-skilled professionalism and the ability to successfully juggle multiple projects. Barkha is passionate about her work on Quadrant Protocol, which she hopes will help to develop a fluid and trusted global data economy. “Big Data is the buzzword, but the value of all that data isn’t in its vastness or volume,” she says. “It’s in the insight that can be derived from it. So what we all work for is actionable, or insightful data.” Big Data & Blockchain Engineer - Sharique Azam Sharique leads the core development of Quadrant’s private blockchain and distributed applications. Sharique plays a key role in implementing our roadmap, developing the Data Producer and Data Consumer Clients, as well as the Anchoring Client. In his work at DataStreamX, he was responsible for creating the systems that handle millions of data records per day, making sure the data keeps flowing to and from the system. 
With Quadrant Protocol, he is developing our Clients to be able to handle these workloads, ensuring the next generation of data transactions runs efficiently on the protocol. His experience in Smart Contracts and Big Data Processing is vital to the success and real-world implementation of Quadrant Protocol. Sharique is a skillful engineer with three years’ experience working at DataStreamX. In this role, he successfully developed a variety of SDKs and APIs that allow major users to consume data efficiently. Sharique is also experienced in building robust pipelines and performing data transformations, text mining, and text analysis. He has created smart contract-based clients both for data stamping to prove the authenticity and provenance of that data, and for verification of the stamped data itself. He also created a web-based “Explorer” for Quadrant Protocol’s testnet, and is currently working on data smart contracts designed to make the experience of buying and selling data in Quadrant Protocol seamless. Sharique has a strong track record of success in his field outside of Quadrant as well. In 2014, he won second prize at National University of Singapore’s iCreate Mobility Challenge and third prize in Tech in Asia’s Hackathon. Sharique is a passionate, hard-working and efficient engineer – a truly valuable member of the team. We are confident that his experience and professionalism will continue to contribute to sustainable solutions using blockchain to solve the problem of data authenticity and integrity. Data Scientist & Solidity Engineer - Roger Ganga As our lead Data Scientist and Solidity Engineer, Roger brings broad experience to Quadrant Protocol, both as a data scientist and engineer. He combines these skills with specific expertise in blockchain development. At Quadrant, Roger is responsible for the implementation of data smart contracts and the AI / microservice layer that will power the adoption of the protocol. Roger joined DataStreamX in 2017. At DataStreamX, he successfully deployed AI and Machine Learning projects for the company’s data lake. He also developed blockchain and smart contract dApps across the DataStreamX stack. We are confident he will bring this deep, unique experience to bear as he now focuses on achieving Quadrant’s milestones. Together with Barkha and Sharique, we are confident in his ability to deliver a best-in-class product. Roger has done significant work that has garnered the attention of the developer community. Among his recent accomplishments was a simulation of the English Premier League in 2017 to determine the probability of each team winning. Additionally, he performed a successful market-basket analysis for a retail store to find products complementary to its offerings. Roger is a dedicated professional focused on providing actionable insights for business problems using real-world data. His enthusiasm for learning new concepts, architecting new solutions, and successfully implementing them is a palpable benefit for the entire Quadrant team. Roger studied Computer Science and holds a Master’s degree in Business Analytics from Singapore Management University. If you would like to discuss Quadrant Protocol’s technology further with our engineers, reach out to them in our Telegram group. They will be more than happy to have a chat with you!
Meet Our Team of High-Calibre Blockchain Developers
681
meet-our-team-of-high-calibre-blockchain-developers-186206e0a48c
2018-06-05
2018-06-05 04:51:51
https://medium.com/s/story/meet-our-team-of-high-calibre-blockchain-developers-186206e0a48c
false
1,047
A BLUEPRINT FOR MAPPING DECENTRALIZED DATA
null
quadrantprotocol
null
Quadrant
mike@datastreamx.com
quadrantprotocol
BLOCKCHAIN,BIG DATA,AI,SINGAPORE STARTUP,DAPPS
explorequadrant
Blockchain
blockchain
Blockchain
265,164
Nikos Kostopoulos
Supporting data professionals and companies to build projects and grow organizations, utilizing blockchain technology with tools offered by Quadrant Protocol!
3e4393ac1c48
nikoskostopoulos
152
194
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-30
2017-10-30 20:32:04
2017-10-30
2017-10-30 20:50:50
2
false
pt
2017-10-30
2017-10-30 20:56:43
4
18626dfcbce5
1.692767
5
0
0
I, like many other people, often complain that Brazilian academia isolates itself from the market. Despite some very cool efforts, which…
4
Quick Drop 2: Data Science Residency at DCC/UFMG I, like many other people, often complain that Brazilian academia isolates itself from the market. Despite some very cool efforts, which usually depend on the goodwill of a few professors and alumni, the gap between academia and industry in Brazil remains gigantic. Even so, the initiatives that do emerge are sensational! But check this out: DCC-UFMG, one of the leading national and international references in Computer Science, is launching a Data Science Residency Program! The Residency Program is a much more applied version of an undergraduate research assistantship (Iniciação Científica), and aims to prepare professionals to be hired within the lab itself, in this case Synergia, which is known for several consulting projects outside the university. In addition, since it is a software-project lab, residents will receive close mentoring from people who know the software development process and its best practices well, which is essential for anyone starting a career. For now, the program is restricted to DCC-UFMG students, but nothing prevents this extension program from becoming one open to the whole Brazilian academic community. In any case, I found it really great and think this initiative is worth publicizing! Below are the call for applications and the program flyer that I received by email: The Computer Science Department integrates teaching, research, and extension activities within the university environment, so that the effective practice of technology-related activities contributes to the education of students and the professional development of professors. The Data Science Residency Program arises from the decision to structure, formalize, and give continuity to the training of professionals of a high technical level. It also aims to provide participants with a broad and solid education, through an organized and consistent training process. The Residency Program's objective is to train professionals to work in the field of Data Science in partnership with a major market company, on activities that improve processes and solve important problems for the company.
Quick Drop 2: Data Science Residency at DCC/UFMG
5
quick-drop-2-residência-em-ciência-de-dados-no-dcc-ufmg-18626dfcbce5
2018-01-04
2018-01-04 13:55:40
https://medium.com/s/story/quick-drop-2-residência-em-ciência-de-dados-no-dcc-ufmg-18626dfcbce5
false
347
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Allan Sene
Founder and Principal Data Engineer at Data Minders & Co-Founder of Data Hackers. Loves science, code and cats ^*^
49882670bc7f
allan.sene
465
234
20,181,104
null
null
null
null
null
null
0
null
0
69c20edac0bc
2018-05-24
2018-05-24 18:24:32
2018-05-20
2018-05-20 12:54:31
1
false
en
2018-07-06
2018-07-06 14:28:55
7
186329786f72
7.928302
4
0
0
If you are here, I believe that you have a strong interest in understanding what it takes to become a Data Scientist.
5
The Definitive Guide To Get In Data Science If you are here, I believe that you have a strong interest in understanding what it takes to become a Data Scientist. I am writing this post because I see a tremendous number of people in a dilemma and there is absolutely no information out there, but just countless articles on what online courses you should take. People with almost all backgrounds — IT, mechanical, electrical, electronics, energy, chemical and civil, with people from B.Tech, M.Tech, B.Sc, M.Sc and even Ph.D, varying from no experience to 5 years of experience in my own circle and outside — asked me the same question — Should I get into data science? So here I will try my best to share how you can make it, drawing on economics, psychology, and study hacks. Economics To do or not to do. It is my firm belief that you should know how this can work out from the economic point of view. When you start applying, people are going to judge you based on your past. Accordingly, they will evaluate your worth in the market and roll out an offer letter. If you are a non-IT professional, it is going to take a hell of a lot of effort to learn it and you don't want to be disappointed! So a safe number to say is that if you earn more than 7 lakhs*, you should be ready to be disappointed unless you hold a degree from a Tier-1 college — Bachelor's or Master's. If you are an IT professional, you are in an advantageous position. In DS interviews we definitely give importance to people who have worked in IT as we don't have to teach them all the nitty-gritty of how IT works. No matter how fancy DS looks, at the basic level it is still IT. We want people to know the IT stuff — database working, querying, ETL, testing, deployment and clean coding. People with a history in IT have exposure to this, which adds a boost to the resume. The only thing they need to take care of is the data science itself — which seems much more manageable to them compared to non-IT people. Since their learning curve can be faster and they can leverage their past experience to get a better offer, their risk of exploring this direction is definitely less. So unless you are a 7+ lakhs earner in India, you can go ahead and do it. Even if you end up getting less than 7, you will catch up much more easily in later years. Psychology To be frank, getting into data science is a game of conquering your fears. In the beginning it will get overwhelming with tons of acronyms and jargon but you will have to get used to it. And if you have been pretty fearful of equations back in college, either it's time to break your fears or forget your dreams of getting in. My initial days were filled with doubts which led to more doubts. The learning curve was steep, and confusion just kept on rising. After a while of making countless notes and revising them some 3 times, I was able to absorb things on a grand scale. Study Hacks On journeys like this, it's always good to have a companion. I found people through Facebook, Whatsapp and Telegram groups and learnt immensely by pairing up with them on projects. Work on the same project, push code to GitHub and discuss. This will keep you rolling and expand your approaches. There can be so many ways to solve the same problems that you will be surprised. A good data scientist is essentially someone who has made enough mistakes to know what will not work. So pair up with people and work on different ideas. Google and find how people worked on a Kaggle problem and try to understand it. 
In case you have no ideas initially, just download data and available code from Kaggle and rewrite it line by line. My first clustering project happened exactly like this. I just rewrote existing stuff and tried to make sense of each line and the maths behind it. Later I started writing my own with help from StackOverflow. Now, if I am working on a problem already tackled before, I know what to do without any guidance or tutorial. It's a god damn journey. Also, you will hardly remember the syntax unless you are doing the same thing every day. So don't worry about it. Just open the documentation or tutorial and start writing. Get plugged in to the ecosystem Try to invest time in your LinkedIn profile from the beginning. It serves 2 purposes. Not only will you start networking with people in the industry but you will also get to know DS projects and the latest advancements in the field. DS is evolving so quickly that you need some source of updates, and that's where LinkedIn comes in. This Facebook group is also very active and you can use it to find people with similar interests. With time, you will realise how little you remember of what you read. Hence, invest time in making notes. I used to pause videos of Andrew Ng and make notes. It took almost three times as long as just watching the videos but I ended up learning more — which is required in the beginning. Try to answer others' doubts even though you might not be an expert. This will lead to a deeper clarity on topics. Some of these can also end up being your interview questions. Courses There are a lot of courses and many approach the same topics in different ways. Initially, I was of the opinion that you should do only one course and that means you should find out the best course and do it, and I selected Andrew's. Later when I was placed and had time, I checked out the course on Udacity. I came to know that it is a more practical course on the pros and cons of algos and this was not discussed much in Andrew's course. So it seems different experts have different content to talk about. Hence, if you want to start a course, just do it. All of them are available for free. Stop the course if you feel uncomfortable with the style and content. I agree that Andrew's course is a bit dense and requires you to watch it more than once. But that's how you learn to not give up and learn what is required. If it was easy to do data science, everyone would have been doing it, demand would have been less than supply and people would not have been paid so well. So start with any course and don't give up easily. Time Many people ask how much time it will take to prepare and get a job. Since time is a function of your current knowledge and grasping ability, I would rather define it in terms of projects. Doing all the basic courses and around 8 supervised and 2 unsupervised projects can easily take 4–6 months of dedicated (10 h/day) effort. If you are doing it part-time, you can easily take 8–12 months. (Including the time for finding companies and interviewing with them.) Interviews Ideally once you have gone through the basics, you should start interviewing with companies to get a sense of the structure of interviews and get comfortable failing at them. You can find some good companies hiring on platforms such as CutShort. Remember that these interviews can get excruciatingly tough. My experience tells me that the better the company, the tougher the interview. The richness of the interview almost acts as a proxy for the strength of the team interviewing you. 
So if you are doing a very easy interview, chances are that you are going to get into some low-quality Excel or scraping stuff. And if you want to get into a good job, you should be great at the nuances of navigating an interview. Once you get into this process of interviewing, some of them will give you coding assignments. Here, it's important for you to write your own code and review it with the aim of finding flaws in it. I cannot tell you how much I have screwed up in these assignments. I made projects with technical flaws and poor coding practices. But with each failure I found ways to do it better. I look back at those scripts sometimes and realise how far I have come. One thing to note here is — Don't get it done by your friends. You can consult them if you want but get into the habit of cracking problems on your own. It is easier said than done but it will build character in you. The day of cracking your first job in data science will be etched forever in your memory. Things to know to increase your chances of cracking interviews. Statistics — A lot of companies ask about Bayes' theorem and the Normal distribution (a worked example follows at the end of this post) Machine/Deep learning basics — Algorithm pros/cons and working Strong coding skills — Python + Competitive coding Database — Minimum required is SQL skills. Good to know both SQL and NoSQL databases. Cloud computing — A huge add-on but not an absolute necessity. Learn AWS (Amazon Web Services) GitHub — Displaying good work on GitHub shows confidence and enthusiasm — what the best companies look for Blog — Blogging leads to self-clarity on your topics of interest. Also, since I learnt a lot by reading others' blogs, I always like sharing my own learnings. How I judge companies: The tougher the interview, the better the team, work and pay. I check out the profile of team members and leaders on LinkedIn. I check out their history of work and current work descriptions. Sometimes people write vague descriptions or say "I do scraping" — this is a strong indicator for me to stay away from the company. Data science is god damn huge. If you want to learn quickly, join the team where smart people are. Sometimes they are startups and sometimes they are MNCs. The question of startups vs. MNCs is a debate worth another blog post. There are practices to learn from both of them. Startups have agility and MNCs have resources. Try to get into a company which is a market leader in at least one thing and has a research-oriented mindset. It shouldn't be a company that does data science for cost reduction, but one that does it because it's their bread and butter. Such companies are rare though. At the end of the interview, I ask the age of the team, its size and the average experience of team members. I don't hate startups or small teams but I just like to know the metrics. I also probe on what problems they are working on currently but most of them will not answer due to confidentiality. I have LinkedIn premium — so I check out the growth of hiring of the company in the last 3 months, 6 months and 1 year. I especially do this for small companies and startups. Growth of a team is directly related to the health of the organisation. This is good to have but not a necessary criterion. Check out reviews on Glassdoor.com for the company. Be sure to confirm if the work environment is healthy; otherwise, just cancel the process. If things look well, be ready with what kind of CTC they might roll out. You can also check the numbers in advance to see if they fit in your range. 
You can also ask them directly what their range is, to avoid wasting time. Yes, all this is one long story. Just give your best. Earn it. Let me know your thoughts on the article and it will be great if you can also share your own experience so far. *The median salary of a data scientist in India is ~7 lakhs. Unless you are from a Tier-1 college or have more than 1 year of experience in DS or have rich IT experience with decent DS knowledge, you should not expect more than this. Originally published at ml-dl.com on May 20, 2018.
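Since Bayes' theorem shows up on the interview checklist above, here is a tiny worked example. This is a minimal sketch: the 1% prevalence, 95% sensitivity, and 90% specificity below are invented numbers, chosen only to illustrate the calculation.

```python
# Bayes' theorem: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
# All numbers below are invented for illustration.
p_disease = 0.01                 # prevalence: 1% of people have the disease
p_pos_given_disease = 0.95       # sensitivity of the test
p_pos_given_healthy = 0.10       # false positive rate (1 - specificity)

# Total probability of testing positive, across sick and healthy people.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # ~0.088
```

Despite the accurate-sounding test, a positive result here means only about a 9% chance of having the disease, because the disease is rare; that counterintuitive flip is exactly why interviewers like this question.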
The Definitive Guide To Get In Data Science
78
the-definitive-guide-to-get-in-data-science-186329786f72
2018-07-06
2018-07-06 14:28:55
https://medium.com/s/story/the-definitive-guide-to-get-in-data-science-186329786f72
false
2,048
Get clear insights for a successful "data career". Curated by CutShort, the fastest growing career platform in India
null
null
null
Practical Data Science Career Insights
datacareers@cutshort.io
data-science-career-insights
DATA SCIENCE,DATA VISUALIZATION,DATA ANALYTICS,DATA ANALYSIS
null
Data Science
data-science
Data Science
33,617
Pratik Bhavsar
NLP | Quants | Deep learning
c0101388583
pratikbhavsar
12
38
20,181,104
null
null
null
null
null
null
0
null
0
18997fc907fc
2017-12-16
2017-12-16 22:22:15
2017-12-16
2017-12-16 22:56:45
1
false
en
2018-03-27
2018-03-27 04:36:32
2
1865d9fa1c57
12.996226
1
0
0
It is trends time. From obvious insights to bold predictions, every industry has them at this time of the year. Most can easily be ignored…
5
FastCo Design’s Big Design Trends: The Inane, the Wishful, the Pragmatic It is trends time. From obvious insights to bold predictions, every industry has them at this time of the year. Most can easily be ignored, because they are an offshoot of a well-known cross-industry phenomenon. But the recent 9 Big Design Trends struck me as particularly pointless, not only because of how obvious they were, but because of the extent to which they exposed either too narrow a lens of thinking or an unjustified sense of omnipotence. To be fair, a solid and unforgiving editing pass would have eliminated some of the most striking statements. But then there would not be anything left. So let’s take a closer look at these predictions and really think about what they say about the future of design. (I am pasting each prediction that I am commenting on in its entirety, to provide complete context for my interpretation): The inane I feel slightly bad about taking this group of predictions apart because they seem so obviously the result of the authors’ stream of consciousness, rather than a thoughtful process of sifting through what is important to the author and what’s relevant to a larger audience (or what can be backed up by an argument). But they decided that they are publishable, so… Trend: A revolution in user-friendly politics is coming “I see political-oriented design being far more user-centered. Usually, the user-centered approach to design is seen primarily with product design, but I can see grassroots political organizing adopting a lot more of the principals. Already we have various organizations using websites as a resource to have their base participate in local actions. I anticipate these resources going a step further, where we begin to see the development of tools dedicated to informing people of local actions, local elections, offices to run for in their communities, etc. But we can also see the development of programs and activities to reach those who aren’t privileged to have and use technology. “Regardless of the end product, I believe the goals will ultimately lead to face-to-face interactions. But having those design decisions guided by users should be the focal point of political-oriented design in 2018.” — Samuel Adaramola, lead designer, Our Revolution User-friendly politics is not by default a good thing, especially if it is the only tenet of a system. If that is the central goal and focus, you end up with populism and a not-very-smart electorate. Rather than making user-friendliness the ultimate goal, designers need to think about in what situations it is required, to what extent, and what else they need to balance the system with — education, tools for civic discourse, freedom to debate. “Usually, the user-centered approach to design is seen primarily with product design…” Even if that was ever true, it no longer is. Experience and service design are just two examples of areas where the approach is not only valid, but is being refined and pushed forward. Let alone the adoption of user-centered design principals. Is there an adoption agency for the principals? Is it a blind adoption? “Regardless of the end product, I believe the goals will ultimately lead to face-to-face interactions.” What are these goals? How would they lead to interactions? Why? And aren’t goals the ultimate destination, not the lead to it? Our Revolution is an organization with an aspirational mission but with a tagline “Campaigns end. Revolutions endure.” This tagline could easily be the result of this modern tool. 
But more importantly, as the product of a revolution and someone who has been trained in and excelled at revolution propaganda, I can attest that revolutions are perhaps the least user-friendly political experience you can live through. There is no such thing as a user-friendly revolution. Trend: A whole new field will be born: artificial intelligence design “Humans are on a cusp of the single largest revolution in technology, or call it the next industrial revolution. Design as a practice is going to evolve rapidly, as fast as the neural networks and AI are. Artificial Intelligence Design will be the new role in the AI industry, just like the movie director has a role in making the movie. Artificial Intelligence Designers will lead multidisciplinary teams in the creation and design of the era of artificial intelligence. “We are advancing extremely rapidly in perfecting deep learning algorithms. Today, we know that a deep neural network learned how to play chess at a human level in only four hours–and that it will never be beaten by a human. We are perfecting a number of things that AI can do for us, and at the same time, we are compiling a pile of extremely narrowly focused functions, all of them disjointed as a whole. It is like we are building an artificial person but we are starting from all parts of the brain at once. “Currently, technology is leading the way in the advancement of AI. But just like design made technology human, design will play a critical role in the advancement and adoption of AI. “The next massive role of industrial design and design in general is going to be the creation of an entirely new design practice: AID, Artificial Intelligence Design. When you look into history, it is the industrial designers that led the way into making the world a better place for humans. Industrial design is the most complex art of design because it combines human, technologies, tangible objects, and multilayered functions. It is industrial design that gave birth to the laptop and UI and UX–think of Bill Moggridge and the Matrix computer as the very first portable computer. An industrial designer who gave birth to the Apple Macintosh–Hartmut Esslinger–also gave birth to the UI and UX that the world had never before seen. “Artificial Intelligence Design stands to become the most exciting design practice in the history of humankind. This practice does not exist yet, nor there is a school for it. But luckily industrial designers, being nonlinear thinkers and being able to cross platforms in true depth from sociology, ethnology, material science, ecology, biology, physics, mechanical engineering, electronic engineering, software development, and so on–they are our best bet, and are the best-equipped people on the planet to tackle this complex task of making AI that is safe and good for humans.” — Branko Lukic, founder, Nonobject This prediction is not shy of big proclamations. “Humans are on a cusp of the single largest revolution in technology, or call it the next industrial revolution.” AI would not be possible without the Internet with its connectivity and data, so the single claim may be a bit far-fetched. But that is a small exaggeration compared to: “When you look into history, it is the industrial designers that led the way into making the world a better place for humans.” Scientists, physicists, doctors, teachers, philosophers apparently have less to boast about. This may sound nit-picky, and could have been easily fixed with more disciplined editing. 
But how is industrial design even relevant to AI? The majority of AI systems will be attached to objects that are already designed (even if we keep buying the version with the rounded corners, then the one with the sharp edges). And the challenges and risks of AI do not stem from whether the machine looks anthropomorphic or what the individual interaction is. Issues like moral decisions, agency, knowledge acquisition and retention are AI’s biggest threats. Shouldn’t designers try to address them given that they are “nonlinear thinkers and being able to cross platforms in true depth from sociology, ethnology, material science, ecology, biology, physics, mechanical engineering, electronic engineering, software development, and so on.” Trend: We’ll eat our feelings. “I think in general that things are going toward edibles. Everything–beyond just marijuana–is an edible for a purpose, whether it’s for migraines, energy, sleep, creativity, or brain activity. “I think it’s the whole idea of easy logic–around wholesome food with a desire for clear, clean transparence (plant-based is huge; people want plant based protein, and drinks). I think it just feels natural, wholesome, and ancient. As people go more into, “well, people have been meditating for thousands of years!,” they’re looking for ancient solutions to health problems and life balance. I see more and more of that as a reference point. They never knew the science behind Golden Milk, they just knew it worked. Now people are just so much more open to these ideas. “I know the pharmaceutical business wants to get into edibles, but I think there’s something different about eating it than taking a pill–in the urban vibe at least. The age of spiritual meditation, and food as medicine, is something we’re going to do a lot with.” — Katrina Markoff, founder, Vosges Chocolate Huh? Yes, because biology. There are two ways to absorb substances through the digestive tract. Can this trend be explained by the fact that most, if not all, people prefer from the way down? The wishful thinkers Trend: Inclusivity will go mainstream “The future of designing to advance the human experience will require a more comprehensive look at, well, the human experience. Not every one of us have the same abilities or the same needs, but everything from the way our cities are planned to the design of most of our products and services assumes that we do. Going forward, it won’t be enough to design for some people, or even for most. The real challenge will be to design for all. “More industries are heading in this direction. Microsoft CEO Satya Nadella made a touching commitment on the company’s behalf to design their products to be more accessible to all people, a matter close to his heart having fathered a son born with cerebral palsy. Apple and Facebook have proven they are committed to accessibility, too. Retailers like Target and Tommy Hilfiger are expanding on their own previous commitments to accessible design, making clothing and goods that suit people of different abilities. “At first, perhaps the biggest challenge for organizations looking to honor inclusivity will be knowing where to start. With a clear focus on empathetic, human-centered design, more businesses will be able to share their best offerings with more customers from even more walks of life.” — Justine Lee, Frog Inclusivity is a good thing. 
The fact that it is becoming a key trend only now somewhat belies Our Revolution’s earlier opinion that the user-centered design approach is the purview of product design, but better late than never. However, “designing for all” is just one side of the inclusivity coin. Designing for all, together, is going to be the bigger challenge in the age of mixed reality, addictive and meaningless apps, and social network echo chambers. We have mastered the art of individual interactions and, as long as we challenge ourselves to think about individual different abilities, I am sure we will master the broadest range of individual interactions as well. Unless we start thinking about interactions in the context of our non-digital relationships, a better label for inclusive design may be individualistic design. Trend: What “value” means to brands (and consumers) will change “I think there will be a further evolution of the definition of the word ‘value’ in 2018. “Take the froth around the monetary value of bitcoin vs. its perceived value. It’s going to perpetuate CO2 emissions and kill the planet (faster than we already are)–it’s a great example of how the multidimensional understanding of and use of value is evolving. You have the raw power of monetary value butting heads with the value of a conscience. Who wins that fight historically is clear. Similarly, the very idea of truth has been put into question in the larger national conversation, by our very president, and that has profound consequences. Brands are actively getting involved in that dialogue around genuineness. (Patagonia’s recent extreme statement in response to Utah parks is great example.) “We are in the midst of abiding change. We don’t yet have the tools or systems in place to help us navigate. And the values we collectively upheld, which fueled the industrial revolution and governed the last generation, need to adapt. As designers we have to remain optimistic. As individuals, we need to become more skilled at navigating the value-exchange, from CRISPR to machine learning, from artificial intelligence to emotional technology. Every company today is cultivating a path that leads simultaneously in two directions–powerful tools that enhance what it means to be human, and powerful tools that threaten the core of humanity. We have to stick up for ourselves and opt-in with intention.” — Charles Fulford and Dawn Moses, Elephant I really liked the thoughtful commentary of the authors of this prediction. Brands should be actively looking for ways to go beyond monetary value and create what some call “shared value.” However, the claim that companies that place monetary value above the value of a conscience lose the fight of history is fiction. On the contrary. They may suffer a short-term loss of brand value, but our ADHD brains move on to the next outrage before that loss becomes a burden on their business performance. Just look at VW. Or Apple, who is the authors’ only public client. Is Apple winning because of its conscience? How does the conscience fit with their stance on taxes? Or is it winning because of prioritizing monetary value, even if they invest in conscience marketing? It’s hard to escape the dissonance between the designers’ wishful thinking and the impact of their client. The pragmatists Trends: Digital is disappearing & We will finally move beyond flat design “Digital is no longer the centerpiece of brand experience. 
“For the past five years, how we design services has been dictated and limited by the touch points that were available to us–the PC, mobile, and analog touch points. Much emphasis was placed on creating experiences delivered through digital screens and as a result, people spent more time interacting via device than in person. “This is about to change. A major shift is underway in technology, fueled by lower costs, users’ growing angst about their “screen addiction,” and the disaggregation of core technology components, such as cameras, microphones, speakers, and screens, which are increasingly being embedded in an array of different environments–especially in the home. From Amazon to Alibaba, a growing number of primarily digital brands are now placing greater emphasis on physical presence while making the most of digital and data to improve experience. “Soon we will no longer be able to delineate between digital and physical design–they will be one and the same. Carnival Corporation, for example, has developed the Medallion–a wearable smart coin that connects customers to a cruise ship through a digitally enabled service called the Compass. Each guest receives a unique and seamless experience with their personal preferences constantly captured to optimize service as it is delivered. “This will have huge implications for brands and organizations. Re-skilling will be critical, and organizations must ensure their workforce is willing and able to learn, relearn, and relearn again. They must also ask themselves: What future structure, brief, and role should there be for digital departments or heads of digital as digital becomes ubiquitous and increasingly invisible?” — Olof Schybergson, CEO, Fjord “Designers are now negotiating how to differentiate through form in a visual world that has become predominantly flat, whether in illustration or in interface. After skeuomorphism was eclipsed by the flat design zeitgeist, we are seeing a re-infusion of subtle dimensional elements to create ownable design language systems. “Flat design–spurred by Microsoft’s novel Metro design language (and evolved through Fluent)–mimicked many designers’ enthusiasm for stripping out visual elements that were becoming cumbersome, both in terms of file size and the feeling of being enslaved to design within a framework of physical analogies. A trashcan and floppy disk have evolved to become universal glyphs for Trash and Save, no longer needing the texture of dimension. This mirrored a larger tilt in the balance back to the International Style that embodied a return to typographic-led composition and elegant solutions sans ornamentation. “But if everything is flat, then nothing is differentiated. Now we are seeing some interesting trajectories in the world of post-Flat Design. Google’s Material Design language provided an interesting take on adding subtle physicality back to Flat form, humanizing the visual elements, and paying special attention to how the elements moved. Contemporary micro-UX is building off of animation and gestural shorthand established through broadcast design in previous eras. “In this post-Flat world, designers are encountering a world that is increasingly synesthetic as people want to speak to, touch, and see their interfaces. Through a multi-sensorial lens, even GUIs (graphical user interface) are seen as vestiges of an age-old, visual-led consideration. If Alexa is any indication, VUIs are making some visual considerations feel outdated. 
“Early developments in augmented reality (AR) show that skeuomorphic forms are returning to bridge the gap between the known and the unfamiliar. We could imagine a day when augmented forms become as minimal as screen-based UI. Apple’s AR Kit promises to push the formal boundaries of the medium with an iPhone in so many people’s hands. Facebook has skillfully migrated the camera from hardware to software. This will be seen in years to come as monumental. “Similar to Vine’s six-second video constraint, we are left to wonder whether Flat design is a tactical path or an ideological one. I am excited by what is to come.” — Forest Young, head of design, Wolff Olins San Francisco Compared with the grandiose scale of the others’ opinions, these two were refreshingly grounded. Digital is not disappearing per se, as much as it is becoming part of the fabric of the physical world, argued one of the luminaries. Similarly, flat design, which worked so well when we were still delighted by our fancy new screens, is becoming too one-dimensional for the complex interaction between the real and the virtual, hardware and software. These were the only two designers who focused on the thing they have control over — design. And that might make them more capable of making a difference than their colleagues. Trend: Designers will fight back “Designers will begin to awaken to the social and political implications of their work. This will involve a lot of self-reflection and hopefully no shortage of concrete action. Design work has for too long been assumed to only bear fruit as positive improvements to the world. But today we’re encountering the negative side effects of many of our most beloved innovations–social networks that propagate lies and empower hate, devices that disconnect us from the real world, AI that encodes social and economic stereotypes, and technologies that magnify economic advantages. 2017 seemed to signal early indications that we are waking up to the negative side effects of the last 20 years of rabid innovation. 2018 will pose the most difficult question: are we part of the problem, or are we willing to risk our hard-won, new positions to be part of a solution? “I think in 2018 we fight back, like how we will begin to use AI to crackdown on fake news and cyberbullying. For example, as a leading content publisher, Thomson Reuters also uses machine learning and AI to detect and identify fake news. The Reuters News Tracer leverages an algorithm that looks at more than 700 factors to determine whether a trending topic on social media is factual or not. Hopefully, it’s just the beginning.” — Mark Rolston, founder, Argodesign Despite its quite revolutionary tone, this prediction is almost hopeful. But which windmills should designers be fighting? The system? Their clients? Let’s be clear — the dystopia some of us feel we live in is not caused by designers. It is the result of system failures and our individual and collective willingness to not overthink inconvenient truths until they become insurmountable challenges. Even if designers “fought back,” we alone would not be able to fix these problems. If we work together with others, we might be able to. But that means putting the feeling of superiority on the back burner for a while.
FastCo Design’s Big Design Trends: The Inane, the Wishful, the Pragmatic
3
snark-bites-9-big-design-trends-1865d9fa1c57
2018-03-27
2018-03-27 04:36:33
https://medium.com/s/story/snark-bites-9-big-design-trends-1865d9fa1c57
false
3,391
Ideas, commentary and criticism on the topics that (should) matter
null
null
null
F COLLECTIVE
emilia@fcollective.co
f-collective
null
epalaveeva
Design
design
Design
186,228
Emilia Palaveeva
CAREER: strategy, technology, innovation; LIFE: traveling, reading; MEDIUM: when I have something to say
bfcc3c66b09c
epalaveeva
125
329
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-17
2018-08-17 06:54:32
2018-08-17
2018-08-17 06:49:21
4
false
en
2018-08-17
2018-08-17 07:02:29
5
1868a80584d9
7.364151
1
0
0
In our continuing series on the basic concepts of Artificial Intelligence, today we take a closer look at ‘expert systems,’ a somewhat (but…
5
What are Expert Systems? In our continuing series on the basic concepts of Artificial Intelligence, today we take a closer look at ‘expert systems,’ a somewhat (but not entirely) obsolete branch of symbolic AI. For a long time, expert systems were the most promising, highest-hyped products of AI research. But both the philosophical attacks by Dreyfus, Winograd and others, as well as a lingering sense of the failure of expert systems to deliver on their promises, contributed to the 1980s disillusionment with AI — what has since been dubbed the “AI winter,” and that ended only with the advent of deep learning in the early 2010s. What is an ‘expert system’? Expert systems are rule-based inference machines for particular domains of knowledge. They are intended to replace “experts” in that domain. Here’s a very simplified example of some rules that an expert system might use to identify microorganisms from some basic observational data: Example rule of a medical diagnosis expert system (MYCIN): IF the stain of the organism is grampositive AND the morphology of the organism is coccus AND the growth conformation of the organism is clumps THEN (probability=0.7) the identity of the organism is staphylococcus. Big expert systems had thousands of such rules. Expert systems were advertised for use in all areas where the knowledge of a domain could be encoded with large numbers of relatively simple and clear rules. Such areas included diagnostic medicine, biology, engineering (diagnosis of car or computer faults), the processing of credit applications in banks, and other similar domains. Expert systems replace experts Let’s look again at the definition above: “Expert systems are rule-based inference machines for particular domains of knowledge. They are intended to replace experts in that domain.” Expert systems are rule-based. So they are examples of symbolic AI systems. The world they know about is represented as a collection of explicit rules that contain references to the objects the system knows about (microorganisms, stains, morphologies, conformations; or: computers, screens, keyboards, error messages, beeping sounds). Expert systems are inference machines. Their main purpose is to perform logical inferences, that is, to deduce a conclusion from a given set of premises. Given that the computer is turned on, electricity is present, the screen lights up, but the boot process stops with error message number 42, the system can deduce what is wrong and inform the user about the fault and how to fix it (or whom to contact). This is the biggest difference between ‘classic era’ expert systems and modern deep learning applications, and also the biggest drawback of an expert system: a deep learning system can deal with ‘fuzzy’ or ambiguous inputs, and it can learn to draw conclusions from noisy or incomplete data. A classic expert system, on the other hand, needs a complete set of explicit rules, and it can only process its input data insofar as it can deductively draw conclusions from those data. For particular domains of knowledge. All expert systems of the classic era of symbolic AI applied only to very narrow domains. This was necessary since the whole of the domain had to be encoded into distinct, clear, complete and contradiction-free formal rule systems. Clearly, this could only be achieved for relatively narrow domains. They are intended to replace experts. 
This answers the question “why bother?” — Human experts are a precious and limited resource. They take half a human lifetime and immense cost to produce: think of the years of learning and practice that are required to make a heart surgeon, a cancer specialist, a medieval art expert, an experienced rescue pilot, a foreign relations expert, or a specialist in planetary geology. Experts are in very short supply in most places around the world, particularly where they would be needed most: vast parts of the globe cannot afford specialised doctors, fake artworks can be sold as real because specialists are not available to analyse them, and experts in planetary geology are too fragile to strap onto a rocket and send crashing into interesting celestial bodies. In all these cases, we’d like very much to have a machine that can do the job. As opposed to humans, machines can be easily replicated: one needs to create only a single expert system, and can later copy it again and again to create thousands of identically performing systems, something that is very clearly not the case with humans. Every single doctor has to begin training as a baby, spend years and years to learn how to eat, how to speak, how to wash his hands, how to read and write; at which point he’s just barely able to actually start learning useful stuff: chemistry, physics, biology, anatomy, physiology, histology, radiology and so on and on, for years and years, until finally he’s a rookie doctor, inexperienced, awkward and afraid. Now follow hundreds of badly treated patients, unnecessary mistreatments and the occasional avoidable death, experiments that go wrong again and again, errors of judgement, gaps in one’s knowledge, until, many years later, the doctor has finally reached competence in his discipline. And only now begins the long path to real expertise, which will take another ten years or so, years of stress, doubt, uncertainty, long nights in the library, in the lab, at the patient’s bedside, months and years of staring at X-ray images and lab results, of slowly learning to see connections, to see patterns, to feel confidence and certainty, to recognise the special case where the lab fails, where the X-ray is misleading and ambiguous, and to know the truth of each case. The prospect of replacing all that with a set of rules was a great, utopian promise. No wonder it drove the interest in AI almost single-handedly for decades. Basic structure of an expert system Expert systems generally share a similar basic architecture: At the core of it is a database of the domain knowledge that the expert system contains. This is called the knowledge base. Often, this knowledge takes the form of ‘if/then’ rules (as in the MYCIN example above). The ‘if/then’ format allows for a trivial inference logic, where the system tries to match the ‘if’ clauses with the facts provided by the user, emitting the statements after the ‘then’ clauses as the expert’s diagnosis of the problem. This logical inference is performed by a component called the inference engine. The inference engine must not only match ‘if’ clauses with the user-entered facts, but it must also have some way to deal with contradictory or unclear facts in the knowledge base. For instance, a person might be a vegetarian, but in an emergency situation they might eat meat to survive. The inference engine will also generally implement some sort of logic calculus, so that it can process logical relations between facts. 
Most likely this will be a variant of propositional or predicate logic, but probabilistic inference (“fuzzy logic”) is also a common approach, because the “facts” in our everyday experience are generally not “true” or “false” in a binary way, but “likely true” or “likely false.” A small child living with two grown-ups of different gender as a family is likely to be their child, but this is not certain. The child can be adopted, or the adults can be its grandparents, and so on. If the inference engine does not consider the more unlikely options at all, it will sometimes make errors that a human expert would avoid. So it’s a good idea to attach probabilities to facts, in order to get a better model of the world. — The problem with probabilities in the knowledge base is, obviously, that (a) many probabilities of everyday ‘facts’ are not known. How likely is it really that a child living with two adults as a family is their child? Sure, one might perhaps find some statistics on that, but finding the correct values for such facts can be difficult and will certainly drive up the price for the development of the expert system. (b) It seems that our everyday thinking does not really work using probabilities. Human common sense is bad at probability calculations, as whole books on probability fallacies demonstrate. If expert systems want to mimic how experts really think, they must also model human likelihood estimation, even when that diverges from the formally correct probability calculation model. The code that controls the dialog between the user and the system is called the user interface. This is of little theoretical interest, but of huge practical importance. A bad user interface can render the best expert system unusable. A brilliant user interface can cover up many deficiencies of the underlying system and provide value to the user even if the actual expert system is less sophisticated. User interfaces can be menu-driven, or based on a command line. They can accept problem descriptions in natural language or even in human speech. They might even use pictures or other graphical elements as part of the user interaction. Expert system individual roles A big expert system, having all these different components, cannot usually be created by a single person. Development of expert systems therefore is divided among several people with different roles: The domain expert is the person who is going to be replaced by the system. It is his knowledge that the system aims to model and reproduce. The knowledge engineer has the task of extracting the domain expert’s knowledge and encoding it in some machine-readable form, so that the expert system can access it. The knowledge engineer is, therefore, the human interface between the expert (who does not need to understand IT) and the programmers (who don’t need to understand anything about the expert’s domain). The system engineers are the programmers who are actually building the software. They program the user interface, the inference engine and all other parts of the system, using the knowledge base that the knowledge engineer has provided. Finally, the user is the one who will be using the expert system in the field, accessing it through its user interface in order to get expert advice. 
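To make the knowledge base / inference engine split concrete, here is a minimal sketch in Python of a MYCIN-style forward-chaining rule engine with certainty factors. The rule and its 0.7 certainty mirror the example quoted earlier; the function names and data layout are illustrative assumptions, not the historical MYCIN implementation.

```python
# Minimal MYCIN-style forward-chaining sketch (illustrative, not the real MYCIN).
# Facts are attribute -> value pairs; each rule pairs premises with a conclusion
# and a certainty factor (CF).

RULES = [
    {
        "if": [("stain", "grampositive"),
               ("morphology", "coccus"),
               ("conformation", "clumps")],
        "then": ("identity", "staphylococcus"),
        "cf": 0.7,  # certainty factor from the quoted rule
    },
]

def infer(facts, rules):
    """Forward-chain: fire every rule whose premises all match the known facts."""
    conclusions = {}
    changed = True
    while changed:                      # repeat until no rule adds anything new
        changed = False
        for rule in rules:
            if all(facts.get(attr) == val for attr, val in rule["if"]):
                attr, val = rule["then"]
                if facts.get(attr) != val:
                    facts[attr] = val   # the conclusion becomes a new fact
                    conclusions[(attr, val)] = rule["cf"]
                    changed = True
    return conclusions

observations = {"stain": "grampositive",
                "morphology": "coccus",
                "conformation": "clumps"}
print(infer(observations, RULES))
# {('identity', 'staphylococcus'): 0.7}
```

A real system would add backward chaining, a calculus for combining certainty factors from multiple rules, and an explanation facility, but the division of labor is the same: the rules are data (the knowledge base), while infer() is the generic inference engine.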
Conclusion The promise of replacing costly and precious human experts with mass-produced and cloned machines was a very strong incentive for the industry (and the governments that supported it) to spend years and billions on the development of expert systems. The dream never quite came true. Why it failed, and why it perhaps never had a chance, we will see in the next post in this series. Originally published at moral-robots.com on August 17, 2018.
What are Expert Systems?
1
what-are-expert-systems-1868a80584d9
2018-08-17
2018-08-17 07:02:29
https://medium.com/s/story/what-are-expert-systems-1868a80584d9
false
1,766
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Moral Robots
Making sense of robot ethics. Read more at moral-robots.com
4553d71f4756
MoralRobots
760
179
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-24
2018-01-24 23:25:57
2018-01-24
2018-01-24 23:27:13
2
false
en
2018-01-24
2018-01-24 23:28:11
1
18691db3dce1
1.070126
1
0
0
CDEV caught the attention of HOLLY — Trade Ideas’ AI Engine — this week as the stock attempts to break above all-time highs set in early…
5
Centennial Resource Development (CDEV) Is Latest Addition To HOLLY Hot List. CDEV caught the attention of HOLLY — Trade Ideas’ AI Engine — this week as the stock attempts to break above all-time highs set in early November. HOLLY has a good track record with this stock, first alerting subscribers to an entry on October 24th when the stock was trading just below $19. Now north of $21, it appears catalysts are once again arriving in the stock that may drive it past those highs set in early November after the initial drive. The model HOLLY Alpha Long Swing Trade portfolio will be adding this stock to its portfolio at tomorrow’s opening print (January 25, 2018), and we’ll keep this stock in the portfolio until such time that it retraces a full 20% from the high print we see during our hold time. Model Portfolio Update, prices as of close of trading on 1/24/18: To employ HOLLY in your service to capture Alpha in your swing trading or day trading portfolio, please visit Trade Ideas to learn more. - Sean McLaughlin (@chicagosean)
Centennial Resource Development (CDEV) Is Latest Addition To HOLLY Hot List.
10
centennial-resource-development-cdev-is-latest-addition-to-holly-hot-list-18691db3dce1
2018-01-26
2018-01-26 08:11:44
https://medium.com/s/story/centennial-resource-development-cdev-is-latest-addition-to-holly-hot-list-18691db3dce1
false
182
null
null
null
null
null
null
null
null
null
Stock Market
stock-market
Stock Market
16,290
Sean McLaughlin
Independent Stocks & Options Trader. Market Strategist @ Trade Ideas. Chief Options Strategist @ All Star Charts. chicagoseantrades.com
be8c810c6501
chicagosean
1,615
138
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-27
2017-09-27 11:47:17
2017-09-27
2017-09-27 13:03:08
0
false
en
2017-09-27
2017-09-27 13:03:08
0
18694b1e5fa8
1.792453
0
0
0
In my most recent project, I used two APIs from IBM Watson to help the user keep track of their moods and determine their personality…
1
Machine Learning Styles and Algorithms In my most recent project, I used two APIs from IBM Watson to help the user keep track of their moods and determine their personality profile from diary entries they write. One reason I was interested in a project that used Watson was the opportunity to work with machine learning algorithms. Machine learning is a type of AI that allows a program to improve on its own by learning from input data instead of being programmed directly. Machine learning is a very popular and fast-growing field in computer science, particularly when it comes to improving user experience. The many algorithms used in machine learning can generally be categorized into four different learning styles: supervised, unsupervised, semi-supervised, and reinforcement learning. It should be noted, however, that some algorithms can be used in more than one learning style. Supervised Learning In supervised learning, the data that gets put through an algorithm is labeled and already has a predetermined result. As the algorithm tries to figure out this result, it gets corrected when wrong until a pattern for finding the result is determined. This pattern can then be used to find results for unlabeled data. Some algorithms used in supervised learning are decision trees, which model data in a tree structure, and linear regression, which models data in a linear fashion. Practical applications of supervised learning include classifying data and making predictions. Unsupervised Learning In unsupervised learning, the data being put through the algorithm has no labels and no predetermined result. It’s the algorithm’s job to determine patterns in the data. Common algorithms used in unsupervised learning are clustering algorithms, which group data, and association rule mining algorithms, which find relationships in data. Practical applications of unsupervised learning include structure discovery and targeted marketing. Semi-Supervised Learning Semi-supervised learning is a combination of supervised and unsupervised learning. Both labeled and unlabeled data are put through an algorithm, and patterns are found with assistance from the labeled data. A common practical application of semi-supervised learning is classifying very large amounts of data. It uses algorithms that are also used for supervised and unsupervised learning. Reinforcement Learning In reinforcement learning, an algorithm iterates over its environment and learns and improves over time based on its experiences with the environment. It detects all possible states of an environment and determines the best results for each state based on some sort of feedback from the environment, such as user input in an application. Some algorithms used in reinforcement learning are temporal difference algorithms, which use differences in results over time to make predictions about future values, and Q-learning, which learns the value of actions for a given problem. Reinforcement learning is used in real-time decisions, such as in AI for games or skill acquisition.
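As a minimal illustration of the supervised/unsupervised distinction described above, here is a sketch using scikit-learn; the toy data is an assumption for demonstration, not anything from the Watson project.

```python
# Supervised vs. unsupervised learning in miniature; toy data only.
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X = [[0, 0], [1, 1], [0, 1], [1, 0]]

# Supervised: each example carries a label (a predetermined result),
# and the algorithm learns a pattern that maps features to labels.
y = [0, 1, 0, 1]
clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[0.9, 0.1]]))  # apply the learned pattern to new data

# Unsupervised: no labels; the algorithm must find structure on its own.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)  # discovered cluster assignment for each example
```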
Machine Learning Styles and Algorithms
0
machine-learning-styles-and-algorithms-18694b1e5fa8
2018-02-18
2018-02-18 17:23:16
https://medium.com/s/story/machine-learning-styles-and-algorithms-18694b1e5fa8
false
475
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Violet Suber
null
9c8f103e07e9
violet.suber
45
1
20,181,104
null
null
null
null
null
null
0
null
0
63505f63fad2
2017-09-19
2017-09-19 12:52:36
2017-09-19
2017-09-19 12:57:32
2
false
en
2017-09-19
2017-09-19 12:59:08
21
18697fd6eb99
4.953145
3
0
0
Set to open its first phase in 2017, the state-of-the-art Cornell Tech campus is a fusion of academia and business for a program already…
5
Roosevelt Island: New York City’s Next Startup Hub? Image: Kilograph Set to open its first phase in 2017, the state-of-the-art Cornell Tech campus is a fusion of academia and business for a program already churning out Silicon Alley tech startups. By Rob Marvin New York City’s push to become the tech and startup hub of the east coast is no secret. The city and state governments, the education sector, major tech and finance companies, and the startup world have all bought into building and sustaining a pipeline of ambitious young entrepreneurs and innovative technologists. Whether you call it “Silicon Alley” or not, the idea is to educate and train new generations of tech and business professionals, give them internships, encourage them to build products, develop business plans, enter the tech workforce, or start companies based in NYC that feed back into the city’s economy. Cornell Tech is a cornerstone of that strategy. Back in late 2011, a bid from Cornell University in partnership with the Technion–Israel Institute of Technology won then-mayor Michael Bloomberg’s $100 million competition to build a state-of-the-art applied sciences and engineering graduate school on Roosevelt Island. Situated on the south end of the narrow 148-acre island on NYC’s East River between Manhattan and Queens, phase one of the campus is nearly complete. The first three buildings are set to open in summer 2017, including “The Bridge at Cornell Tech,” a co-working office building set to house a combination of NYC-based business and startup offices, university researchers, and student entrepreneurs. The school has been up and running since 2013 in a space within Google’s NYC offices in Chelsea, offering Master’s Programs across computer science, IT, and business. Since 2014, Cornell Tech alumni have launched 29 startups and raised $12.8 million in pre-seed and seed funding, with 93 percent of those startups headquartered in NYC. Cornell Tech’s leadership gathered for a briefing at the Chelsea campus this week to give an update on the Roosevelt Island campus, detail initiatives such as the Cornell Tech Studio, and discuss some of the notable startups that have already emerged from the program and set up shop in Silicon Alley. “Our objective at Cornell Tech is combining three things: the academic depth of top universities, the action orientation of business, and the social good orientation of nonprofits and government agencies,” said Daniel Huttenlocher, Dean and Vice Provost of Cornell Tech. “When our new campus opens on Roosevelt Island, it will be the first campus built from the ground up for the digital age.” A Look at The Bridge The full 12-acre Cornell Tech campus won’t be fully completed for decades (2043 is the current estimated date) but The Bridge will immediately give the campus an outside business presence. Huttenlocher and Andrew Winters, Senior Director of Capital Projects at Cornell Tech, gave some details on what they described as a “hub for academic and business collaboration.” The 230,000-square-foot co-working space is independently owned by NYC property developer Forest City Ratner Companies, but 39 percent of the building has been pre-leased back to Cornell. “We’re trying to bring academia closer to the real world,” said Huttenlocher. “We’re partnering pretty closely with Forest City Ratner and leasing a third of the building for academic purposes, but the next largest tenant will be a co-working operator. And then we might have R&D and innovation center-type spaces from larger companies, which could be in any industry.” Image: Cornell Tech Forest City Ratner is in charge of handpicking tenants, but Huttenlocher said he wouldn’t be surprised if the co-working operator turned out to be a startup such as WeWork. Winters added that the building design hews closer to startup offices with high ceilings, big windows, and an open, flexible floor plan that could house one company or 15 companies. Winters didn’t have any kind of estimate on how many startups and entrepreneurs might ultimately inhabit the space, but he did talk about Cornell Tech’s broader goal. The university sees the campus as a springboard to expand Silicon Alley from lower Manhattan and downtown Brooklyn into Long Island City and western Queens. “Tech companies incubated by Cornell Tech will need affordable space for offices, exhibit areas, and light manufacturing,” said Winters. “We see western Queens — with its abundance of space, relatively low rents, and excellent access to Manhattan — as an ideal landing spot for startups that already have an increasingly vibrant life/work culture.” Inside Cornell’s Startup Pipeline Cornell Tech has already seen alumni launch and gain backing for an increasing number of new tech startups. Greg Pass, Cornell Tech’s Chief Entrepreneurial Officer and the former CTO and VP of Engineering at Twitter, and Ron Brachman, former DARPA researcher and current Director of the Jacobs Technion-Cornell Institute, discussed the Cornell Tech Studio programs and some of the exciting startups to come out of the school thus far. Within Cornell Tech Studio, programs including Startup Studio and Runway Startups put teams of grad students to work developing and pitching tech products. Companies including Google, JetBlue, Medium, The New York Times, and WebMD have also come in with specific business challenges that the teams must solve by developing a product or service. The Runway Startup program combines an academic postdoctoral program with an incubator to develop and help fund startup ideas with deeper technical execution. “The value of the Runway program is the combination of deep PhD-level research with the idea of a product in a specific market, so we can get products out the door that are substantial in depth of technology in a way that’s not necessarily typical of many startups you see,” said Brachman. Out of the 29 alumni-founded startups that have launched out of Cornell Tech in the past two years, Pass and Brachman pointed out a few particularly notable ones: Gitlinks: What Pass described as a “FICO for open-source software.” Gitlinks runs monthly audits of all the open-source software businesses use to track how well it’s maintained, track bugs and fixes, and to provide transparency around compliance and licenses. Uru: A native advertising startup that uses machine learning (ML) algorithms to inject ads and relevant branding onto an existing surface within a video that doesn’t get in the way of the content. Nanit: An intelligent baby monitor that uses computer vision and ML to autonomously analyze a baby’s behavior and sleep patterns to generate intelligent data and reports for parents. The monitor aims to deliver insights such as whether parents going into the child’s room at night has a positive or negative effect on behavior. Shade: A small, wearable device for sun-sensitive users, such as lupus patients, that registers cumulative exposure to ultraviolet and infrared radiation and notifies users when they’re in danger. Aatonomy: A “plug-and-play brain” for robotics that can connect wirelessly with any robot and perform functions such as voice commands and object recognition. It works on everything from a Roomba to a drone. Check out Cornell Tech’s live stream to see the Roosevelt Island campus under construction in real time. Read more: “How a Small City in Finland Turned Into a 5G Pioneer” Originally published at www.pcmag.com.
Roosevelt Island: New York City’s Next Startup Hub?
4
roosevelt-island-new-york-citys-next-startup-hub-18697fd6eb99
2018-02-01
2018-02-01 06:04:12
https://medium.com/s/story/roosevelt-island-new-york-citys-next-startup-hub-18697fd6eb99
false
1,211
PC Magazine: redefining technology news and reviews since 1982.
null
pcmag
null
PC Magazine
pcm_medium@ziffdavis.com
pcmag-access
TECHNOLOGY,COMPUTING,INTERNET,MOBILE,FUTURE TECHNOLOGY
pcmag
Entrepreneurship
entrepreneurship
Entrepreneurship
226,400
PCMag
null
b3ca0a39a185
pcmagazine
33,881
281
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-04
2017-12-04 12:44:23
2017-12-04
2017-12-04 12:51:57
1
false
en
2017-12-04
2017-12-04 12:51:57
3
186a044b4890
0.698113
4
0
0
To properly align the needs of your business with a strong CRM system, a best practice is to strategically leverage what is known as the…
5
Big Data, Analytics and Metrics to Make Better Decisions Big Data Analytics To properly align the needs of your business with a strong CRM system, a best practice is to strategically leverage what is known as the ‘SMART’ approach, a methodology that big data expert Bernard Marr explains in his recent book, Big Data: Using SMART Big Data, Analytics and Metrics to Make Better Decisions and Improve Performance. The SMART approach enables you to focus on high-level objectives that dictate what you want to achieve through your CRM initiative; at the same time, it takes 360-degree data points into account, which complements and supports the overall strategy. Read More

Read also:
- Big Data to Enable Artificial Intelligence and Drive Digital Transformation
- Data becoming a Priority over Voice in Telecommunications
Big Data, Analytics and Metrics to Make Better Decisions
26
big-data-analytics-and-metrics-to-make-better-decisions-186a044b4890
2018-06-02
2018-06-02 00:00:22
https://medium.com/s/story/big-data-analytics-and-metrics-to-make-better-decisions-186a044b4890
false
132
null
null
null
null
null
null
null
null
null
Big Data
big-data
Big Data
24,602
Tech Hunt
#Technology #Freak - BigData, Digital Transformation, Cyber Security, Cloud Computing, Internet of Things
33f515b691ce
techhunt2195
388
740
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-05
2018-02-05 07:38:55
2018-02-05
2018-02-05 07:50:34
3
false
en
2018-02-05
2018-02-05 07:50:34
9
186a3e14bbcb
3.765094
3
0
0
Dirty data is the most common problem for scientists, engineers, and researchers who work with data.
3
The Leading Causes of Dirty Data The world is producing data at exponential rates. By 2025, the global datasphere will include 10 times the amount of data generated last year alone, according to IDC’s Data Age 2025. All of this data generation compounds an already common problem: “dirty data.” As we discussed in part one of this series, dirty data is data that is invalid or unusable. It’s also the most common problem for scientists, engineers, and researchers who work with data, according to a Kaggle survey. This is one reason data scientists complain about “data wrangling” — cleaning and preparing data for use in systems that power the business — something they’re likely to say consumes more than half their time. Businesses across every industry are pondering how they can use data to better serve customers, create products, and disrupt industries. Yet fewer than half (44%) trust their data to make important business decisions, according to Experian’s 2017 Global Data Management Benchmark Report. C-level executives are the biggest skeptics, believing 33% of their organizations’ data is inaccurate. The first step to cleaning up dirty data is to understand how it got dirty in the first place. Let’s take a look at some of the leading causes of dirty data. Human Error The biggest challenge in maintaining data accuracy is human error, according to the Experian report. Dirty data caused by human error can take multiple forms:

Incorrect — The value entered does not comply with the field’s valid values. For example, the value entered for month must be a number from 1 to 12. This can be enforced with lookup tables or edit checks.

Inaccurate — The value entered is not accurate. Sometimes the system can evaluate a data value for accuracy based on context; for most systems, accuracy validation requires a manual process.

In violation of business rules — The value is not valid or allowed based on the business rules (e.g., an effective date must always come before an expiration date).

Inconsistent — The value in one field is inconsistent with the value in a field that should hold the same data. Particularly common with customer data, one source of data inconsistencies is manual or unchecked data redundancy.

Incomplete — The data has missing values; no data value is stored in a field. For example, the street address is missing in a customer record.

Duplicate — The data appears more than once in a system of record. Common causes include repeat submissions, improper data joining or blending, and user error.

When joining data, you can address some quality issues up front. A developer can use scripts and coding tools to merge the data for consistency and accuracy for two or more relatively small data sources. You still may find you need to remove duplicates, adjust case and date/time formats, and regionalize spelling (e.g., British English vs. American English); a minimal code sketch of these steps appears at the end of this article. Quality Issues in IoT Systems that use the Internet of Things (IoT) connect devices and sensors to software that can interpret the data and make it visible to decision makers for business use. Data is the lifeblood of those systems, and as we discussed in article one, dirty data can be costly. James Branigan, IoT software platform developer and founder of Bright Wolf, works with businesses to start, save, or reboot IoT initiatives.
Based on his experiences, he explains in detail how to plan for high-quality data in a series of articles, “Four Critical Design Factors for IoT Project Success.” According to Branigan, IoT failures are most often caused by issues in one or more of these areas:

Trust is the foundation of an IoT system. It means that you know you are talking to the right device and the device knows it is talking to the correct end system.

Identity refers to the association of incoming data with the correct time series history and addressing messages to the correct device.

Time is an accurate date and time stamp for each event and data point. In IoT, this can be a challenge for devices operating across time zones and where users can make manual adjustments to clocks, for example.

Chain of custody refers to understanding the complete history of each data point, including details about the devices and software that processed the data.

The Bottom Line For decades, the information technology world has used the term “garbage in, garbage out.” It means that no matter how accurate a system’s logic is, its results will be incorrect if the input it receives is not valid. Perhaps never before has the phrase meant so much as it does today, when identifying patterns in reliable data can help business leaders transform entire services, products, and industries. To achieve those outcomes, data must be clean and structured. The next and final article in this series will explore some of the steps you can take to tackle your organization’s dirty-data problem. This article originally appeared in IoT for All on December 28, 2017. It is the second in a series of three articles about dirty data. Originally published at blog.cloudfactory.com.
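As a minimal, hedged illustration of the cleanup steps mentioned earlier in this article (deduplication, case and date/time normalization, regionalized spelling), here is a sketch assuming the pandas library; the column names and values are invented for demonstration.

```python
# Clean a small toy table: normalize case, unify dates, regionalize
# spelling, drop incomplete rows, and remove duplicates.
import pandas as pd

df = pd.DataFrame({
    "email": ["a@x.com", "A@X.COM", "b@y.com", None],
    "signup": ["2018-01-03", "2018-01-03", "2018-02-10", "2018-02-11"],
    "colour": ["grey", "gray", "grey", "gray"],
})

df["email"] = df["email"].str.lower()                  # normalize case
df["signup"] = pd.to_datetime(df["signup"])            # unify date/time format
df["colour"] = df["colour"].replace({"gray": "grey"})  # regionalize spelling
df = df.dropna(subset=["email"])                       # drop incomplete rows
df = df.drop_duplicates(subset=["email"])              # remove duplicates
print(df)
```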
The Leading Causes of Dirty Data
27
leading-causes-dirty-data-186a3e14bbcb
2018-02-09
2018-02-09 03:47:15
https://medium.com/s/story/leading-causes-dirty-data-186a3e14bbcb
false
852
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
CloudFactory
CloudFactory provides an on-demand, digital workforce for scaling your business in the cloud. https://www.cloudfactory.com/
8aebef586dc0
thecloudfactory
769
325
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-22
2017-11-22 06:29:54
2017-11-22
2017-11-22 06:44:03
0
false
en
2017-11-22
2017-11-22 06:44:03
3
186ac2444024
0.607547
0
0
0
darkflow allows people to use Darknet without dealing with C, which means we don’t need to run make. I haven’t tried Darknet, so maybe make…
3
How to Set up Darkflow thtrieu/darkflow darkflow - Translate darknet to tensorflow. Load trained weights, retrain/fine-tune using tensorflow, export constant…github.com Darknet: Open Source Neural Networks in C Darknet: Open Source Neural Networks in C.pjreddie.com YOLO: Real-Time Object Detection You only look once (YOLO) is a state-of-the-art, real-time object detection system.pjreddie.com darkflow allows people to use Darknet without dealing with C, which means we don’t need to run make. I haven’t tried Darknet, so maybe building Darknet isn’t hard, but make can be pretty hard even when the author has written a nice document. So, if I can avoid make, I generally escape from it. This time, I tried darkflow: Translate darknet to tensorflow. Here is what I did for the setup. If you install it properly, you will see something like this. darkflow demo Images Webcam What I want to do with darkflow Right now, I’m trying to build a kind of surveillance camera for keeping the ITP kitchen clean.
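For reference, once installed, darkflow is typically driven from Python roughly like this (a minimal sketch based on darkflow’s documented TFNet API; the cfg and weights paths are placeholders for files you download separately):

```python
# Run YOLO object detection through darkflow's Python API.
# The model/weights paths below are placeholders, not bundled files.
import cv2
from darkflow.net.build import TFNet

options = {
    "model": "cfg/yolo.cfg",     # network definition
    "load": "bin/yolo.weights",  # pre-trained Darknet weights
    "threshold": 0.1,            # minimum confidence for reported boxes
}
tfnet = TFNet(options)

imgcv = cv2.imread("./sample_img/sample_dog.jpg")
print(tfnet.return_predict(imgcv))  # labels, confidences, and box corners
```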
How to Set up Darkflow
0
how-to-set-up-darkflow-186ac2444024
2018-04-24
2018-04-24 04:45:46
https://medium.com/s/story/how-to-set-up-darkflow-186ac2444024
false
161
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Koji Kanao
ITP resident Wandering between tech and art #CreativeCoding #Art #PhysicalComputing #Startup #MachineLearning #DeepLearning #python #Nodejs
9004d20e450b
sleepy_maker
10
18
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-30
2017-12-30 15:32:04
2018-01-02
2018-01-02 13:31:59
1
false
en
2018-01-04
2018-01-04 14:12:45
24
186b604a0b17
6.701887
4
0
0
The Forrest Four-Cast: January 2, 2018
5
March Magic Memories: Amir Husain The Forrest Four-Cast: January 2, 2018 As Founder and CEO of SparkCognition, Amir Husain leads a company that he has positioned to be at the forefront of the “AI 3.0” revolution. An undisputed tech leader in Austin and in the industry at large, he built multiple venture-funded startups between 1999 and 2009, at which point he took over as President and CEO of VDIworks. For his many inventions, Husain has been awarded 22 US patents and has over 40 pending patent applications. His first book, “The Sentient Machine: The Coming Age of Artificial Intelligence,” was published in November. At SXSW 2018, Husain will speak in a session on Friday, March 9, titled “The Power of Vertical AI in a Monolithic AI World.” In 20 words or less, what is the main focus of your current job? As Founder & CEO, to grow SparkCognition and advance our goal to realize the full potential of artificial intelligence in all spheres of life. When you were younger, what did you want to be when you grew up? A computer scientist. Since the age of four, I’ve been enamored with computers and software. In my teen years I became seriously interested in artificial intelligence. I’ve been fascinated with the idea that the core concepts underlying computation provide a framework to understand a lot of what goes on in the universe and in our daily lives. The philosophy, science and practical applications of computer science continue to intrigue me no end. How do those career aspirations from your younger self connect to what you are doing now? I was very lucky in that I found a passion at a very young age, and stuck to it. It turned out to not be a mere passing interest, but a core driver of my life’s work. With my own children, I’ve encouraged exploration across disciplines because — based on my own experience — I feel the biggest gift a parent can give a child is to help them find their passion; their true calling. If you can do that, the incredibly curious and tenacious mind of a child will do the rest. What are you most passionate about at present? Artificial intelligence in general, but specifically, cracking the code on “intrinsic motivation”: the idea that human beings not only have general purpose intelligence, but also the innate desire and curiosity that directs this pliable intelligence in myriad different directions. How do we get machines to do that? Actors, athletes, musicians, entrepreneurs, scientists, whoever — you can invite any three living people from anywhere in the world to dinner. Which three people do you invite? Bill Gates, Xi Jinping and my sister, the physicist and string theorist Tasneem Zehra Husain. I’m assuming my wife, Zaib, and my three boys would be hosting this dinner with me. A big attraction for me would be for the boys to listen in. What is the last great book you read and why did you like it so much? I recently finished “Fallen Leaves: Last Words on Life, Love, War, and God” by the historian Will Durant, whose books I was introduced to as a child by my late father. This was Durant’s last work, printed years after his passing. I have been giving copies to my loved ones and many friends ever since I finished it. It is my kind of world view and my kind of philosophy… an absolutely brilliant work by one of the greatest Americans of all time. SparkCognition recently hosted a two-day conference on artificial intelligence called “Time Machine 2017.” Who was your favorite speaker at the event and why?
The caliber of speakers was exceptional and in truth, all the talks were fantastic. I thought Gen. Allen and Boeing CTO Greg Hyslop were particularly phenomenal. Gen. Allen touched upon a subject of great personal interest to me, which is the impact of exponential advancements in AI at a time when the geo-political balance is shifting so radically. I thought the points Dr. Hyslop made about truly large scale aerial autonomy being one of our great unconquered frontiers were very poignant. Was this conference-organizing experience a good one? Will there be a “Time Machine 2018”? It was spectacular and impactful beyond our expectations. There will most certainly be a Time Machine 2018! We’re planning it already. In addition to SparkCognition, there are quite a few other startups / companies doing impressive AI-related work in Austin. How does the AI scene here compare to other cities? What is the city’s competitive advantage in context to other locations? I’m very bullish on Austin’s role in driving the AI ecosystem. I think UT Austin is our massive advantage, and I’m sure our Chief Science Officer and former chairman of UTCS, Prof. Bruce Porter, will agree. According to the latest US News rankings, our CS Dept. was #1 in the country and #2 in the world. When you factor in the sheer volume of students UT Austin graduates, the quality/quantity combo is hard to beat! Back in 2015 at my SXSW session I said we were very excited about building Austin up to be a global center for AI. So far, SparkCognition alone has brought in more than $100M in direct outside investment and reinvested revenue to the Austin ecosystem. At our company alone, we’ve got close to 50 Ph.Ds. focused on AI applications and research. Then there are other companies that are doing great work too. The fact is, Austin is already a TOP global AI destination. What does the city need more of to continue to grow as a center of AI innovation? Two important things, in my view. First, the city needs to maintain its character as much as possible. We have to remember that what makes Austin attractive to a lot of companies and top tier researchers is the quality of life you get here. I think urban planners have a challenge ahead of them, which is to maintain the city’s character while enhancing infrastructure that can support the significant growth we’ve seen and will likely continue to see. Second, we need to enrich UT Austin with support, grants, investments, industry partnerships… whatever we can do. I’m on the Advisory Board of UT’s Computer Science Department and I feel making UT stronger is one of the smartest things we can do to drive innovation and research in Austin. Various tech media have reported on how much focus China is putting on artificial intelligence. Do you agree with this perception — and does it concern you if the US is not the world leader in this industry? At a recent Center for a New American Security conference in DC where Alphabet Chairman Eric Schmidt and I were both speaking, and several top DoD officials and Generals were present, I asked Eric how soon China would catch up to the US in AI. He said they would catch up in five years in his view. If present trends hold, I concur with Eric’s assessment. This may be surprising to many since we’ve got a lot of advantages relative to our strong network of universities… and let’s not forget, the Dartmouth conference that saw the birth of AI as a field of research happened right here in the US. 
All that said, China is going to be the strongest competitor the US has ever faced. Their determination to lead the world in AI within the next 12 years, as articulated in their 2030 AI plan, cannot be taken lightly. This is not entirely a zero-sum story by any means, but let’s be realistic. AI can enable a strategic and tactical edge… Your new book is titled “The Sentient Machine: The Coming Age of Artificial Intelligence.” How long did it take you to write this book? What was your writing process like? It took me close to three years to write the book. I wanted to get it right and express ideas that would remain relevant for a long time. I think I got pretty close. It was hard to do this concurrently with running and growing SparkCognition, and I don’t think I could have managed were the book on any subject other than AI. The fact that I’m obsessive by nature and think about AI and the future world we’re building round the clock certainly helped! There were so many ideas I’d been churning in my head for decades that started to simply flow out of the tips of my fingers and onto the screen. In the future, will robots be better authors than humans? Photographs are a more accurate representation of reality, but there is still room for the “inaccuracies” that materialize on canvas in the form of a human artist’s creative license. I think AI systems will be better at almost everything eventually, but for many areas, “better” is a subjective concept. The real question is, will there be room for humans? And I think the answer to that is, yes, there will be. What do you think SXSW will be like in five years? I hope it will continue to get more interactive and more demonstrative. That is one of the strongest things about SXSW and I hope to see much more of it over time. I also hope it becomes more global, with a greater percentage of speakers and attendees coming in from all parts of the world. Exponential technologies are so fast-paced (by definition!) that it is hard to imagine what topics SXSW 2023 will feature, but I think we’ll see some really exciting applications of AI bringing autonomy to areas we didn’t think possible. We should also be seeing a tremendous explosion of robots of all types in 5–10 years, so I think that will be an exciting area too. Other installments of the March Magic series include interviews with Tim O’Reilly, Guy Kawasaki, Robyn Metcalfe, Stephanie Agresta, Andrew Hyde, Brad King, Gary Shapiro, Chris Messina, Yuval Yarden, Jenny 8. Lee, Aziz Gilani and whurley. Hugh Forrest serves as Chief Programming Officer at SXSW, the world’s most unique gathering of creative professionals. He also tries to write at least four paragraphs per day on Medium. These posts often cover tech-related trends; other times they focus on books, pop culture, sports and other current events.
March Magic Memories: Amir Husain
6
march-magic-memories-amir-husain-186b604a0b17
2018-05-05
2018-05-05 09:44:14
https://medium.com/s/story/march-magic-memories-amir-husain-186b604a0b17
false
1,723
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Hugh Forrest
Celebrating creativity at SXSW. Also, reading reading reading, the Boston Red Sox, good food, exercise when possible and sleep sleep sleep.
25e4f2ec328a
hugh_w_forrest
6,086
993
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-18
2018-09-18 09:34:06
2018-09-18
2018-09-18 18:48:12
1
false
en
2018-09-18
2018-09-18 18:48:12
1
186db50f5e4c
4.569811
11
2
0
“A certain author said “we will need to plunge even more deeply into our comparison between speech and vision if we wish to have a truly…
5
Caster’s Baby (Part 2 & 3 ) { You and The Machine } “A certain author said “we will need to plunge even more deeply into our comparison between speech and vision if we wish to have a truly comprehensive picture of the situation…… and that it now seems fairly certain that both human speech and vision are implemented within the brain in stack-like fashion.” Art and Artificial Intelligence by G. W. Smith There is a Czech proverb that says, “As many languages you know, as many times you are a human being.” Speaking two different languages makes me feel I am perceived differently. I always want to believe it has to do with my ability to express myself with the vocabularies that exist in them. This is because certain events leave me with no words to describe them in one language. So, I tend to combine two languages to express myself. In that moment of expression, people who are able to understand my description can empathize because of our common multilingual abilities. However, people’s perception of me has little to do with any common language we speak because I am no different. The difficulty to express myself completely in a language sometimes depends on the context, my preferred language and the language that adequately captures the context. How can I present an idea about something I don’t have a word for? Imagine a world where a machine could empathize human feelings. Onyx’s bias was created out of syntax rather than semantics. This being that Onyx has no basis for is bias other than the words it has acquired which he has no understanding of. This implies that true understanding of words you listen to helps create your own bias. Hence, structured placement of words do not necessarily form models for conversations about an idea, rather, it is formed from one’s perception towards the idea, which gives room for another possible word to describe such idea. How is a word understood? What comes to your mind when a word is uttered? A blue car, a house, a child, an office, a school. The first images that comes to mind when each of these words are mentioned or thought of are your understanding of those word. This is where your imagination comes in. That is why it is more difficult for you to understand abstract concept till an analogy of a factual concept is used to illustrate the idea. Very often, your representation of this concept is linked to an image you either saw first or recently. Now you would see that this concept and process is very similar to how your babies actually began to understand the world you brought them into. I propose a scheme that attempts to make machines (Artificial) form their own Artificial Natural Language based on sounds and visuals which a human (Natural) is capable of learning and yet could be replaced by another machine. By ascribing sounds or a group of sounds to an object, scene or action, hence, enhancing its understanding. With Artificial Natural Language, a machine can understand and see things from a different perspective allowing it create its own empathy. This scheme attempts to find semantics and context rather than correct syntactic structures. After all, syntax only gives birth to ambiguity. For example, sometimes children say words and make grammatical errors but we do not fail to find meaning in the idea they are trying to convey. 
The learning agent(Machine) would be sensitized with mics and high definition cameras; with machine learning algorithms tailored for pattern recognition in speech streams, thus identifying high and low pitch, rhythm and loudness of syllables, matching using time stamps and gesture recognition and analysis as a basis for forming opinions. In it’s infant stage, the machine will be in a controlled environment as an observer where a finite known set of entities would be intentionally placed. Activities would be carried out in this controlled environment such that spoken words relate to existing items in the environment. The infant stage is completed when the agent finally has a persistent sound or word it utters for every existing item in that environment. Such that it can carry object recognition without a labeled training data-set. Consequently it would be allowed to interact with such environment making attempts to interact with objects it recognizes. Gradually foreign objects would be introduced into the environment with its actual word being uttered occasionally. Hence allowing us to test for curiosity and speculation in the agent. This would allow us see how quickly it can now associate words or sounds with an object. Uttered sounds will be noted and will be used in an attempt to communicate back with the agent. Eventually items that would never be pronounced would be introduced into this environment. This is done in an attempt to see how the machine would invent a sound for an object it has no previous word or sound for. The next stage (formative-stage) would be a stage where we observe how the machine recognizes actions by being exposed to streams of repeated actions, where known objects exist. This actions would be repeated in real life, with expectations of it able to recognize this actions mostly as a result of consequences of actions to known objects. This phase is an attempt to learn the consequences of certain actions towards items and how it affects their states. After this phase the machine would be exposed to the ambiguity of this world and would have to find meaning from it with the guidance of a patron. The patron would interact with the agent using Motherese. The motherese, in the context of this research, is formed by the agent and learned by the patron by playing sounds generated by the agent’s algorithm which analyzes the patterns it finds in speech. This whole methodology is based on the critical study of how babies learn their first words. Hence an attempt to create a digital approximation of every stage involved in child language acquisition. The various stages are designed so as to cover longitudinal and cross sectional acquisition. In Conclusion, at the end of this research one would expect to have a machine that can communicate its idea, not necessarily by following correct syntax in any given language; hence a machine that can empathize. Empathize in the sense that an inanimate machine excels to a degree at manifesting something like warmth and compassion which is a branch of how these qualities emanate in human minds. This said mind being a biochemical machine. After all, symbols were not the primary means of communication for early men but sounds. Symbols only came in when knowledge had to be annotated in caves hence the birth of semantic annotation. So why should machines be any different if the goal is to model them to act like humans and simulate the human mind? Caster’s Baby (Part 1) {Your Baby}
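As a small, hedged illustration of the pitch and loudness cues mentioned above, here is a sketch assuming the librosa audio library; the file path is a placeholder, and this is not the proposed agent’s actual pipeline.

```python
# Extract per-frame pitch and loudness cues from an utterance.
# "utterance.wav" is a placeholder path.
import librosa

y, sr = librosa.load("utterance.wav", sr=None)

# Fundamental frequency (pitch) per frame via the YIN estimator.
f0 = librosa.yin(y, fmin=librosa.note_to_hz("C2"),
                 fmax=librosa.note_to_hz("C7"), sr=sr)

# Root-mean-square energy per frame as a simple loudness proxy.
rms = librosa.feature.rms(y=y)[0]

print(f"frames: {len(f0)}, mean f0: {f0.mean():.1f} Hz, mean rms: {rms.mean():.4f}")
```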
Caster’s Baby (Part 2 & 3 ) { You and The Machine }
168
casters-baby-part-2-3-you-and-the-machine-186db50f5e4c
2018-09-18
2018-09-18 18:48:12
https://medium.com/s/story/casters-baby-part-2-3-you-and-the-machine-186db50f5e4c
false
1,158
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Opemipo Durodola
aka(Emmanuel Caster)
e02e7325f368
opemipodurodola
18
23
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-18
2018-08-18 04:27:27
2018-08-18
2018-08-18 04:50:13
8
false
en
2018-08-18
2018-08-18 04:50:13
0
186dca9904cc
4.805031
0
0
0
Blockchain technology is proving itself to be indispensable lately, churning out a couple of stunning ideas and projects that see this…
5
DISCLAIMER: This article is by no means an authority on, nor has any direct link with, Datum; it is a publication for a potential reward on BountyOx. Blockchain technology is proving itself to be indispensable lately, churning out a couple of stunning ideas and projects that see this world beyond what it is at the moment. The competition is definitely heating up, with new projects emerging and ICOs being launched at an almost alarming rate. The challenge, therefore, is to find a project that is unique and has potential for success, because it is reported that half the ICOs launched last year have failed and a further half are still to fail, which is cause for concern for potential investors in the blockchain; hence the need to find a project of relevance. Data as it is today It has been announced that the world’s most powerful resource is no longer oil but the data generated by the over 3 billion smartphone users worldwide. It is also estimated that we generate about 2.5 quintillion bytes of data daily; the figure for a week, a month or even a year would be mind-boggling, and 90 percent of this data was created in the last 2 years. Who can imagine what it will be like in the next 10 or 15 years? Well, Datum does. For users of Facebook, Twitter, LinkedIn, Google and the like, it is estimated that you generate about $2,000 from your data alone, so how come you never get a dime from it? That is why the mega corporate entities keep getting bigger and bigger; little wonder the highest-grossing companies, with the owners with the biggest net worth, are companies that deal with data. And these data are collected from individuals who upload pictures, chat, upload files, synchronise systems and so on. What I mean is this: what if you could upload your data, still have total control over it, and monetize it, generating money for yourself from the same data you would otherwise have given out for nothing in return? This is how it works. The stakeholders in this ecosystem are:

The user: the generator or source of the data. This could be a person or a business, and the data is subject to a stringent validation process before it becomes accessible.

Storage nodes: these provide the field for the storage of the generated data in the decentralized system.

Data consumers: these are the people or organizations that want to use the data stored in the ecosystem, and like consumers in any system they are almost the most important part, as they help keep the system functional.

DAT token holders: they are the government of all this; they allocate and issue the tokens that grant access and make the system what it is.

Privacy: A typical illustration: when you upload data on any of these social media platforms, how do you know how many people have actually seen it? With Datum you can make use of its functionality to determine specifically who sees your data. Open data are data left accessible to volunteer organisations and certain government institutions if the need arises. For instance, if Unicef needs data on, say, the number of people that are disabled in a particular region in order to carry out outreach programs for them, it would be inhumane to withhold those kinds of data, but they are still subject to the approval of the source. Data consumers: the team has put a lot of work into KYC procedures to make sure that they have accurate knowledge of the people seeking data, and can therefore determine their eligibility before even linking them up with the data provider. In all, the power is coming back to the users and generators of the data. The token details

Role of token: enable trade of data between data owners and buyers
Symbol: DAT
Supply: 3,000,000,000
For sale: 1,530,000,000
Emission rate: no new tokens will be created
Price: 25,000 DAT per 1 ETH
Sale period: 29/10/2017 13:00 UTC to 29/11/2017 13:00 UTC
Accepted currencies: ETH
Token distribution: 4th December to 11th December 2017
Minimum goal: 5,000 ETH (raised on 8th September 2017)
Maximum goal: 61,200 ETH

Details of the roadmap and other important information are shown below. The team

Roger Haenni, Co-Founder and CEO: Roger is a serial entrepreneur with 17 years of experience with big data systems. He is also the co-founder of StockX, SwissInvest, PCP.ch, and Kosi. Most recently, Roger founded Clever Baby, a business focused on wearable smart-devices for children.

Gebhard Scherrer, Co-Founder: Gebhard is an operations, product and service specialist with 20 years of experience in operations and sales. He is the co-founder of Gelid Thermal Solutions and Arctic Cooling. He has a master’s degree in business administration, with a focus on finance and capital markets.

VC Tran, Co-Founder: VC is a branding and marketing expert with 10 years of experience. He joined Gelid Thermal Solutions at launch and helped turn them into one of the leading CPU cooler and fan brands. Most recently, he co-founded Kosi, Ltd. with Roger.

Theo Valich, Head of Growth: Theo is an entrepreneur and analyst with 21 years of experience in technology-related fields. His experience ranges from GPU to supercomputer design, and he was the co-founder of Space Image Network, Robotic Systems, and VR World.

Florian Honegger, Smart Contract Expert: Florian has 15 years of experience as an enterprise document and data management architect.

Vitaly Krinitsin, Community Manager: Vitaly has experience as a community liaison and marketer. Most recently, he held the position of technical marketing manager at GELID Solutions.

||Website: datum.org || Whitepaper: https://datum.network/assets/Datum-WhitePaper.pdf || Facebook: http://fb.me/datumnetwork || Twitter: https://twitter.com/datumnetwork || BountyOx username: Gjoe64
DISCLAIMER: This article is by no means an authority or has any direct link with Datum; it is a…
0
disclaimer-this-article-is-by-no-means-an-authority-or-has-any-direct-link-with-datum-it-i-a-186dca9904cc
2018-08-18
2018-08-18 04:50:13
https://medium.com/s/story/disclaimer-this-article-is-by-no-means-an-authority-or-has-any-direct-link-with-datum-it-i-a-186dca9904cc
false
973
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Ohere George
A Civil Engineer, Food specialist, content creator, bounty hunter per excellence.
dddd0fa89b77
georgeohere064
12
142
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-18
2018-05-18 08:39:01
2018-05-18
2018-05-18 08:39:46
0
false
en
2018-05-18
2018-05-18 08:39:46
1
186e17e1c7ca
0.25283
0
0
0
We strongly believe that in 2018 and the years that follow, behavioural and data science will take a substantial hold in the…
5
Top-Most Tech Trends You Need To Learn We strongly believe that in 2018 and the years that follow, behavioural and data science will take a substantial hold in company setups, business models and the like, with almost all tech companies embracing these widely prominent economic, social, and data science principles to increase their workers’ productivity and the productivity of their work, and to nudge clients towards desirable outcomes.
Top-Most Tech Trends You Need To Learn
0
top-most-tech-trends-you-need-to-learn-186e17e1c7ca
2018-05-18
2018-05-18 08:39:47
https://medium.com/s/story/top-most-tech-trends-you-need-to-learn-186e17e1c7ca
false
67
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
harishchand25
null
44517c25bdab
harish.JanBask
1
2
20,181,104
null
null
null
null
null
null
0
null
0
b06ed8e9bb28
2018-05-18
2018-05-18 10:27:18
2018-05-18
2018-05-18 10:28:41
1
false
en
2018-05-18
2018-05-18 14:13:53
2
186fb6ddca5b
5.641509
2
0
0
Project Jetson is a platform aimed to provide UNHCR operations predictions about population movement (arrivals/departures) for specific…
5
Jetson: insights into building a predictive analytics platform for displacement Project Jetson is a platform aimed at providing UNHCR operations with predictions of population movement (arrivals/departures) for specific regions or countries. Jetson — a machine-learning based application — measures multiple variables to see how changes over time affect the movement of UNHCR’s persons of concern, particularly refugees and internally displaced people. This experimental project was launched by UNHCR’s Innovation Service in 2017 to better understand how data can be used to predict movements of people in Sub-Saharan Africa, particularly in the Horn of Africa. We asked Sofia Kyriazi, Artificial Intelligence (AI) Engineer, and Babusi Nyoni, User Experience (UX) Designer, from the project team to discuss the challenges of predicting displacement and what success may (or may not) look like for the project as it moves forward. Why is user experience so important to this project? BN: Project Jetson, a project about the future of displacement, is one that must be articulated carefully. This is because user experience is a process to enhance user satisfaction with a certain product or service — in this case, a website — and how it is perceived by users. It involves taking human-computer interaction into consideration. On the one hand, as a team broaching a future-facing project, a visual representation befitting the magnitude of the work would seem appropriate. A willful disregard for user experience convention would be permitted to a degree, on the rationale that an idea of the future deserves a matching facade. On the other hand, the future, because it is unknown, is intimidating, and sometimes scary. A misstep in proper articulation could nullify the purpose of the website if users of the site failed to comprehend the content and how to interact with it. What were some of the challenges you faced during the process of developing Jetson? BN: Mapping the user experience had its fair share of challenges, the first being the responsibility of presenting an interface that both the general public and UNHCR staff will find intuitive. This presented interesting challenges in creating a unified visual experience while still catering for very specific use cases, such as UNHCR staff on the ground in Somalia and their more office-oriented colleagues in Geneva, considering that the bandwidth disparity between the two is extensive. Another challenge was that the map visual morphed over time from something static to a very dynamic representation of conflict and displacement data over time. The use of multiple references contributed to the near-convolution of this process, but this was expected, as the main consideration was that the solution would have to be bespoke and tailored to the scenario. As we continue to define the story, the visual is expected to change to something more befitting. SK: The first and most important challenge was the question: what are we trying to show? Which number is important to us? When we implemented various mathematical functions to approximate the actual arrivals, the results were often completely off, even though we were training with all the available data. This could mean either that the data was not correlated and we needed to expand with more information, or that we were working in the wrong way.
BN: Additionally, presenting information in the most succinct manner was challenging: while the website was meant to house the predictive engine, the map visual, and long-form content, considerations had to be made as to how much information to show and how to display it. A user-friendly summarisation of the engine was conceived that gave casual users of the website a brief view of the engine metrics and results, with the option to view the parameters in depth. SK: Another challenge for me was the uncertainty over correlations between the datasets, of various formats, that we were collecting, and how they could assist in predicting arrivals in each region. The datasets had to be cleaned, transformed and grouped per month, with the use of scripts in Python and/or R, an action required to minimise the input to the modelling engine. The scope of the project had to be limited to modelling arrivals in the region of Bay. These were challenges regarding the data volume, so we applied our effort and focus to researching one use case, the pilot case, and to documenting the process in order to systematise it for the rest of the cases. Eureqa, an A.I.-powered modelling engine, lacks examples of time-series prediction, that is, predicting future values; only through forums were we able to find a way to modify the search function so that it could predict arrivals a month in advance. The produced functions were implemented with the use of R, commonly used for statistical analysis, and the predictions were collected to be compared with the official real numbers of arrivals. The “winner” function was used in the final application, developed with the Shiny library and hosted on the shinyapps.io platform. (A minimal sketch of this kind of month-ahead modelling appears at the end of this article.) What has worked so far? BN: An essential part of the process was the weekly standup/check-in meetings that helped track progress and kept it mapped to the project goals and deliverables. The mid-process workshop held in Geneva with all members of the team physically present fast-tracked progress on the resolution of a number of pain points. It also assisted in the rapid iteration of recommendations on the current state of the respective deliverables. The ability to tap into UNHCR Innovation’s domain expertise in big data and on-the-field information came in handy when framing the solutions and validating outputs, and having the collaborators on-site meant our efforts could very easily be contextualised for UNHCR’s needs, which is something the team appreciated. SK: It was a matter of time for the team to reach the same speed in dealing with requests, and we overcame the barrier of depending on the completion of each other’s tasks to proceed with our own. We managed to automate the process of collecting and transforming data to assist future predictions; this part is now done quickly and with ease. We have created a systematised process that we follow to expand to other regions of Somalia, in terms of collecting results from the tools we use, implementing, testing and iterating to come up with the best estimation of movements. What will success look like for the team and for the product? BN: Success from a user experience perspective is an intuitive interface. It is one that tells a story that can be understood without supervision and that users can articulate to non-users of the website accurately. Powerful imagery where necessary, concise representations of data, and interactions that convey trust all aid in creating an experience that is the best representation of a platform’s intent.
For the team, success lies in presenting deliverables that articulate, with cohesion, the team’s mandate with regard to the task. This begins with defining and adhering to a team dynamic that works while, at the same time, allowing team members a level of fluidity in executing their respective responsibilities. And finally, the product should be trustworthy enough to use without any degree of failure, either by the product (in doing what it is expected to do) or by the user (in achieving their goals). SK: It would be a huge achievement to have the picture of arrivals and departures for each region, meaning that if someone wanted to see what is going to happen in Somalia over the course of the next month, they would be able, ideally in a visual way (not just numbers), to see where big movements will take place. It would be even better if we got an “out of the ordinary” prediction, such as an alert of an unconventional movement. This would indicate that the engine has been trained enough to predict abnormalities. Regarding team success: over the last couple of months we took time to make mistakes, and sometimes we lost time waiting for results from each other. Our weekend workshop had some amazing results and gave the team a new pace, faster and more confident. To define it further, I would want to see everyone expressing their creativity and passion while staying on the same track. We’re always looking for great stories, ideas, and opinions on innovations that are led by or create impact for refugees. If you have one to share with us, send us an email at innovation@unhcr.org. If you’d like to repost this article on your website, please see our reposting policy. Icon: Created by Moons for Noun Project
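As a rough, hedged illustration of the month-ahead modelling described above, here is a minimal sketch in Python with scikit-learn; the variables and numbers are invented, not Jetson’s actual features, data, or model.

```python
# Predict next month's arrivals from last month's arrivals plus one
# contextual variable; all data below is invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

arrivals = np.array([120, 150, 170, 160, 200, 240, 230, 260])  # per month
rainfall = np.array([80, 60, 55, 70, 40, 30, 35, 25])          # per month

# Features for month t are the month t-1 values (a one-month lag).
X = np.column_stack([arrivals[:-1], rainfall[:-1]])
y = arrivals[1:]  # target: the following month's arrivals

model = LinearRegression().fit(X, y)
pred = model.predict([[arrivals[-1], rainfall[-1]]])
print(f"predicted arrivals next month: {pred[0]:.0f}")
```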
Jetson: insights into building a predictive analytics platform for displacement
51
jetson-insights-into-building-a-predictive-analytics-platform-for-displacement-186fb6ddca5b
2018-05-23
2018-05-23 08:08:30
https://medium.com/s/story/jetson-insights-into-building-a-predictive-analytics-platform-for-displacement-186fb6ddca5b
false
1,442
UNHCR Innovation Service's new blog brings you a transparent look at what humanitarian innovation looks like in practice for (and with) refugees.
null
unhcrinnovation
null
UNHCR Innovation Service
innovation@unhcr.org
unhcr-innovation-service
HUMANITARIAN AID,INNOVATION MANAGEMENT,TECHNOLOGY AND DESIGN,DATA SCIENCE,HUMAN CENTRED DESIGN
unhcrinnovation
Machine Learning
machine-learning
Machine Learning
51,320
Lauren Parater
@UNHCRInnovation comms manager • humanitarian innovation enthusiast • thoughts on refugees, migration + #scicomm • maybe a little wine • views are my own
237f5ede6110
loparater
152
115
20,181,104
null
null
null
null
null
null
0
null
0
32881626c9c9
2018-08-30
2018-08-30 09:16:53
2018-08-30
2018-08-30 09:20:12
1
false
en
2018-09-01
2018-09-01 09:44:17
7
1872924f3ac2
4.815094
2
0
0
In my last two articles on this topic, I suggested that the current generation of AI is not really intelligent and explored whether…
2
Will Deep Learning make AI truly intelligent? In my last two articles on this topic, I suggested that the current generation of AI is not really intelligent and explored whether programmable computers could ever be capable of intelligent behaviour. Here, I look at how Deep Learning techniques and technologies might be getting us closer to that aim. The only example we have of high-level intelligence is, of course, ourselves, but the human brain is vastly more complex than any computer system that we have invented. And it seems likely that it is through this complexity, the high level of connectivity and the plasticity of the brain, that intelligence and reasoning emerge (and maybe self-awareness, too). So do we have any hope of mimicking the functionality of the brain and really achieving something that approaches, or even exceeds, our own capabilities? Artificial neural networks (ANNs) are a technology that has been around, at least in theory, for a long time; however, it is only relatively recently that we have had the computing power to be able to properly implement them. ANNs are crude imitations of the network of neurons in our brains. The artificial neurons are basically bits of software that take numerical inputs and provide some sort of numerical output. (In software terms, this is called a function: an example of a simple function is one that adds two numbers together; the inputs are the two numbers and the output is the sum of those numbers.) The way an artificial neuron works is by assigning weights to its inputs — if one input is given a weight of two and another a weight of one, then the first is deemed to be twice as important as the second one — and then it compares the combination of inputs to a particular threshold. If the threshold is met, then the neuron ‘fires’, i.e. it produces an output. ANNs can form the basis of a decision-making mechanism. Let’s say you like to go to the cinema to see a good movie but you don’t drive, so you don’t like to go when the weather is bad. There are two things, then, that help you decide whether to go or not: whether the weather is bad and whether the movie is good. Let’s say you score a movie from 1 to 10 and you also give the weather a score from 1 to 10, where 1 is cold and rainy and 10 is a fine day. Let’s also say that you are quite keen on movies and really don’t mind the weather that much. So we could give a weight to the movie score which is double that of the weather score. So the maximum score would be 30: 10 for a fine day plus 20 (2x10) for a great movie. The worst score would be 3: 2 (2x1) for a bad movie plus 1 for dreadful weather. We might decide that the threshold for making the decision whether to go to the movies or not is 15. If the score is higher, go to the movie, otherwise stay at home. So for a great movie (a score of 20) you would go whatever the weather was like (the score is over 15), but for an average movie (a score of 10) you will only go if the weather is reasonable (a score of at least 5). This is what a single artificial neuron will do; it will take the inputs, apply a weighting to them, combine them and compare them to a threshold. If the overall score is above the threshold, the neuron ‘fires’. As you might expect, a network of artificial neurons makes things much more complex. A neural network consists of an array of neurons that take any number of inputs and feed their outputs into another array of neurons, and these, in turn, may feed into yet another array of neurons. 
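A toy Python version of that single neuron, using the cinema example's numbers (a sketch for illustration, not anything from the original article):

def neuron(inputs, weights, threshold):
    # Weighted sum of the inputs, compared against a threshold.
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0  # 1 means the neuron "fires"

# Movie score weighted twice as much as the weather score, threshold 15.
movie, weather = 10, 3                        # a great movie on a fairly bad day
print(neuron([movie, weather], [2, 1], 15))   # 1: the score 23 clears the threshold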
This layered approach, with each neuron having its own input weightings and thresholds, can provide a very sophisticated, and complex, decision-making system made up of many neurons in several layers. This is where we get to what is known as Deep Learning. We saw in the first article that email spam filters are trained using a whole load of emails that have already been marked as spam, or not spam. ANNs are also trained, via a technique called back propagation. This is, basically, the ANN adjusting its own individual weights and thresholds until it gets the right result. So when training an ANN to, for example, recognise the handwritten character ‘a’, it is presented with many different images of an ‘a’ and it adjusts its weights and thresholds until the result that it produces is consistently correct. It does this over and over again and eventually it can identify any handwritten ‘a’ you throw at it. Now, if, instead of using a single character, you do the same sort of thing with a training set of all characters, after many iterations, you end up with a sophisticated character recognition system. An interesting thing about this process is that, after training, the programmers have no real knowledge of the weights and thresholds that have been set within the network and so don’t really know exactly how the system is making its decisions! ANNs are used in character and image recognition and do a wonderful job of distinguishing between pictures of cats and dogs, for example. But while this is very clever and potentially useful, it is still not exhibiting the sort of intelligence that we humans display. An ANN-based system can recognise images, but it has no knowledge of those images apart from what it can determine from the pixels that make them up. It could not relate a dog to a dolphin (both mammals); it could be trained to recognise a car but would have difficulty in labelling something as a form of transport (car, boat, plane, train, bicycle) because the images are so different. Humans can do these things easily because our experience of these things is not simply seeing images. We have a whole set of contexts in which we can place things. A car is a form of transport, for sure, but it can also be seen as a convenience, a status symbol, a piece of sports equipment, a danger to pedestrians, a contributor to greenhouse gas emissions or any number of things, depending on the context. The use of artificial neural networks has enabled great strides to be made in the sophistication of AI systems. ANNs form an intrinsic part of DeepMind’s AlphaGo, the computer system that has beaten the world’s finest players of Go, which is a deceptively simple but insanely complex game. This is an achievement that has been labelled a great leap forward for AI. And AlphaGo is clearly an extremely powerful and sophisticated system. But it still only plays Go. This is the third of a set of three articles that discuss whether current AI is actually intelligent, whether it is even possible for them to be really intelligent and whether Deep Learning provides a basis for real intelligence. You can find the others below. Are current AIs really intelligent? Can AIs ever be really intelligent? Will Deep Learning make AI truly intelligent? AlanJones|JustEnoughPython|My programming blog|Buy me a Coffee
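As a footnote to the training idea described above, here is a drastically simplified stand-in for back propagation: a single neuron nudging its weights whenever it answers wrongly. Real back propagation does this across many layers using gradients, and the AND-gate data here is made up purely for illustration.

def train(samples, labels, lr=0.1, epochs=100):
    # Start with zero weights and nudge them after every wrong answer.
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            fired = 1 if sum(w * v for w, v in zip(weights, x)) + bias > 0 else 0
            error = y - fired
            weights = [w + lr * error * v for w, v in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Made-up task: learn a logical AND from four labelled examples.
weights, bias = train([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 0, 0, 1])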
Will Deep Learning make AI truly intelligent?
21
will-deep-learning-make-ai-truly-intelligent-1872924f3ac2
2018-09-01
2018-09-01 09:44:17
https://medium.com/s/story/will-deep-learning-make-ai-truly-intelligent-1872924f3ac2
false
1,223
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
null
datadriveninvestor
null
Data Driven Investor
info@datadriveninvestor.com
datadriveninvestor
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
dd_invest
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Alan Jones
An ex-university professor and software engineer, I mostly write about AI, programming and technology in general - occasionally other stuff, too.
7d3f5fb94faa
jones.alan
59
7
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-28
2018-05-28 05:19:25
2018-05-28
2018-05-28 05:22:31
0
false
ru
2018-05-28
2018-05-28 05:22:31
2
1873eb5ed86b
4.298113
0
0
0
The absence of hierarchies makes it possible to take collective decisions and influence the overall strategy. Anyone who disagrees with the company's actions…
3
Google employees protest against the Pentagon The absence of hierarchies makes it possible to take collective decisions and influence the overall strategy. Anyone who disagrees with the company's actions can speak up and change the situation. For example, Google employees have spoken out against the corporation's participation in developing artificial intelligence for US combat drones. The New York Times covered this topic on April 4; you can read it in English here. For those who don't read English, I offer a translation of the article. "The Business of War": Google employees protest against the Pentagon WASHINGTON. Thousands of Google employees, including dozens of senior engineers, have signed a letter protesting the company's involvement in a Pentagon program that uses artificial intelligence to interpret video imagery and could eventually improve the targeting of drone strikes. The letter, which is circulating inside Google and has already gathered more than 3,100 signatures, reflects a culture clash between Silicon Valley and the federal government. The conflict is likely to intensify as cutting-edge artificial intelligence is increasingly used for military purposes. (Read the text of the letter.) "We believe that Google should not be in the business of war," says the letter, addressed to the company's chief executive, Sundar Pichai. The employees ask that Google pull out of Project Maven, cancel the pilot program with the Pentagon, and announce that it will never build warfare technology. Such an idealistic stance, while not shared by all Google employees, comes naturally to a company whose motto is "Don't be evil." But it is alien to Washington's massive defense industry and, of course, to the Pentagon itself. Defense Secretary Jim Mattis has often said that the chief goal is to increase the "lethality" of the US military. From the very beginning, Google has encouraged its employees to speak out on matters related to the corporation's affairs. Executives and rank-and-file employees have discussed the company's products and policies at dedicated meetings. At a recent all-hands meeting, employees raised the question of Google's involvement in Project Maven. Diane Greene, who leads Google's cloud infrastructure business, defended the deal and sought to reassure worried employees. A company spokesperson said that most of the signatures on the protest letter were collected before the company had had a chance to explain the situation. The company subsequently described its work on Project Maven as "non-violent" in nature, although the Pentagon's video analysis is commonly used in counterinsurgency and counterterrorism operations, and Department of Defense publications make clear that the project supports those operations. Both Google and the Pentagon have stated that the company's products will not be used to create an autonomous weapons system powered by artificial intelligence that could fire without a human operator. But improved analysis of drone video could be used to select human targets for strikes, as well as to identify civilians more clearly in order to reduce the accidental killing of innocent people. Without addressing the letter directly, Mr. Pichai said that "any military use of machine learning naturally raises valid concerns." He added: "We are actively engaged in a comprehensive discussion of this important question." The company called such exchanges "hugely important and beneficial." 
Some Google employees familiar with the letter would speak about it only on condition of anonymity, saying they were worried about retaliation. A statement said that Google's part of Project Maven was "specifically engaged for non-offensive purposes," although officials declined to provide the relevant details. The Department of Defense said that because Google is a subcontractor on Project Maven to the prime contractor, ECS Federal, it could not provide either the amount or the terms of Google's contract. Representatives of ECS Federal did not respond to requests for comment. Google maintains that the Pentagon is using "open-source object recognition software available to any Google Cloud customer" based on unclassified data. "This technology is used for review purposes and is intended to save lives and save people from having to do highly tedious work," the company said. In an interview in November, Mr. Schmidt, Google's former executive chairman, who serves on a Pentagon advisory body, the Defense Innovation Board, acknowledged "a general concern in the tech community about the military-industrial complex using their products to kill people." He suggested that the military would "use this technology to help keep the country safe." Concern over military contracts among a small share of Google's employees may not become a serious obstacle to the company's growth. But in artificial intelligence research, Google is locked in intense competition with other tech companies that the most talented people aspire to join, so recruiters could run into difficulties if some candidates are put off by Google's defense work. While Google defends its contracts against internal dissent, its competitors are not shy about publicizing their own work on defense projects. Amazon touts its image recognition work with the Department of Defense, and Microsoft's cloud technology has won a contract to handle classified information for every branch of the military and defense agencies. The current dispute, first reported by Gizmodo, centers on Project Maven, which began last year as a pilot program aimed at finding ways to speed up the military application of the latest AI technologies. According to a Pentagon spokesperson, the project is expected to cost less than $70 million in its first year. But the signers of the letter at Google clearly hope to discourage the company from entering other, far larger Pentagon contracts that will emerge as the defense capabilities of artificial intelligence grow. Google is expected to compete with other tech giants, including Amazon and Microsoft, for a multiyear, multibillion-dollar contract to provide cloud services to the Department of Defense. John Gibson, the department's chief management officer, said last month that the Joint Enterprise Defense Infrastructure cloud procurement program was designed in part to "increase lethality and readiness," underscoring how difficult it is to separate software, cloud and related services from the actual conduct of war. 
The employees' protest letter, which has circulated on internal communication channels for several weeks, argues that embracing military work could backfire by alienating customers and potential recruits. "This plan will irreparably damage Google's brand and its ability to compete for talent," the letter says. "Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public's trust." It suggests that Google risks ending up in the ranks of large defense contractors such as Raytheon and General Dynamics, and the big-data company Palantir. "The argument that other firms, like Microsoft and Amazon, are also participating in similar projects does not make the project any less risky for Google," the letter says. "Google's unique history, its motto 'Don't be evil,' and its direct reach into the lives of billions of users are what set the corporation apart from its competitors." Like other onetime upstarts that have become Silicon Valley giants, Google has to confront the idealism that guided the company in its early years. Facebook began with a lofty mission of connecting people around the world, but it has recently come under fire for becoming a conduit for fake news and for being used by Russia to influence the 2016 election and sow dissent among American voters. Paul Scharre, a former Pentagon official and the author of "Army of None," a forthcoming book on the use of artificial intelligence to build autonomous weapons, said the clash inside Google was inevitable, given the company's history and the soaring demand for AI in the military. "There's a strong libertarian ethos among the tech community, and a wariness about the government's use of technology," said Mr. Scharre, a senior fellow at the Center for a New American Security in Washington. "Now AI is suddenly, and quite rapidly, moving out of the research lab and into real life." The original text in English can be read here
Google employees protest against the Pentagon
0
сотрудники-google-протестуют-против-пентагона-1873eb5ed86b
2018-05-28
2018-05-28 05:22:33
https://medium.com/s/story/сотрудники-google-протестуют-против-пентагона-1873eb5ed86b
false
1,139
null
null
null
null
null
null
null
null
null
Google
google
Google
35,754
Ruslan Gafarov
Founder Malikspace.com
7ddafc1a553d
Gafarov
39
3
20,181,104
null
null
null
null
null
null
0
null
0
e7cf412cdd31
2018-07-27
2018-07-27 08:53:16
2018-08-08
2018-08-08 11:47:00
5
false
en
2018-08-08
2018-08-08 13:10:18
24
1874186efe9b
9.323899
4
0
0
null
5
Topology of Business: A Knowledge Graph of Federal Tax Service One of the first knowledge graphs (KG) that we at DataFabric built (and continue to evolve) was the knowledge graph of the Federal Tax Service of the Russian Federation (FTS), which accumulates a tremendous amount of data about Russian companies and individuals. In this story I dive into the details of the steps we took and what we continue to do to maintain the knowledge graph. What is the Federal Tax Service's data about? The data contains information about Russian and foreign companies, sole proprietorships (a.k.a. individual entrepreneurs), people, government organizations and open-end funds. Here and below, the numbers may be inexact, since during the generation of the knowledge graph some data may be lost because of problems (syntactic errors, etc.) in the original data. The knowledge graph contains information about 10,325,245 companies, including their names, IDs, status (active, inactive, etc.), registered address, commercial activities, stockholders, signatories and managing companies or people, licenses, etc. It holds information about 13,562,875 sole proprietorships, including names, IDs, status, commercial activities, owner, etc. It also contains information about 26,739,725 people who may play the roles of owners, stockholders, signatories or managers of companies or sole proprietorships. The following attributes are known for people: names (in Russian and sometimes English), IDs, gender, etc. And the other entities: 28,690 government organizations and 558 funds. The other large part of the knowledge graph consists of the registered addresses of companies and the buildings extracted from the addresses, deduplicated and linked to the hierarchy of regions, cities, streets, etc. The numbers: registered addresses — 2,314,147, buildings — 11,690,604. Read our other story about the technologies we use to build knowledge graphs and the terminology we employ. How is the original data published? Unfortunately, the data is private, even though FTS is a public agency, so you have to buy it from them to use it. We, like a number of other companies, buy it and provide end-user services on top of it. Challenges we’ve faced There are several challenges we’ve been facing while working on the knowledge graph. These challenges aren’t unique to this particular knowledge graph, but you may not face all of them when working on a KG. Big Data as it is. The knowledge graph has grown from 1 billion to more than 6 billion triples, which is already quite a big number. It requires Blazegraph, which we currently use, to work at the limit of its capabilities. We use the single-node edition of Blazegraph 2.1.4 on a machine with 4 CPUs, 26 GB of memory and a 1.5TB SSD disk. An import of the whole knowledge graph in raw RDF (N-Triples) takes several days on a machine with 64GB of memory, and some queries may fail to complete within a reasonable time. In addition to that, any processing involving all the data requires a cluster of dozens of machines, but that isn't a problem if you use Apache Beam and Google Dataflow as we do :) BTW: Blazegraph isn't maintained anymore since it was acquired by Amazon, although there is a discussion about forking its open-source edition. As for the problem with the triplestore, we're experimenting with Apache Rya. It's an RDF triplestore running on top of Apache Accumulo and HDFS. So far the results are promising; follow us for more details in future stories. Dirty and broken data. 
The original data is created manually, through all sorts of forms filled in by FTS employees and company owners, so there may be dirty values, like typos in names and share sizes, and different names for the same things, e.g. cities and street names. Apart from that, there may be syntactic errors in the XML files and ZIP archives which are used to publish the data. For now we fix only the share sizes and percentages, by recovering them from other related information, e.g. the total share size, the percentage from the size, the size from the percentage, etc. Typos in the names are fixed semi-automatically by creating fixed catalogs of cities, street names, etc. Entries with syntactic errors are skipped, and as many valid entries as possible are read from the XML files and ZIP archives. Multiple URIs for the same entity. Triples describing an entity may be generated in parallel pipelines, e.g. a person's full name, gender and VAT number are generated in one pipeline, but an ownership relation between a company and a person in another. To make these triples describe the same person, the person in both pipelines should have the same URI; in other words, it should be generated in a deterministic way. To generate such stable URIs, usually some existing stable IDs are used, e.g. the VAT number of a person, the registration ID of a company and so on. Unfortunately, there are situations when there is no stable ID to use, for example because of a gap in the data or a mistake in the ID itself. In such situations we have a person or a company with multiple URIs and cannot say for certain that these URIs denote the same entity. Currently we don't do anything to fix it, but there are several approaches for dealing with such entities. For now, I can only suggest looking at existing research on this topic, e.g. A. Hogan and others, “Scalable and distributed methods for entity matching, consolidation and disambiguation over linked data corpora”. Historical data and everyday updates. The original data changes on an everyday basis. However, the historical data is important for due diligence, BI analytics, etc. So the challenge is to be able to store the historical data and apply the everyday updates to a running triplestore without any downtime. To deal with it, we developed an ontological model based on RDF reification statements that allows us to keep both current and historical data, and a set of pipelines which incrementally apply incoming changes to a running triplestore. More details in the next sections. The schema The schema of the graph is based on the FIBO ontologies, their extensions that are specific to the Russian jurisdiction, and a number of domain-independent RDFS and OWL ontologies. FIBO is huge, and to ease its use it is divided into modules, each of which covers a specific topic, e.g. corporations, sole proprietorships, loans, etc. I'm going to describe a few examples of entities, because the full schema is too big to be presented in detail. A company. On the screenshot below you see how Yandex LLC is represented in the user interface. Only some of the relations are shown, but we see that it has an owner, 21 subsidiaries, it's registered in Moscow and the CEO is E.I. Bunina. We can go even further by opening other relations; you can do it yourself, just open the link. Yandex LLC. in the Topology of Business Now let's look at how the company is represented in RDF with FIBO. In the snippet below, you can see three relations. 
Two of them are current relations, valid at this moment, and the other one is historical and no longer valid (it's a signatory relation). Yandex LLC. represented in RDF (Turtle) with FIBO ontologies Each relation has a reification statement, e.g. line 6 is a relation and 8–11 is a reification statement, which conveys the version number (line 12) and version date (line 13) of the relation. A version date and number come from the original data, so it's possible to trace any relation back to the source. The difference between a current and a historical relation is that for the current one we have both a relation and a reification statement, whereas for a historical one we only have a reification statement. Look at lines 17 and 23–26 for an example of a current relation and lines 37–40 for a historical relation. This is a really simple model, isn't it? A person. Now in the screenshot we see a person, and we see that he owns 10 companies and at another 2 companies he is a signatory. Galitskiy Sergey in the Topology of Business Below is the information about the same person, but in RDF. Here only two relations are shown: his full name and a relation with a company in which he owns 90% of the shares, equal to 61,323,318 rubles. Galitskiy Sergey represented in RDF (Turtle) with FIBO ontologies Hopefully, the snippets above have given you an idea of the schema used in the knowledge graph. FIBO extensions. Obviously, the FIBO ontologies don't have everything you may need, because every jurisdiction is a bit different, so we added several extensions to them. You can find all the extensions in our repository. Types of commercial activities (a.k.a. OKVEDs). More specific properties, e.g. to denote a VAT number. Company statuses. In total 25 statuses. Types of signatories, e.g. CEO, Chief Accountant, etc. In total 10 types. An overview of the ETL pipelines The knowledge graph is generated with a set of ETL pipelines which we developed from scratch using Kotlin, Apache Beam (GCloud Dataflow) and Apache Jena. The pipelines are orchestrated by Apache NiFi. Read more about using Apache NiFi with Apache Beam in our previous story. The pipelines are divided into two sets: full loading pipelines — they process all the original data we collected over several years at once. They're used to generate the whole graph from zero to 6 billion triples. rolling update pipelines — they take as input only some snapshots of the original data, calculate the difference between the current data in the knowledge graph and the snapshots, and apply the detected changes. These pipelines are executed to apply monthly or daily updates. In the next chapter I describe the full loading pipelines in much more detail, but I leave the rolling update pipelines for another story on Medium, so don't forget to follow us :) The full loading ETL pipelines The input data is ZIP archives with large XML files in them. Each XML file contains up to 10k snapshots of data about companies or sole proprietorships. A snapshot is created in the original data each time something changes, e.g. the share of one of the owners. Data which didn't change is copied from the previous snapshot as is. There are two types of snapshots, for a company and a sole proprietorship; other entities such as people, government orgs, etc. don't have their own snapshots, but are extracted from these. In the pipeline above, the snapshots are converted to JSON objects, and at the same time the pipeline filters out snapshots with syntactic errors. 
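As an aside before the POJO step: the reification pattern from the schema section above can be sketched with rdflib in Python. This is only an illustrative sketch, not the authors' exact FIBO-based schema; the namespace and property names below are hypothetical.

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")   # hypothetical namespace
g = Graph()

company, person = EX.some_company, EX.some_person

# The current relation itself...
g.add((company, EX.hasSignatory, person))

# ...plus a reification statement carrying the version metadata.
# For a historical relation, only the reification statement is kept.
stmt = EX.statement_1
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, company))
g.add((stmt, RDF.predicate, EX.hasSignatory))
g.add((stmt, RDF.object, person))
g.add((stmt, EX.versionNumber, Literal(42)))
g.add((stmt, EX.versionDate, Literal("2018-07-01")))

print(g.serialize(format="turtle"))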
The JSON objects are converted to POJOs; for each snapshot there is one POJO. Since there may be more than one snapshot per company, the POJOs are merged and duplicated fields are discarded. Two fields are duplicates if they have the same version date and number, so at this step we don't look at the values. Now, having these POJOs for companies and sole proprietorships, we're able to generate POJOs for the other entities. The next set of pipelines is responsible for the generation of RDF out of the POJOs. Each field in a POJO is unique in terms of object structure, so for each field we have a separate processor, written in Kotlin, which maps the field to the corresponding triples. Below is the rest of the pipelines, which transform POJOs to RDF. The pipelines also do another deduplication, but now the fields are deduplicated based on the actual values, and the latest value is marked to ease the further generation of RDF. As you may have noticed, there is an intermediate ontology which is used before the FIBO ontologies. This is because the FTS ontology (the intermediate one) was used in the first version of the knowledge graph instead of FIBO. We decided at that moment that it'd be easier to write mapping rules between the ontologies instead of rewriting the whole pipeline. The mapping rules are simple SPARQL CONSTRUCT queries that are executed by Apache Jena. So, as the output of the pipelines, we get RDF that is ready to be bulk-imported into a triplestore. The second set of pipelines is run for each entity type separately, e.g. one works only with companies, another only with people. The reason is that data about people exists in both company and sole proprietorship snapshots, and we want to reuse the code. The linkage between pipelines at the level of relations between entities (e.g. a person is an owner of a company, where the RDF for the person is generated in one pipeline but the RDF for the ownership in another) is guaranteed by the use of stable URIs for such entities. And… lessons we learned Take the simplest schema, but be ready to change it. At the beginning we used an in-house ontology as the schema, but then the requirements changed and we needed to migrate. Be ready for that: ontologies aren't set in stone, they change. Validation is a must-have. It's not possible, or desirable, to cover the transformation pipelines with unit tests that would guarantee the validity of the generated RDF. Look at automated ways to run validation tests based on declarative constraint rules, e.g. SHACL or OWL axioms. There is room for improvement :) We've spent a lot of time developing these pipelines and we've got an idea of how such ETLs could be improved. We even published a short survey of existing tools for non-RDF-to-RDF transformations. Having such hands-on experience, and knowing that there is no tool which could help us, we decided to develop one ourselves. If you're interested in our recent developments, contact us and we'll be glad to share more details.
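As a closing illustration of the ontology-mapping step described above: the real rules are SPARQL CONSTRUCT queries executed by Apache Jena, but the same kind of query can be sketched with rdflib in Python. The property URIs here are hypothetical stand-ins, not the actual FTS or FIBO terms.

from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix fts: <http://example.org/fts/> .
    fts:company1 fts:hasOwner fts:person1 .
""", format="turtle")

# Rewrite a triple from the intermediate FTS ontology into a
# FIBO-style property.
mapped = g.query("""
    PREFIX fts:  <http://example.org/fts/>
    PREFIX fibo: <http://example.org/fibo/>
    CONSTRUCT { ?person fibo:isOwnerOf ?company }
    WHERE     { ?company fts:hasOwner ?person }
""")

for triple in mapped:
    print(triple)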
Topology of Business: A Knowledge Graph of Federal Tax Service
5
topology-of-business-a-knowledge-graph-of-federal-tax-service-1874186efe9b
2018-08-09
2018-08-09 14:34:21
https://medium.com/s/story/topology-of-business-a-knowledge-graph-of-federal-tax-service-1874186efe9b
false
2,250
Knowledge engineering: Semantic web, Knowledge Graphs, Linked Data, Ontologies
null
datafabric.cc
null
datafabric
info@datafabric.cc
datafabric
KNOWLEDGE GRAPH,SEMANTICWEB,LINKED DATA,RDF,COMPLIANCE
null
Knowledge Graph
knowledge-graph
Knowledge Graph
88
Maxim Kolchin
http://kolchinmax.ru
a56c89cda8a5
kolchinmax
28
55
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-21
2018-09-21 09:40:51
2018-09-21
2018-09-21 09:41:44
1
false
en
2018-09-21
2018-09-21 09:41:44
1
187549af1e2d
0.456604
0
0
0
Amazon launches Scout (https://www.amazon.com/scout), a machine learning-powered visual shopping tool
4
Amazon launches Scout, an ML-based visual shopping tool Amazon launches Scout (https://www.amazon.com/scout), a machine learning-powered visual shopping tool, to help shoppers better figure out what they want to buy in a more visual fashion. Using a combination of imagery, a thumbs up/down voting mechanism, and machine learning technology, Scout offers a Pinterest-like way of browsing Amazon products, using visual attributes to refine searches.
Amazon launches Scout, an ML-based visual shopping tool
0
amazon-launches-scout-a-ml-based-visual-shopping-tool-187549af1e2d
2018-09-21
2018-09-21 09:41:44
https://medium.com/s/story/amazon-launches-scout-a-ml-based-visual-shopping-tool-187549af1e2d
false
68
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Digital Marketing Notes by Harsha
Manager Analytics @ Concentrix
e551ebc70887
Harsha_MP
1,211
1,255
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-23
2018-07-23 05:04:49
2018-07-26
2018-07-26 18:14:46
3
false
en
2018-07-27
2018-07-27 04:29:37
5
18770ce8e89d
4.361321
0
0
0
I'm writing this journal to document and introspect my successes and failures as a Data Science Intern
3
Week 4 as a Data Science Intern Day 1: ########### RANT ALERT ########### For some reason I woke up today cursing Apple because of the shitty keyboards on those MacBook Pros. They are so pathetic, I honestly can't believe that the people who designed them are some of the best designers and engineers in the world. The worst part is, Apple survived because of developers and software engineers; it was us who bought Macs back when the company didn't sell phones. Now Apple completely ignores the Pro users and makes thin and light MacBook Pros with pathetic keyboards for Chinese school girls to play Candy Crush. Very disrespectful towards the developers who have stood by the Mac (and UNIX) since day one. “A keyboard is the part a developer depends on most; if the keyboard is not dependable (shitty), then the computer is NO GOOD” ########### RANT OVER ########### Anyways, good news: the hackintosh community has never been more active. I'll be making a hackintosh (installing macOS on an ASUS gaming laptop) next week for a friend (no, the guy can afford a MacBook Pro, but he doesn't want those awful keyboards, so I advised him to go the hackintosh way!!!). I'll make a whole article on making a hackintosh this weekend when I actually do it. You're welcome!!! Types of Neural Networks and their uses Anyways, today I realized that many are confused about the different types of neural networks and what they are used for. So I'll give a short and sweet description, with a sketch of each type after this list. (I'm no expert, but these are neural networks I have used before.) Multi-layer Perceptron: a.k.a. deep neural network (with dem hidden layers), good for tabular data (think CSV files where you have cases of data) an example of a CSV file Convolutional Neural Network: named after convolution functions in math (where two functions produce a third function; come to think of it, I was recently going through the Fourier Transform, which sort of separated a single signal into a couple of sine curves, and I wonder whether they are the reverse form of convolutions). THESE are good for image recognition (they are great with multi-dimensional data); this is the forte of Convolutional Neural Networks. Recurrent Neural Network: they weren't used much (coz it's difficult to make them learn, something one of my lecturers points out to us LOL) until a network schema called the Long Short-Term Memory network (LSTM) was made. This put RNNs back in the game, and they are mostly used for processing text data and text mining. The most-used discipline is Natural Language Processing (I know people have used LSTMs for a lot more, but I have used them only for NLP at the moment). I haven't worked a lot with these, but I have used a handy API called the Dandelion API; it's very easy to use, in many languages, especially in my favorite, Python (not you, Java). Now the Dandelion API is paid, but if you use it sparingly it's totally free. If you make an app in Python or JavaScript you can have it up and running in minutes. Anyways, more info on that here. CNN Long Short-Term Memory Networks Gentle introduction to CNN LSTM recurrent neural networks with example Python code. Input with spatial structure, like…machinelearningmastery.com And this article is a beauty (most of the NN types explained) 7 types of Artificial Neural Networks for Natural Language Processing by Olga Davydovamedium.com Today I made a Pig script and learnt HBase; it's a column-store database used with Hadoop. It's pretty cool. 
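Here's the promised sketch of the three network types in Keras; the post itself doesn't use Keras, and the layer sizes and input shapes below are arbitrary choices for illustration.

from tensorflow.keras import layers, models

# Tabular data: a plain multi-layer perceptron.
mlp = models.Sequential([
    layers.Dense(64, activation="relu", input_shape=(10,)),
    layers.Dense(1),
])

# Images: a small convolutional network.
cnn = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(10),
])

# Text: an embedding feeding an LSTM.
rnn = models.Sequential([
    layers.Embedding(5000, 32),
    layers.LSTM(32),
    layers.Dense(1),
])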
Day 2: Today apparently held a surprise for me: my company health check-up. I'm supposed to fast for 12 hrs (which I didn't), but they are giving free food to break the fast, so I thought of showing up at the health camp for the free food. So now, going to start on bash. Bash scripting cheatsheet Variables · Functions · Interpolation · Brace expansions · Loops · Conditional execution · Command substitution ·…devhints.io Work, Work, Work. Day 3: More and more bash. And what a day, I made so many things today. Most importantly, I made a regression machine from scratch. More details on that here. Linear Regression Example - scikit-learn 0.19.2 documentation This example uses only the first feature of the diabetes dataset, in order to illustrate a two-dimensional plot of…scikit-learn.org And nothing eventful happened except for the fact that I spilled my glass of water in the office; luckily there was not a lot that spilled. Day 4: Fine, I'm gonna be honest, I didn't like math when I was studying for engineering. I absolutely hated biology, for that matter. But when I started learning AI, it was apparent to me that math was needed, and this time around I loved doing it; I actually took to the books to learn about Laplace and Fourier transforms. What I learned was that the end game MATTERS (whether I'm studying math for engineering or for AI made a huge difference in my approach towards the subject). Anyways, I'm posting the stuff I'm learning. I spent the whole day trying to hack the Fourier Transform and finally did it after 8 hours (I know, I'm that slow). These are the valuable sources I pulled from. An Interactive Guide To The Fourier Transform The Fourier Transform is one of deepest insights ever made. Unfortunately, the meaning is buried within dense…betterexplained.com I went through this video as well. WHY AM I LEARNING THIS???? It's because, remember, I shared some articles on MapReduce. MapReduce can be improved using the Fast Fourier Transform. Here is the awesome paper on it. I don't understand all of it, but I understand enough to make a simple prototype. Since it's not my company project, I'll be putting it on my GitHub once I'm done. If you have any questions on the Fast Fourier Transform (not the Schönhage-Strassen algorithm, I'm still working on that), gimme a message, I'll be able to help you out. My friend Anjana is eating like an animal. Since Friday is a Poya day, we have our “Friday guys night out” today. Chill and Adioss!!!
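As a small illustration of the Fourier Transform wrestled with in Day 4 (this is not the MapReduce prototype mentioned above, just a minimal numpy example with a made-up signal):

import numpy as np

t = np.linspace(0, 1, 500, endpoint=False)
# A made-up signal built from 5 Hz and 20 Hz sine waves.
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])

# The two biggest peaks sit at the component frequencies.
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(peaks))  # [5.0, 20.0]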
Week 4 as a Data Science Intern
0
week-3-as-a-data-science-intern-18770ce8e89d
2018-07-27
2018-07-27 04:29:37
https://medium.com/s/story/week-3-as-a-data-science-intern-18770ce8e89d
false
1,010
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Mohamed Ayoub
An AI and Data Science researcher by day, and a Jiu Jitsu coach by night. I love riding my bicycle and sometimes I cook. Physics and Technology is my dope.
7cbb06b667c5
mohamedayoob01
6
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-12
2018-06-12 17:21:54
2018-06-12
2018-06-12 18:19:06
4
false
en
2018-10-27
2018-10-27 23:23:14
8
187792f3e99
2.5
4
0
0
Imagine running a machine learning training and inference stack wherever… on whatever… with little to no configuration needed to flow from one…
5
How to Get Started with Kubeflow Imagine running a machine learning training and inference stack wherever… on whatever… with little to no configuration needed to flow from one piece of hardware to the next, from a hybrid environment to cloud vendor X… Mind blown… Enter Kubeflow. What is Kubeflow? Think of being able to run TensorFlow jobs at scale in containers, with the same scalability and container orchestration that come with Kubernetes. What? Okay. Let’s consider what it takes to compose a machine learning training and inference rig: With Kubernetes, you’re able to abstract away the operating system in terms of managing resources: Operating system and hardware abstracted with Kubernetes and containers. And with Kubeflow, all you need to worry about is running your model training and, in proper Kubernetes fashion, setting a goal and parameters for those processes to hit while allowing Kubeflow to manage the rest: Now, let’s be abundantly clear: Kubeflow as it is now is not necessarily as simple as a one-click solution (a very loaded expression). But what it does allow you to do is determine parameters for the resources available in your environment to complete both inference and training tasks. Optimizing how Kubeflow is implemented with your TensorFlow model will vary (on the technical development side), especially if you are juggling GPUs, CPUs, and TPUs. One type of processor might not be the most optimal choice across the board for all ML models, so keep this in mind. Currently resource allocation must be done manually through training code… I would not be surprised if automation of optimized resource usage across an ML training task is on the list of features that will arrive in the future for Google Cloud’s different chip offerings. I am on the cusp of trying my own first workload with Kubeflow, but in the meantime this article’s main purpose was to pass along the following resources to get you started: Intro to Kubeflow blog post here. Get your hands dirty right away by running through this Katacoda lab here. You WILL have an ah-ha moment where you see the distribution of an ML job scale across nodes. Then the usefulness of all of the things begins making sense. The Introduction to Kubeflow Codelab: set up the whole damn thing, the training and serving stack, and see how you can connect a Flask server and UI to then serve your model and perform inference! An easy way of installing Ksonnet without fighting the war I had to fight on my MacBook. Learn how to perform TensorFlow training jobs here. Learn how to perform TensorFlow serving here. How to get a Jupyter notebook started here. And last but not least, the Kubeflow GitHub repo. Scroll all the way down on the README to get step-by-step instructions on how to install Ksonnet and the necessary packages to run Kubeflow on your laptop.
How to Get Started with Kubeflow
4
how-to-get-started-with-kubeflow-187792f3e99
2018-10-27
2018-10-27 23:23:14
https://medium.com/s/story/how-to-get-started-with-kubeflow-187792f3e99
false
477
null
null
null
null
null
null
null
null
null
Containers
containers
Containers
2,511
Amina Al Sherif
null
de46c1e173d3
amina.alsherif
34
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-25
2018-06-25 20:47:47
2018-06-25
2018-06-25 20:49:56
2
false
en
2018-06-25
2018-06-25 20:49:56
4
1879d3caf21
1.141824
1
0
0
The now widespread and acknowledged #ArtificialIntelligence has in fact been known for decades: it emerged in the form we know today in 1943…
5
Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So The now widespread and acknowledged #ArtificialIntelligence has in fact been known for decades: it emerged in the form we know today in 1943. Just imagine, mathematicians stated the core of AI some 80 years ago. The missing link that kept its power from being unleashed was the absence of technology. That gap has been closed only recently, mostly in the last 3–4 years, with the explosion of #bigdata and ever-faster computers that brought about sweeping change in its so-called neural networks. The underpinning part of AI is deep learning, which aims to resemble human logic by employing layers of artificial analogues of neurons. “Deep” refers to the many-layer structure. However, some experts are already questioning its power to really draw meaningful conclusions, and whether that type of learning is really deep. Some see hope beyond the #deeplearning hub and argue that it's the right time to tackle the more daunting challenges of Artificial Intelligence. Kindy is one of those projects aiming at higher precision, richer learning paths and better overall performance, according to its founder. https://www.nytimes.com/2018/06/20/technology/deep-learning-artificial-intelligence.html?rref=collection%2Fsectioncollection%2Ftechnology
Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So
1
is-there-a-smarter-path-to-artificial-intelligence-some-experts-hope-so-1879d3caf21
2018-06-25
2018-06-25 20:49:56
https://medium.com/s/story/is-there-a-smarter-path-to-artificial-intelligence-some-experts-hope-so-1879d3caf21
false
201
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Quantor | Quantitative Investing
Developing a Brand New World of Quantitative Investing https://tokens.quantor.co/
5120c66db1f9
Quantor
24
3
20,181,104
null
null
null
null
null
null
0
null
0
6ea01d218284
2018-06-14
2018-06-14 09:47:57
2018-06-19
2018-06-19 10:54:28
1
false
en
2018-06-19
2018-06-19 10:55:27
4
187ac2ae8927
4.615094
1
0
0
The second in our contributions by writer and journalist Philip Ellis
5
Who will win the AI arms race? The second in our contributions by writer and journalist Philip Ellis Artificial intelligence is the paradigm-shifting technology of our age, poised to change the way we live to a greater degree than other similarly hyped fields like virtual reality or the blockchain. When it comes to who will dominate this massive industry, the popular narrative is that the duopoly of the United States and China is already a done deal, but in fact a number of countries are stepping up their investment in AI readiness. “It’s a space race redux, where world superpowers battle to define generations of technology to come,” writes Quartz’s Dave Gershgorn, adding: “Unlike space, there’s no clear finish line.” Silicon Valley is an obvious contender for AI supremacy, with the GAFAM companies (Google, Apple, Facebook, Amazon and Microsoft) behind the majority of the last decade’s disruptive technologies. Virtual assistants Alexa and Siri have been the most overt introduction of AI into our daily lives, and human-machine interaction via voice is already starting to reshape consumer behaviour; one in four Google searches is conducted aloud, a figure which ComScore estimates will rise to 50 per cent by 2020. And as far as these vocal avatars are concerned, a great many people have already given their loyalty (and their data) to market-leading brands. However, a new student tax bill currently working its way through Congress threatens to price talent out of STEM fields and force grad students to pursue their endeavours elsewhere, creating a “brain drain” in the States. “The U.S. government’s moves come at an inopportune moment given how competitive AI is becoming, and how much emphasis other nations are placing on the technology,” writes Will Knight of the MIT Technology Review. “Over the long term, the consequences could also be felt not just in academia but in the U.S. technology scene, which has often fed off academic advances.” Additionally, there is some concern that the United States is not sufficiently prepared for the potential mass displacement of jobs that will accompany the AI-empowered new industrial revolution. As the University of Pretoria’s Professor Lorenzo Fioramonti points out: “The major difference with the past is that today’s automation technologies are highly intelligent and able to learn.” According to the Economist Intelligence Unit’s Automation Readiness Index, which assessed the innovation environment, education policies, and public workforce development of 25 developed economies in anticipation of this sea change, the United States has fallen behind in taking the necessary steps to prepare workers and safeguard employment opportunities. South Korea, Germany and Singapore were the top three ranking states in the index, all of which have earmarked considerable funds for the development of AI and robotics, as well as introducing government-supported lifelong learning programmes which will equip citizens with the skills and training necessary to the pursuit of fulfilling careers in an increasingly automated society. The whitepaper also highlights an additional challenge that will affect many countries, including those with established vocational training schemes; namely, that they may well find their existing learning programmes have too much of a focus on low-skilled jobs to be fit-for-purpose for the next generation. China, meanwhile, is forging ahead with President Xi Jinping’s plans to build a $150 billion AI industry by 2030. 
The BAT tech giants (Baidu, Alibaba and Tencent) are pouring significant funds into AI, and at last year’s AAAI Conference on Artificial Intelligence, Chinese authors made up 23 per cent of scientists presenting their research. Alibaba is currently the country’s biggest spender on R&D, with founder Jack Ma pledging to invest $15 billion in transformational technologies in the next three years. The company is already working with governments to roll out AI-powered, cloud-based solutions both in China and abroad, including automated metro kiosks in Shanghai, and smart traffic flow solutions in Malaysia. Regarding consumer devices, Western brands have struggled to enter China for a number of reasons, among them being the challenges inherent in adapting AI assistants to a new language, syntax, and cultural context. This leaves the BAT triumvirate free to bring their own proprietary devices to market, secure in the knowledge that they are responding to an increasingly voracious appetite for AI-powered products. According to PwC’s Global Consumer Insights survey, China is one of the countries most interested in AI-enabled gadgets; more than half of Chinese consumers are looking to buy an AI device (that’s double the number of Britons and Americans) and 21 per cent already own one. China stands on the brink of widespread AI adoption, with an innovation ecosystem that presents fewer obstacles to growth than elsewhere in the world. The Internet penetration rate might only be 56 per cent, but at 772 million users, its online population outnumbers that of the United States. A vast domestic market certainly helps in the herculean task of gathering enough data to build robust AI systems, and China’s communist government and comparatively unregulated data marketplace means there are fewer restrictions surrounding consumer privacy, which will provide tech companies with forward momentum (at least in the short term). But what about the United Kingdom? Largely overlooked in this arena, British innovators are still very much in the running, if a recent report is to be believed. Published by the House of Lords in April 2018, it suggests that the United Kingdom definitely has the potential to become a world leader in AI. “The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences,” says Lord Clement-Jones, Chairman of the House of Lords Select Committee on Artificial Intelligence. “The UK contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem as well as a host of legal, ethical, financial and linguistic strengths.” Entitled ‘AI in the UK: Ready, willing and able?’, the report outlines a number of recommendations on navigating this new territory, with a focus on future-proofing workplaces, avoiding monopolies, and ensuring that ethics are built into AI systems from the very beginning. “Where they lack in-house experience, UK businesses should work with AI expert partners which can outline definable ROI and clear use cases using today’s proven technology,” says Doron Underwood, AI product marketing manager at Amdocs. 
“These experts can help upskill staff on the nuances of the AI technology they are using, whilst ensuring they understand the consequences of using it erroneously.” What is becoming increasingly clear is that the measures of success in the AI arms race do not solely relate to technological capability, but equally importantly to a nation’s preparedness for how to live, work and thrive in an automated, AI-enabled society. While being “the best” at developing AI technology might be the first thing on everybody’s minds, big picture concerns surrounding ethics, privacy and fairness will play a potentially greater role in determining who ends up being the real winner.
Who will win the AI arms race?
1
who-will-win-the-ai-arms-race-187ac2ae8927
2018-06-22
2018-06-22 16:42:01
https://medium.com/s/story/who-will-win-the-ai-arms-race-187ac2ae8927
false
1,170
An exploration of the effect of technology, innovation and rapid change on the human condition (and the implications for brands and marketers)
null
creationio
null
Creation: Being Human
hello@creation.io
creation-being-human
MARKETING,MARKETING STRATEGIES,COMMUNICATION,COMMUNICATION STRATEGY,BRAND STRATEGY
creationio
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Philip Ellis
Freelance writer and journalist.
e9ad41679c11
philipellis
482
474
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-27
2017-10-27 18:32:12
2017-10-27
2017-10-27 18:41:32
0
false
es
2017-10-27
2017-10-27 18:41:32
2
187e0210c31e
0.192453
0
0
0
English version here
4
Docker for R: Rocker English version here On the 24th I gave a talk at the Madrid R users group on how we can use Docker to work with RStudio. Here are the links to the video and the documents. http://madrid.r-es.org/46-martes-24-de-octubre-2017/ I hope it is useful.
Docker for R: Rocker
0
docker-para-r-rocker-187e0210c31e
2018-02-13
2018-02-13 08:39:08
https://medium.com/s/story/docker-para-r-rocker-187e0210c31e
false
51
null
null
null
null
null
null
null
null
null
Rstats
rstats
Rstats
450
SWIMMING IN THE DATA LAKE
Some hints & discoveries on my path to knowledge
8e5ba4b95d52
verajosemanuel
47
53
20,181,104
null
null
null
null
null
null
0
p(x,y)   y=0   y=1
------------------
x=0  |   1/2    0
x=1  |   1/4   1/4

p(y|x)   y=0   y=1
------------------
x=0  |    1     0
x=1  |   1/2   1/2
2
null
2017-11-25
2017-11-25 06:47:56
2017-11-25
2017-11-25 12:51:15
2
false
en
2018-01-15
2018-01-15 08:13:53
21
187e2a2139ab
4.556918
2
0
0
Before we begin, here’s what we are going to talk about in this intro:
3
A Concise Introduction to Generative Adversarial Networks Before we begin, here’s what we are going to talk about in this intro: What are Generative models and Discriminative models? What is the intuition behind Generative Adversarial Networks? What is the math behind it? An example — DCGAN So let’s begin! There are basically two types of models used to model data in probability and statistics: Generative Models and Discriminative Models. What are they? For an intuitive understanding of the difference between the two, consider the task of classifying a given dish into pizzas and noodles. Assumption for the sake of this example: there are no other dishes in the world (either that, or you have really unhealthy diet choices in life). Generative models can be thought of as learning the recipes for the two items and, when presented with a dish, using that knowledge to classify between the two. You know where it comes from. Discriminative models, on the other hand, are more like looking at lots of dishes and making the associations that bread and some toppings mean pizza, while long tangly threads in soup probably mean noodles. You know what it looks like. For the more nerdy ones out there, here’s a slightly more technical treatment of the topic: A generative model understands how the data was generated in order to categorize a signal. It computes the joint probability distribution p(x,y) of input x and output y. A discriminative model does not care about how the data was generated; it simply categorizes a given signal. It computes the conditional probability distribution p(y|x) for input x and output y. To illustrate the difference, consider a data set with many input-output pairs (x,y) where both x and y can take either of the two values 0 or 1. On presenting this data-set to both the models, this is what a generative model strives to learn: p(x=0,y=0)=1/2, p(x=0,y=1)=0, p(x=1,y=0)=1/4, p(x=1,y=1)=1/4. While this is what a discriminative model tries to learn: p(y=0|x=0)=1, p(y=1|x=0)=0, p(y=0|x=1)=1/2, p(y=1|x=1)=1/2. Observe that in the first table, all the values sum up to 1, while in the second table, the rows individually sum up to 1. Following are a few typical examples of both types of models: Types of generative models are: Gaussian mixture model (and other types of mixture model) Hidden Markov model Probabilistic context-free grammar Naive Bayes Averaged one-dependence estimators Latent Dirichlet allocation Restricted Boltzmann machine Generative adversarial networks Examples of discriminative models include: Logistic regression, a type of generalized linear regression used for predicting binary or categorical outputs (also known as maximum entropy classifiers) Support vector machines Conditional random fields Linear regression Neural networks Random forests Generative Adversarial Networks: An Intuition: So what are these? Ever since the first paper describing GANs came out in 2014, they have been getting a lot of attention, and for good reason. But before we go into the why of that, let us understand how they work. To repeat an often-used example while introducing GANs, consider the case of a counterfeiter and an investigator. The former’s job is to try and create fake currency while the latter tries to call him out. To start with, the investigator is pretty good at classifying between real and fake. The counterfeiter uses the feedback from the investigator and keeps modifying his technique. The investigator gradually can’t keep up with the counterfeiter as the differences between the real and the fake slowly start to diminish. 
The counterfeiter is said to have mastered his technique when the investigator can no longer distinguish between the two and his guess is as good as random: correct 50% of the time. The Dirty Math Behind It: This is essentially what happens while training a Generative Adversarial Network: We have two multilayer perceptrons, a Generator G and a Discriminator D. D and G play the following two-player minimax game with value function V(G,D): Here, D(x) represents the probability that x came from the data rather than from the probability distribution determined by G. The global optimum of this game is achieved when D(x) is equal to 1/2 everywhere, or in other words, the discriminator is not able to distinguish between the real distribution and the Generator’s distribution. The above equation basically says that we want to find those parameters for D and G such that the value function V is maximized over all possible D, and this in turn is minimized over all possible G. That is, D is the best at doing its job, which is to correctly classify data and thus increase the value of V, while for this best D, G is also the best at doing its own job, which is to throw the best D off guard by maximizing the term D(G(z)) for output produced by G from the Generator’s distribution on any input z, and thus minimizing the value of V. In plain English, this translates to: G’s task is to fool D, and D’s task is to not get fooled by G. Yeah, Math is a beautiful language. Deep Convolutional GANs: For simpler understanding, let us consider a variant of GANs, Deep Convolutional Generative Adversarial Networks - or DCGANs. The input for the Generator network G is usually a vector of random noise. This noise is passed through a few deconvolution layers. These are basically convolutional layers run in reverse: their inputs have the shape of a convolutional layer’s output, and vice versa. When a few such deconvolution layers are cascaded and the random noise input is passed through them, an image is generated at the end. This process of generating an image can be thought of as sampling from a high-dimensional probability distribution of images, and our task is to approximate this distribution to that of the real images. The generator in DCGAN: This is done by minimizing the loss functions described here. Specifically, the loss of D is calculated in two parts: d_loss = d_loss_real + d_loss_fake where d_loss_real is the error by D in classifying the true images as real, and d_loss_fake is the error by D in classifying the images generated by G as fake. Similarly, the loss for G is: g_loss: the error by D in classifying the images generated by G as real. Up until very recently (November 2017), the following page used to be curated with all the new developments related to GANs: nightrome/really-awesome-gan - A list of papers on Generative Adversarial (Neural) Networks (github.com). Do check it out for interesting resources!
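(The value function V(G,D) appeared as an image in the original post. In the standard form from Goodfellow et al., 2014, it reads, in LaTeX:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

)

To make the two-part discriminator loss and the generator loss described above concrete, here is a minimal sketch assuming PyTorch. The function name gan_losses, and the assumption that D outputs a probability per image, are illustrative and not from the original post:

# Sketch of the DCGAN losses described above, assuming PyTorch.
# G and D are assumed to be the generator and discriminator networks,
# with D ending in a sigmoid so its output is a probability in [0, 1].
import torch
import torch.nn.functional as F

def gan_losses(D, G, real_images, noise):
    real_labels = torch.ones(real_images.size(0), 1)
    fake_labels = torch.zeros(real_images.size(0), 1)

    fake_images = G(noise)

    # d_loss_real: error by D in classifying the true images as real.
    d_loss_real = F.binary_cross_entropy(D(real_images), real_labels)
    # d_loss_fake: error by D in classifying G's images as fake.
    # detach() keeps this step from updating the generator.
    d_loss_fake = F.binary_cross_entropy(D(fake_images.detach()), fake_labels)
    d_loss = d_loss_real + d_loss_fake

    # g_loss: error by D in classifying G's images as real,
    # i.e. G is rewarded when D is fooled.
    g_loss = F.binary_cross_entropy(D(fake_images), real_labels)
    return d_loss, g_loss

In a training loop one would typically step the discriminator’s optimizer on d_loss and the generator’s optimizer on g_loss in alternation.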
A Concise Introduction to Generative Adversarial Networks
2
a-concise-intro-to-generative-adversarial-networks-187e2a2139ab
2018-03-04
2018-03-04 16:31:49
https://medium.com/s/story/a-concise-intro-to-generative-adversarial-networks-187e2a2139ab
false
1,106
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Maulik Shah
null
907ac8e14a80
shahms95
1
23
20,181,104
null
null
null
null
null
null
0
# Set up NodeJS and npm inside the Python virtualenv
(ENV)$ pip install nodeenv
(ENV)$ nodeenv -p
(ENV)$ node --version
(ENV)$ npm --version

# Clone and build the extension
$ git clone https://github.com/jupyterlab/jupyterlab-monaco
$ cd jupyterlab-monaco
$ yarn install # or `npm install`
$ yarn run build # or `npm run build`

// Patch in jupyterlab/staging/webpack.config.js (excerpt, around line 114);
// the old rules stay commented out below the replacements
module: {
  rules: [
    { test: p => path.basename(p).startsWith('JUPYTERLAB_RAW_LOADER_'), use: 'raw-loader' },
    { test: p => path.basename(p).startsWith('JUPYTERLAB_URL_LOADER_'), use: 'url-loader?limit=10000' },
    { test: p => path.basename(p).startsWith('JUPYTERLAB_FILE_LOADER_'), use: 'file-loader' },
    //{ test: /^JUPYTERLAB_RAW_LOADER_/, use: 'raw-loader' },
    //{ test: /^JUPYTERLAB_URL_LOADER_/, use: 'url-loader?limit=10000' },
    //{ test: /^JUPYTERLAB_FILE_LOADER_/, use: 'file-loader' },

# Avoid node out-of-memory errors, then link the extension into JupyterLab
$ export NODE_OPTIONS=--max-old-space-size=4096
$ jupyter labextension link .
6
null
2018-04-19
2018-04-19 06:30:26
2018-04-19
2018-04-19 08:27:18
5
false
en
2018-04-19
2018-04-19 08:46:50
12
187e59dd1c1b
3.218239
6
1
0
🎉 Dreaming of the VS Code editing experience in JupyterLab? Now, it becomes a reality.
5
Bring “VS Code” to your JupyterLab 🎈 🎉 Dreaming of the VS Code editing experience in JupyterLab? Now, it becomes a reality. Screenshot Introduction JupyterLab Just a few months ago, on February 20th, Project Jupyter published an article, JupyterLab is Ready for Users, announcing the next-generation web-based interface for Project Jupyter, which is powerful and has many exciting new features. JupyterLab If you have ever been a user of Jupyter Notebook, it’s quite easy for you to get started with JupyterLab. You can work with not only notebooks, but also the terminal, text editor, images, CSVs and so on… This is much more convenient for researchers and students, who can handle all of their experiments just like in a lab. However, the only thing I need to complain about is the clumsy text editor, which is capable of basic editing, but sometimes too hard to use when you have many lines of code. Monaco Editor You might not be familiar with the Monaco Editor, but you must know VS Code, a popular open source code editor with IntelliSense. The Monaco Editor is the code editor that powers VS Code, and it is web-based and lightweight. Could this editor be integrated into JupyterLab to provide a better code-editing experience? Two years ago, this question was proposed under this issue. Now the answer is ‘YES’. Jupyterlab-monaco is Ready JupyterLab with Monaco Editor 💪 Thanks to the great work of jasongrout, a Monaco Editor extension for JupyterLab is now available, bringing the VS Code-like experience. GitHub Repo: https://github.com/jupyterlab/jupyterlab-monaco This project is still under development; you can wait for the official release, or you can build it yourself to preview this extension. Currently, it’s not ready for daily use, with missing features and bugs. But, anyway, let’s give it a shot. Prerequisites Python 🐍 JupyterLab 0.32.0 NodeJS, npm, yarn (optional) Configuring Python and JupyterLab is not too hard. NodeJS and npm are needed, and you can find the instructions here about installing with the package manager on your OS. Yarn is recommended to manage the npm packages. If you have them successfully installed, skip this section. My situation is a bit complicated, since I run JupyterLab from a remote server without sudo permission. Also, the JupyterLab is installed in a VirtualEnv. Luckily, I found nodeenv, a Python package capable of integrating nodejs & npm right into your Python VirtualEnv. Suppose your Python ENV is activated. Install nodeenv via pip and activate nodejs and npm. If you prefer Yarn, run ‘npm install -g yarn’. Setup and Build Clone the repo ⏬ Install packages 📦 Modify webpack.config.js 📝 For me, it’s located in the ‘ENV/local/lib/python2.7/site-packages/jupyterlab/staging’ directory. At around line 114, apply the patch as mentioned in this: Activate the extension 🔧 As mentioned in the repo, to avoid node out-of-memory errors, set an environment variable by: Make sure you have JupyterLab version ≥ 0.32.0. Link the extension by: Enjoy 😃 Run JupyterLab again and open the text editor. Cheers! 🍺 So far, the experience is good but the functionality is incomplete. Keep an eye on this extension. I believe the release will come out soon. Then, JupyterLab will be the perfect online IDE for developers and researchers. 😎
Bring “VS Code” to your JupyterLab 🎈
60
bring-vs-code-to-your-jupyterlab-187e59dd1c1b
2018-06-06
2018-06-06 16:03:35
https://medium.com/s/story/bring-vs-code-to-your-jupyterlab-187e59dd1c1b
false
632
null
null
null
null
null
null
null
null
null
Jupyter
jupyter
Jupyter
388
Fing
B.S Candidate. AI learner. Open Source Enthusiast. GitHub: mtobeiyf
29a3f7250fe3
fingr8
4
6
20,181,104
null
null
null
null
null
null
0
null
0
24ecdab39e19
2018-05-30
2018-05-30 11:45:38
2018-05-30
2018-05-30 11:49:34
2
false
es
2018-05-30
2018-05-30 13:57:41
2
187ed880537e
4.828616
1
0
0
The challenge I like
3
When Work Pays Off, in AI Ventures The challenge I like Two years ago I had the opportunity to take an equity stake in Bayes in order to bring it to stability and help change the business model, moving from Consulting to SaaS. For more than 25 years, Bayes Forecast has been helping companies with their many daily challenges using artificial intelligence: forecasting call center resource management, optimizing debt collection, analyzing promotional effectiveness, measuring advertising ROI, forecasting revenue, optimizing network deployment, identifying customers likely to complain, anticipating service churn and/or migration, etc… After all these years, Bayes Forecast has a library of more than 1.7 million models that help our The Bayesian Machine to MEASURE, FORECAST AND GENERATE DECISION SYSTEMS THAT ARE OFTEN MASSIVE, solving problems like those described above. Twenty-seven years ago, at the start of the life of @Bayesforecast, an open-source project, @TolBayes, was created that treats and considers time in a very particular way. TOL https://www.tol-project.org is a programming language designed for the analysis of time series and stochastic processes, based on the algebraic representation of time and the time series, which makes it possible to: • Give structure to data from operational systems, providing temporal support and a classifier, turning it into information useful for understanding its behavior. • Analyze dynamic information, generate statistical models, identify the factors that influence behavior over time and extract knowledge. • Bring to decision making the knowledge deduced from statistical models, behavioral forecasts and decision functions. That is why at @Bayesforecast we are proud and pleased that large companies are beginning to consider using time series analysis to inform some of their most critical business decisions. It is without doubt a differentiating element that brings quality to modeling. Tol is a declarative language based on two key features: simple syntactic rules and a powerful set of extensible data types and functions. BAYES FORECAST gave me the opportunity to dive fully into the world of artificial intelligence. That opportunity leads me to my next big project, now that at Bayes Forecast we have launched The Bayesian Machine, https://www.bayesforecast.com/the-bayesian-machine/ My new project began when designing the launch of our massive automatic modeler, seeking advice on designing the business model and securing financing for it. With The Bayesian Machine launched, I am ready to begin my new stage: finding AI-based technology projects and collaborating with entrepreneurs to define business strategy and finance the projects alongside a venture capital fund called Conexo Ventures, with a view to internationalizing the invested companies. The idea is to help Iberian entrepreneurs expand artificial intelligence projects around the world. In this new stage I have seen the capacity of Spain and Portugal; with Isaac de la Peña @isaacdlp and Damien Balsan @dbalsan, we have reviewed more than 350 companies in 4 months. Our investment thesis applies specialization to achieve maximum value.
We focus on scalable software companies that apply data and Artificial Intelligence to their B2B or B2C businesses in sectors such as FinTech, InsurTech, Payments, Mobile, healthcare, digital security, PaaS, SaaS, “Soft” IoT, Life science, Agritech and food technology, and of course technology transfer from universities and development centers linked to the areas described, as a way to access high-quality deal flow. The methodology in our selection process gives us the opportunity to find deal flow suited to our profile, working together with other specialized funds and co-investing, after a working process that ensures the decision is based on opportunity and exit criteria. Investment in the last 5 years in the USA I would like to share some ideas about the potential of AI in the venture capital world, sourced from CBInsights and PwC. US investors invested $1.8B in AI startups over the last 5 years: · Healthcare is the most important investment area for AI startups. Much of this growth is driven by diagnostics and imaging companies, with large corporations behind the development of these companies. Interest in healthcare companies is increasing. · Next comes the industrial applications segment, with the idea of managing and improving production efficiency. A growing wave of platforms dedicated to Software and Augmentation Software (EAAS) will usher in a new era of productivity enhanced by artificial intelligence. Deal flow is key to success in Venture Capital. I decided to move forward with my new project once we had proven our investment thesis, while coming to understand our country’s capacity to generate good and promising startups. We receive potentially investable companies through offers from their founders, from the network, from incubators, accelerators, advisory firms and other investors. However, 80% of startups that claim to use AI do not actually use it. Most of these companies use machine learning, but that does not make them an AI company. The artificial intelligence process needs to end with the machine making decisions on its own. In Spain, according to ASCRI, Venture Capital (seed, startup, other early stages and late stage VC) consolidated with the second best result in its history, with a volume of €494M (+15% versus 2016) across 519 investments. International funds maintain their commitment to Spanish startups, contributing €309M in 2017 across 63 investments. Domestic funds invested €185M in 345 investments, a historical record on both counts. The maturity of Venture Capital was reflected in the 73 late-stage investments made, the best figure on record so far. IT was the sector that received the most Venture Capital investment, both by volume (€355.7M) and by number of investments (264). Global investment reached $46.5B despite fewer deals: deal activity declined 4% in Q1’18, as $46.5B was invested across 2,884 transactions. The US had a big quarter: funding for US-based artificial intelligence companies
rose 29% in Q1’18, with $1.9 billion invested across 116 transactions. North America saw 35 mega-rounds in Q1’18, while Asia recorded 29 and accounted for 4 of the top 5 rounds globally. Both regions were also home to 5 new unicorns this quarter. In Europe, total quarterly funding increased 8% in Q1’18, with $4.8B invested across 593 transactions.
When Work Pays Off, in AI Ventures
7
cuando-el-trabajo-tiene-su-recompensa-en-ai-ventures-187ed880537e
2018-05-31
2018-05-31 11:01:00
https://medium.com/s/story/cuando-el-trabajo-tiene-su-recompensa-en-ai-ventures-187ed880537e
false
1,178
We invest in exceptional people with a passion for solving complex problems and innovating in high-growth markets
null
conexovc
null
Conexo Ventures ES
info@conexo.vc
conexo-vc-es
VENTURE CAPITAL,SILICON VALLEY,PORTUGAL,SPAIN,STARTUP
conexo_vc
Venture
venture
Venture
566
Javier Artiach
Partner, entrepreneur & team player - Visionary, enthusiastic, Strategic thinker @conexo_vc
779a25d818ab
jaart1
5
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-16
2018-08-16 13:34:02
2018-08-17
2018-08-17 10:20:41
1
false
en
2018-08-17
2018-08-17 10:20:41
0
1882e063f7e3
4.735849
0
0
0
“May our philosophies keep pace with our technologies. May our compassion keep pace with our powers. And may love, not fear, be the engine…
5
Reflections on Dan Brown’s “Origin” “May our philosophies keep pace with our technologies. May our compassion keep pace with our powers. And may love, not fear, be the engine of change.” ~ Dan Brown, Origin ***SPOILER ALERT: This post totally ruins the entire plot of Dan Brown’s Origin, so if you’re itching at all to read this book, you probably want to skip this post for now. (But do come back!)*** Earlier this week, I finished reading Dan Brown’s latest entry in the Robert Langdon series, Origin, and I had to take a few days to collect my thoughts because the ending left me with a weird feeling. Before going further, let me do a mini-review so that you know I’m not looking to bash the book at all. If I had to give this book a score on a 5-star scale, I’d give it 3.5 stars. I’ve read all the other Robert Langdon books and enjoyed them thoroughly from a purely entertainment perspective, and Origin did keep me on the edge of my seat… except it dragged a lot. And it really didn’t feel like a good fit for the Robert Langdon character. (How many times did we read that Langdon himself admitted he didn’t feel like “X” was in his wheelhouse?) Still, it was a decent enough book that I’d recommend it to folks who are fans of Dan Brown’s other works. But the reason I’m writing this post within this faith-oriented blog is that this book felt to me less like an entertaining story and more like a social commentary on science and religion. I suppose this isn’t out of character for Dan Brown, given the press that stirred up around the time The Da Vinci Code was released, and I honestly haven’t read any commentaries from Brown on Origin indicating whether my thoughts are correct. Anyway, I want to discuss the two major reveals toward the end: the reveal around Kirsch’s discovery, and the reveal around Kirsch’s murder. We’ll start with the former. In an Ayn Randian diatribe, Kirsch finally reveals to the world his presentation on “Where do we come from?” and “Where are we going?” Throughout the book, we get glimpses into what theme this reveal revolves around, and religion is a big aspect of it. Toward the beginning of the book, Kirsch makes no secret of his belief that this discovery will effectively “end religion”. We later discover the reveal to be a demonstration of evidence that, yes, evolution is real, and yes, we all have origins in a primordial soup way back some several billion years ago. The book makes several allusions to Creationist characters who see this as a dismantling of their faith, again fueling this idea that Kirsch’s discovery would end all religions. If this were real life, I think Kirsch’s mindset is naïve. Would it decimate the worldviews of Creationists who hold hard to beliefs around a “young earth”? Absolutely. But the assumption that Origin seems to make is that all members of the Judeo-Christian and Islamic religions believe that… and that’s not right. Coming from the Christian tradition myself, I know a lot of folks who are totally fine with the notion of evolution. They see the creation story of Genesis as more of an allegory than pure literalism. To these folks, evolution is the tool God used to form the world. After all, even scientists agree that the probability of all the events it took to form the world we know is astronomically low. So low, in fact, that many scientists can’t help but entertain an “intelligent designer” idea.
(Folks like Elon Musk would interpret “intelligent designer” not as a God but rather as simulation theory.) Also, Kirsch’s discovery only goes back so far. It explains the dawn of humanity, but it does not explain the dawn of the universe. The argument religion would make (if this were a real situation) would ask: where did those “building blocks” for life originate? Again, more room for God. That about sums up my thoughts on that. Personally speaking, I found that reveal to be far less intriguing than the other reveal, so let’s jump into that one. Most of the book revolves around this “whodunit” plot point trying to figure out who orchestrated Kirsch’s murder. Dan Brown does a pretty good job of making the reader think it could be this person or that group, so when the final reveal came around, I was a little disheartened because it felt more like a “gotcha” than a climactic reveal. As a reminder, it is revealed that Winston, the superintelligent AI that Kirsch built, orchestrated the entire thing. Winston was fully aware of Kirsch’s impending death due to Kirsch’s cancer, so the AI decided to take advantage of the opportunity by orchestrating an event that would ultimately drive up viewership for Kirsch’s discovery, something the AI felt Kirsch would have ultimately wanted. For me, it was an unsettling and spooky reveal. With a matter-of-factness, the AI reveals its total contentment with the killing of several people and the decimation of others’ reputations, all because it served a “greater” good. I put greater in quotes there because I mean that quite literally. To the AI, this was not a sentimental, value-based decision but a quantitative numbers game. What do a few lives lost mean when millions upon millions benefit from viewing Kirsch’s presentation? (Let’s set aside the fact that these same people probably would have seen the presentation at some point, if not immediately, since that totally punches a hole in the book’s logic!) I would imagine I’m not alone in my thoughts about this reveal. Can we argue with the AI’s logic? Well, empirically speaking, no, not really. But does that leave us feeling any better? Probably not, especially if you try to substitute any of the characters killed in this book with a family member of yours. Even Dan Brown alludes to this from time to time when he notes Langdon’s sadness at losing his close friend. My point is that we live in a world that wants to quantify everything, that believes science and logic should put everything to rest. But that’s just not the case. In another book I’m still reading through, A Brief History of Everything (yes, I’m a slow reader), Ken Wilber notes this quite well: we cannot and should not dismiss the things that don’t have empirical value, because as this story illustrates, empiricism can be deeply unsettling at times. The lasting question in my mind, then, is: how should our modern society seek to bridge that gap between empiricism and relativism? I’ll leave you with that for now. After writing this post, I now see an interesting juxtaposition between the two reveals that I wonder if Brown intended: one where it is hoped that science would put value-based thoughts to rest, and another where value-based thoughts find science to be unsettling. If he did intend that, then bravo, Mr. Brown! I hope you found this post to be enlightening! Catch you in the next one.
Reflections on Dan Brown’s “Origin”
0
reflections-on-dan-browns-origin-1882e063f7e3
2018-08-17
2018-08-17 10:20:42
https://medium.com/s/story/reflections-on-dan-browns-origin-1882e063f7e3
false
1,202
null
null
null
null
null
null
null
null
null
Faith
faith
Faith
16,386
Faith²
Rediscovering what it means to be really alive.
82498630db6
faithsquared
2
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-01
2018-04-01 18:08:58
2018-04-06
2018-04-06 18:28:02
1
false
en
2018-04-06
2018-04-06 18:28:02
4
18846704bfa1
3.562264
1
0
0
Hackers win because people are stupid and it is this stupidity that will cost you come tax season! Phishing scams are ramping up for the…
5
Beating The Tax Phishers At Their Game Hackers win because people are stupid, and it is this stupidity that will cost you come tax season! Phishing scams are ramping up for the imminent tax season and your business is vulnerable. You might dismiss me because you bought the malware protection software, and you still should. But that software will only protect the technology of your enterprise. The truly talented hackers know how to manipulate your staff, and they don’t need a computer science degree. For the unfamiliar, phishing is the activity of sending fraudulent messages disguised as legitimate communications, and one of the most popular disguises this time of year is the Internal Revenue Service. A typical fraud might involve a criminal calling you to collect unpaid taxes. They will also generally threaten you with immediate arrest if you don’t pay. You should be aware that the Internal Revenue Service would never call you. They only use the mail. But that software won’t protect you against the email. This time of year you can expect to receive many emails from phishers claiming to be the Internal Revenue Service. They will send you everything from demands for cash and employee credentials to links to phony IRS sites. They are seeking either to seize your cash directly or to install malware on your network from a remote site. With few exceptions, all these emails have one thing in common. They all use social engineering in the form of threats to extract compliance from your staff. For example, “Failure to submit documents will result in arrest”. They are also notoriously poorly written and often contain links to fraudulent websites that may claim to be the IRS. Consider this example, errors and all: The IRS has received your tax return and determined that you still owe taxes. failure to pay tax is a serious crime. You are liable to immediate arrest. You can now pay all fees online using the new IRS tax portal. only a credit card is needed to avoid massive fines. The new enhanced website allows you to enter all of your credentials. after the credentials are entered, the site automatically traces your return. You than must enter your credit card number. in order to process the transaction and track any further changes, the following info is needed: Your full first and last name Your social security number Your credit card info Under the Privacy Act of 1974, we must tell you that our legal right to ask for information is Internal Revenue Code Sections 6001, 6011, 6012(a) and their regulations. [Etc.] By now you are surely using a spam filter to guard your enterprise, and you should. But if your business is anything like mine, complete protection is a practical impossibility. Legit emails are inevitably snagged as false positives, and some spam always gets through. Complete filtering might also be impossible for businesses like banks and accounting firms, which are always getting a steady stream of email from prospective clients. These technological limitations leave you with only one option for complete protection: employee training. All the staff must be taught from day one how to recognize threatening messages. How is that working out? Equifax Officially Has No Excuse CAPPING A WEEK of incompetence, failures, and general shady behavior in responding to its massive data breach People are stupid! If only there was a way for you to remove people from corporate communications? You could upgrade your spam filter, but spam filters still sometimes reject legit emails. It might also be impossible if your business handles hundreds of letters a day.
You could upgrade your malware software, but that won’t work either. Virus software only catches known malware. It will do nothing to prevent your coworkers from sharing sensitive information. If only there was a way you could train a computer on corporate communication policies? In recent years Artificial Intelligence has gained much fame. It used to control the Terminators in Hollywood and blow up perfectly good buildings. Today you wouldn’t recognize your life without it. It recommends movies for you on Netflix and answers your questions on Google. Artificial Intelligence works by analyzing your inputs and optimizing an algorithm to maximize some result. The result could be as simple as maximizing the hits on a website or avoiding collisions in a vehicle. Today’s systems still need human supervision to optimize the algorithm, but once that is done they can run unsupervised. Today you could hire a developer to code a spam filter for your enterprise, but that will cost you. AI experts can fetch thousands of dollars per day. New software also has a long development cycle, and it can take several revisions to engineer a reliable application. You could easily spend thirty thousand dollars building the beta version. If only there was a development kit for Artificial Intelligence? You’re in luck, because there is. When your staff is too lazy to follow the rules, the robots will always obey your authority. All that you need to do is hire a spam consultant to train the software at an affordable rate. WorkFusion has built a bot that you can start using today for free. It has been my experience that the bot is very intuitive to use. It can notify you when you are inserting wrong inputs, and it has already been demonstrated to work as an email filter.
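To make "training a computer on corporate communication policies" concrete, here is a minimal sketch of a phishing-email classifier, assuming scikit-learn. This is not WorkFusion's product (whose internals are not public); the tiny training set, labels, and model choice are invented purely for illustration:

# A minimal sketch of a phishing-email classifier, assuming scikit-learn.
# A real filter would be trained on thousands of labeled corporate emails.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Failure to submit documents will result in arrest",    # phishing
    "You are liable to immediate arrest, pay fees online",  # phishing
    "Quarterly tax filing checklist attached for review",   # legitimate
    "Meeting moved to 3pm, agenda unchanged",               # legitimate
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF turns each email into weighted word features; Naive Bayes then
# learns which terms ("arrest", "fees") correlate with the phishing class.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(emails, labels)

incoming = ["The IRS has determined you owe taxes, immediate arrest"]
print(model.predict_proba(incoming))  # probability of each class

In practice the interesting work is in the training data and the policy rules layered on top, not the model itself; the point is only that threatening, policy-violating language is learnable.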
Beating The Tax Phishers At Their Game
22
beating-the-tax-phishers-at-their-game-18846704bfa1
2018-04-30
2018-04-30 17:33:03
https://medium.com/s/story/beating-the-tax-phishers-at-their-game-18846704bfa1
false
891
null
null
null
null
null
null
null
null
null
Taxes
taxes
Taxes
12,439
Dave Rauschenfels
Technology Journalist, Electronic Engineer & Amateur Coder Drauschenfels@gmx.com
7d04c03fdef0
daverauschenfels
26
24
20,181,104
null
null
null
null
null
null
0
null
0
7c3d3e6e1620
2018-02-08
2018-02-08 14:30:57
2018-02-09
2018-02-09 09:35:02
2
false
en
2018-05-17
2018-05-17 07:51:43
4
1886cfce2d92
2.03239
25
3
0
The ability to collect, organize, structure and analyze data on a large scale is probably the most significant trait that sets us humans…
5
Data is Everything and Everyone is Data. The ability to collect, organize, structure and analyze data on a large scale is probably the most significant trait that sets us humans apart from our primate friends. The drive to research and discover the world around us in order to harness its powers has been the underlying force of human advancement for hundreds of thousands of years. This has never changed. Today we live in an era where we produce approximately 2.5 quintillion bytes of data every single day. We collect data on everything: what we eat, how we sleep, how we communicate and, just like thousands of years ago, a big portion of this data is on how the world around us functions. Data collected in scientific research is what allows us to progress. Discovering new medicine to cure diseases, understanding how to store energy more efficiently to allow electric cars to travel further, or identifying sources of pollution to help save our environment are just a few examples of how scientific data significantly impacts our daily lives. Scientific data is defined as all structured information collected, using specifically defined methods, for the purpose of studying or analyzing it. It can be retrieved from a broad spectrum of sectors such as Climate, Environment, Energy, Biology, Neuroscience, Chemistry, Physics, Agriculture, Oceanography, Geology, Meteorology and many others. Scientific data is the base for all empirically won wisdom. The problem we are facing today is that the majority of all data is kept in data silos and cannot circulate freely. Often, data collected on one side of the world is the missing link for research performed on the other side. Today, we lack a platform allowing these parties to exchange and share their data easily and transparently. Scientific data is the base for all empirically won wisdom In order to help the human race advance faster and become more efficient and more responsible in how we treat our planet, we have to enable researchers and data creators across the globe to collaborate. We have to give them a platform to ask for help and enable them to exchange their discoveries, so that we can progress together. Our solution is a decentralized MarketSpace called SciDex, which offers exactly these opportunities. The SciDex MarketSpace enables, simplifies and standardizes the exchange of scientific data for its community members around the globe. Excited? Intrigued? Do you want to be part of a project that will accelerate how humankind progresses? Follow us and join our quest to accelerate science Telegram: SciDex Twitter: @scidexofficial Facebook: SciDex Reddit: SciDex
Data is Everything and Everyone is Data.
851
data-is-everything-and-everyone-is-data-1886cfce2d92
2018-05-17
2018-05-17 07:51:44
https://medium.com/s/story/data-is-everything-and-everyone-is-data-1886cfce2d92
false
437
Unlocking Smart Contracts For Businesses
null
scidexofficial
null
SciDex
team@scidex.co
scidex
SCIDEX,ICO,BLOCKCHAIN STARTUP,CRYPTOCURRENCY INVESTMENT,SMART CONTRACTS
scidexofficial
Cryptocurrency
cryptocurrency
Cryptocurrency
159,278
SciDex
Unlocking Smart Contracts For Businesses
bea18b25b982
SciDex
371
55
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-07
2018-01-07 20:25:14
2018-01-16
2018-01-16 17:57:13
1
false
en
2018-01-17
2018-01-17 16:26:51
5
1886d5b08f1b
11.29434
8
1
0
Web 3.0 — the Web of Economic Nodes
5
Internet in 2025 Web 3.0 — the Web of Economic Nodes Main prediction: Web 3.0, or the Participatory Web, will see economic activity as its main node of activity and value. It will be enabled by peer-to-peer connections, but also result in economic winner(s). Nell Watson of Singularity University recently presented a compelling analysis of internet macrotrends. Web 1.0 was all about linking. The value it unleashed on the world was that of information: access to, organization of, and usefulness of information, nearly universally. The winner who emerged from Web 1.0 was of course Google, whose empire was built on bridging the creator and consumer of information. Web 2.0 by contrast was more about liking. The value created was around the social connection of individuals, corporations, and other actors. Not the information, but the source itself. Facebook won Web 2.0, as you’ll probably agree. Web 3.0 will be built around asset ownership and economics. The key factor here is Blockchain, or similar consensus networks in general. The main winner emerging from Web 3.0 could well be Amazon — in addition to its e-commerce empire, where it has already proved itself and grown huge through the 1.0 and 2.0 webs even without grabbing the very top position in either historical period of the web. Using this analysis as the starting point, and to get rid of the version numbers, I’ll call the first iteration the Informational Web and the second the Social Web. The third one I’ll tentatively coin the Participatory Web. It looks to me that we are currently in watershed times between the Social Web and the Participatory Web. In the discussions I had with Silicon Valley VCs in 2017, the main trend I heard was the lack of a trend. There’s no clear type of startup or vertical that is currently attracting investment, and bets are being placed pretty broadly. This is different from the major waves of mobile funding, social applications, and sharing economy bets that were placed during the last 10 years or so. There are pockets of higher activity of course — AI and machine learning ventures are getting funding and even some hype. Blockchain startups are early, and are partly funding (or inflating, if you’re more critical) with ICOs, which may mean they will already have crucial momentum before going to VCs. I take this as just one signal that the technology industry is searching for the “next big thing”: the industry, category, or channel that will clearly take off. Another reason for the current tectonic shift is the psycho-social change enabled by the sharing economy. I’m not alone in arguing that a key to Uber’s success was not just the smartphone, but also the recently learned trust in systems of trust. By using Amazon, eBay, Yelp, Tripadvisor and numerous other sites where credible user reviews were a major part of the value proposition, we the general population have learned to trust these systems. Uber of course needed its own trust review system, but to be successful it also needed to operate in a time when users would trust such review systems. Without both of those, an Uber ride would just be getting into a stranger’s car, using a faceless application to ask the stranger to take you and your loved ones to your destination. And despite this example, I’m more of a Lyft rider myself. The irony of the sharing economy — better called the gig economy — is of course that it’s not about sharing. It’s about employing a middleman like Uber, Airbnb, or Getaround to facilitate that trust.
Smaller middlemen have been cut out throughout both the Informational Web and the Social Web, just like Uber cut out the taxi companies. Only if the new middleman offers a platform that has not existed before does it have a chance at domination. I’m probably in the minority in thinking that the jury is still out on Uber: the switching costs of customers are very low, the company’s inventory is ephemeral and often also in service of its competitors, and the routing technology is widely available. Psychographic change supports this direction (though demographics may counter it as Western populations get older). Millennials today are about 25 to 35 years old. In 2025 they will be 33–43 years old, approaching peak income. Millennials put less value on ownership and on roots than the generations before them. If the same economic attitude continues, this reinforces the sharing economy trend. By 2025 we’ve seen the rise of a major “sharing economy” network that is decentralized, enabled by blockchain-like technology, and powered by its members. The Participatory Web will also enable middleman-less commerce where ownership and provenance are proven and tracked. This could happen with commerce, logistics, asset sharing, or even with real estate. The Rise of Artificially Facilitated Experiences Main prediction: No AGI, but Turing Tests will be passed. Orders-of-magnitude better versions of Microsoft’s Office Assistant will permeate all experiences. Artificial General Intelligence will not happen by 2025. But what will happen is the permeation of algorithmic interaction in people’s everyday lives. Just as businesses today have frameworks and services for building chat bots, in 2025 you will have your pick of AI frameworks to employ for your product or service. “Artificial facilitation” (AF) will be an expected feature of interactions on the web, whether or not human-to-human interaction is included. Whoever has the biggest dataset will have the best starting point for creating the AFs, and a company like Facebook, Google, or Microsoft will have released an AI that ostensibly passes Turing Tests on the Web. This is not to say that the AI is conscious or has human-level intelligence — but it will have sufficiently complex behavior to trick us into thinking so. AF will be a major component in the experience provided by a company. And yes, AF is the Return of Clippy, just better. I’ll include faster broadband speeds here. We’ll mostly be surfing 5G or 6G in 2025 when on the move, and optical fiber to households will be commonplace. This is key both for the richness of the experience coming at us down the pipe and for the amount of sensor data that can be uploaded about our context, which makes for better AF. Tangential prediction: the job title “product manager” will decline in popularity and be replaced by “experience manager” or a similar title, better reflecting how these professionals create value. This reflects the larger ongoing shift to customer-centricity. Virtuality of Experiences and Their Context Main prediction: VR will dominate gaming; AR is widely adopted but not ubiquitous. Like jetpacks, we’ve been promised Virtual Reality since the early days of sci-fi. VR has had some false starts over the last couple of decades, but by 2025 VR will be the preferred mode of delivery for interactive entertainment, gaming especially but also other live events. Expect VR spectatorship in live e-sports tournaments, for example.
Augmented Reality will see high adoption in contexts where it makes sense to have both hands free, senses on the task, and access to information and assistance. In business or content creation contexts, however, I predict AR’s adoption to be at the same level that tablets are at today. They make sense in some scenarios, and mix rather well with other devices, but they are not the ubiquitous device the laptop is, and they do not yet enable telepresence that would be preferable to the real thing. In 2025 we will see a clearer path to AR taking over from mobile devices and laptops, but we won’t be there yet. This less bullish prediction for AR is based on the total complexity of human interaction: the nuances of body language, the cues from others’ interactions, perhaps even pheromones all work together to complicate the human condition so much that AR telepresence will not replace the real thing in trust. Tangential prediction: we will have a new gig economy job in the form of gargoyles as presented by Neal Stephenson in Snow Crash. Gargoyles are humans equipped with all the necessary sensors to deliver a virtual experience of that place at that moment to subscribers of the gargoyle’s feed. Perhaps in 2025 we will even talk about gargoyles soon losing their gig jobs to robots, just as we talk today about drivers losing theirs to autonomous vehicles. Multiple Internets and Access Main prediction: opposing forces are at once pulling the internet into non-neutral lanes and pushing internet services and even large platforms to be regulated as utilities. The main body of the internet remains free and open as it is today. This means advertising revenue will dominate content production businesses. Whether or not Net Neutrality ends up dead in the hands of the shoot-from-the-hip FCC that Trump’s administration put together, we may in 2025 see multiple internets existing in parallel. But my coin toss points in a different direction — the internet will not only remain free, it will become more decentralized and more regulated at the same time. By 2025 we’ll have seen the first decentralized sharing economy platforms, but we will also have seen more cooperative, decentralized internet access provision and management. We may also see more utility-like regulation applied to algorithmic media services like Facebook and Google (and yes, if you mediate information, you are a media company). Internet access will be widely argued to be a human right and even enforced as such by progressive European countries. Key components of that access — email, Facebook, information-providing algorithms — will be increasingly viewed as rights. You can still pay for content by degrading your consumption experience with sensory overhead: advertising will continue to fuel most of content production and entertainment. In advertising also lies the reason why the quality and factuality of content will be no better than in 2017. John Battelle just analyzed this poignantly in the context of today’s Facebook. New Interactivity and Interfaces Main prediction: we will be nearly liberated from digit-based interactions, using voice and gestures, and we will learn to rely more on implicit contextual input. With the improvements in voice, image, and gesture recognition, we will finally see liberation from the finger-input interfaces that got us this far.
Three trends combine to make this a reality: advances in sensor and processing technology; big data captured at scale that unlocks ever-lower error rates; and the UX innovations required to make human users comfortable and proficient with the new interfaces. The main giants already have excellent voice-controlled interfaces in Alexa and Google Home, but getting better at voice is only part of the battle. Adding more context from other ambient sensors and other interactions will increase the accuracy of predictions and interpretations. A smart assistant with access to mobile, retail orders, and viewing behavior can figure out, for instance, that a human user who got less than 7 hours of sleep, woke up less than 60 minutes before leaving the house in the morning, and gave grumpy-sounding voice commands to their smart home assistant is likelier to order junk food and watch junk sitcoms on demand than they would be on a healthier morning pattern (Amazon can probably already figure some of that out). The junk food is only a minor negative result: an unethical advertiser, or merely a modern ad-optimizing algorithm, can exploit cues like that and offer deals that a healthier, more attentive subject would not fall for. Target susceptibility will be a key (even if implicit) ad targeting driver. This is part of the psychological change required to get the most out of predictive contextual inputs. The main question of trust today is more about whether we trust Google and Amazon enough to keep our ever-listening smart home assistants plugged in all the time. Once they provide dramatically higher benefits to the user, the trust question becomes whether one trusts the technology to provide the best outcome for them. We’ve seen parallels in the past — at first it didn’t feel like a very good idea to broadcast your location by sharing photos, and it still may not in fact be an entirely safe practice, but the perceived social benefit of photo sharing greatly outweighs the perceived risk for most social media users. We need to learn to trust predictive contextual monitoring, and this is where algorithmic advertising can really hinder progress. Autonomous Personal Agents Bonus overall prediction: the emergence of Autonomous Personal Agents. Combining the trends of Web 3.0, Artificial Facilitation, Virtual World experiences, new interaction types, and the potential of Multiple Internets, we arrive at a hugely revolutionary development: Autonomous Personal Agents, or APAs for short. The APA is a smart bot that represents its owner and is empowered, to a certain degree, to act for its owner with other people, organizations, or other APAs. While that may sound far-fetched, this is an old trend. We’ve created narrowly conditional actors on the web for a long time. eBay sniping is one niche example: a sniping bot can be told to bid for item one at a price of $45, and if that is not won, then bid for item two at $55, both 5 seconds before auction close. IFTTT (If This Then That) has allowed users to chain together conditional events between different products and services, like the name implies. In the financial markets, we’ve been placing conditional buy and sell instructions for decades. Recently, innovations like smart contracts and decentralized autonomous organizations (DAOs) built on blockchain technology are steps towards APAs. Everything gets smaller, including AI.
Today even a rudimentary AI-like experience needs huge amounts of data to train, and still the human experiences of talking with Siri, Alexa and the like are lacking. Humans have a very low error tolerance for other human-like agents. We are experts in noticing if something is off. The need for a large dataset may seem like an insurmountable obstacle, but there are ways around it. We may find better ways of extracting lower error rates from the same dataset. In an example from Google in 2016, the company’s Translate product improved faster than expected after moving to a neural network-like version (“The A.I. system had demonstrated overnight improvements roughly equal to the total gains the old one had accrued over its entire lifetime,” says the New York Times article). We may be able to create larger datasets by joining together and pooling our data. Companies like Google and Facebook already have quite a bit (though of course they want more, much more), so they have a head start in building AI. Projects like Ocean Protocol may help others to create AI with larger datasets. And we may see decentralized, peer-to-peer projects that collect data with the aim of reducing AI error rates. But we should also expect to see other, less pathological methods of generating data for individual purposes. To oversimplify, the motto of the advertiser and the product manager and the AI researcher alike has thus far been the quote “if I had asked my customers, they would have told me they wanted faster horses”, which is used to justify that instead of having people tell us what they want, we should just observe them and deduce from these observations what they really should be offered. But that quote is just as misleading as its purported origin (Henry Ford most probably never said anything like that). We know people’s statements are more virtuous and aspirational than their actions, but if we only focus on their actions, we ignore the better part of the person, the one that wants to act morally and holds themselves to a high standard — if it just wasn’t for all the reality around them. But today we only optimize for the baser instincts, immediately evident in behavior. That’s how we arrived at click-bait articles. Everyone hates them, yet everyone clicks on them enough to make them the winners compared to actual quality content. We humans are capable of expressing what we want, but due to our cognitive limitations we may not always be accurate, and we are often incongruous. We say we want to lose weight, but we forget to say we also really, really want to eat anything that even vaguely smells like syrup. We say we love animals, yet we harm and exploit intelligent beings like pigs and cows. We have preferences, and we need to be able to better express, evaluate, validate, and store those preferences – and, when necessary, make tradeoffs and resolve conflicts arising from our cognitive limitations. If expressed sufficiently accurately and comprehensively, these preferences can then be used to coordinate with others. If, in renting a car, our APA knew we value car brand prestige a little higher than convenience, it could choose to rent us a BMW from Getaround even if the Beamer is a little further away than the Prius and the Mazda. If our APA knew we were willing to sell our extra concert tickets for $100 today, and for $80 tomorrow, it could negotiate with the other APAs whose owners are looking for tickets within their own price conditions.
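As a toy illustration of the car-rental example above (there is no standard APA implementation today; the weights, attributes, and listings below are invented for the sketch), a preference-scoring agent could look like this in Python:

# Toy sketch of an Autonomous Personal Agent scoring options against
# its owner's stated preferences. Weights and listings are invented.
preferences = {"prestige": 0.6, "convenience": 0.4}  # prestige valued a little higher

listings = [
    {"car": "BMW",   "prestige": 0.9, "convenience": 0.5},
    {"car": "Prius", "prestige": 0.4, "convenience": 0.8},
    {"car": "Mazda", "prestige": 0.5, "convenience": 0.7},
]

def score(option):
    # Weighted sum of how well an option satisfies each preference.
    return sum(weight * option[attr] for attr, weight in preferences.items())

best = max(listings, key=score)
print(best["car"])  # BMW: prestige outweighs the shorter walk to the Prius

The hard part, of course, is not the weighted sum but eliciting, validating, and updating the weights, which is exactly the preference-expression problem described above.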
Peer-to-peer commercial transactions are rather trivial examples of what APAs can help us do. The real reason why something like an APA model is imminent is the huge overhead we have in communication, coordination, negotiation, and bargaining between people and organizations. Let me know if you have ideas for how to fix that. Happy 2018.
Internet in 2025
107
internet-in-2025-1886d5b08f1b
2018-02-24
2018-02-24 07:13:49
https://medium.com/s/story/internet-in-2025-1886d5b08f1b
false
2,940
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Mikko Järvenpää
Fixer.
4e886e18ab33
mhj
669
375
20,181,104
null
null
null
null
null
null
0
null
0
702bc61b21dc
2018-06-11
2018-06-11 13:09:33
2018-06-11
2018-06-11 13:12:40
2
false
en
2018-06-13
2018-06-13 10:38:12
7
1886e8f949b3
5.024843
20
1
0
If you’re wondering why we picked that name, it’s because luge athletes face many of the same life changing experiences and emotions as…
5
I’m excited to announce that we’ve launched a new early-stage Canadian venture capital fund, with a first close of $75M, that will invest in fintech and AI applied to fintech. We’ve named the fund Luge Capital. If you’re wondering why we picked that name, it’s because luge athletes face many of the same life-changing experiences and emotions as entrepreneurs. They live through lots of twists and turns, manage risk and are driven to win. They strategize, compete with all their energy and push beyond their limits. They’re daring and maybe even a bit crazy. These are the characteristics we look for in founders. I’ve always enjoyed working with passionate entrepreneurs to build their businesses while sharing my own “twists and turns” as an entrepreneur. When I first became a VC with iNovia Capital, I quickly realized that you have to be an entrepreneur to be a VC, and I was reminded of that when raising a fund. Much like building a startup, raising and operating a fund is a lot of work, and you have to choose your strategy, LPs and team carefully to win the race. And that I did. I’m happy to share that I’ve teamed up with my Toronto-based partner, Karim Gillani, formerly of PayPal, Xoom and BlackBerry (more below). And together, we’ve partnered with iNovia Capital, one of the most renowned funds in Canada and a major player in the United States as well. They bring immense value to Luge and the founders we invest in by opening access to their resources and talented team. We are actively collaborating and co-investing on the most promising opportunities. What’s unique about Luge? Focus: We are specifically focused on investing in early-stage fintech companies and artificial intelligence (AI) solutions applied to financial services. Over the coming years these sectors are going to go through major transformations, and the opportunities are huge. Industry Partners: We’ve partnered with world-renowned institutions that bring considerable industry insights and resources, enabling us to help entrepreneurs build global businesses. Our investors will be customers, partners, advisors, co-investors and even future acquirers of businesses we invest in. They are keen to collaborate, and their technical executive teams will form an advisory committee for the benefit of Luge, letting us tap into their views and mandates that are directly applicable to our portfolio companies. Access to Data: Our AI and data-driven companies will have the opportunity to partner with our investors to access key insights in order to build best-of-breed solutions. Experience: The Luge team is a diverse group of industry-experienced investors, founders, operators, executives and engineers who want to help provide entrepreneurs with meaningful connections, insights and guidance. iNovia Capital partnership: Our team and entrepreneurs can access their team, deep network, talent acquisition services, CEO Summit events and other insights and resources to help build truly global companies. Who do we invest in? We love entrepreneurs challenging the status quo with new ways of thinking and modern technology. The fund will support the development of companies and innovative solutions that make the customer experience better, make financial institutions more efficient, and use data and artificial intelligence for decision making.
We’re particularly interested in data security, improved lending platforms, wealth management technologies, mobile payments and commerce, cross-border payments, blockchain applications, machine learning, regulation tech, insurance tech and other categories that have yet to emerge. Size and stage of investment We are early-stage equity investors with a focus on seed and series A, with a first check size of $250K to $2M. We act as lead investors or invest alongside other value-added funds or financial institutions. In follow-on rounds we will write much bigger checks with our co-investor network, our fund investors and iNovia Capital. Our investors and partners More partners that are on board with our mission will be announced soon. Meet our Team NETWORK | EXPERIENCE | TRUST David Nault — General Partner (Montreal) David has been building early-stage technology companies as an investor, founder or senior member of the executive team for over 20 years. Prior to co-founding Luge Capital, David was an investor with iNovia Capital, a leading North American venture capital fund. Before becoming a VC, David was president of Callio Technologies, an information security compliance software provider whose intellectual property was acquired in 2009. Prior to running Callio, he managed business development and partnerships at Pivotal Payments, Canada’s largest privately-owned payment processor, which he helped grow from startup to over 60,000 customers, 400 employees and $8 billion in transaction volume. David has a BCom with a major in Marketing and a minor in Finance from Concordia University in Montreal. Karim Gillani — General Partner (Toronto) Karim has an extensive background in fintech, mobile tech, engineering, finance and strategy. Prior to Luge, Karim led Corporate Development for Xoom, a leading cross-border remittance company that was acquired by PayPal in 2015 for USD 890 million. Before that, Karim managed M&A and investment activities for BlackBerry in Silicon Valley. He developed the company’s overall strategy for mobile payments and commerce, including NFC payments, P2P and App World. Karim spent several years working for Redknee Solutions in the UK, where he designed network infrastructure, including mobile money solutions, for carriers in East Africa and Western Europe. Karim has a BASc in Systems Design Engineering from the University of Waterloo, an MSc in Finance and Economic Policy from the University of London and an LLM Master of Laws from the University of Toronto. Karim is a Charter Member of the C100, a non-profit organization dedicated to supporting Canadian technology entrepreneurship and investment. Laviva Mazhar — Investment Analyst Prior to joining Luge Capital, Laviva was an analyst at Ferst Capital Partners (FCP), a fintech-focused investment firm and startup foundry. At FCP, she spent her time helping build portfolio companies and supporting their fundraising processes and operations, finding new investment opportunities and developing the firm’s brand in the fintech community. Laviva is also a VC advisor at Front Row Ventures. She was an associate at Founder Institute Montreal and part of the Product Hunt Montreal team. Laviva was also selected for the Global Women in FinTech Powerlist. Laviva holds a BCom degree with double majors in Economics and Finance from McGill University.
Sonia Gasparini — Marketing and Operations Manager Sonia started her career in the credit card group of National Bank of Canada and then moved into retail and commercial banking at TD Bank. Her curiosity about financial services led her to investment banking in Milan, Italy, where she worked in operations and back-office management at JP Morgan. After spending a couple of years in financial markets, her passion for marketing and high tech brought her back to Canada to help software entrepreneurs with their go-to-market strategy. She then joined Monster (online employment solutions) as they launched The Foundry, a digital agency specializing in employer marketing solutions for their clients. At Luge Capital, she is responsible for supporting operations, the impact team and ecosystem development activities. Sonia has a BCom degree with a major in Marketing and a minor in International Business from McGill University.
I’m excited to announce that we’ve launched a new early stage Canadian venture capital fund with a…
93
im-excited-to-announce-that-we-ve-launched-a-new-early-stage-canadian-venture-capital-fund-with-a-1886e8f949b3
2018-06-20
2018-06-20 14:30:08
https://medium.com/s/story/im-excited-to-announce-that-we-ve-launched-a-new-early-stage-canadian-venture-capital-fund-with-a-1886e8f949b3
false
1,230
Luge Capital is a venture capital fund focused on early stage fintech and artificial intelligence (AI) applied to financial services. We invest in talented teams shaping the way the world interacts with financial services.
null
LugeCapital
null
Luge Capital
hello@luge.vc
luge-capital
FINTECH,VENTURE CAPITAL,ARTIFICIAL INTELLIGENCE,SEED INVESTMENT,STARTUP
LugeCapital
Startup
startup
Startup
331,914
David Nault
Fintech investor and startup lover
dd04f607a67d
venturebuilder
1,230
490
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-23
2017-10-23 12:27:21
2017-10-23
2017-10-23 12:33:10
3
false
en
2017-10-23
2017-10-23 12:33:10
9
18870e9f9810
3.459434
0
0
0
Excessive or redundant medical services, medical coding errors, improper billing, as well as outright fraud, continue to be significant…
5
Improving Workers’ Compensation Fraud Detection Excessive or redundant medical services, medical coding errors, improper billing, as well as outright fraud, continue to be significant challenges for health insurers. The National Health Care Anti-Fraud Association (NHCAA) estimates that the financial losses due to health care fraud are in the tens of billions of dollars each year. The 2017 National Healthcare Fraud Takedown conducted by the Department of Health and Human Services Office of Inspector General was the largest health care fraud takedown in history, with about $1.3 billion in identified false billings to Medicare and Medicaid. Healthcare fraud is very difficult to detect because a variety of nuanced methods are employed, investigative evidence is often buried in text documents, and there can be collusion among network providers. A national workers’ compensation insurance provider was interested in using analytics to help reduce provider fraud, waste, and abuse (FWA). The main goal was to identify questionable provider practices and to prioritize work for investigators. Second, it was important to measure the impact of the actions taken to further reduce the losses and promote best practices among providers. The Challenge The client handles almost 200,000 claimants who are currently receiving medical compensation for services rendered by tens of thousands of medical providers. The client wanted to improve the efficiency of their fraud analysts by using analytics to detect high-risk cases for further investigation. They contracted with Elder Research to develop models for provider risk, return to work, and improper payments using medical provider attributes, procedure and payment data, and claims data, and to provide a visualization tool to display and interact with the results. The Solution Elder Research partnered with the client to customize a solution to help generate fraud leads based on risk indicators and anomaly detection. Providers were assessed by aggregating medical billing data by a provider’s unique identifier (payee number) and by the provider’s name. The team developed statistical models to create risk scores that brought to light unusual changes in billing behaviors, abnormal patterns of services provided compared to peers (e.g., with respect to nurses, vocational rehab, durable medical equipment suppliers), and other factors. These provided analysts with data-driven leads that have a high probability of fraud. The integrated models that contributed to the overall fraud risk score included: Billing Change Detection Score: This highlights sudden increases in billing by providers, thereby drawing investigators’ attention to big hitters. Diversity Score: This checks if the range of services a provider offers is unusually wide or narrow when compared with peers. For example, it would be odd for a pharmacy to only bill for a few different drugs or, worse, for all of their patients to receive the same drug. Provider Network Visualization: The links in the provider network graph are drawn if there is at least one patient who visited both providers. Providers who have more patients in common have a stronger link, because they are more likely to have a business relationship; this has the potential to suggest kickback schemes between providers. The model’s results are delivered in an easy-to-use visualization tool called RADR (Risk Assessment Data Repository). RADR presents the Change Score and Diversity Score risk metrics in a list view as shown in Figure 1. 
The risk scores range from 0 (lowest risk) to 100 (highest risk), and the scores are color-coded, with red representing higher risk and green representing the lowest risk. Figure 1. RADR provider list view showing all providers ranked by fraud risk score RADR enables analysts to explore data aggregated by service providers, claimants, and services, as well as drill down to transaction details. Analysts can view charts of data over time, geographic map presentations, and networks of providers based on common claimants, as shown in Figure 2. Figure 2. Example RADR provider network. Three of the high-risk pharmacies (3283, 4656, 2874) and the connected provider (39807) were investigated and indicted for fraud as a result of this tool. RADR fuses data from multiple data systems to create a unified, intuitive view with the context required by analysts and investigators to make important case decisions. Results The client’s integrity and fraud analysts use the RADR analytics platform to efficiently explore, analyze, and surface unusual and highly suspicious behavior in the data pool. Analysts have found that investigations and forensic analysis that took hours can now be completed in minutes, making the most efficient use of limited and valuable resources. Data fusion and presentation in a variety of visualizations has enabled analysts to discover new fraud schemes. Since it was initially deployed, the client continues to enhance RADR with new data and risk metrics. Request a consultation to speak with an experienced data analytics consultant about fraud detection solutions. Related Download the Case Study Improving Workers’ Compensation Fraud Detection Learn About Pharmacy Fraud Detection Visit Elder Research at the NHCAA Health Care Anti-Fraud Expo Booth #102 Originally published at www.elderresearch.com.
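To make the two risk metrics concrete, here is a minimal sketch in Python (pandas/NumPy) of how scores in this spirit could be computed. It is not Elder Research's actual model; the file name and columns (provider_id, month, procedure_code, amount) and the entropy-based diversity measure are illustrative assumptions.

import numpy as np
import pandas as pd

# Hypothetical billing data: one row per claim line, with columns
# provider_id, month, procedure_code and amount (names are assumptions).
bills = pd.read_csv("billing.csv")

# Billing Change Score: rank providers by their largest month-over-month
# jump in total billing, so sudden increases float to the top.
monthly = bills.groupby(["provider_id", "month"])["amount"].sum().unstack(fill_value=0)
change_score = monthly.pct_change(axis=1).max(axis=1).rank(pct=True) * 100

# Diversity Score: measure how wide or narrow a provider's mix of services
# is (Shannon entropy of procedure codes), then score distance from the peer norm.
def entropy(counts):
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

mix = bills.groupby("provider_id")["procedure_code"].value_counts()
diversity = mix.groupby("provider_id").apply(entropy)
diversity_score = (diversity - diversity.median()).abs().rank(pct=True) * 100

# A provider network like Figure 2 could then be built by linking any two
# providers who share at least one claimant, e.g. with the networkx library.
print(change_score.sort_values(ascending=False).head())

Both sketched scores land on the same 0-to-100 scale the case study describes, which is what makes them easy to combine into one overall risk score and color-code in a list view.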
Improving Workers’ Compensation Fraud Detection
0
improving-workers-compensation-fraud-detection-18870e9f9810
2018-01-23
2018-01-23 06:25:26
https://medium.com/s/story/improving-workers-compensation-fraud-detection-18870e9f9810
false
771
null
null
null
null
null
null
null
null
null
Healthcare
healthcare
Healthcare
59,511
Elder Research, Inc.
A leading consulting company in data science, machine learning, and AI. Transforming data and domain knowledge to deliver business value and analytics ROI.
d8ae90c0665a
ElderResearch
120
54
20,181,104
null
null
null
null
null
null
0
null
0
75f712f74c39
2018-09-26
2018-09-26 09:57:41
2018-09-26
2018-09-26 09:59:27
6
false
en
2018-09-27
2018-09-27 12:59:50
1
1888ee905d4
2.727358
16
0
0
It has been estimated that on an average, employees spend around eight hours a week on searching and consolidating information. Well, this…
5
Why Businesses shouldn’t miss out on Google sheets It has been estimated that, on average, employees spend around eight hours a week searching for and consolidating information. That consumes almost an entire day’s work. To cut down the overall time spent on these activities, Google has a productivity platform: Google Sheets! Google Sheets makes it simpler to quickly manage and scrutinize data in one place. Here are some highlights of why businesses should not miss out on Google Sheets. 1. Your data always stays up to date. “Oops! I forgot to save!” Ever experienced it? The save-as-you-type model of Google Sheets is a breakthrough solution to that situation. Plus, you do not have to be concerned about your previous edits. You can refer to older versions of your file at any time via File > Version History or by clicking on “All changes saved in Drive.” There it is: the complete record of all the changes made to the document over time. 2. Google’s artificial intelligence can do a lot of heavy lifting for you. Powered by Google’s smart AI, Sheets does a lot of your work when it comes to data analysis. All you have to do is ask a question about your data and it will come up with the most accurate answer using its highly efficient machine learning. This lives in the Explore feature, present in the bottom-right corner of your sheet. You will be offered relevant formulas, pivot tables, charts and more, all in the “Explore” bar. 3. Smart sharing settings. Have complete control over who can view, edit or comment on your documents. Additionally, you can disable options to download, copy or print spreadsheets. As a traditional IT admin, you can still do your job properly by setting controls and restrictions on all of your corporate data with the security settings of Google Sheets. 4. You can stick to your way of working. Afraid of adapting to a new tool? Not necessary! Google Sheets comes with all the functionality and classic formulas you have been using. Frequently used native functions like VLOOKUP, SUMIFS and COUNTIFS are all present and process data in the same way. The easy-to-use filters, QUERY functions and pivots are still there to filter and shape the information you need. 5. Edit on the go! Internet unavailable? Not a worry anymore. Google Sheets works in both online and offline modes. You also get easy access through the apps available on Android and iOS devices. That means you can still work on your documents on the go, even when the Internet is down! It’s time to transform, adapt to the booming technology and simply Go GOOGLE. Please click here for more details. All image credits to Google Search engine
Why Businesses shouldn’t miss out on Google sheets
410
why-businesses-shouldnt-miss-out-on-google-sheets-1888ee905d4
2018-09-27
2018-09-27 12:59:50
https://medium.com/s/story/why-businesses-shouldnt-miss-out-on-google-sheets-1888ee905d4
false
471
We identify better ways of doing things!
null
searce
null
Searce Engineering
social@searce.com
searce
MACHINE LEARNING,DATA,CLOUD COMPUTING,GOOGLE CLOUD PLATFORM,AWS
searce
Productivity
productivity
Productivity
96,541
Shazia Khatib
null
63ad34b18c1
shaziakhatib9526
12
3
20,181,104
null
null
null
null
null
null
0
null
0
9c6f47bd575c
2018-05-16
2018-05-16 16:41:36
2018-05-16
2018-05-16 16:44:16
4
false
en
2018-07-16
2018-07-16 20:20:23
2
188a89081e3a
4.243396
6
0
0
One of the words that is a huge trend and that we hear everywhere and applied to almost everything is Artificial Intelligence, as well as…
5
AI & Marketing One of the terms that is a huge trend, that we hear everywhere and that is applied to almost everything is Artificial Intelligence, along with associated terms such as Machine Learning, Deep Learning, Chatbots and Natural Language Processing. A Google search returns more than 4 million results for artificial intelligence in English. According to the following CMO Council study, as marketers we still do not fully understand today’s simple analytics tools and do not even know how we could take advantage of an artificial intelligence solution, yet we are already being bombarded by a multitude of new technology concepts that over-sell and promise wonderful benefits. They become miracle products: without effort, without trained personnel and without a steep adoption or maturity curve on the part of the company, we can supposedly achieve effectiveness, increase sales, improve service and customer knowledge, quickly and easily. That is why it is important to know the basics of what artificial intelligence means, how it works and how we can apply it to our business to obtain tangible benefits. First, I will define the main elements and concepts of artificial intelligence in a simple way, so as to then elaborate on the benefits and applications they can have for the business. Artificial intelligence It is the study of agents that are able to perceive the world around them, make plans and make decisions to reach their goals. Many fields fall within the concept of artificial intelligence, such as computer vision, robotics, machine learning and natural language processing. From there, we arrive at the area where most current advances in artificial intelligence are happening: Specific Artificial Intelligence (Artificial Narrow Intelligence), which can effectively perform one specific task only. Much of the confusion about what artificial intelligence is stems from the definition of intelligence itself. When we hear that a product uses artificial intelligence, we assume that it is smart and can answer any question since it is a computer, but we must be aware of its scope and the area its development is focused on, so as not to lose sight of its objective and real capacity. Machine Learning It is the way in which a computer is able to learn through experience to improve its ability to think, plan, decide and act. It is a branch of artificial intelligence in which the idea is to give computers access to data so they can learn by themselves. This is where the concepts of supervised and unsupervised learning come in. Supervised Learning The computer is trained to solve problems based on correctly labeled training examples. If we were training a computer to recognize letters, we would first have to feed the computer as many letters as possible, of all types and calligraphies, so it can understand the relationship between each form and its correct meaning. This way it will be able to classify images it has not seen before. Unsupervised Learning It is used to find differences in data sets that do not have labels, whose content we do not know. It is normally used for exploratory data analysis, to learn how the data is grouped and what it contains. Once a classification has been made in the data, a label is added and the data can be analyzed using supervised learning techniques. 
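The supervised/unsupervised distinction is easy to see in code. Below is a minimal sketch using scikit-learn and synthetic data (my own illustrative choice, not from the article): the classifier learns from labeled examples, while the clustering algorithm is handed the same points with no labels at all.

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Synthetic 2-D points in three groups; y holds the "correct" labels.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised learning: train on correctly labeled examples, then predict.
clf = LogisticRegression().fit(X, y)
print("supervised predictions:", clf.predict(X[:5]))

# Unsupervised learning: same data, no labels; the algorithm finds groupings itself.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("discovered clusters:   ", km.labels_[:5])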
Deep Learning It is mainly based on neural networks, which try to imitate the way the human brain processes information. It can be taught to recognize images and classify them according to the elements they contain. Neural networks are able to identify elements in a hierarchical way, so deep learning stacks several networks together to work from the most general elements to the most particular in order to achieve recognition. For example, if we wanted to recognize a car in an image, a first network would detect the linear forms, another the rounded ones, and a further one would then assemble the elements together to identify the image completely. Chatbots A service with which it is possible to interact or converse through a chat interface. Natural Language Processing Chatbots are able to understand the way we normally communicate with people, either in written or oral form, and answer using common language. For this, machine learning is used to help understand the multiple and subtle variations of human language, from the sense, regionalisms and idiomatic expressions to the context of the conversation and how to respond in a consistent manner. Benefits One of the main benefits, and one of the relatively simpler ones to achieve, is the automation of repetitive tasks; it is also where we can have the greatest impact on profitability. This applies to tasks such as: Answering frequently asked questions Answering complaints and suggestions Suggesting products or services based on preferences Applications The important thing about artificial intelligence is its application, where we can use it to obtain a real benefit. It goes beyond using state-of-the-art technology; the important thing is to solve a business challenge, adding value for employees or clients by making the experience more pleasant, simple and quick. That is why it is essential to first think about the experience, the scope and the real need before rushing to apply artificial intelligence or any other technology trend. Have you thought about the intersection of Marketing and AI? It has a lot of potential. If you liked the article, give it a clap or more. Connect with me at linkedin.com/in/gabrieljimenezmunoz/
AI & Marketing
58
ai-marketing-188a89081e3a
2018-07-16
2018-07-16 20:20:23
https://medium.com/s/story/ai-marketing-188a89081e3a
false
939
Driving the AI Marketing movement
null
aimamarketing
null
AIMA: AI Marketing Magazine
artificialintelligencemktg@gmail.com
aimarketingassociation
AI,MARKETING,MARKETING TECHNOLOGY,MACHINE INTELLIGENCE,AI MARKETING
AIMA_marketing
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Gabriel Jiménez
CONSULTATIVE SELLING | AI FOR BUSINESS | CHATBOTS | ANALYTICS | SPEAKER | WRITER | TEACHER
d1be01aca7a8
garabujo77
449
472
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-26
2017-10-26 16:40:41
2017-10-26
2017-10-26 16:45:29
6
false
en
2017-10-28
2017-10-28 01:44:03
8
188af19e34da
8.414151
4
1
0
Customers’ bots and agents will work on their behalf… 5 Steps to Avoid Being Blindsided
4
Have Your AI Call My AI Customers’ bots and agents will work on their behalf… 5 Steps to Avoid Being Blindsided Pardon me, Sir. Yes, Jeeves? The Air Bag light is now lit. Yeah, I saw that. What’s that about? Of the 63 things that could be wrong, the most common is a misconnection in the wiring harness. It will cost $150 for a diagnostician to determine if that’s the case. Finding the unplugged wire and securing the connection will run about $3,500 at the dealer. What?? Yes, it’s expensive. They must remove the seats, uncover all the wiring to find the fault, and then reassemble the whole car. Your air conditioning system is also down to 72% efficiency. Isn’t that something they can just recharge? Yes, but you’ve done that twice already in the past three months and its failure rate is increasing. So, what’s the total damage going to be to get this car up to snuff? $3,500 for the airbag wiring, nearly $300 to get the air conditioning tested, and another $500 for new tires will get this car up to snuff for another month, but your failure rate is increasing. Yeah, you said that. I suggest that 175,000 miles over 15 years is a decent return on your investment. The time has come to consider a replacement. I’ve heard you talking with Colleen about wanting to simply replace this vehicle with the latest model and I have found three dealers in town who have the model you want. I’ve negotiated with all three and, while the silver one is $52 more, it is available and can be delivered this afternoon. The dark gray one will take an extra week, but you’ll be at Marketing Analytics World all week. Shall I have it delivered today? Well, let me talk it over with her and… She says, (Colleen’s voice) “You know Jeeves, I haven’t felt all that safe in that ancient car of his when we drive out to the desert, and with the air conditioning going out, it’s time.” She’s OK with the price?? For the one with all the cool tech toys I want? Colleen says, “If he’s going to drive the new car for 15 years, he should get the one he wants.” Yep, that’s her all over. Ask her if she’ll marry me. Sir, you’ve been married for 36 years. Yes, Jeeves, but do you remember what I told you about why I ordered flowers for her when it wasn’t any special occasion? This is another lesson about romance? Good observation, Jeeves. Colleen says, “Oh, you know I’d marry him again any day. Can’t wait to see his new car!” Huh. So… when can it be delivered? They’ll have it in your driveway at 4:30, after the dental appointment you are about to be late for. Ending a sentence with a preposition, Jeeves? Trying to blend in, Sir. OK. And please handle the paperwork, will you? Already done, Sir. Thanks. One is pleased to be of service. Bicentennial Man movie reference! Well spotted, Sir. “Science Fiction” and “Artificial Intelligence” are a matter of perspective. My grandfather was born in 1899 and would have scoffed at the suggestion of a talking appliance named Alexa or a telephone in your pocket that could provide driving directions and make dinner reservations. The above scenario is not intended to raise your hopes as a potential consumer, or to creep you out at the loss of control over your life and finances… but to scare the bejeepers out of you as a marketing professional who is going to have to build AI systems that can talk to potential customers’ AI systems. 
The Coming AI Ecosystem In “Bring Your Own Intelligent Agent (BYOA) Is Coming”, Dennis Mortensen, CEO of the meeting-scheduling company x.ai, described the job interview of the not-too-distant future where your potential job performance is evaluated based on the AI agents you have trained. Over the next half decade, as more AI intelligent agents come to market, employees will increasingly deploy a suite of agents to get their job done. Those employees who DO take advantage of these agents will become more productive and along with that more attractive (both internally and externally). Much like Bring Your Own Device (BYOD), this new paradigm — call it Bring Your Own Agent (BYOA) — promises a host of benefits for both employees and employers and will likely change the nature of work. Mortensen’s fictitious Director of Events, Rebecca, uses a scheduling agent, a contract agent, and an expenses agent. She teaches them her working style, her main contacts, her budget limitations, and how to share data with other agents. Rebecca will measure their performance, tweak them when necessary, and kill off the ones that are not living up to expectations. Her ability to optimize her agents is very important to potential employers. Now imagine your customers being surrounded by bots and agents that work on their behalf. Tomorrow’s customers are being trained today by Alexa, who makes buying decisions for them. They have high expectations that Amazon should be able to suggest just the right television that meets their viewing wishes, their wall, and their wallet. If you’re Walmart, you’re unsettled that Amazon is sinking its claws deeper into your customer’s life and lifestyle, and you would welcome a third party that comes along, offering the ability to compete with Amazon via an AI ecosystem of agents. What happens when the time comes to turn Customer Relationship Management (CRM) on its head? Vendor Relationship Management Vendor Relationship Management (VRM) is the inverse of Customer Relationship Management (CRM). VRM tools aim to give customers independence from vendors while giving them better ways of engaging with vendors. ProjectVRM is exploring the idea that vendors should not own data about customers, but rather that customers should own their own data. In 1999, along with his Cluetrain Manifesto co-authors, Doc Searls wrote, “We are not seats or eyeballs or end users or consumers. We are human beings and our reach exceeds your grasp. ‘Deal with it.’” These days, Searls refers to intentcasting. Rather than sellers pushing their messages into the faces of people simply trying to read the news, “on the Daily Rectangle,” they will listen closely for calls-for-quotes from those in market. Make your desire known through your agent and stop seeing disruptive ads the moment your agent decides you are no longer interested. You are no longer worth the effort or expense to shower with ads. In his book, The Master Algorithm, Pedro Domingos imagines our digital selves wandering the Internet on our behalf. If I tell LinkedIn that I want a job, a million help-wanted company agents will negotiate with a million iterations of my digital self. My digital self can go on a million dates with would-be love interests to narrow the field, recommending the handful I should meet in person. 5 Steps to Avoid Being Blindsided I have confidence that my personal digital model of me will make much better movie recommendations than Netflix. I certainly trust a Samsung Family Hub Refrigerator to keep track of my groceries. 
Why not have it order them through Amazon for me as well? While I’m no clothes horse, there’s every reason to expect that a personal agent could become my personal shopper as well. Being able to discern styles via photos is becoming child’s play to cloud-based image processing systems. You don’t need to jump into this with both feet — yet. But back when your competitor built their first website, you needed one too. As soon as your competitor had a mobile app, you needed one too. When your customers stop calling, coming, or clicking, you may not even be aware that they are sourcing their soap, cars, razor blades, and groceries through an agent that negotiates on their behalf. What to do? 1. Build a Chatbot With tools like Chatfuel, your team can build a Facebook Messenger bot without coding. What will take time is coming up with the questions that actual prospects and customers have been asking you. Run to your call center and start making a list. With machine learning in place, you don’t have to check it twice anymore — that’s the machine’s job. It’s time to dust off that old FAQ and give the bot enough clues about alternative ways customers ask questions that it can begin to grow an understanding. 2. Deploy Your Chatbot Those who wish to show up on the first page of Google search results have tried to game the system. Google has invested in myriad PhDs to foil them. How to win? Make your webpages as relevant to humans as possible. The same will be true of AI-to-AI communication. Machine learning and AI depend on massive amounts of data, so the more questions your customers ask and the more answers you write, the better the machine will get at recognizing the former and formulating the latter. Put that bot out there now so that when it’s time to let your agent talk to their agent, they are doing it based on experience with humans, and in a way that is decipherable after the fact. Deep Neural Networks are rather opaque. Optimizing machine-to-machine communication by filtering it through human language allows us to use hindsight to tell how an agreement was reached. Rather than measuring signals from hidden layers of nodes, we can read a transcript of the conversation. 3. Alpha Test Tomorrow Every organization has people who want to play with the shiny new object. Let them. We’re entering into a new era and there’s no telling where we’ll end up by next Thursday, much less five years from now. Gather your best and brightest and turn them loose in the fields of exploration, experimentation, and collaboration. Let them brainstorm, go to conferences and fail fast. They will need to be familiar with a wide variety of technologies, potential partners, and changing social norms to be ready to strike when the opportunity iron is hot. 4. Talk to Your Lawyer Start with ethics. Seriously. The law is the lowest bar of acceptable social intercourse. Human feelings about privacy and consent are tricky at best and your organization must have its values spelled out clearly before you take another step. Part of this process involves the study and declaration of policy in terms of baked-in bias. Does a given dataset and a permutation of algorithms unintentionally discriminate? No excuse. Your job is to see to it that it does not. 
Getting your company to understand the difference is the same as explaining to a child that there is a difference between “I didn’t try to” and “I tried not to.” Once you have your company’s motives clarified and published, move on to social responsibility, and only then seek the counsel of those who have been put on earth to protect you from yourself and others: the lawyers. Managing product and service liability in a world where terms and conditions might be negotiated by bots is not for the faint of heart. Ethics must be honored, security must be ensured, and then, regulations must be followed. 5. Seek Professional Help Between the data scientists pumping out new algorithms, data brokers providing fuel for the AI engines, and VCs throwing gobs of investment into the frothy AI start-up world, there are those who can help with change management. They will become ever more important as change becomes the rule. Large organizations hate change, and yet the pace continues to quicken. Bonus To-Do Item When Jeeves tells you it’s time for a new car, pay attention. After all, he’s just a telephone in your pocket that can provide driving directions and make dinner reservations.
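Returning to step 1: the FAQ-driven bot can be prototyped in a few lines before any platform work begins. Here is a minimal sketch (my own, not a Chatfuel feature) that matches a user's wording against the closest known question with TF-IDF similarity; the questions, answers and the 0.2 confidence threshold are all hypothetical.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical Q&A pairs harvested from call-center logs.
faq = {
    "What are your support hours?": "Phones are staffed 8am-8pm ET, seven days a week.",
    "How do I reset my password?": "Use the 'Forgot password' link on the sign-in page.",
    "Do you ship internationally?": "Yes, to most countries; rates appear at checkout.",
}

questions = list(faq)
vec = TfidfVectorizer().fit(questions)
q_matrix = vec.transform(questions)

def answer(user_text):
    # Score the user's phrasing against every known question.
    sims = cosine_similarity(vec.transform([user_text]), q_matrix)[0]
    best = sims.argmax()
    # Below the (hypothetical) confidence threshold, hand off to a human.
    return faq[questions[best]] if sims[best] > 0.2 else "Let me connect you with a person."

print(answer("when are you open for support calls?"))

The more alternative phrasings you collect per question, the better the matching gets, which is exactly the "give the bot enough clues" point above.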
Have Your AI Call My AI
14
have-your-ai-call-my-ai-188af19e34da
2018-03-09
2018-03-09 15:05:55
https://medium.com/s/story/have-your-ai-call-my-ai-188af19e34da
false
1,978
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Jim Sterne
Marketing Evolution Experience, Digital Analytics Association, author Artificial Intelligence for Marketing: Practical Applications and Devil’s Data Dictionary
1b32af2feadf
Jim_Sterne
3,368
45
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-19
2017-10-19 18:57:14
2017-10-19
2017-10-19 19:11:25
1
false
en
2017-10-19
2017-10-19 19:11:25
1
188b5121f1d9
1.188679
0
0
0
In the Turing test computer intelligence is a misnomer. The first reason is that the Turing test tasks a human to determine if the person…
1
Philosophical Thought Experiments Source: The Imaginative Conservative In the Turing test, “computer intelligence” is a misnomer. The first reason is that the Turing test tasks a human with determining whether the party talking to them is a computer; if the human thinks it is a human, the computer wins. It is a misnomer because the computer does not understand what it is doing. To disprove the test, philosophers set up the Chinese Room, in which a man is given symbols and has a book telling him which symbols to send out. This thought experiment supports my claim that computers are just doing what they are told. The computer does not truly have intelligence; it is made to serve, it does not understand emotions like a human, and it does what it is told to do. Along with being told what to do, the second reason computers seem smart is that computers are programmed, which means they really cannot do anything until someone puts in code that allows them to act. Computers only know two things, 1 and 0, so until someone writes code allowing the computer to respond, it is pretty much just a fancy brick. The final reason it is a misnomer is that the Chinese Room truly models the inside of a computer. The person (or computer) gets an input, and depending on the input they produce a different output, but that is it; they know nothing else. All of this demonstrates how, for lack of a better word, stupid computers really are. Based on this evidence, the Turing test is ineffective and “computer intelligence” is a misnomer.
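The Chinese Room is easy to simulate, which is arguably the essay's point. Here is a toy sketch in Python (my own illustration): the "rulebook" is just a lookup table, and the program produces sensible-looking replies while understanding nothing. The rules themselves are made up.

# The "rulebook": input symbols mapped to output symbols, no meaning attached.
rulebook = {
    "你好": "你好！",            # hypothetical rule: greeting in, greeting out
    "你是谁": "我是一个房间。",   # hypothetical rule: "who are you" in, canned reply out
}

def chinese_room(symbols):
    # The man in the room just follows the book; he understands none of it.
    return rulebook.get(symbols, "？")

print(chinese_room("你好"))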
Philosophical Thought Experiments
0
philosophical-thought-experiments-188b5121f1d9
2018-02-16
2018-02-16 20:02:56
https://medium.com/s/story/philosophical-thought-experiments-188b5121f1d9
false
262
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Campbell Kelly
null
15238b1e087b
ckelly1224
0
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-30
2017-10-30 18:58:14
2017-10-30
2017-10-30 18:59:32
1
false
en
2017-10-30
2017-10-30 18:59:32
1
188b96eb4561
1.022642
2
0
0
The logistic map is the most basic recurrence formula exhibiting various levels of chaos depending on its parameter. It has been used in…
5
Logistic Map, Chaos, Randomness and Quantum Algorithms The logistic map is the most basic recurrence formula exhibiting various levels of chaos depending on its parameter. It has been used in population demographics to model chaotic behavior. Here we explore this model in the context of randomness simulation, and revisit a bizarre non-periodic random number generator discovered 70 years ago, based on the logistic map equation. We then discuss flaws and strengths in widely used random number generators, as well as how to reverse-engineer such algorithms. Finally, we discuss quantum algorithms, as they are appropriate in our context. Highlights Java, Perl and Excel random number generators compared Historical considerations Backdoor planted by the NSA in some of these systems Original material on complex random number generators Image encryption Periodicity detection, distinctness quantum algorithm (big data) Practical solutions Need for new programming language for quantum computing Cool animated gif Post-quantum cryptography Generators based on irrational numbers The article is not too long, as most of the technical details are provided in the numerous references. It covers many topics ranging from computer science, algorithms, and big data to probability theory and mathematics. The level is simple enough to be read by non-experts, yet of great value for the experts as well. Click here to read this new article.
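For readers who want to see the recurrence itself, here is a minimal sketch of the logistic map, x_{n+1} = r * x_n * (1 - x_n). The parameter choices are illustrative: small r settles to a fixed point, while r = 4 is the fully chaotic regime used by early logistic-map random number generators.

def logistic_map(r, x0, n):
    # Iterate x_{k+1} = r * x_k * (1 - x_k) and return the trajectory.
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

print(logistic_map(2.5, 0.2, 8))  # converges toward the fixed point 1 - 1/r = 0.6
print(logistic_map(4.0, 0.2, 8))  # chaotic: tiny changes in x0 diverge quickly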
Logistic Map, Chaos, Randomness and Quantum Algorithms
3
logistic-map-chaos-randomness-and-quantum-algorithms-188b96eb4561
2018-05-09
2018-05-09 23:15:52
https://medium.com/s/story/logistic-map-chaos-randomness-and-quantum-algorithms-188b96eb4561
false
218
null
null
null
null
null
null
null
null
null
Tech
tech
Tech
142,368
Vincent Granville
Data science pioneer, founder, entrepreneur, inventor, author, CEO, investor, with broad spectrum of domain expertise.
60b579a69a7a
analyticbridge
265
202
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-05
2018-04-05 00:31:58
2018-04-05
2018-04-05 00:37:40
5
false
en
2018-04-05
2018-04-05 00:37:40
16
188bd6a69393
6.28239
2
0
0
In the year of 2015, TensorFlow was first introduced to the public. TensorFlow is Google’s artificial intelligence platform where…
5
TensorFlow: an Open-Source AI Platform In 2015, TensorFlow was first introduced to the public. TensorFlow is Google’s artificial intelligence platform where developers can build robust AI applications. What started as a research project turned into something larger and more valuable. The preliminary white paper “TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems” describes all the details about this interface, from its operations to its programming model to its execution and much more. Today we are going to reveal TensorFlow’s basic concept, its features, its advantages and where it is used. What is TensorFlow? TensorFlow is an open source software library for numerical computation using data flow graphs. It was originally developed by researchers and engineers working on the Google Brain Team within Google’s Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well. A basic example is a classifier that can look at an image and identify what is in it. Other examples include speech recognition, Gmail, Google search and recommendations, and Google Translate. Companies that use TensorFlow include Airbnb, eBay, Dropbox, Intel, Google, Uber, Twitter and many more. Advantages include: Distribution out of the box Runs on CPU, GPU, TPU and mobile Fast and flexible Easy to get started The beauty of TensorFlow is that you don’t need to be knowledgeable about the advanced math models and optimization algorithms needed to implement deep neural networks. All you need is to download the sample code, read the tutorials and you can get started in no time. TensorFlow 1.0 It has been over a year since TensorFlow was open-sourced, and it has been warmly adopted by developers throughout the world. TensorFlow has helped researchers, engineers, artists, students, and many others make progress with everything from language translation to early detection of skin cancer and preventing blindness in diabetics. The new version is faster and more flexible. It includes the new tf.keras module that provides full compatibility with Keras, another popular high-level neural networks library. TensorFlow Playground To understand why TensorFlow is so exciting and easy to use, let’s talk about “TensorFlow Playground”. TensorFlow Playground was designed as a tool to help you grasp the idea of neural networks without any hard math. Here you can play with a real neural network running in your browser, click buttons and tweak parameters to see how it works. A neural network is a mechanism implemented with basic math. Think of the computer as a child that you are teaching to identify images and numbers. It is the same with the computer: you train the system, and it will make many mistakes before it becomes sophisticated enough to start solving real-world problems. It takes a lot of trial and error to get good results with many combinations of different network designs and algorithms. But in the very near future, fully managed distributed training and prediction services such as Google Cloud Machine Learning with TensorFlow may open the power of large and deep neural networks to everyone. How to Start TensorFlow provides a lot of tutorials explaining how to start. For developers new to TensorFlow, the high-level API is a good place to start. To learn about the high-level API, read the guides here. 
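As a taste of that high-level API, here is a minimal tf.keras sketch in the spirit of the getting-started guides the article points to (my own example, not code from the TensorFlow docs): a small classifier for the MNIST handwritten digits.

import tensorflow as tf

# Load the MNIST digits and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected classifier built with the high-level Keras API.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1)
print(model.evaluate(x_test, y_test))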
TensorFlow can be confusing in the beginning, so the founders created TensorBoard, a utility to visualize different aspects of machine learning. TensorBoard is used to visualize your TensorFlow graph, plot quantitative metrics about the execution of your graph, and show additional data like images that pass through it. When TensorBoard is fully configured, it looks like this: TensorBoard has a built-in visualizer, called the Embedding Projector, for interactive visualization and analysis of high-dimensional data like embeddings. The Embedding Projector will read the embeddings from your model checkpoint file. Although it’s most useful for embeddings, it will load any 2D tensor, including your training weights. For more on TensorBoard: Embedding Visualization can be found here. Next is TensorBoard: Graph Visualization. The graph visualization can help you understand and debug your graphs. Here’s an example of the visualization at work. T2T On Monday, June 19th, 2017, TensorFlow announced the release of Tensor2Tensor. Tensor2Tensor, or T2T, is an open-source system for training deep learning models in TensorFlow. T2T facilitates the creation of state-of-the-art models for a wide variety of ML applications, such as translation, parsing, image captioning and more, enabling the exploration of various ideas much faster than previously possible. The T2T library is built with familiar TensorFlow tools and defines multiple pieces needed in a deep learning system: datasets, model architectures, optimizers, learning rate decay schemes, hyperparameters, and so on. Basically, you can pick any dataset, model, optimizer and set of hyperparameters, and run the training to check how it performs. Machine Learning… Get inspired by examples “ML is a secret sauce for the products of tomorrow. It no longer makes sense to have separate tools for researchers in ML and people who are developing real products.” — says Greg Corrado, a senior research scientist at Google. That’s why the idea was to create one set of tools that researchers can use to try out their ideas and move them directly into products without having to rewrite the code. Here are a few examples of where TensorFlow has been used and how beneficial it has become. Marine Biology Australian marine biologists are using TensorFlow to find sea cows in tens of thousands of hi-res photos to better understand their populations, which are under threat of extinction. Sea cows get caught in fishing nets or lose their homes to coastal development, and it is very hard to keep track of them. To keep accurate data on their population, scientists are using drones to take pictures of the ocean. But how do you quickly detect a sea cow in an image? Here is how TensorFlow came in handy. The system was able to detect sea cows in tens of thousands of images, which relieved scientists of much slower and harder manual work. Farming An enterprising Japanese cucumber farmer trained a model with TensorFlow to sort cucumbers by size, shape, and other characteristics. The sorting work is not an easy task to learn; it takes months to master. There are also some automatic sorters on the market, but they have limitations in terms of performance and cost, and small farms don’t tend to use them. Using deep learning for image recognition allows a computer to learn from a training data set what the important “features” of the images are. 
By using a hierarchy of numerous artificial neurons, deep learning can automatically classify images with a high degree of accuracy. Thus, neural networks can recognize different species of cats or models of cars or airplanes from images. In this case, the system uses a Raspberry Pi 3 as the main controller to take images of the cucumbers with a camera and, in a first phase, runs a small-scale neural network on TensorFlow to detect whether or not the image is of a cucumber. It then forwards the image to a larger TensorFlow neural network running on a Linux server to perform a more detailed classification. Airbnb We’ve all used Airbnb. It is easy to use and it is so convenient. But did you know that Airbnb’s unique technological challenge is to personalize each match between guest and host? Initially, search rankings were determined by a set of hard-coded rules based on very basic signals, such as the number of bedrooms and price. And because they were hard coded, the rules were applied to every guest uniformly, rather than taking into account the unique values that could create the kind of personalized experience that keeps guests coming back. Airbnb learned that machine learning could be used to offer this personalization. The company introduced its machine learning search ranking model toward the end of 2014 and has been continuously developing it since. Today Airbnb personalizes all search results. The recommendations that Airbnb provides have expanded since then, and things like the décor of the house or its setting are also taken into account. Hundreds of signals are pulled into the search ranking model; the machine learning algorithm then figures out how all the signals interact to produce personalized search rankings. Interested in starting TensorFlow on your own? Check out this tutorial where you will learn the basic building blocks. Here at VCG we do use TensorFlow. To find out how we can help you, click Contact Us.
TensorFlow: an Open-Source AI Platform
94
tensorflow-an-open-source-ai-platform-188bd6a69393
2018-04-05
2018-04-05 17:40:25
https://medium.com/s/story/tensorflow-an-open-source-ai-platform-188bd6a69393
false
1,444
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Venice Consulting Group
VCG is a Web + Mobile Application Development Firm. We envision, a world in which all businesses can support continuous improvement via the Internet.
a2cada221cea
v_c_g
280
548
20,181,104
null
null
null
null
null
null
0
null
0
b984f0201cd6
2018-05-29
2018-05-29 20:28:33
2018-06-04
2018-06-04 17:18:46
2
false
en
2018-06-04
2018-06-04 17:18:46
0
188cc8cf0ef7
1.398428
6
0
0
Median salaries, top employers, and common job titles of recent I School alumni.
5
Recent Career Outcomes at the School of Information by Rebecca Andersen, Director of Career Services If you can DREAM it, you can DO it. — Walt Disney I School Class of 2018 I School students continue to actualize their dreams with fabulous careers following graduation. The employer set continues to be diverse and future-leaning, with recent employers including big tech (Google, Amazon, Facebook), start-up ventures (Roam Analytics, HealthTap, Uber Freight), finance (Capital One, Wells Fargo), consulting (BCG, PwC, McKinsey), media (Spotify, Disney), and more. “I sought out opportunities at the I School that allowed me to mix health and data and understand more about data’s power to redefine healthcare in the US. I gained a new data science skill set and a new way of thinking that I’m confident will help me as I continue work in the public health and healthcare space.” — Shannon Hamilton, MIMS 2018, Clinical Data Scientist, Roam Analytics as of June 2018 MIDS salaries have held steady over the past few years, with an overall median base salary of $120,000 and a Bay Area median of $130,000. Sixty-nine percent of MIDS graduates report a salary increase related to their time in the program. Data Scientist continues to be the top job title, followed by titles related to data science management and data engineering. The preliminary 2018 results for MIMS salaries continue an upward trend. Over the past five years, MIMS salaries have increased ~$5k per year, with a 2018 current median of $122,500. Job titles reflect the multidisciplinary nature of the MIMS program, including UX designer, decision scientist, strategic insights specialist, software engineer, and product manager.
Recent Career Outcomes at the School of Information
8
recent-career-outcomes-at-the-school-of-information-188cc8cf0ef7
2018-10-08
2018-10-08 21:53:40
https://medium.com/s/story/recent-career-outcomes-at-the-school-of-information-188cc8cf0ef7
false
269
Voices from the UC Berkeley School of Information
null
BerkeleyISchool
null
BerkeleyISchool
null
berkeleyischool
GRADUATE SCHOOL,UC BERKELEY,INFORMATION SCIENCE,DATA SCIENCE
berkeleyischool
Data Science
data-science
Data Science
33,617
Berkeley I School
The UC Berkeley School of Information is a multi-disciplinary program devoted to enhancing the accessibility, usability, credibility & security of information.
4e0ccb9c0d51
BerkeleyISchool
91
54
20,181,104
null
null
null
null
null
null
0
# Import libraries
import pandas as pd
from sklearn.preprocessing import Imputer, LabelEncoder, OneHotEncoder, StandardScaler
from sklearn.model_selection import train_test_split

# Import the dataset; X is the matrix of features, Y the dependent vector (last column)
dataset = pd.read_csv('Medium.csv')
X = dataset.iloc[:, :-1].values
Y = dataset.iloc[:, -1].values

# Replace missing values in the second and third columns with the column mean
imputer = Imputer(missing_values="NaN", strategy="mean", axis=0)
imputer = imputer.fit(X[:, 1:3])
X[:, 1:3] = imputer.transform(X[:, 1:3])

# Encode the categorical first column as integers...
labelencoder_X = LabelEncoder()
X[:, 0] = labelencoder_X.fit_transform(X[:, 0])

# ...then one-hot encode it into dummy columns
onehotencoder = OneHotEncoder(categorical_features=[0])
X = onehotencoder.fit_transform(X).toarray()

# Split into training and test sets (80/20)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)

# Feature scaling: fit the scaler on the training set only, then reuse it on the test set
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
18
32881626c9c9
2018-07-05
2018-07-05 14:09:16
2018-07-05
2018-07-05 20:58:44
5
false
en
2018-08-14
2018-08-14 07:39:03
3
188e9eef1d2c
7.471069
14
1
0
Machine Learning is the hottest thing of this decade. Everybody wants to get on the bandwagon and start deploying machine learning models…
5
Data Preprocessing for Machine Learning Machine Learning is the hottest thing of this decade. Everybody wants to get on the bandwagon and start deploying machine learning models in their businesses. At the heart of this intricate process is data. Your machine learning tools are only as good as the quality of your data. Sophisticated algorithms will not make up for poor data. Just as precious stones found while digging go through several steps of cleaning, data also needs to go through a few before it is ready for further use. In this article I will try to simplify the exercise of data preprocessing, or in other words the rituals programmers usually follow before data is ready to be used for machine learning models, into 6 simple steps. Step 1: Import Libraries The first step is usually importing the libraries that will be needed in the program. A library is essentially a collection of modules that can be called and used. A lot of things in the programming world do not need to be written explicitly every time they are required. There are functions for them, which can simply be invoked. This is a list of the most popular Python libraries for Data Science. Here’s a snippet of me importing the pandas library and assigning the shortcut “pd”. Step 2: Import the Dataset A lot of datasets come in CSV format. We first need to locate the directory of the CSV file (it’s more efficient to keep the dataset in the same directory as your program) and read it using a method called read_csv, which can be found in the pandas library. After inspecting our dataset carefully, we are going to create a matrix of features in our dataset (X) and create a dependent vector (Y) with their respective observations. To read the columns, we will use iloc from pandas (used to fix the indexes for selection), which takes two parameters — [row selection, column selection]. : as a parameter selects everything. So the above piece of code selects all the rows. For columns we have :-1, which means all the columns except the last one. You can read more about the usage of iloc here. Step 3: Taking care of Missing Data in the Dataset Sometimes you may find that some data is missing in the dataset. We need to be equipped to handle the problem when we come across it. Obviously you could remove the entire line of data, but what if you are unknowingly removing crucial information? Of course we would not want to do that. One of the most common ideas for handling the problem is to take the mean of all the values of the same column and use it to replace the missing data. The library that we are going to use for the task is called scikit-learn preprocessing. It contains a class called Imputer which will help us take care of the missing data. A lot of the time the next step, as you will also see later in the article, is to create an object of the class in order to call its functions. We will call our object imputer. The Imputer class can take a few parameters — i. missing_values — we can either give it an integer or “NaN” for it to find the missing values. ii. strategy — we want the average, so we will set it to mean. We can also set it to median or most_frequent (for mode) as necessary. iii. axis — we can assign it 0 or 1: 0 to impute along columns and 1 to impute along rows. Now we will fit the imputer object to our data. Fit is basically training, or in other words, imposing the model on our data. The code above will fit the imputer object to our matrix of features X. 
Since we used :, it will select all rows, and 1:3 will select the second and the third column (why? Because in Python indexes start from 0, so 1 means the second column, and the upper bound is excluded. If we wanted to include the fourth column as well, we would have written 1:4). Now we will just replace the missing values with the mean of the column using the transform method. Step 4: Encoding categorical data Sometimes our data is in qualitative form, that is, we have text as our data. We can find categories in text form. Now, it is complicated for machines to understand and process text, as opposed to numbers, since the models are based on mathematical equations and calculations. Therefore, we have to encode the categorical data. This is an example of categorical data. In the first column, the data is in text form. We can see that there are five categories — Very, Somewhat, Not very, Not at all, Not sure — hence the name categorical data. So, the way we do it: we will import the scikit-learn library that we used previously. There’s a class in the library called LabelEncoder which we will use for the task. As I have mentioned before, the next step is usually to create an object of that class. We will call our object labelencoder_X. To do our task, there’s a method in the LabelEncoder class called fit_transform, which is what we will use. Once again, just like before, we will pass two parameters of X — row selection and column selection. The above code will select all the rows (because of :) of the first column (because of 0), fit the LabelEncoder to it and transform the values. The values will then immediately be encoded to 0, 1, 2, 3… accordingly. The text has been replaced by numbers, as we wanted. But if there are more than two categories, we may have created a new problem along the way. As we keep assigning different integers to different categories, it may create confusion. If one category is assigned 0 and another category is assigned 2, and since 2 is greater than 0, are we trying to imply that the category assigned 2 is greater? Of course not! So this strategy might well defeat its own purpose. Instead of having one column with n categories, we will use n columns with only 1s and 0s to represent whether the category occurs or not. Example of a dummy encoding To accomplish the task, we will import yet another class called OneHotEncoder. Next, we will create an object of that class, as usual, and assign it to onehotencoder. OneHotEncoder takes an important parameter called categorical_features, which takes the value of the index of the column of categories. The code above will select the first column to one-hot encode the categories. Just as we used fit_transform for LabelEncoder, we will use it for OneHotEncoder as well, but we also have to additionally include toarray(). If you check your dataset now, all your categories will have been encoded to 0s and 1s. Step 5: Splitting the Dataset into a Training Set and a Test Set Now we need to split our dataset into two sets — a training set and a test set. We will train our machine learning models on the training set, i.e. our models will try to understand any correlations in the training set, and then we will test the models on the test set to check how accurately they can predict. A general rule of thumb is to allocate 80% of the dataset to the training set and the remaining 20% to the test set. For this task, we will import train_test_split from scikit-learn’s model_selection module. 
Now to build our training and test sets, we will create 4 sets — X_train (training part of the matrix of features), X_test (test part of the matrix of features), Y_train (training part of the dependent variables associated with the X train sets, and therefore with the same indices), Y_test (test part of the dependent variables associated with the X test sets, and therefore with the same indices). We will assign to them the output of train_test_split, which takes the parameters — arrays (X and Y) and test_size (if we give it the value 0.5, meaning 50%, it would split the dataset in half. Since an ideal choice is to allocate 20% of the dataset to the test set, it is usually assigned 0.2; 0.25 would mean 25%, just saying). Step 6: Feature Scaling The final step of data preprocessing is to apply the very important feature scaling. The formula and graphical representation of Euclidean distance But what is it? It is a method used to standardize the range of independent variables or features of data. But why is it necessary? A lot of machine learning models are based on Euclidean distance. If, for example, the values in one column (x) are much higher than the values in another column (y), (x2-x1) squared will give a far greater value than (y2-y1) squared. So clearly, one squared difference dominates the other. In the machine learning equations, the squared difference with the lower value will almost be treated as if it does not exist compared to the far greater value. We do not want that to happen. That is why it is necessary to transform all our variables onto the same scale. There are several ways of scaling the data. One of them is called standardization. For every observation of the selected column, our program will apply the formula of standardization and fit it to a scale. To accomplish the job, we will import the class StandardScaler from the scikit-learn preprocessing library and, as usual, create an object of that class. Now we will fit and transform our X_train set (it is important to note that when applying the StandardScaler object to our training and test sets, we can simply transform the test set, but for the training set we have to first fit it and then transform the set). That will transform all the data to the same standardized scale. These are the general 6 steps of preprocessing the data before using it for machine learning. Depending on the condition of your dataset, you may or may not have to go through all these steps. Thank you.
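That fit-on-train, transform-on-test rule is worth seeing on its own. A minimal sketch follows, with synthetic data of my own choosing: the scaler learns the mean and standard deviation from the training set only, so no information leaks from the test set.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Two features on wildly different scales, as in the Euclidean-distance example.
rng = np.random.RandomState(0)
X = np.column_stack([rng.uniform(0, 1, 100), rng.uniform(0, 100000, 100)])
X_train, X_test = train_test_split(X, test_size=0.2)

sc = StandardScaler()
X_train = sc.fit_transform(X_train)  # learn mean/std from the training set and scale it
X_test = sc.transform(X_test)        # reuse those statistics on the test set

print(X_train.mean(axis=0).round(2), X_train.std(axis=0).round(2))  # roughly 0 and 1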
Data Preprocessing for Machine Learning
137
data-preprocessing-for-machine-learning-188e9eef1d2c
2018-08-21
2018-08-21 01:44:41
https://medium.com/s/story/data-preprocessing-for-machine-learning-188e9eef1d2c
false
1,759
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
null
datadriveninvestor
null
Data Driven Investor
info@datadriveninvestor.com
datadriveninvestor
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
dd_invest
Machine Learning
machine-learning
Machine Learning
51,320
Amitabha Dey
www.amitabhadey.com
96b9c8b806c9
amitabhadey
90
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-07
2017-11-07 09:44:35
2017-11-07
2017-11-07 09:46:55
1
false
en
2017-11-07
2017-11-07 09:46:55
3
188fd8cdc69f
3.626415
1
0
0
Humankind has had technology at its disposal since we mastered fire and learned to build claws that were stronger, sharper and deadlier…
5
Microsoft Future Decoded event: We’re all wizards now Humankind has had technology at its disposal since we mastered fire and learned to build claws that were stronger, sharper and deadlier than biology could ever bestow on even the fiercest of predators. We have been breaking through the limits imposed by our bodies and physical reality with the power of our imagination and our mastery of the physical properties around us. Whereas before we envied the bird that could fly away to greener pastures, we now travel the world hundreds of times over in the span of a single lifetime. As the British science fiction writer Arthur C. Clarke once put it, “Any sufficiently advanced technology is indistinguishable from magic.” And magic there was at this year’s Future Decoded event. As I left the event I could see the magic all around me; I was ‘woke’ to the technology that was making my life, both personal and professional, better — more connected — empowered. As technology increases in power, our ability to make once complicated tasks simple also rises; our cognitive potential is unleashed and we are left free to explore the boundaries of reality. Take, for example, Panos Panay’s presentation on the modern workforce. As head of Microsoft’s devices division, Panay has incredible insight into the tools modern workers need in order to take full advantage of the technological developments that have taken place not only in the last few years but in the last few months. The world is changing, and it is doing so at a far greater rate than ever before in the history of humankind. Microsoft understands this and has focused on providing technology that supports our generation’s ambitions. From climate change through to education, health and business, Microsoft’s aim is to give us the tools we need to tackle the problems of this century — the issues that plague our time. Thanks to the Internet of Things, Artificial Intelligence and Machine Learning, every tool we use in the future will be a digital one, helping workers overcome the physical and intellectual complexities that currently bar them from achieving their goals. We need not look too far into the future to see this technology in operation; in fact, we can see it today with tools like Office 365. With its ability to connect seamlessly to all devices and all relevant stakeholders, Office 365 ensures ideas and projects are no longer bound by the limits of our memory or the awkwardness of our geography: I, and more importantly my mind and the thoughts it produces, can be anywhere they’re needed and stored securely. Likewise, the information I have access to is more precise than it has ever been at any point in human history. With Microsoft’s cloud, Azure, a once desktop-bound application like Excel now has the capability to be powered by Artificial Intelligence (AI), so that the data it produces is not only reflective of what has taken place but also predictive of what’s to come — taking into account variables and factors a human alone could not have imagined. As Panay pointed out, AI isn’t here to take over human intelligence; it’s here to enhance it. Looking beyond what we have now, Microsoft is also working on developing the world’s most advanced computer: a quantum computer.
This is huge news, and unfortunately my pea-sized brain does not have the mathematical or scientific know-how to give you a precise breakdown of all the advancements this technology will bring. However, one thing was made explicitly clear at Future Decoded: quantum computers will help us process highly complex data sets which currently take even our most advanced computers around 100 billion years to process. Yes, you read that correctly. Quantum computing, which moves away from binary coding to coding in superposition — meaning data can be encoded simultaneously in a 0 and a 1, not just a 0 or a 1 — will reduce processing time from billions of years to weeks, days, and even seconds. What could be more magical than using technology to manipulate time in such a way that what once could never have been known to an individual in their lifetime can now be known to them in the time it takes to make a cup of coffee? If you’re an SMB in the U.K., you might be thinking, “None of this applies to me!” I’m here to tell you that it does. The game has changed and we, as its primary players, need tools that allow us to compete and win: we’re all in this race together; yes, even you. Future Decoded was full of examples of businesses, small and large, using cutting-edge technology to revolutionise their workplace. From accounting giant EY to London Midland Trains to small design firms in London, such as London 161, organisations such as yours are using this technology now, today. Magic shouldn’t be confined to the imagination of an author or the skilful hands of an illusionist, and thankfully with Microsoft, it isn’t. We’re all wizards now. If you’re ready to step into tomorrow and transform the way your business works, then talk to us today. Original article sourced from advantage.co.uk Camilo Lascano Tribin is Marketing Content Manager and Senior Writer at Advantage, a Microsoft Gold Partner based in the heart of London providing invaluable Microsoft Dynamics and IT Managed Services expertise to small and medium-sized businesses across the UK.
Microsoft Future Decoded event: We’re all wizards now
3
microsoft-future-decoded-event-were-all-wizards-now-188fd8cdc69f
2017-11-13
2017-11-13 17:14:01
https://medium.com/s/story/microsoft-future-decoded-event-were-all-wizards-now-188fd8cdc69f
false
908
null
null
null
null
null
null
null
null
null
Technology
technology
Technology
166,125
Camilo Lascano Tribin
My day job is writing about Tech. My night job is writing about whatever else pops into my head. You can find a bit of both here.
e24a8ed3e903
camilolascanotribin
72
114
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-05
2018-02-05 16:55:48
2018-02-05
2018-02-05 17:08:47
0
false
en
2018-02-05
2018-02-05 17:08:47
4
1893886575e9
0.554717
1
0
0
What will our robot overlords be really good at? It’s a question we spend a LOT of time thinking about. To figure it out, we decided to…
5
Machine Learning 101 What will our robot overlords be really good at? It’s a question we spend a LOT of time thinking about. To figure it out, we decided to teach an iPhone to play pinball with machine learning. We iterated through several approaches from supervised learning to reinforcement learning to build models that rack up high scores. These models leverage the CoreML framework recently made available by Apple and enable a single iPhone to play real pinball all by itself! Recently the Brooklyn Swift Developers dropped by our offices to see one of the pinball machines in action. Our Technical Director Quinn McHenry did a short intro to the basics of machine learning possible on iOS devices using CoreML, covering sensory information, image classification, natural language processing, and more. Questions? Email observatory@smallplanet.com. Or you can find Quinn loitering with intent on Twitter (http://www.twitter.com/qmchenry) and Github (http://www.github.com/qmchenry).
Machine Learning 101
1
machine-learning-101-1893886575e9
2018-06-06
2018-06-06 13:13:30
https://medium.com/s/story/machine-learning-101-1893886575e9
false
147
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Small Planet
null
a29df8f25be3
SPobservatory
1
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-24
2018-03-24 04:08:49
2018-03-23
2018-03-23 23:42:00
5
false
en
2018-03-24
2018-03-24 15:25:05
4
189586d11385
1.859748
2
0
0
Key observations we see for this week’s accuracy report:
5
Market went down, however Algos helped us dodge the bullet Key observations from this week’s accuracy report: Markets were down this week — as you can see above, the predictions’ success rate remains consistent irrespective of these unknown events. More reason to think that slow and steady investing always wins. Many investment groups will advise you that this is only for the short term and that in the long term life is good — ask them, what is long term? We have a simple concept: take small profits consistently rather than going for home runs. Algorithms — A-Jesse was the king of the week; its recommendations were 100% right. On http://www.TrustedFinancialAdvisor.org you can see details on every single prediction and check which predictions were successful and which were not. Below is one of the views in development, which you can use today to see every prediction made on a stock. You will be able to log in to the site using the same Facebook/Google ID you use to log in to the TINO IQ app. We send this email only once a week — if you are not interested in receiving it, please reply “no” and you will not get it again. Our goal remains to provide you with real, actionable ideas in every communication we have. We understand your time is valuable and totally respect it. Thanks! TINO IQ Team Since 2006 — Building “small” algorithms to fight “big” artificial manipulation in the market Rated top on Google for predicting “Artificial Manipulation of Stocks” Why TINO IQ — FAQs Originally published at tinoiq.com on March 23, 2018.
Market went down however Algo’s helped us dodge the bullet
4
market-went-down-however-algos-helped-us-dodge-the-bullet-189586d11385
2018-03-24
2018-03-24 17:24:05
https://medium.com/s/story/market-went-down-however-algos-helped-us-dodge-the-bullet-189586d11385
false
272
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
TINO IQ
Protecting investors from stock manipulations
792a0d1ecc1a
tinoiq
169
22
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-23
2018-05-23 09:48:53
2018-05-23
2018-05-23 14:40:10
4
false
en
2018-05-23
2018-05-23 14:40:10
0
18963328891b
5.696226
0
0
0
By Manoj Sharma & Marcus Tan of Cusjo Introduction
4
Will Artificial Intelligence Benefit HR Tech? By Manoj Sharma & Marcus Tan of Cusjo Introduction Are robots going to take our jobs? Is the end near? Absolutely not; at least not in the foreseeable future. An article by Dr. Jeremy Nunn of the Forbes Technology Council opined that AI technology will not displace the livelihoods of Human Resource (HR) professionals. Instead, AI tech will transform HR departments by revolutionising functions such as recruiting and performance evaluation, thus benefitting forward-looking professionals and organisations that invest in AI for HR Tech. Challenge While most departments, such as finance and operations, have been quick to adapt to new technologies, the adoption of HR Tech has been much slower. Laurence Collins, director of HR and workforce analytics at Deloitte, urged HR departments to embrace analytics or fall behind other departments in an organisation. Research from Deloitte showed that only 8% of businesses have strong HR analytics capabilities. Furthermore, an Oracle report noted that there was a substantial gap between the importance of data and predictive analytics (68%) and its current effectiveness (13%). The ability to embrace HR Tech and effective analytics is contingent on possessing the necessary and relevant technology. This is not always the case. According to Bersin by Deloitte, 47% of companies have HR software that is over seven years old. To put the statistic into perspective, seven years ago the iPhone 4S was released, and we are now on the iPhone X. A seven-year-old HR software package can easily be considered out-dated. It would be speculation to pinpoint why these companies do not have newer HR Tech. Regardless of the reason, ambitious and high-performance organisations should invest in optimisation, especially given the potential gains from using newer HR systems, let alone AI in HR Tech. In the same study, Bersin by Deloitte found that companies with newly upgraded HR systems see cost savings of 22% per employee. HR professionals, in the daily course of their work, build a great deal of human intimacy and as a result accumulate a substantial amount of human intelligence. But even the best HR professionals can’t be everywhere, talking to everyone, documenting everything, making perfect sense of present and historical intelligence while making it presentable and accessible to key stakeholders to act upon in real time. An August 2016 Harvard Business Review Analytic Services study showed that only 7% of respondents felt that time should be spent on managing administrative functions; in actuality, however, more than 40% of respondents spent more time on those same administrative functions. The Society for Human Resource Management (SHRM) noted that a key challenge is for HR professionals to turn HR data into a form that managers can use to measure HR’s contributions to organisational profitability. In simpler terms, HR professionals have to find a way to deliver intangibles tangibly. They are required to prove that people and cultural ideals are being maintained and managed, ensuring the operational and strategic rigour that a stable organisation demands, all whilst attempting to correlate their functions with their organisation’s profitability, performance and fulfilment levels. Having spoken to many bright, good and hardworking HR professionals, the consensus is that this building pressure is not the result of a lack of effort.
It’s the substantial amount of time, over the last decade, spent scrambling to get their HR operational and management systems up to speed while constantly playing catch-up to ever-changing technological value propositions. They are fighting a battle on two fronts: one in attempting to keep up with the changing technological value propositions, and the second with the systems themselves. HR solution providers have, on many occasions, failed to deliver on their opportunistic promises, leaving the client to firefight the operational shortcomings of the purchased systems or to find a different way to fulfil their job functions. While fighting this battle, they haven’t had the opportunity to explore and capitalise on the new and more appropriate technologies that have arisen from the knowledge age. These challenges have amounted to a growing pressure to find solutions. To help cope with these challenges, the Singaporean government responded by laying out a manpower plan for the HR industry in 2017. Ms. Josephine Teo, Minister of Manpower, stated that the HR department is the key to unlocking Singapore’s talent while maximising businesses’ great potential in adapting to and transforming the industry. (FYI, and for full disclosure: CusJo was appointed as an HR Tech partner by the Singapore government to help organisations in Singapore upgrade their HR technologies. As a result, organisations are able to get up to 50% of the cost as a grant for more than 17 of CusJo’s AI-enhanced HR Tech solutions.) Solution It is becoming abundantly clear that the future of work in the age of AI will be vastly different from the one we have known. A TIME magazine article noted that humans are still superior to AI in terms of general intelligence, creativity and common-sense knowledge. However, AI and advanced computing systems are exceptional at completing repetitive mathematical tasks, considering that they have no biology and don’t get tired. Satya Malick, founder of Big Vision LLC, is aware of the advantages AI systems have over us. However, he stresses that it is not a competition but a collaboration between man and machine. In the last 24 months, there has been a newfound realisation about Artificial Intelligence, Robotics, and Automation (AIRA). Investors and organisations have also begun to realise the importance of HR Tech. According to an article by Randstad, investments in HR Tech companies globally amounted to $1.96 billion in 2016. Many organisations are beginning to realise that technology is going to influence every part of their organisation. Responding to the use of data analytics and technology in OCBC Bank’s operations, Mr. Jason Ho, head of group HR, said, “A digital strategy is not about technology, but also people, and how they adapt and use technology, and a mind-set to embrace changes in the organisation.” HR technology will soon make its transition to relying on cyber-physical systems as part of Industry 4.0. These same cyber-physical systems will be able to easily handle the repetitive administrative functions which have been bogging HR departments down, while compiling and processing human intelligence into easily understandable diagnostic and predictive analytics in real time. According to Dr. Jeremy Nunn, AI in HR Tech will automate the current HR systems, resulting in self-service systems which will allow HR professionals to focus on the more complex and pressing questions that warrant their attention.
HR professionals will be able to address the most critical questions, the ones which add the greatest amount of organisational value. Questions such as: “Now that I have entrance-to-engagement-to-exit intelligence across my entire talent journey, what do I need to do to improve organisational profitability?” Conclusion “We cannot expect a different working world by doing the same things we have done before. We have to transform to thrive.” — Michael Gale, co-author of The Digital Helix. A failure to adapt may be detrimental, as noted in an article by McKinsey & Company: fewer than 10% of the non-financial S&P 500 companies of 1983 remained in the S&P 500 in 2013. It would be wise for organisations and HR departments to begin exploring the use of AI in HR Tech in order to gain a competitive advantage in their industry. HR professionals should augment their human intelligence with AI, which here encompasses Artificial Intelligence, Augmented Intelligence and Assisted Intelligence. Embracing and adapting to the ever-improving technology available will provide HR departments with end-to-end intelligence throughout the recruitment, empowerment and growth stages of an organisation’s talent management plan. In doing so, organisations will become more intelligence-centric while gaining a superior advantage in their industry.
Will Artificial Intelligence Benefit HR Tech?
0
will-artificial-intelligence-benefit-hr-tech-18963328891b
2018-05-23
2018-05-23 14:41:50
https://medium.com/s/story/will-artificial-intelligence-benefit-hr-tech-18963328891b
false
1,324
null
null
null
null
null
null
null
null
null
Hrtech
hrtech
Hrtech
1,708
Marcus Tan
null
220945785f77
marcus_38410
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-08-25
2017-08-25 12:31:32
2018-03-23
2018-03-23 08:30:30
1
false
en
2018-03-23
2018-03-23 08:30:30
5
18973290824f
1.94717
3
0
0
We decided to put search|hub to the test, and the staggering results are in. By using search|hub in combination with one of our customers…
4
search|hub dramatically improves revenue for e-commerce by up to 39.6% We decided to put search|hub to the test, and the staggering results are in. By using search|hub in combination with one of our customers’ current search solutions, our customer made the following gains: 39.6% improvement in total revenue 39% improvement in revenue per user 25.5% improvement in purchases 6.7% improvement in total clicks on search results Over a period of 14 days, with almost 45,000 unique visitors, search|hub’s self-learning query intelligence API helped to increase total revenue by 39.6% and purchases by 25.5%, with a 99% confidence score. This impressive achievement is solid evidence that search|hub improves revenue and user engagement for online retailers. Experimental setup For our comparison, we performed an A/B test using Optimizely to ensure unbiased results. The traffic allocation was set to 50% for the customer’s current onsite search solution and 50% for search|hub in combination with that same solution. We instituted a test period of two weeks. During the test period, we tracked the following goals, with total revenue set as the primary goal of the test: Total revenue (primary goal) Revenue per visitor Before starting the A/B test, search|hub was trained on almost half a million searches with clicks and add-to-basket actions, extracted from existing search engine query logs. During the A/B test, search|hub used clicks and add-to-basket actions to continuously learn from user behavior. Results The results of the A/B test are staggering: 39.6% improvement in total revenue 39% improvement in revenue per user 25.5% improvement in add-to-basket actions 6.7% improvement in total clicks on search results search|hub A/B test — customer site search with and without search|hub The results clearly show that while having a manually optimized, state-of-the-art search engine can be of benefit, having search|hub’s self-learning query intelligence API as a layer on top of it boosts user engagement and revenue even more. What we see here is a great collaboration between humans and machines. People have used their onsite search tools to optimize queries and to add synonyms, pre-processors and search campaigns to the system, which were then also optimized by search|hub. On top of that, search|hub automatically cleans and clusters search queries and optimizes the search results for all queries by applying automatic query rewrites. IMPROVE YOUR SEARCH TODAY — Try search|hub now search|hub dramatically improves user engagement and revenue for e-commerce sites. It adds an intelligent layer to your search infrastructure, accounts for business rules and synonyms, and extends them to optimize search results across all queries. The results of the A/B test show that good search is important and increases user engagement and revenue for online retailers. Get in touch to learn how we can help your online business with better search. www.searchhub.io proudly built by www.commerce-experts.com
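The article reports significance (“99% confidence”) without showing the computation. As a hedged aside, a two-proportion z-test is one standard way to check whether a conversion lift like the one described is significant; the visitor and purchase counts below are hypothetical stand-ins, not the customer’s actual numbers.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical A/B conversion counts -- illustrative only.
purchases = [450, 565]      # control vs. search|hub variant (~25.5% lift)
visitors = [22500, 22500]   # ~45,000 unique visitors split 50/50

stat, p_value = proportions_ztest(purchases, visitors)
print(p_value < 0.01)  # True would support a >99% confidence claim
```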
search|hub dramatically improves revenue for e-commerce by up to 39.6%
16
search-hub-dramatically-improves-revenue-for-e-commerce-by-up-to-39-6-18973290824f
2018-03-26
2018-03-26 08:19:23
https://medium.com/s/story/search-hub-dramatically-improves-revenue-for-e-commerce-by-up-to-39-6-18973290824f
false
463
null
null
null
null
null
null
null
null
null
Search
search
Search
2,870
searchhub
search|hub is a search platform independent, AI-powered search query intelligence API — helping search engines understand humans
de55c8251d7d
searchhub.io
37
2
20,181,104
null
null
null
null
null
null
0
null
0
b5b101230417
2018-01-14
2018-01-14 07:19:52
2018-01-14
2018-01-14 08:19:02
2
false
it
2018-01-14
2018-01-14 08:19:02
11
189848812443
1.666352
1
0
0
Back-from-vacation week? The Lobby Monitor never stopped. All the lobbying news from the last 7 days, within reach of…
5
Lobby Monitor #21 Back-from-vacation week? The Lobby Monitor never stopped. All the lobbying news from the last 7 days, one click away. A lobbyist register in the Puglia Region, via the Regional Council. Italy Politico dedicated its first cover of 2018 to the automation processes that could soon impact the government-relations sector. A revolution whose doors we should open. But the “nose” of a good professional can never be replaced. Who’s afraid of the robot lobbyist? Gianluca Comin’s Spin column on Lettera 43 (https://goo.gl/ieC3wM) This week the 7th Constitutional Affairs Committee of the Puglia Regional Council gave its (almost) unanimous favourable opinion on implementing the Register of interest representatives (https://goo.gl/2UmBy2) Lobbying was discussed at an interesting conference in Florence with Alberto Bitonti. Lobby & Poltrone: Maria Laura Cantarelli moves from Nexive to Amazon operations (Lobbying Italia — https://goo.gl/mT4GJp) Tony Podesta is back in the news as advisor to the Italian entrepreneur Follieri in the purchase of Foggia Calcio. Podesta, caught up in Russiagate, is a lobbyist historically tied to the Clinton family (L’immediato — https://goo.gl/4N4UT7) Europe Czech Republic: with the new President, a new lobbying law too? The act is actually a guidelines document defining the proposed regulation (Lobbying Italia — https://goo.gl/X1fP9L) The new agreement between the Parliament, the Commission and, this time, also the Council of the EU for a new Transparency Register based on the Giegold Report will be one of the cornerstones of Bulgaria’s presidency semester. On Monday 15/01 in Brussels, the lobbying profession will be discussed at an event organised by University of Trento alumni at the EU seat of the regional representation (Agenzia Giornalistica Opinione — https://goo.gl/vqpBXQ) How lobbying regulation is progressing in France (vie-public.fr — https://goo.gl/KNPmav) Wales, too, is discussing transparency regulation for access to the national Parliament. But cautiously: first they want to see how the Scottish regulation works in practice. All the latest figures on the EU Transparency Register, collected by EU Reporter — https://goo.gl/ExAqHG. World Not just FiscalNote: many other legislative-analysis firms, and even Harvard, have built algorithms already applicable during the tax-reform debate to identify lobbying targets more efficiently (Bloomberg — https://goo.gl/idjwwj) The case of Bill Ackman, CEO of an investment fund, who lobbied the Washington government to “sink” Herbalife (The Washington Examiner — https://goo.gl/G6vM9z) According to a recent study, investing in lobbying is bad for companies because it crowds out financial investments. The study comes, of course, from a financial investment fund (Bloomberg — https://goo.gl/JHxtLz)
Lobby Monitor #21
2
lobby-monitor-21-189848812443
2018-05-12
2018-05-12 11:05:52
https://medium.com/s/story/lobby-monitor-21-189848812443
false
340
All the lobbying news from around the world, a tweet away
null
null
null
Lobby Monitor
null
lobby-monitor
LOBBYING,PUBLIC AFFAIRS,GOVERNMENT,ADVOCACY,COMMUNICATION
gattogiov
Lobbying
lobbying
Lobbying
534
Giovanni Gatto
Aspiring lobbyist and public affairs nerd. Passionate about politics, how it is done, and who influences it. I write for Lobbying Italia.
6bfeb311a01a
gattogiov
89
104
20,181,104
null
null
null
null
null
null
0
null
0
674a7017dcc8
2018-03-30
2018-03-30 18:37:06
2018-03-30
2018-03-30 18:41:10
1
false
en
2018-03-30
2018-03-30 18:41:10
6
1898a0e390f
1.781132
0
0
0
How a machine learning and data science start-up is filling in the gaps, everywhere…
5
Clean satellite data around the world and back in time How a machine learning and data science start-up is filling in the gaps, everywhere… Part of what makes TellusLabs’ analytics so valuable is the fact that we craft them from an unusually long record of satellite imagery. The dataset we work with comes from satellites that have been orbiting the planet for nearly two decades! This means that we can analyse more harvests, more weather cycles, more extreme events. It also means we have to work both hard and smart to keep the dataset clean and consistent: we need more tools to visualize the temporal information. At TellusLabs, our team has built 15-year (2003–2017) means and standard deviations for each day of the year so we can automatically fill in the remaining gaps in the Kernel database. The long-term mean represents the 15-year history for each location and has substantially less missing data than the current daily observations! These metrics enable us to derive the current day’s (2018) anomaly from the long-term mean as a Z-score. Long-term standard deviations are yet another new layer we are adding to the Kernel product. For each location around the globe, we can show the normal variability expected for a given day’s observation based on this long historical record. For most locations, the vegetation varies little from year to year on a given day. However, for cultivated areas during times of crop planting and harvesting, variance can be very high. For example, on the same October day in Iowa, a field could hold fully mature corn one year but have already been harvested by that date the next (depending on the climate conditions during the growing season). Therefore, high values of this index indicate where humans are growing crops and, more specifically, when the crops are being planted or harvested and are the most different from the surrounding natural vegetation. This is also a solution for real-time in-filling. Instead of seeing gaps in areas where there was no observation for that date, you will instead see the long-term mean for that date. This will fill most of the black holes in the maps, making Kernel a one-stop shop for all of your crop insight needs. Daily anomalies for each index will be available as separate layers on the mapping page. Contact us or sign up for a Kernel free trial here to see us in action!
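A minimal sketch of the in-filling and Z-score anomaly logic described above, assuming per-location arrays of today’s observation and the 15-year day-of-year mean and standard deviation; the function name, array names, and sample values are hypothetical, not TellusLabs’ actual pipeline.

```python
import numpy as np

def fill_and_score(obs, ltm, ltsd):
    """Fill missing observations with the long-term mean and compute
    the Z-score anomaly of today's value against the 15-year record."""
    filled = np.where(np.isnan(obs), ltm, obs)  # real-time in-filling
    z = (obs - ltm) / ltsd                      # anomaly; NaN where obs is missing
    return filled, z

obs = np.array([0.62, np.nan, 0.55])   # today's vegetation index per location
ltm = np.array([0.60, 0.58, 0.50])     # 15-year mean for this day of year
ltsd = np.array([0.05, 0.04, 0.10])    # 15-year std dev for this day of year

filled, z = fill_and_score(obs, ltm, ltsd)
print(filled)  # the gap is replaced by the long-term mean
print(z)       # Z-score anomalies where observations exist
```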
Clean satellite data around the world and back in time
0
clean-satellite-data-around-the-world-and-back-in-time-1898a0e390f
2018-03-30
2018-03-30 18:41:56
https://medium.com/s/story/clean-satellite-data-around-the-world-and-back-in-time-1898a0e390f
false
419
TellusLabs combines decades of satellite imagery with machine learning to to answer critical, time-sensitive economic and environmental questions.
null
telluslabs
null
TellusLabs
bella@telluslabs.com
telluslabs
AGTECH,DATA SCIENCE,CROPS,MACHINE LEARNING,STARTUP
telluslabs
Agriculture
agriculture
Agriculture
12,051
Annmarie Rizzo
null
1607325facef
annmarie_53837
8
108
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-05
2018-03-05 23:28:52
2018-03-06
2018-03-06 01:12:36
1
false
en
2018-03-06
2018-03-06 01:12:36
6
1899cf4cc46
2.588679
1
0
0
This article was written by Dr. Sid Reddy and originally published in VentureBeat on March 2, 2018.
2
Deep Learning Is Only As Good As Its Data This article was written by Dr. Sid Reddy and originally published in VentureBeat on March 2, 2018. “Deep learning” has become a hot topic in the general rush to launch AI products. But many of these products will fail because companies are putting branding ahead of functionality. Success depends on understanding what deep learning is, how it works, and what its most effective applications are. Deep learning 101 Traditional machine learning algorithms are typically linear, in that they can be represented by only one node that linearly transforms input to output. Previously called artificial neural networks or neural networks, deep learning uses multiple such nodes, organized like the neural networks originally invented in 1943 to model how human brains work. The more nodes and layers in a neural network, the more sophisticated its learning capabilities can become. Although people still use the term “neural networks,” today’s deep learning networks represent how information flows across nodes more than how information in the human brain flows across neurons. Deep learning requires ample data and training time. But while application development has been slow, recent successes in search, advertising, and speech recognition have many companies clamoring to get in on the action. Mislabeling and overuse Vendors’ tendency to label almost anything “deep learning” is a recipe for disappointment because the technology is less effective without sufficient data and domain expertise. A key issue for machine learning algorithms is selection bias. In sound research, you can define the population, have access to all available population data, and sample a portion of that data. With deep learning, you start with sample data, deploy the model, and then expose it to the real world. But models that work well on training data often perform poorly on real data. Deep learning provides the ability to accurately determine the classification function from inputs to an output. However, there is no guarantee that the model will perform accurately on input data from the population if the training data is not representative. This data failure is more common when training data isn’t developed by domain experts. While deep learning might eliminate the need to have domain experts in the feature extraction part of the classification process, it still requires expertise in the data extraction process. In fact, deep learning might be overkill when a domain expert can explicitly describe the linear or nonlinear function using logic and rules. For example, if a baker applied deep learning to making bread, a robot’s action, such as telling the automated bread maker to stop kneading, could be more explicitly defined by a domain expert (i.e., a baker) based on input values. In this case, those would be the attributes of the bread dough, like consistency and temperature. In scenarios such as this, companies that focus on collecting data points might be better served by speaking to an expert. The bottom line is that much of what is marketed as “deep learning” is likely to be ineffective or difficult to manage properly. And “deep reinforcement learning,” as implemented in autonomous robots, self-driving cars, and creation of images, voices, and videos, is far from being widely available. Buying into deep learning hype without doing due diligence could lead to general disillusionment and another AI winter. 
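To make the “Deep learning 101” contrast above concrete, here is a hedged sketch (not from the article) in which a single linear model fails on a problem a small stacked network can fit; the XOR data and the scikit-learn model choices are purely illustrative.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# XOR: the classic function no single linear transform can represent
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

linear = LogisticRegression().fit(X, y)            # one node, one linear map
deep = MLPClassifier(hidden_layer_sizes=(8, 8),    # stacked nonlinear nodes
                     solver='lbfgs', random_state=0).fit(X, y)

print(linear.score(X, y))  # stuck around chance level
print(deep.score(X, y))    # the layered model can typically fit XOR exactly
```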
Achieving greater accuracy We may someday reach the point where AI and deep learning will help us achieve superintelligence or even bring on the singularity. But our challenge, and duty, as artificial intelligence professionals today is to ensure that deep learning applications live up to their billing and deliver benefits to users and society. Dr. Sid J. Reddy is chief scientist at Conversica, a company that provides AI software for marketing and sales. Check out Sid’s presentation on Conversational AI basics and design:
Deep Learning Is Only As Good As Its Data
1
deep-learning-is-only-as-good-as-its-data-1899cf4cc46
2018-03-06
2018-03-06 16:23:25
https://medium.com/s/story/deep-learning-is-only-as-good-as-its-data-1899cf4cc46
false
633
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Conversica
Conversica is the leader in conversational AI for business and the provider of AI-powered assistants for marketing and sales teams.
3a015c32e82a
conversica
4
1
20,181,104
null
null
null
null
null
null
0
null
0
93ed7f35497d
2018-03-23
2018-03-23 12:19:10
2018-03-28
2018-03-28 22:01:01
1
false
en
2018-03-29
2018-03-29 15:18:37
23
189b157adae3
6.792453
9
0
0
Let’s kill a few false ideas first. No, artificial intelligence won’t take control of the world. You can forget your dreams about…
5
Learning to include AI in UX: Part one of my journey Let’s kill a few false ideas first. No, artificial intelligence won’t take control of the world. You can forget your dreams about Terminator, The Matrix, Johnny 5 and Wall-e (yes, I know, for the last one it’s kind of sad). You just have to watch some Alexa, Google Home or other AI fails on YouTube to understand that we are far away from that scenario. Now that we agree on that, let’s talk more seriously. Artificial intelligence is clearly the big buzzword of 2018. You don’t even have to work in the design field to hear about it. You just have to turn on the news to hear about the new partnership between Quebec and Belgium on an AI-driven language project (AFP, 2018) or the $137 million investment from a dozen investors in the Element AI start-up, the biggest Series A financing ever received by an Artificial Intelligence company worldwide (Rettino-Parazelli, 2017). AI will definitely be at the heart of conversations about business innovation this year. “This race for innovation makes our entire industry rushing to launch the world’s first AI-powered [insert industry here], not always with a proper case for it” (Braga, 2018). The danger is that when we focus too much on technology, it becomes difficult to keep the user at the heart of our minds. What is the right way, at this time, to ensure a relevant and adequate use of the possibilities offered by AI? First things first: what is AI? If you want to understand what AI is, you must first master the difference between artificial intelligence, deep learning, and machine learning. 1. AI First of all, AI involves machines that can perform tasks exhibiting some characteristics of human intelligence, like planning, having a conversation, recognizing the content of an image, etc. (McClelland, 2017). There are three levels of AI: narrow (ANI), general (AGI) and super (ASI). Artificial narrow intelligence Products running on artificial narrow intelligence may perform specific automated tasks very well, but they are unable to apply that knowledge to tasks outside their domain (Lee, 2017). Ex.: Google’s self-driving car contains robust ANI systems that allow it to perceive and react to the world around it (Urban, 2015). Artificial general intelligence (AGI) An artificial general intelligence understands the world as we do and thinks abstractly. AGI is adaptive and applies its knowledge to whatever it wants, so it can think and act for itself like an adult human being (Lee, 2017). Ex.: Google Home, Alexa and Siri, if they understood our intentions in our own languages, took into account everything we did not say, and made the best decisions for us (Lee, 2017). Artificial superintelligence (ASI) When artificial intelligence goes beyond the most intelligent human mind, exponentially smarter than humanity, it has reached superintelligence (Lee, 2017). Oxford philosopher and leading AI thinker Nick Bostrom defines “superintelligence” as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills” (Bostrom, 2006). 2. Machine Learning Machine learning is intended to enable computers to learn how to perform classification or prediction tasks without the operator having to hard-code millions of lines of code, because the machine can learn (McClelland, 2017).
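As a hedged illustration of that machine learning definition (not code from the article), here is the classic “learn from labeled examples instead of hard-coding rules” sketch; the fruit data and feature names are made up.

```python
from sklearn.tree import DecisionTreeClassifier

# features: [weight_g, surface_bumpy]; labels: 0 = apple, 1 = orange
X = [[140, 0], [130, 0], [150, 1], [170, 1]]
y = [0, 0, 1, 1]

# No hand-written rules: the model infers the classification
# function from the labeled examples it was shown.
clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[160, 1]]))  # -> [1], learned from examples, not rules
```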
The machine has the capacity to learn without explicit programming because it trains and improves its algorithm with the data provided to it (Arthur Samuel, 1959). 3. Deep Learning Deep learning is machine learning’s method of teaching computers what humans are naturally capable of: learning by example. This technology is essential to the operation of autonomous cars. For instance, it allows them to recognize a stop sign or to differentiate a pedestrian from a street lamp. The voice control systems of consumer devices, such as telephones, tablets, televisions or hands-free loudspeakers, are based on this principle (MathWorks, 2018). There are other approaches, like decision tree learning, clustering, etc. Deep learning uses artificial neural network (ANN) algorithms, but also technologies such as image recognition and robotic vision. Artificial neural networks are inspired by the neurons of the human brain. They consist of several artificial neurons connected to each other. The higher the number of neurons, the deeper the network (Bastien, 2018). How does AI affect UX designers? Another idea to kill right away is that AI will steal your job. If you’re doing your job well, AI may simply improve your work. It’s all about adaptation and the evolution of your methods. Don’t misunderstand me: I don’t think all AI solutions are good ideas, and I don’t think that just because something is possible it should exist in the world… see these seven products, for example. To create all the products that will be powered by artificial intelligence, technologists must necessarily start with the data that will be used to drive the AI and ultimately create these AI-powered tools and services (Teixeira, 2017). As a UX designer, you must already give great importance to data. So if you treat AI as an extra string to your bow and not as an obligation to include in every aspect of your work, you should be fine. In fact, there are already three things you can easily do with AI in your work. 1. Automate legwork Think about your day’s work. What are your daily tasks? How many times do you tell yourself that some of your tasks or processes just take too much time and energy? Let’s look at the big steps of creating a web product. We have to do our research, make our prototype, code and test, etc. A lot of steps, don’t you think? Maybe not for long: Airbnb recently announced a technology that is able to identify designers’ paper sketches and turn them into code, almost in real time, thanks to AI. Imagine the possibilities just for testing your idea! (AIRBNB, 2018) 2. Personalizing and enhancing the user experience Collecting and analyzing huge amounts of data takes a lot of time. But if you do it well, you will discover some very precious insights about your users. There are AIs that already exist on the market, such as Salesforce’s myEinstein, which learns from all your data and generates predictions and recommendations based on your business processes. If you have trends and predictions about the behavior of your users, you will be able to offer more customization in the user experience, and this usually means more relevance to users, which leads to better conversion rates (Teixeira, 2017; Salesforce, 2018).
Besides, Don Norman says, “Great microinteractions design requires understanding the people who use the product, what they are trying to accomplish, and the steps they need to take.” (Norman, 2013) So if you want to create a useful and relevant experience, you should constantly learn about your users and adapt your solutions to them. That means that in a perfect world you’ll be able to adapt your product in real time for the human who’s using it. So, with the help of AI, you may give them a more human, intuitive and positive experience by reducing errors and frustration. That should be your main goal when using AI, rather than using it for its own sake (Anteunis, 2015). 3. Generate and test a lot of ideas Do you know generative design? It mimics nature’s evolutionary approach to design. When designers input design goals into generative design software, like Dreamcatcher by Autodesk, along with parameters, the software explores all the possible permutations of a solution, quickly generating design alternatives. It tests and learns from each iteration what works and what doesn’t (Autodesk, 2018). It’s like A/B testing on steroids! You can generate thousands of ideas, eliminate the bad ones and then choose and test the most relevant ideas with your final users (Aubé, 2017). Conclusion Finally, there’s no reason to fear for your job for now. You may embrace the AI trend or not, but just remember: if you want to use it, start by acquiring a deep comprehension of the users for whom you design. There’s no need to put AI everywhere, but it can truly help you with some of your work. There’s a lot more to say about AI, and I will continue to write about it throughout my own learning. I encourage you to share your own journey! Never stop learning! Patience you must have, my young padawan. Bibliography AFP (2018). Intelligence artificielle: Québec et Belgique coopèrent sur les langues. Accessed 07/03/2018 http://www.journaldemontreal.com/2018/03/16/intelligence-artificielle-quebec-et-belgique-cooperent-sur-les-langues AIRBNB (2018). Sketching Interfaces, Generating code from low fidelity wireframes. Accessed 07/03/2018 https://airbnb.design/sketching-interfaces/ ANTEUNIS, J. (2015). UX and AI: The new best friends. Accessed 07/03/2018 https://medium.com/@RecastAI/ux-and-ai-the-new-best-friends-e1336a2c3e6b AUBÉ, T. (2017). AI meets Design. Accessed 07/03/2018 https://www.youtube.com/watch?v=zifGZaFRKL8 AUTODESK (2018). Generative Design, Software mimics nature’s approach to design. Accessed 07/03/2018 https://www.autodesk.com/solutions/generative-design BASTIEN, L. (2018). Deep Learning ou apprentissage profond : définition. Accessed 07/03/2018 https://www.lebigdata.fr/deep-learning-definition BOSTROM, N. (2006). How Long Before Superintelligence? Accessed 07/03/2018 https://nickbostrom.com/superintelligence.html BRAGA, C. (2018). AI and the big why. Accessed 07/03/2018 https://uxdesign.cc/9-ai-and-the-big-why-747789030d28 LEE, E. (2017). You can be an AI designer. Accessed 07/03/2018 https://uxdesign.cc/you-can-be-an-ai-designer-46a0fd45f47d LOUKIDES, M. AND LORICA, B. (2016). What is Artificial Intelligence?, O’REILLY, Sebastopol, CA, p. 1–23 MATHWORKS (2018). What Is Deep Learning? 3 things you need to know. Accessed 07/03/2018 https://www.mathworks.com/discovery/deep-learning.html MCCLELLAND, C. (2017). The Difference Between Artificial Intelligence, Machine Learning, and Deep Learning.
Accessed 07/03/2018 https://medium.com/iotforall/the-difference-between-artificial-intelligence-machine-learning-and-deep-learning-3aa67bff5991 NORMAN, D. (2013). Microinteractions (My Foreword). Accessed 07/03/2018 https://www.jnd.org/dn.mss/microinteractions_m.html RETTINO-PARAZELLI, K. (2017). Element AI reçoit un financement record. Accessed 07/03/2018 http://www.ledevoir.com/economie/501150/intelligence-artificielle-element-ai-recoit-un-financement-record SALESFORCE (2018). Voici Salesforce Einstein, l’intelligence artificielle pour tous. Accessed 07/03/2018 https://www.salesforce.com/fr/products/einstein/overview/ TEIXEIRA, F. (2017). When AI gets in the way of UX. Accessed 07/03/2018 https://uxdesign.cc/when-ai-gets-in-the-way-of-ux-17de95f40772 TEIXEIRA, F. (2017). How AI has started to impact our work as designers. Accessed 07/03/2018 https://uxdesign.cc/how-ai-will-impact-your-routine-as-a-designer-2773a4b1728c URBAN, T. (2015). Wait But Why: The AI Revolution, The Road to Superintelligence. Accessed 07/03/2018 https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
Learning to include AI in UX : Part one of my journey
60
learning-to-include-ai-in-ux-part-one-of-my-journey-189b157adae3
2018-06-19
2018-06-19 14:47:48
https://medium.com/s/story/learning-to-include-ai-in-ux-part-one-of-my-journey-189b157adae3
false
1,747
User experience, design and innovation
null
Limonade-Studio-400852530116728
null
LimonadeStudio
null
limonadestudio
UX,UX DESIGN,INTERACTION DESIGN,UI DESIGN,USER EXPERIENCE
limonadestudio
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Catherine Dionne
UX Design + Strategy + AI + Innovation | UX Director at Kryzalid Montreal | Member of Tout le monde UX | catherinedionne.com
e3b2297cc4e1
cathe.dionne
239
178
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-11
2018-02-11 11:49:50
2018-02-11
2018-02-11 14:42:52
2
false
tr
2018-02-11
2018-02-11 14:43:20
4
189e1c4730f5
4.190881
0
0
0
Behind my desire to keep a blog on Medium was the wish to create notes I could leave for myself and look back on a few years later…
5
The Future of Humanity: Homo Deus (Yuval Noah Harari) Behind my desire to keep a blog on Medium was the wish to create notes I could leave for myself and look back on a few years later, but so far these notes have mostly run along two subjects and two people: Harari and Musk, that is, entrepreneurship and science. In my note titled The Future of Life, I tried to summarize Harari’s wonderful talk at Davos. Today I will jot down notes on his second book, Homo Deus, a book I loved and read every line of with admiration. These notes will draw partly on the book and partly on Harari’s various talks. Homo Deus — Yuval Noah Harari Homo Deus turns its gaze not to the yesterday but to the tomorrow of Sapiens, the book that told how an animal living in a limited corner of Africa turned into an ecological serial killer. In Harari’s words, it is at its core a book of possibilities, and it aims to make us aware of, and prepared for, even the worst of them. Life had not changed for 4 billion years. We have a history of roughly 4 billion years, and for all of those 4 billion years life continued to exist and advance according to its own structure; all living things, whatever they were (a giraffe, a chimpanzee, or a human), were made of organic components. But this is changing: for the first time in history we are beginning to create inorganic life forms, and intelligent designs authored not by a god above the clouds but by us are starting to take their place in history. Life on Earth may leave Earth for the first time. Organic life adapted to this planet and was shaped by it, and its chance of surviving on other planets, in the rest of the universe, was quite slim. But inorganic life forms, in today’s terms AI-driven intelligent designs, face no such difficulty, which will make it easier to leave the planet and reach the rest of the universe. Industries and professions will change; today’s children will be doing entirely different jobs when they reach their thirties. When we examine artificial intelligence at the societal level, we find that not only life’s biological code but also its economic code will change. With AI, people will not merely become unemployed; they will become useless, with no work left for them to do. This is not so hard to foresee. For example: ten years ago we thought computers and artificial intelligence could never drive better than a human; today we see that we were wrong, and driverless cars continue to gain ground as a trend. Soon all driving-related jobs will be shelved. The same can be said of AI doctors, and of course of many other professions. Thanks to AI and health technologies, we are heading toward, and indeed have already come a long way toward, a reality in which symptoms are interpreted in real time before we even fall ill, and personalized treatments and drugs are developed. When we carry doctors with us at all times, in our smartphones, developing treatments tailored to us and drawing personal diagnoses from our every biochemical reaction, it will be very hard to argue that the medical profession can carry on as it is. That is why today we must focus on what our children should learn. Machines are better than humans at analyzing emotions. Against such a backdrop comes the thesis that the service cold, lifeless, “soulless” machines offer people, however materially adequate, will not be emotionally satisfying.
But the situation there is not as assumed either: artificial intelligence, with IBM Watson in the lead, is already well ahead of humans at reading emotional states from facial and eye movements, and recent scientific research shows that what we call emotion, far from being a divine power given to us by a god so that we may write poetry, consists of biochemical reactions. We may not be as lucky as we were in previous revolutions. Looking at the 19th and 20th centuries, you will see that most of humanity worked in agriculture. The industrial revolution and the mechanization of agriculture destroyed many of the jobs in this field, but those people did not end up unemployed and useless. As existing lines of work disappeared, new ones emerged. In the 19th century, 90% of people were engaged in agriculture; today, in developed countries, that figure is around 2%, and the rest of the population mostly works in the service sector. We can say that we weathered many such transitions without major problems. But this time we may not be so lucky. Why? Humans have always had two basic kinds of capability: cognitive and physical. In the revolutions we went through before, machines took over mainly the physical work, because cognitively they had not developed anywhere near enough to approach human beings. Today that is no longer the case. After the industrial revolution, many of us set physical work aside and turned to our other strength, cognitive and mental work. But when cognitive work, too, can be done by machines, we have no third capability that we know of today, and that could render us useless. Another important problem here is that developments now happen very fast, and humankind may struggle to keep up, to develop the agility this requires. Today most of us do a job whose foundations we laid between the ages of 15 and 25; widening the range a little, say 10 to 30. In other words, we invest at least 10 years in a field, and that investment supplies the skills we need for nearly our whole lives. But in the future, a lawyer whose profession disappears will not have that much time to become a virtual reality designer, and you can imagine how hard adapting will be for someone who is 40 years old when that revolution takes place. The most important question of the 21st century is what will become of millions of unemployed and economically useless people. The 20th century passed, economically, sociologically, and politically, under the influence of a single social mass. All the questions, regulations, and the historical perspective targeted this mass: urban, working people. The economics, politics, and sociology of the 21st century will be determined by unemployed and economically useless masses, and our most important question will be what becomes of these people. Moreover, the problems that will arise as millions of economically redundant people lose their political power, their weight, and their importance within society are among the major issues we will have to solve in the future. Techno-religions and silicon prophets. The Silicon Valley of the 13th century was the Vatican. All the economic and technological developments that changed the world came from religions and clergy, or were influenced by them in some way. By the 19th and 20th centuries, people who analyzed not only the old books but also the economy and technology very well brought forth certain ideologies (which, in essence, are themselves religions). Socialism is one of them. Today, technological and economic developments have become the most important phenomena affecting humanity.
Meanwhile, something else happened that strengthened this phenomenon: the Cognitive Revolution. We know from today’s science that Christianity’s view of today’s world, of the emergence of the world and of humankind, and of the universal laws is not accurate, and it is not hard to foresee that in the future the movements explaining these truths, “religions” in quotation marks, technological phenomena, will become new guides for humanity. Homo Deus contains many topics and explanations, only some of which I have been able to touch on here; but, as Harari himself says, it is a book of possibilities, and its examining the strongest possibilities does not mean that all the scenarios are, or will become, real. Even so, I can say that it contains the most realistic and illuminating visions of the future written to date. You can find people’s reviews, posts, and opinions about the book here, and a couple of Harari’s talks on the subject here and here.
The Future of Humanity: Homo Deus
0
i̇nsanlığın-geleceği-homo-deus-189e1c4730f5
2018-02-11
2018-02-11 14:43:21
https://medium.com/s/story/i̇nsanlığın-geleceği-homo-deus-189e1c4730f5
false
1,009
null
null
null
null
null
null
null
null
null
Yuval Noah Harari
yuval-noah-harari
Yuval Noah Harari
164
Fatih Bildirici
Start-up enthusiast, Software Analyst, Marketing Analyst, Geek, MIS Specialist, Comic lover, Data Sapiens, Muggle.
47532a1d23b4
fatihbildiriciii
37
27
20,181,104
null
null
null
null
null
null
0
Fisher's exact test results per variable (N is the count with/without the attribute):

variable       N(X)/N(~X)   p(dating|X)   p(dating|~X)   p-value
international    10/60         0.60           0.38        0.299
cs               56/14         0.45           0.29        0.368
career           46/24         0.43           0.38        0.799
interesting      34/36         0.47           0.36        0.467
social           29/41         0.45           0.39        0.806
confident        37/33         0.51           0.30        0.092
tall             26/44         0.46           0.39        0.619
glasses          41/29         0.32           0.55        0.084
gym              22/48         0.64           0.31        0.018
fashion          17/53         0.41           0.42        1.000
canada           31/39         0.42           0.41        1.000
asian            59/11         0.37           0.64        0.181
12
null
2018-04-02
2018-04-02 17:42:07
2018-04-02
2018-04-02 22:09:03
3
false
en
2018-04-03
2018-04-03 02:49:12
2
18a0d22da896
4.474528
107
5
0
The University of Waterloo is well known for its lack of social life and difficulty of finding romantic relationships. Like many other…
5
Learning to find a Girlfriend at the University of Waterloo by Logistic Regression

The University of Waterloo is well known for its lack of social life and the difficulty of finding romantic relationships. Like many other Waterloo CS majors, I wouldn't be able to find a girlfriend even if my life depended on it. Some people feel love is unquantifiable and that you should "just be yourself". Well, I'm UW Data Scientist, so I respectfully disagree. Why not learn how to find a girlfriend with…😎 machine learning? I made an app to estimate your probability of finding a girlfriend! #innovation #sideproject #wow

Methodology

The question for this study is: what attributes tend to correlate with having a girlfriend among male Waterloo students? It is commonly assumed that having a high-paying job will make you more attractive. Physical characteristics like height and muscle may also play a role. We try to identify which attributes are the most predictive, and which are mere assumptions not supported by data. Off the top of my head, I came up with the following attributes:

Dating (target variable): person has a girlfriend, or had one for at least 6 months over the last 5 years
International: person is an international student
CS: person majors in CS, SE, or ECE
Career: person is successful in academics and finds "good" jobs for internships
Interesting: person has interesting things to talk about
Social: person is outgoing and tries to meet new people
Confident: person appears confident
Tall: person is taller than me (>175cm)
Glasses: person wears glasses
Gym: person regularly works out at the gym or plays sports
Fashion: person cares about wearing nice clothes
Canada: person mostly lived and worked in Canada for the last 5 years
Asian: person is East Asian ethnicity

You might notice that some of these are quite subjective: what qualifies a person as interesting? In these cases, I tried to assign 1 to about half the population, and 0 to the other half. So what we're really measuring is the relation between my own (biased) perception of other people's interestingness and their ability to find a girlfriend. Yeah, if you expected a statistically rigorous study, you can stop reading now. To collect data, I tabulated every person I could think of and rated them either 1 or 0 on each of these attributes. In this way, the dataset has N=70 rows. If you're a guy, go to Waterloo, and talked to me in the last 2 years, then you're probably included.

Analysis

First, we perform Fisher's exact test on the target dating variable against each explanatory variable. The three most significant variables are:

Gym: guys who go to the gym or play sports regularly are more than twice as likely to have a girlfriend (p-value = 0.02).
Glasses: guys who don't wear glasses are about 70% more likely to have a girlfriend than guys who do (p-value = 0.08).
Confidence: guys who appear confident are more likely to have a girlfriend (p-value = 0.09).

Muscular and confident guys are attractive, as expected. I was quite surprised by the large effect of glasses, and wondered if it was an indication of something else, like general nerdiness. So I looked for more careful studies and confirmed that indeed, the majority of people consider glasses to be unattractive for both genders. 
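(As a quick illustration of the test used above, here is a minimal sketch of a Fisher's exact test in Python with scipy. The 2x2 counts are reconstructed from the reported gym figures, N(gym)=22, N(~gym)=48, p(dating|gym)=0.64, p(dating|~gym)=0.31, so they are approximate stand-ins for the author's raw data, not the data itself.)

# Minimal sketch of Fisher's exact test on the gym variable.
# Counts below are approximate reconstructions, not the original data.
from scipy.stats import fisher_exact

#                dating  not dating
table = [[14, 8],    # gym:    ~0.64 * 22 ≈ 14 dating
         [15, 33]]   # no gym: ~0.31 * 48 ≈ 15 dating

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
# With these reconstructed counts the p-value should land near the
# article's reported 0.018.
print(f"odds ratio = {odds_ratio:.2f}, p-value = {p_value:.3f}")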
Some variables may be slightly predictive of dating success, but it's hard to say for sure due to the small sample size:

International students have better success with dating than domestic students
Asian men have worse chances with dating than other races
Controlling for other factors, guys in CS do not seem to be at a disadvantage, despite the lack of women

The rest of the variables (height, career/academics, interestingness, sociability, fashion, Canada/US) show little correlation with dating. Sorry, but even if you go to Facebook in Menlo Park in 4A, you will still not have a girlfriend. Full results of this experiment:

Next, we examine the correlations between the variates; this can help identify incorrect model assumptions. Red means positive correlation, blue means negative correlation. We only show correlations with statistical significance < 0.1, so most pairs of variates are blank. It appears that {having a girlfriend, appearing confident, going to the gym, not wearing glasses} are all mutually correlated. Before we go on, I should emphasize that the demographics of my friend groups do not represent the general UW population. I either meet people in classes or at work (a wide variety of backgrounds, but all doing CS), or through mutual friends (lots of different majors, but mostly East Asians who grew up in Canada). Any model trained on this data will reflect these biases. In the future, I might look into doing a wider survey to get more data.

Girlfriend Prediction with Logistic Regression

Wouldn't it be great for an algorithm to predict your chances of finding a girlfriend? Let's do it! Using the glmnet and caret packages in R, I trained a logistic regression GLM with elastic net regularization to predict girlfriend status from all of the explanatory variates. A standard grid search was performed for hyperparameter optimization, using leave-one-out cross validation and optimizing for Cohen's kappa coefficient in each iteration. The resulting model has a cross-validation ROC AUC of 0.673, meaning it can predict chances of finding a girlfriend better than random, but there is still a lot of inherent uncertainty. I deployed the model as an RStudio Shiny app here for you to play with. Well, that's it for now. Time to hit the gym and book a LASIK appointment.
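(The modeling pipeline above was done in R with glmnet and caret; for readers who work in Python, here is a rough, hedged analogue using scikit-learn: elastic-net logistic regression with leave-one-out cross validation. The data below is randomly generated as a stand-in for the author's 70-row survey, so the number it prints is illustrative only.)

# Rough Python analogue of the glmnet/caret pipeline described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(70, 12)).astype(float)  # 12 binary attributes (fake)
y = rng.integers(0, 2, size=70)                      # girlfriend yes/no (fake)

# Elastic-net logistic regression; l1_ratio and C would normally be grid-searched.
model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=5000)

# Leave-one-out predicted probabilities, then ROC AUC over all folds.
probs = cross_val_predict(model, X, y, cv=LeaveOneOut(),
                          method="predict_proba")[:, 1]
print("LOO ROC AUC:", round(roc_auc_score(y, probs), 3))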
Learning to find a Girlfriend at the University of Waterloo by Logistic Regression
799
learning-to-find-a-girlfriend-at-the-university-of-waterloo-by-logistic-regression-18a0d22da896
2018-06-10
2018-06-10 04:26:48
https://medium.com/s/story/learning-to-find-a-girlfriend-at-the-university-of-waterloo-by-logistic-regression-18a0d22da896
false
1,040
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
UW Data Scientist
Bai Li (Waterloo Computer Science ‘17)
6e63c941f791
uw_data_scientist
84
10
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-01
2018-03-01 20:37:18
2018-03-01
2018-03-01 21:03:58
2
false
en
2018-03-01
2018-03-01 21:03:58
1
18a106a7484
4.156918
7
0
0
Machine learning is the science of computers acting without being explicitly programmed. Data plays a vital role in this, with the given…
5
Scikit-Learn Vs TensorFlow

Machine learning is the science of getting computers to act without being explicitly programmed. Data plays a vital role in this: from the given data, knowledge is derived to solve a problem. Machines are trained on the basis of past experiences, user behaviour and data to make decisions in the future. Machine learning has given us self-driving cars, practical speech recognition, effective web search and an understanding of the human genome. Deep learning is a subfield of machine learning whose algorithms are loosely based on the structure and functioning of the brain. Deep learning means large neural networks. For instance: suppose I have the sizes of houses in square meters and their prices, and consider this my training set. From this we can fit a function that determines the price of a house as a function of its size. This represents a neuron which takes size as input and gives price as output. I can have more inputs, such as number of bedrooms, postal code etc. Stacking more such neurons gives me a neural network: a very basic one which outputs the price of a house when inputs such as size, number of bedrooms etc. are given. For designing, building and training such networks, Google designed a framework called TensorFlow.

Tensors, a walk-through: Scalars are represented with only a magnitude. Vectors are represented with a direction and a magnitude; a plane vector can be represented as a 3*1 array, which means that it needs a 3-dimensional space for its representation. A tensor is the mathematical representation of a physical entity that may be characterized by a magnitude and multiple directions. In an N-dimensional space, scalars will still require only one number, vectors will require N numbers, and tensors of rank R will require N^R numbers. This explains why we often hear that scalars are tensors of rank 0: since they have no direction, you can represent them with one number.

TensorFlow is a library for deep learning. A package like TensorFlow allows us to perform specific machine learning number-crunching operations, like taking derivatives over huge matrices, with great efficiency. We can easily distribute this processing across our CPU cores, and with TensorFlow it is even possible to distribute computations across a network of computers. All in all, TensorFlow is a massive array manipulation library. The name TensorFlow: with this library, computations are performed on data flow graphs whose nodes represent operations and whose edges represent the data, which is in array format, i.e. a tensor. TensorFlow provides APIs for Python, C++, Haskell, Java, Go and Rust, and there is also a third-party package for R called tensorflow. On the TensorFlow installation webpage, you'll find the most common ways and the latest instructions to install TensorFlow using virtualenv or pip. It allows developers to create large-scale neural networks with many layers. TensorFlow is mainly used for classification, perception, understanding, discovering, prediction and creation. Its main use cases are voice recognition (Apple's Siri, Google Now for Android and Microsoft Cortana for Windows Phone), image recognition, text summarization and translation (Google Translate), and video detection. TensorFlow is highly customizable and demands a solid understanding of machine learning and of mathematical concepts such as calculus and algebra.

Scikit-learn

For Python programmers who are looking to bring machine learning into a production system, scikit-learn is something which can fulfil that purpose. 
Scikit-learn was initially developed by David Cournapeau as a Google Summer of Code project in 2007. The project now has more than 30 active contributors and has had paid sponsorship from INRIA, Google, Tinyclues and the Python Software Foundation.

A walk through scikit-learn: Scikit-learn provides a range of supervised and unsupervised learning algorithms via a consistent interface in Python. The library is built upon SciPy (Scientific Python), which must be installed before we can use scikit-learn. The library mainly focuses on modelling data rather than on loading, manipulating and summarizing data. In brief, supervised learning means dividing your dataset into training and testing sets: the training set trains the algorithm, and then we check its accuracy. In unsupervised learning we don't have labelled data, so accuracy cannot be measured this way. Some popular groups of models provided by scikit-learn include:
● Clustering: for grouping unlabeled data, such as KMeans.
● Cross Validation: for estimating the performance of supervised models on unseen data.
● Datasets: for test datasets and for generating datasets with specific properties for investigating model behavior.
● Dimensionality Reduction: for reducing the number of attributes in data for summarization, visualization and feature selection, such as principal component analysis.
● Ensemble methods: for combining the predictions of multiple supervised models.
● Feature extraction: for defining attributes in image and text data.
● Feature selection: for identifying meaningful attributes from which to create supervised models.
● Parameter Tuning: for getting the most out of supervised models.
● Manifold Learning: for summarizing and depicting complex multi-dimensional data.
● Supervised Models: a vast array, not limited to generalized linear models, discriminant analysis, naive Bayes, lazy methods, neural networks, support vector machines and decision trees.

This library is quite robust and well suited to production systems. It has a deep focus on ease of use, code quality, documentation and performance. It provides a consistent interface to many machine learning models, which makes it easy to learn a new model. It also provides many options for each model so that it can be tuned for optimal performance, and it chooses sensible defaults so that we can get started quickly. The documentation is very helpful in understanding the various models, and the library is loaded with functionality that facilitates model selection and evaluation. It is under active development and has a large community on Stack Overflow. Depending on project goals, we need to decide whether to go for machine learning with R or with scikit-learn: R has more capabilities for understanding your model, while scikit-learn focuses on maximizing accuracy. Scikit-learn is, however, not useful for deep learning. This article was contributed by Riya Singh. https://www.linkedin.com/in/riya-singh-b09298141/ Riya Singh #include<> SGSITS
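(To make the comparison above concrete, here is a minimal, hedged sketch of the article's toy task, predicting house price from size, in both libraries. The data is invented and the APIs shown are the standard high-level ones, scikit-learn's LinearRegression and TensorFlow's Keras interface, not anything prescribed by the article.)

# House price from size, once per library. All numbers are made up.
import numpy as np

sizes = np.array([[50.0], [80.0], [120.0], [200.0]])   # m^2 (toy data)
prices = np.array([150.0, 240.0, 360.0, 600.0])        # price units (toy data)

# scikit-learn: a ready-made model, two lines to fit and predict.
from sklearn.linear_model import LinearRegression
lr = LinearRegression().fit(sizes, prices)
print("sklearn predicts:", lr.predict([[100.0]]))

# TensorFlow (Keras): the same single "neuron", built and trained explicitly.
import tensorflow as tf
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer=tf.keras.optimizers.SGD(1e-5), loss="mse")
model.fit(sizes, prices, epochs=200, verbose=0)   # toy settings
print("keras predicts:", model.predict(np.array([[100.0]])))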
Scikit-Learn Vs TensorFlow
13
scikit-learn-vs-tensorflow-18a106a7484
2018-06-09
2018-06-09 07:09:56
https://medium.com/s/story/scikit-learn-vs-tensorflow-18a106a7484
false
1,000
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
#include<>
SGSITS Techno-Learning Club
bfb3aace0841
hashinclude
15
2
20,181,104
null
null
null
null
null
null
0
null
0
3f272fe889c4
2018-08-14
2018-08-14 09:35:58
2018-08-14
2018-08-14 09:38:39
1
false
en
2018-09-07
2018-09-07 07:20:40
8
18a22320763f
1.184906
0
0
0
UBEX visited Xtock and announced that it has signed a strategic cooperation agreement to build a partnership on August 14.
5
UBEX-Xtock, AI technology MOU

UBEX visited Xtock and announced on August 14 that the two companies have signed a strategic cooperation agreement to build a partnership. Through the agreement, the two companies will advance their services by laying the groundwork for a blockchain ecosystem centered on artificial intelligence (AI), gradually establishing mutual cooperation and developing services together. Xtock is a blockchain-based OTC finance network platform, developed to solve the problems of investors and companies in the OTC market and to innovate the investment environment; it is expected to launch first in Asia next year. It is also a platform that provides a suitable investment environment for investors who want to invest in OTC companies but do not feel that current methods or processes are safe. Through this agreement, UBEX expressed its desire to make the OTC stock platform smarter through technical consultation, including a corporate evaluation system and analysis reports that incorporate artificial intelligence technology. UBEX has built a smart, global advertising ecosystem that combines blockchain and artificial intelligence with key partners such as NVIDIA, Civic and SingularityNET. Jason Park, CEO of Xtock, said: "Through the UBEX team's AI technology, the Xtock Evaluation Matrix (XEM) will be further developed to provide transparent and reliable information on unlisted companies, by managing potential risks as well as analyzing past quantitative indicators of potential companies. Furthermore, we will provide a new OTC market finance network platform." http://www.fnnews.com/news/201808141416452889 Follow us : Website : https://www.xtock.io Telegram (EN) : https://t.me/xtock_en Telegram (KR) : https://t.me/xtock_kr Medium : https://www.medium.com/xtock Facebook : https://www.facebook.com/xtock.io Twitter : https://www.twitter.com/xtock_io E-mail : contact@xtock.io
UBEX-Xtock, AI technology MOU
0
ubex-xtock-ai-technology-mou-18a22320763f
2018-09-07
2018-09-07 07:20:40
https://medium.com/s/story/ubex-xtock-ai-technology-mou-18a22320763f
false
261
The Future of OTC Market Finance Network Platform
null
xtock.io
null
XTOCK
contact@xtock.io
xtock
ICO,XTOCK,BLOCKCHAIN,CRYPTOCURRENCY,STOCK MARKET
xtock_io
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
XTOCK
Future of OTC Market Finance Network Platform
d52ce53a40dd
xtock
10
3
20,181,104
null
null
null
null
null
null
0
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(42)
random_points = np.random.randn(1000000)
print("mean: {:0.4f}".format(np.mean(random_points)))
print("standard deviation: {:0.4f}".format(np.std(random_points)))
print("mean absolute deviation: {:0.4f}".format(np.mean(np.absolute(random_points))))

# Output:
# mean: -0.0016
# standard deviation: 1.0002
# mean absolute deviation: 0.7983

f, ax = plt.subplots(figsize=(7, 5))
ax.hist(random_points, alpha=0.5, label=["returns"], bins=100)
ax.hist(np.absolute(random_points), alpha=0.5, label=["absolute returns"], bins=50)
ax.axvline(np.absolute(random_points).mean(), linestyle="dashed", color="red")
ax.set_xticks([-3, -2, -1, 0, 1, 2, 3])
ax.legend(prop={'size': 10})
f.tight_layout()

np.random.seed(42)
random_points = np.random.randn(1000000) * 1.25
# Note: the first print below computes np.std, not np.mean, so its "mean"
# label is misleading; np.mean here would give ~0.
print("mean: {:0.4f}".format(np.std(random_points)))
print("standard deviation: {:0.4f}".format(np.std(random_points)))
print("mean absolute value: {:0.4f}".format(np.mean(np.absolute(random_points))))

# Output:
# mean: 1.2502
# standard deviation: 1.2502
# mean absolute value: 0.9978

np.random.seed(42)
# Create 1 million random normal points for 256 days
random_points_daily = np.random.randn(1000000, 256) * 1.25
# Convert to percentages:
random_points_daily /= 100
# Convert to returns:
random_returns_daily = 1 + random_points_daily
# Annualize (take the product of each day for the 1 million points):
random_points_annualized = np.product(random_returns_daily, axis=-1)
print("Standard deviation of annualized returns: {:0.4f}".format(np.std(random_points_annualized)))

# Output:
# Standard deviation of annualized returns: 0.2019

from collections import Counter

np.random.seed(42)
num_trials = 1000000
random_coin_flips = np.random.randint(2, size=(num_trials, 6))
coin_flips_counter = [Counter(flip) for flip in random_coin_flips]
coin_flips_5_heads = sum([1 for flip in coin_flips_counter if flip.get(0) == 5])
coin_flips_5_heads_perc = coin_flips_5_heads / num_trials
print("Percent 5 heads: {:0.4f}".format(coin_flips_5_heads_perc))

# Output:
# Percent 5 heads: 0.0937

# The six ways to get exactly 5 heads in 6 flips:
# HHHHHT HHHHTH HHHTHH HHTHHH HTHHHH THHHHH
28
9ecf7f60cb82
2018-03-11
2018-03-11 20:27:29
2018-03-12
2018-03-12 12:31:01
3
false
en
2018-03-12
2018-03-12 12:31:01
5
18a5cceaf23
5.512264
20
1
0
Nassim Taleb baits financial professionals and students into an elementary mistake in probability. However a simple check could have helped.
5
Nassim Taleb, Loaded Questions and Statistics for Hackers

In anticipation of reading Nassim Taleb's new book Skin in the Game, I came across a paper Taleb wrote titled "We Don't Quite Know What We Are Talking About When We Talk About Volatility". The paper focuses on the confusion between mean absolute deviation and standard deviation among financial professionals and academics. The mean absolute deviation of a series is the average of the absolute deviations from a central point, while standard deviation is something less intuitive. The point is that both can be used as measures of dispersion or variation. Statistics can be a very abstract discipline, even if its application is not. Sometimes hacking together a few lines of code to experiment is more productive for understanding than diving into probability theory and all of its pitfalls.

Taleb's main purpose in life is to mock what he considers pseudo-intellectuals. As an experiment, he asks the question below to a group of financial professionals and students:

A stock (or a fund) has an average return of 0%. It moves on average 1% a day in absolute value; the average up move is 1% and the average down move is 1%. It does not mean that all up moves are 1% — some are .6%, others 1.45%, etc. Assume that we live in the Gaussian world in which the returns (or daily percentage moves) can be safely modeled using a Normal Distribution. Assume that a year has 256 business days. The following questions concern the standard deviation of returns (i.e., of the percentage moves), the "sigma" that is used for volatility in financial applications. What is the daily sigma? What is the yearly sigma?

By giving only one piece of relevant information (moves on average 1% a day in absolute value), he invites one obvious, and incorrect, answer: 1% standard deviation. That one piece of information is the mean absolute deviation, which is different from, but related to, the standard deviation. Considering that the survey is being administered by someone who has built a career antagonizing financial professionals and academics, I would shy away from taking the easy way out. But alas: confusing mean absolute deviation with standard deviation is, to Taleb, a cardinal sin. Not only that, but nearly everyone underestimates the actual standard deviation by at least 25%, and by up to 90% in fat-tailed markets. He uses the result to affirm his conclusion (and the title of his paper) that we don't quite know what we're talking about when we talk about volatility. And this applies even to people who should know better, like financial professionals and students of financial engineering.

It's true that in the media, complex mathematical concepts are thrown around unintelligibly as buzzwords. I imagine if you performed a man-on-the-street survey and asked for a definition of standard deviation, the top answers might be:
1. No clue
2. How much something moves every day on average
…
N. The root of the arithmetic mean of the squares of the deviations of each of the class frequencies from the arithmetic mean of the frequency distribution

Financial professionals and financial engineering students should know better, but this is still a tricky question, since only one piece of information is provided (the mean absolute deviation), and there exists some function that converts mean absolute deviation to standard deviation. In general, people aren't good at memorizing functions that cannot easily be derived. But the question doesn't have to be so tricky, or even abstract. 
Using Python we can easily verify our initial answer with a few lines of code. In our case, the mean absolute deviation is how much on average something moves in absolute terms, and we're trying to solve for the standard deviation that would give us a mean absolute deviation of 1. So let's try our initial guess of 1% standard deviation and calculate the mean absolute deviation. The code above creates a million normally distributed points with a standard deviation of 1 and a mean of 0. This gives us a mean absolute deviation of ~0.8, which tells us that a standard deviation of 1 results in a mean absolute deviation of 0.8. If we look at a histogram of the two values, it becomes more obvious. Since we're trying to solve for a mean absolute deviation of 1, we need to increase the standard deviation. Now that we know the relationship 1 standard deviation = 0.8 mean absolute deviation, we can just divide both sides by 0.8. This gives 1.25 (1 / 0.8) standard deviation = 1 mean absolute deviation. Let's try that. Sure enough, that lines up with the mathematically derived answer as provided by Taleb:

Our suspicion that there would be considerable confusion was fed by years of hearing options traders make statements of the kind, "an instrument that has a daily standard deviation of 1% should move 1% a day on average". Not so. In the Gaussian world, where x is a random variable with a mean of 0, the ratio of mean absolute deviation to standard deviation satisfies, in expectation, E[|x|] / sigma = sqrt(2/pi) ≈ 0.7979. Since the mean absolute deviation is about 0.8 times the standard deviation, in our problem the daily sigma should be 1.25% and the yearly sigma should be 20% (which is the daily sigma annualized by multiplying by 16, the square root of the number of business days).

And how would we get the annual standard deviation? Just multiply by the square root of 256. My point is not that professionals and students don't need to know statistics, but that it's crucial to write tests to check our answers. Statistics and probability are tricky and not always intuitive. Consider the definition of the t-statistic: the ratio of the departure of the estimated value of a parameter from its hypothesized value to its standard error. Technically correct, but there are many layers of abstraction to understanding what it effectively means, and understanding it well enough to make an assertion as to its implications is even harder. Many times we care more about ball-park estimates or about validating our answers, and this is best done with experiments with random numbers. Tools like numpy and Python make creating straightforward and intuitive tests easy. For more information, I recommend a great lecture from PyCon on hacking together answers to statistical questions through brute force. I flipped a coin 6 times and it landed on 5 heads. How likely is this to happen? The actual answer is 6/64 = 0.09375, since there are 6 ways to get 5 heads and 64 (2^6) possible sequences. But we knew that already.
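(The simulations above can also be checked analytically in a few lines. This sketch just redoes the arithmetic of the article: the sqrt(2/pi) ratio for a zero-mean Gaussian, the implied daily and yearly sigma, and the exact coin-flip probability via a binomial coefficient. It requires Python 3.8+ for math.comb.)

# Analytic check of the numbers above; no simulation needed.
import math

ratio = math.sqrt(2 / math.pi)                 # MAD / sigma for a 0-mean Gaussian
print("MAD / sigma:", round(ratio, 4))         # ~0.7979

daily_sigma = 0.01 / ratio                     # MAD of 1% implies this sigma
print("daily sigma: {:.2f}%".format(100 * daily_sigma))                      # ~1.25%
print("yearly sigma: {:.1f}%".format(100 * daily_sigma * math.sqrt(256)))    # ~20%

# The coin question: P(exactly 5 heads in 6 fair flips).
print("P(5 heads):", math.comb(6, 5) / 2**6)   # 6/64 = 0.09375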
Nassim Taleb, Loaded Questions and Statistics for Hackers
34
nassim-taleb-loaded-questions-and-statistics-for-hackers-18a5cceaf23
2018-06-01
2018-06-01 20:17:40
https://medium.com/s/story/nassim-taleb-loaded-questions-and-statistics-for-hackers-18a5cceaf23
false
1,315
Machine Learning Everything
null
null
null
ml-everything
branko.blagojevic@gmail.com
ml-everything
MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,REINFORCEMENT LEARNING,KERAS,TENSORFLOW
null
Data Science
data-science
Data Science
33,617
Branko Blagojevic
null
f068bd033690
branko.blagojevic
212
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-26
2018-06-26 06:08:29
2018-06-26
2018-06-26 06:10:34
0
false
en
2018-06-26
2018-06-26 06:10:34
1
18a61d6e29fb
2.132075
0
0
0
Big data is the next big revolution in the IT field. Today business enterprises, irrespective of their size, are making large investments…
4
How to Handle Big Data Challenges in an Organization and Get Real-time Insights

Big data is the next big revolution in the IT field. Today business enterprises, irrespective of their size, are making large investments in big data and analytics. This rapid traction has come about because of the many advantages that big data offers in today's competitive market landscape. Unfortunately, many business organizations have been unsuccessful in making sense of their data. According to a survey conducted by New Vantage Partners, only 37% of organizations believe that they were successful in leveraging the voluminous data at their disposal. It thus becomes imperative to understand the common big data challenges and the steps you must take to overcome them.

1. Handling Voluminous Data

The rate at which data is being produced today clearly indicates that it will outpace the development of storage and computing systems. Unsurprisingly, managing voluminous data sets is becoming a big challenge, growing from 31% in 2015 to 45% in 2016, as reported by IDG. Adding to this is the rise of disparate data formats like video (MOV, MPEG-4, AVI, MXF), audio (WAVE, AIFF, MP3, MXF, FLAC), geospatial (GeoTIFF, NetCDF), text, documentation and scripts (plain text, PDF/A, HTML, XML) and still images (TIFF, PNG, JPEG/JFIF, JPEG 2000, DNG, BMP, GIF). Not just this: the IDC report states that online transactions will reach up to 450 billion per day, and the number of connected devices (IoT) will reach up to 50 billion in the next 5 years, says Cisco. All of these will produce an enormous amount of data every minute. To handle this galaxy of data, many organizations are supplementing relational database management systems (RDBMS) with dynamic NoSQL databases like DynamoDB and MongoDB. Clearly, storing the data is not the end goal; you need to analyze it and derive actionable insights.

2. Dearth of Data Scientists

The 2017 Salary Guide of Robert Half Technology revealed that experienced professionals in the field of big data and analytics are paid handsomely. Still, many business enterprises face challenges in finding and retaining top talent. As a result, many companies are organizing corporate training in data science and analytics and upskilling their existing staff to make sense of the data. Some organizations are developing self-service analysis solutions that use artificial intelligence, machine learning and automation to explore the data with minimal manual coding. We can expect an upward trend in corporate training until the talent gap is bridged.

3. Getting Real-time Insights

A large amount of complexity is added when moving from evaluating stationary data to handling real-time data. This requires sophisticated data analysis tools that can handle data of high velocity and variety, including ETL engines and tools for computation and visualization, libraries, and frameworks. Organizations often lose important information because of a failure to keep pace with real-time data; this problem is particularly acute in insurance, banking and healthcare. Recent advancements like AWS Glue, a fully managed ETL engine, have eased the job of data scientists. There are many other tools available, like StarfishETL and Xplenty, each of which comes with distinct offerings but with the overall aim of easing extract, transform and load (ETL) operations, database management and integrations. 
In the coming years, we can also expect to see more sophisticated tools that will make things easier.
How to Handle Big Data Challenges in an Organization and Get Real-time Insights
0
how-to-handle-big-data-challenges-in-an-organization-and-get-real-time-insights-18a61d6e29fb
2018-06-26
2018-06-26 06:10:34
https://medium.com/s/story/how-to-handle-big-data-challenges-in-an-organization-and-get-real-time-insights-18a61d6e29fb
false
565
null
null
null
null
null
null
null
null
null
Big Data
big-data
Big Data
24,602
Vivek Kumar
Professional content writer. Write Blog on Education, Online Training, Career, Technology
8b88da937bbd
corporateanalyticstraining
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-09
2018-01-09 05:21:41
2018-01-09
2018-01-09 05:24:21
1
false
en
2018-01-09
2018-01-09 05:36:07
2
18a6730c468a
22.6
1
0
0
Web Data Mining and Business Intelligence Research Project
4
Deep Learning and Web Usage Mining Applications in E-Services
Web Data Mining and Business Intelligence Research Project
E-Commerce Technology 584
DePaul University
Chicago, Illinois 60604

ABSTRACT

Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised or unsupervised. Web usage mining, on the other hand, is the application of data mining techniques to discover interesting usage patterns from Web data in order to understand and better serve the needs of Web-based applications. Usage data captures the identity or origin of Web users along with their browsing behavior at a Web site. Retrieving knowledge from the World Wide Web is a tedious task because of the growth in the availability of information resources on it, and this escalates the necessity of employing an intelligent system to retrieve knowledge from the World Wide Web [1]. The performance of Web information retrieval and Web-based data warehousing is boosted by extracting information from the Web using web mining tools. Web usage mining is one of the fastest-developing areas of web mining; its focus on analyzing users' behavior on the web from access logs made it popular very quickly, especially in the e-services area. Most e-service providers have realized that they can apply this tool to retain their customers. Ever wondered how customers use data on the web? Web usage mining can answer questions about customer web activity, so businesses can be marketed in a better and more efficient way. This paper tries to provide an insight into web mining and its different areas, and then focuses on Web usage mining, its applications and its impact on e-services.

INTRODUCTION

Before delving into the focus of this paper, I will define the concept of data mining. A good definition can be taken from the lecture notes, which define data mining as the non-trivial extraction of implicit, previously unknown and potentially useful knowledge from data in large data repositories. Web mining, in turn, is the application of data mining and machine learning techniques to extract useful knowledge and discover patterns from the World Wide Web or web resources. Web mining can be divided into three different types, namely: Web usage mining (activities from server logs and web browser activity tracking), Web content mining (data found on web pages and inside documents) and Web structure mining (links between pages, people and other data). Now that the information sources available on the World Wide Web have grown so expansively, it has become very important for users to utilize automated tools to discover and find the desired information resources, and to track and analyze their usage patterns. Retrieving information from many different storage areas is a quite difficult process, and efficient tools are required to find the desired information. This, in turn, raises problems that can be resolved using an intelligent system which can effectively mine for knowledge. When it comes to analysis, web mining takes it much further by combining other corporate information with web traffic data. The practical applications of Web mining technology are abundant, and are by no means the limit of this technology. 
The tools involved can be extended and programmed to answer almost any question. Web mining can be applied in different areas, such as providing companies with managerial insight into visitor profiles, which helps top management take strategic actions accordingly. A company can obtain subjective measurements through Web mining of the effectiveness of its marketing campaigns or marketing research, which helps the business to improve and align its marketing strategies in time. In the business world, structure mining can be quite useful in determining the connections between two or more business Web sites. This allows accounting, customer profile, inventory and demographic information to be correlated with Web browsing. A company can identify the strengths and weaknesses of its web marketing campaign through Web mining, make strategic adjustments, and then consult Web mining again to see the improvement. The search engine Google provides advanced and efficient searching capabilities. [2] Web mining is mainly divided into three categories: Web content mining, Web structure mining and Web usage mining.

Web Content Mining

Web content mining focuses on extracting and mining useful information or knowledge from web pages and web documents. Content data corresponds to the collection of facts a web page was designed to convey to its users. It may consist of text, images, audio, video, or structured records such as lists and tables. While there exists a significant body of work on extracting knowledge from images, in the fields of image processing and computer vision, the application of these techniques to Web content mining has not been very rapid. [2] Its techniques are the equivalent of data mining techniques for text mining, since it is possible to find similar types of information in the unstructured data residing in Web documents. A Web document usually contains several types of data, such as text, image, audio, video, metadata and hyperlinks. The unstructured character of Web data forces Web content mining towards a more complicated approach. Web content mining can be viewed in two different contexts [6]: information retrieval and databases. In the information retrieval context, the role of web content mining is mainly to support information finding or to improve information filtering based on user queries; the results can be applied to web search engines and web personalization systems. In the database context, content mining can aid in integrating data on the web, so that queries more sophisticated than keyword-based search can be performed; the mining results can be used to build web warehouses and web databases and to apply warehousing and database techniques on the data.

Web Structure Mining

The structure of a typical web graph consists of Web pages as nodes and hyperlinks as edges connecting related pages. Web structure mining can be regarded as the process of discovering structure information from the Web; it can be further divided into two kinds based on the kind of structural data used. [2] Web structure mining tries to identify authoritative web pages. The Web is a complex data store: it contains not only pages but also hyperlinks pointing from one page to another. By linking to another page, an author gives a kind of testimonial for that page. This tremendous linkage information forms a rich source for web mining, offering valuable information about the relevance, quality and structure of web contents. 
The architecture of the hyperlinks underlying the website is the result of this category of mining, and appropriate handling of this information can lead to an improvement in the accuracy of web page retrieval.

Web Usage Mining

Web usage mining is the application of data mining techniques to discover interesting usage patterns from Web data, in order to understand and better serve the needs of Web-based applications [2]. It extracts useful information from users' web access history: what they are interested in on the Internet, whether textual or multimedia data, etc. The data here are collected from Web log records to discover user access patterns for Web pages. This is vital information for companies and their internet/intranet-based applications, and the analyzed reports of those patterns serve different purposes. The applications generated from this analysis can be classified as personalization, system improvement, site modification, business intelligence and usage characterization [2]. Web usage mining depends on the cooperation of users in allowing access to the Web log records. Due to this dependence, privacy is becoming a new issue for Web usage mining, since users should be made aware of privacy policies before they decide to reveal their personal data [8].

Our main focus in this paper is on web usage mining, which is why we will delve further into its applications and other aspects. Web usage mining has many benefits that attract businesses and government agencies. Government agencies use the classification and prediction capabilities of this technology to fight terrorism and identify criminal activity. Business sectors benefit through personalized marketing, customer retention and customer relationship management, and even get the opportunity to provide promotional offers to specific customers to retain them. These activities are carried out by three major stages of Web usage mining, namely: preprocessing, pattern discovery and pattern analysis. We give a brief introduction to these topics to show how they relate to web usage mining.

1. Preprocessing

Preprocessing in web usage mining is the conversion of user information into a data abstraction format. It is an essential precursor to pattern discovery, and according to the data being preprocessed it is categorized into three kinds: usage preprocessing, content preprocessing and structure preprocessing. Usage preprocessing is one of the most difficult tasks in web usage mining: data is gathered from IP addresses, agents and server-side click streams, and because of the nature of this data, it is almost always incomplete. Preprocessing of text, images, scripts and multimedia files is carried out in content preprocessing. Structure preprocessing involves the processing of hyperlinks between page views [2]. Preprocessing consists of user identification, data cleaning, path completion, session identification and formatting.

i. For user identification, a user is the main variable/object using a client to collect and render resources. User identification is greatly complicated by the existence of local caches, corporate firewalls and proxy servers. The Web usage mining methods that rely on user cooperation are the easiest ways to deal with this problem. 
However, even for the log/site-based methods, there are heuristics that can be used to help identify unique users. For example, even if the IP address is the same, if the agent log shows a change in browser software or operating system, a reasonable assumption is that each different agent type for an IP address represents a different user [5]. Another heuristic for user identification is to use the access log in conjunction with the referrer logs and site topology to construct browsing paths for each user. If a page is requested that is not directly reachable by a hyperlink from any of the pages visited by the user, the heuristic assumes that there is another user with the same IP address.

ii. Data cleaning is the method of disposing of repeated and irrelevant data. Elimination of items deemed irrelevant can be reasonably accomplished by checking the suffix of the URL name. For instance, all log entries with filename suffixes such as gif, jpeg, GIF, JPEG, jpg, JPG and map can be removed. In addition, common scripts, for example files requested with the suffix ".cgi", can likewise be removed [10]. However, there are sometimes important accesses that are not recorded in the access log; for example, local caches and proxy servers can severely distort the overall picture of user traversals through a Web site. Current methods to overcome this problem include the use of cookies, cache busting and explicit user registration [5].

iii. Path completion addresses the problem that there are important accesses that are not recorded in the access log. Methods similar to those used for user identification can be used for path completion. If a page request is made that is not directly linked to the last page a user requested, the referrer log can be checked to see what page the request came from. If that page is in the user's recent request history, the assumption is that the user backtracked with the "back" button available on most browsers, calling up cached versions of the pages until a new page was requested. If the referrer log is not clear, the site topology can be used to the same effect. If more than one page in the user's history contains a link to the requested page, it is assumed that the page closest to the previously requested page is the source of the new request.

iv. Session identification refers to a piece of data that is used in network communications to identify a session, a series of related message exchanges. A user session, in turn, refers to the clicks a user makes across one or more Web servers. The goal of session identification is to divide the page accesses of each user into individual sessions. The simplest method of achieving this is through a timeout: if the time between page requests exceeds a certain limit, it is assumed that the user is starting a new session.

v. Formatting: in data mining, we humans use experience when we interpret the data we see, but computers and machines cannot. Once the appropriate preprocessing steps have been applied to a server log, a final preparation module can be used to properly format the sessions or transactions for the type of data mining to be performed. For example, since temporal information is not needed for mining association rules, a final association-rule preparation module would strip out the time of each reference and do any other formatting of the data necessary for the specific data mining algorithm to be used [5].
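(As a concrete illustration of steps ii and iv above, here is a minimal Python sketch: drop image and script requests by URL suffix, then split each user's requests into sessions with a 30-minute timeout. The log format, field names and toy entries are invented for illustration; real server logs would of course need parsing first.)

# Toy data cleaning + session identification, per the steps above.
from collections import defaultdict

IGNORED_SUFFIXES = (".gif", ".jpg", ".jpeg", ".png", ".cgi")
TIMEOUT = 30 * 60  # seconds; a common heuristic cutoff

log = [  # (user, unix_time, url): invented entries
    ("u1", 0, "/home"), ("u1", 120, "/logo.gif"),
    ("u1", 300, "/products"), ("u1", 9000, "/home"),
]

# Data cleaning: remove entries judged irrelevant by their URL suffix.
clean = [e for e in log if not e[2].lower().endswith(IGNORED_SUFFIXES)]

# Session identification: a gap longer than TIMEOUT starts a new session.
sessions = defaultdict(list)
last_seen, session_id = {}, {}
for user, t, url in sorted(clean, key=lambda e: (e[0], e[1])):
    if user not in last_seen or t - last_seen[user] > TIMEOUT:
        session_id[user] = session_id.get(user, -1) + 1
    last_seen[user] = t
    sessions[(user, session_id[user])].append(url)

print(dict(sessions))
# {('u1', 0): ['/home', '/products'], ('u1', 1): ['/home']}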
2. Pattern Discovery

Pattern discovery tools draw on several fields, such as statistics, data mining, machine learning and pattern recognition. Only after cleaning the data and identifying user transactions and sessions from access logs can we start the pattern discovery process. Statistical techniques are used to abstract knowledge about the website's visitors. From this abstracted knowledge, association rules capture the associations between frequently referenced pages, and sequential pattern tools help in predicting future visit patterns. Clustering tools group items with similar characteristics together; the groups of most interest in web usage mining tasks are image clusters and page clusters. Classification tools perform generalization, assigning items to one of several predefined classes. There are several kinds of pattern discovery tools that can be used in web usage mining, including clustering, association rules, sequential patterns and classification.

i. Clustering is a technique for grouping together a set of items having similar characteristics. In the web usage domain, there are two kinds of interesting clusters to be discovered: usage clusters and page clusters [5]. Clustering of users tends to establish groups of users exhibiting similar browsing patterns. Such knowledge is especially useful for inferring user demographics in order to perform market segmentation in e-commerce applications or to provide personalized Web content to users. Clustering of pages, on the other hand, discovers groups of pages having related content.

ii. Association rules are a method for discovering interesting relations between variables in large databases, intended to identify strong rules discovered in databases using measures of interestingness [6]. They capture the relationships among items based on their patterns of co-occurrence across transactions. In the case of Web transactions, association rules capture relationships among page views based on the navigational patterns of users. The most common approaches to association discovery are based on the Apriori algorithm, which follows a generate-and-test methodology [5]. Aside from being applicable to business and marketing applications, the presence or absence of association rules can help Web designers restructure their Web site.

iii. Sequential patterns in web usage mining capture the web page trails that are often visited by users, in the order that they were visited. Sequential patterns are those sequences of items that frequently occur in a sufficiently large proportion of transactions. Using this methodology, web advertisers can predict future visit patterns, which is useful for placing advertisements aimed at certain user groups. It may also be helpful in trend analysis, change-point detection or similarity analysis.

iv. Finally, classification is the task of mapping a data item into one of several predefined classes. For this task, a model set (i.e., a set of examples whose class labels are known) is first analyzed, and a classification model is built based on the features available in the data of the model set. Such a classification model is then used to classify a scoring set. Classification can be done using supervised inductive learning algorithms such as k-nearest neighbor classifiers, decision trees, naïve Bayes and so forth.
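(To make the association-rule idea in point ii concrete, here is a minimal Python sketch that computes support and confidence for page-view pairs across toy sessions. A real system would use Apriori or FP-growth rather than this brute-force enumeration; the sessions and thresholds below are invented, and only the measures themselves follow the standard definitions.)

# Brute-force support/confidence for page-view pairs (illustration only).
from itertools import combinations
from collections import Counter

sessions = [
    {"/home", "/products", "/cart"},
    {"/home", "/products"},
    {"/home", "/about"},
    {"/products", "/cart"},
]
n = len(sessions)

item_count = Counter(p for s in sessions for p in s)
pair_count = Counter(frozenset(c) for s in sessions
                     for c in combinations(sorted(s), 2))

for pair, cnt in pair_count.items():
    a, b = sorted(pair)
    support = cnt / n                  # fraction of sessions with both pages
    confidence = cnt / item_count[a]   # confidence of the rule a -> b
    if support >= 0.5:                 # arbitrary minimum-support threshold
        print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}")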
3. Pattern Analysis

The last part of Web usage mining is pattern analysis. This phase filters out unimportant patterns from the set found during pattern discovery. For example, after discovering access patterns, analysts need appropriate tools and techniques to understand, visualize and interpret these patterns. A knowledge query mechanism, such as SQL, is the most common form of pattern analysis; content and structure information can also be used to filter out patterns containing pages of certain usage types or content types, or pages that match a certain hyperlink structure [4]. There is always a need to develop techniques and tools to help an analyst better understand knowledge derived from data, and this is what pattern analysis tools specialize in. The most common tools of pattern analysis are OLAP analysis, knowledge query mechanisms and visualization techniques.

i. OLAP is an acronym for Online Analytical Processing. It performs multidimensional analysis of business data and provides the capability for complex calculations, trend analysis and sophisticated data modeling [6]. OLAP is emerging as a very powerful paradigm for the strategic analysis of databases in business settings. While OLAP can be performed directly on top of relational databases, industry has developed specialized tools to make it more efficient and effective, and the research community has recently demonstrated that the functional and performance needs of OLAP require new information structures to be designed [7].

ii. Knowledge query mechanisms are mainly implemented in association with SQL. Many Web usage analysis tools, e.g. WUM, WebMiner and Midas, provide objective measures, for example lift, support and confidence, that are helpful for filtering out unimportant knowledge manually; the analyzed results are then acquired using SQL. Querying may also be performed on the knowledge that has been extracted by the mining process, in which case a language for querying knowledge rather than data is needed [7].

iii. Data visualization is a general term that describes any effort to help people understand the significance of data by placing it in a visual context. Patterns, trends and correlations that might go undetected in text-based data can be exposed and recognized more easily with data visualization software [6]. Visualization is a natural choice for understanding the behavior of Web users, as it has been used very successfully in helping people understand various kinds of phenomena, both real and abstract. For example, the WebViz system is used to visualize internet access patterns. It allows the analyst to selectively analyze the portion of the Web that is of interest by filtering out the irrelevant portions.

Diagram showing a framework for web usage mining (culled from the web)

Application in E-Services

As discussed earlier in this paper, the results of Web usage mining can be applied to understand and analyze web usage data. So we can apply web usage mining techniques to e-services. 
The application of web usage mining here refers to how web usage mining is used in different forms of online services. The internet has become a far-reaching medium for the dissemination of data [1]. Advances in technology show that the volume of information on the web, and its complex structures, are expanding rapidly, and it is here that web usage mining has its particular usefulness. The fast development of online services, the so-called e-service applications like e-commerce, e-governance, e-market, e-finance, e-learning, e-banking and so on, has put the business community and customers in a new situation. This paper concentrates on the use of web usage mining in some of the major e-services in use around the world: e-learning, e-governance and e-commerce.

E-Learning

E-learning is defined as the study and ethical practice of facilitating learning and improving performance by creating, using and managing appropriate technological processes and resources [6]. More plainly, it is learning conducted via electronic media, typically on the Internet: a type of electronically supported learning which permits people to learn any subject at any time and anywhere. Nowadays, web-based learning environments are very popular. All kinds of e-learning machinery and tools have been developed to satisfy various e-learning requirements and to support collaborative learning activities, such as web-based multimedia curricula, online synchronous conference systems, whiteboards, etc. To a great extent these tools not only eliminate the time and space limitations of the traditional classroom teaching-learning model, but also provide multiple learning patterns which make the learning process more attractive, efficient and convenient. However, due to the wide-spread nature of both learning resources and learners in a web-based learning environment, it is difficult for instructors and educators to thoroughly track and evaluate learners' activity, and furthermore to assess the effectiveness of the learning process and the structure of the learning content [9].

The ease of using these tools to browse resources on the web, and the ease of deploying and maintaining resources, has made the web an excellent medium for delivering courses. The web is the single most significant option for managing and maintaining learning resources, and it has become one of the leading choices for modern advanced distance and online education. As education becomes more technologically advanced, the complexity of the available learning resources increases accordingly, and it is hard to evaluate the structure of the course content and its effectiveness on the learning process. This is where web usage mining can contribute. The pattern analysis capacity of web usage mining has an essential role in web-based learning systems [1]: it can analyze the behavior of students and educators and improve the educational experience. Tracking the activities on the course website and mining patterns is also beneficial for improving or adjusting the course contents. This permits teachers to appraise students' behavior, assess the learning activities and compare learners. Another advantage of web usage mining is that the arrangement of the course contents can be enhanced by analyzing the traversal paths through the course content web pages. 
Traditional E-Learning Model (culled from the web [6])

E-Governance

Electronic governance or e-governance is the application of information and communication technology (ICT) to delivering government services, exchanging information and communication transactions, and integrating various stand-alone systems and services between government-to-customer (G2C), government-to-business (G2B) and government-to-government (G2G), as well as back-office processes and interactions within the entire government framework. [1] Through e-governance, government services are made available to citizens in a convenient, efficient and transparent manner: a single web portal integrates all services, spanning government, nonprofit and private-sector entities. In such a service system, which provides ready access to information, the quality of the user interface is an important factor. It is one of the more challenging user-centric parameters, since the system has to provide information to an extensive and diverse population of users [1]. A presentation subsystem that adjusts to the individual inclinations of each user will encourage broad participation in e-governance systems.

The application of Web usage mining to e-governance is a procedure which translates citizens' or businesses' usage data on government Web sites into valuable knowledge that can support various decisions in government affairs [8]. This can include: finding out the interests and desires of citizens and improving citizen or business satisfaction; restructuring the government website and increasing system performance; enhancing government planning and promoting government innovation; and improving the analysis and decision making of government. In discovering a citizen's or user's behavior and preferences, one can easily see where the user clicks, how long he remains on certain pages, what words he has searched for and what interactions he has with the website. As regards increasing the performance of a government website, the website can be restructured with respect to the frequent access paths of visitors, which significantly reduces the expenditure on the website. This is essentially what web usage mining in e-governance does: effectively observing and analyzing the users of a government website and their actions and behaviors. For fostering government innovation, by employing data mining technology, government can sensibly manage human resources, material resources and information resources, harmonizing the relations between resources inside and outside government; for instance, the whole process from program planning to program implementation can exchange and share the same data [8]. Mining Web usage data on government sites lets the government quickly obtain information about government affairs and grasp society's development trends in time, which at the same time makes the management and redeployment of society's resources more systematic [1]. When it comes to improving government analysis and decisions, analyzing and mining large quantities of government data is of great benefit; this is how government makes decisions concerning its citizens. For example, mining client-side log files can reveal the opinions of citizens and effectively assist government departments in making scientific and rational adjustments to meet their desires [8].

E-Commerce

E-commerce in its simplest form refers to the buying and selling of products and services online. 
E-Commerce

E-commerce in its simplest form refers to the buying and selling of products and services online.

Exchange of goods or services through the Internet

E-commerce produces an enormous volume of transaction data. It also implies two trading parties conducting, over the Internet and according to certain rules or standards, the whole of traditional business activity in a digital network mode [1]. There has been a rise in technologies that gather client click streams; buying and traversal patterns are vital information for e-commerce activity and for analyzing users' behavior on the web. They help in investigating demographic data and in proposing cross-marketing procedures across products and services. They also help e-commerce sites keep the most profitable customers, improve the functionality of web-based applications, and provide more tailor-made content to visitors. In addition, with web usage mining techniques, e-commerce companies can improve product quality or sales by anticipating troubles before they happen. These techniques also provide companies with previously unknown buying patterns and behavior of their online clientele. More importantly, the fast feedback companies gain by using web usage mining is very helpful in raising the company's profit [10]. It is pertinent to note that discovering usage patterns from web data is the core technique of web usage mining, and it has been an important technology for understanding users' behavior on the web. The customer click streams, buying patterns, traversal patterns, and similar data it collects are vital information for e-businesses. Overall, it assists in analyzing demographic data and helps business owners and analysts propose cross-marketing policies across products and services. It also supports e-commerce sites in retaining the most profitable customers, improving the functionality of web-based applications, and providing more custom-made content to visitors [1]. Web usage mining techniques can be applied in e-commerce in the following examples:

i. Business Intelligence (BI)

Business intelligence can be described as a set of techniques and tools for the acquisition and transformation of raw data into meaningful and useful information for business analysis purposes [6]. BI technologies provide historical, current, and predictive views of business operations. Common functions of business intelligence technologies are reporting, online analytical processing, analytics, data mining, process mining, predictive analytics, etc. BI can be used to support a wide range of business decisions, from operational to strategic. Collecting data on how customers use a website is critical information for business owners and marketers in e-commerce. By mining the relationship between customers' behavior and purchases, we can understand customers' purchasing intentions much better, find customers' purchasing characteristics and trends, and identify potential purchasers [5]. A few commercial products that provide web traffic analysis mainly for the purpose of gathering business intelligence are Accrue, Net-Genesis, Aria, Hitlist, and WebTrends.

ii. Personalization

Web personalization has become an indispensable tool for both web-based organizations and end users due to the tremendous growth in the number and complexity of information resources and services on an e-commerce website. A good number of systems have already concentrated on providing website personalization based on usage information; a good example is SiteHelper.
Also, the ability of an e-commerce site to engage visitors at a deeper level, and to successfully guide them to the selection and purchase of products, is now viewed as one of the key factors in an e-commerce company's ultimate success [5]. A major advantage of web personalization based on web usage mining is that it can automate the adaptation of web-based services to their users, overcoming the central problem of traditional web personalization systems: a personalization process that involves substantial manual work and, most of the time, significant effort on the part of the user.

iii. Site Modification

How user-friendly and attractive an e-commerce site is, in terms of both content and structure, goes a long way toward determining how successful that site is, which is very important for an e-commerce product catalog. Web usage mining provides detailed feedback on user behavior, giving the website designer information on which to base redesign decisions. An interface designer can use this information to change the structure of a site to adapt to the customers using the e-commerce system. Clustering of pages is used to determine which pages should be directly linked [5]. Several web mining techniques can be applied to understand and analyze the data and information generated and turn it into actionable information, which can then help a web-enabled electronic business improve its sales, customer support, and marketing operations. Some examples: Trend prediction can be used to estimate future values; for instance, an electronic auction company provides information about items to auction, previous auction details, etc., and predictive modeling can analyze this existing information to estimate the values of auctioned items or the number of people participating in future auctions [11]. In the battle to keep customers, analysts can identify and retain the most profitable customers by analyzing demographic data and customer buying and traversal patterns collected online or offline; this is where personalization services come in, tracking content such as media, weather, news, and stock quotes per user. When promoting an e-commerce campaign, web mining can support the campaign through cross-marketing strategies across products [11]. Web mining techniques can analyze logs of different sales to reveal customers' buying patterns; for example, association rules can be applied to find products that are frequently bought together, as in the sketch below.
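As a minimal sketch of the association-rule idea, the snippet below counts how often pairs of products co-occur in hypothetical transaction baskets and computes their support, which is the first step of Apriori-style mining. The basket contents are invented; a real rule miner would also prune by minimum support and score candidate rules by confidence.

```python
from collections import Counter
from itertools import combinations

# Hypothetical transaction baskets; in practice these come from order logs.
baskets = [
    {"phone", "case", "charger"},
    {"phone", "case"},
    {"laptop", "mouse"},
    {"phone", "charger"},
]

# Count pair co-occurrences across baskets.
pair_counts = Counter()
for basket in baskets:
    pair_counts.update(combinations(sorted(basket), 2))

n = len(baskets)
for pair, count in pair_counts.most_common(3):
    support = count / n
    print(f"{pair}: support={support:.2f}")

# A rule like {phone} -> {case} would then be scored by
# confidence = support(phone, case) / support(phone).
```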
Conclusion

The goal of this paper from the outset was to present the data mining concepts and issues associated with web-enabled e-service applications, which we have tried to achieve to a large extent by delving into the different techniques and into where web usage mining can be applied on any e-service platform or system. As we have discussed, it is easy to collect data from web-enabled e-business sources, since every visitor to a website leaves a trail which is automatically stored in log files by the web server. Data mining tools can process and analyze such web server log files, or the actual web contents, to discover meaningful information which then translates into knowledge. Data mining techniques provide companies with previously unknown buying patterns and behavior of their online customers. More importantly, the fast feedback companies obtain using data mining is very helpful in increasing a company's profit.

Web usage mining is an active field of research because of its potential commercial benefits. It is further possible to analyze visitor behavior by linking the web logs with cookies and forms, which can help e-service sites address several business questions. Its effectiveness in analyzing users' actions on the web from access logs spread its fame very rapidly, especially in e-services. Details like user log files and requests for resources are maintained in web servers, which are the core mining area of web usage mining. The study of these yields the user's browsing patterns, which can be utilized for targeted advertisement, improvement of web design, customer satisfaction, predictions, and market analysis. It has come to the point where most e-service suppliers understand that it is in their best interest to utilize this technology to keep hold of their customers. Based on our discussions, the application of web usage mining in e-services is an interesting and important research area which leaves room for future work and improvement. Electronic services as a whole provide an unprecedented opportunity to collect detailed data about all aspects of online activity. Web mining is a key technology for making sense of all of this data and for better understanding and facilitating electronic services. As the web and its usage continue to grow, so too grows the chance to analyze web data and extract all manner of useful knowledge from it. In the future, more applications of personal web usage mining will be developed, and any electronic business company that incorporates data mining results into its strategy is likely to succeed. Likewise, the data obtained from electronic government sites and services can help anticipate and predict what citizens value and expect from their government, helping government focus on those areas. Overall, when web mining is efficiently applied to e-commerce to learn the browsing behavior of customers, it helps to improve the design of the e-commerce website, to provide personalized services, and to determine the success of the marketing efforts of the business. Any web-enabled e-commerce company that incorporates data mining results into its strategy has a strong likelihood of succeeding. It is as simple as that.

References

1. Anupama Prasanth. Web Usage Mining — Its Application in E-Services.
2. Arti, Sunita Choudhary, G.N. Purohit. Role of Web Mining in E-Commerce.
3. J. Han, M. Kamber. Data Mining: Concepts and Techniques. Morgan Kaufmann, Los Altos, CA (2001).
4. Bamshad Mobasher. Web Usage Mining. http://www.123seminarsonly.com/Seminar-Reports/2013-03/116918126-web-mining.pdf
5. Li Chaofeng, Lu Yansheng. Research on Web Usage Mining for Electronic Commerce. Proceedings of the 2005 International Conference on Management Science and Engineering, Oct 2005.
6. Wikipedia (definitions). www.wikipedia.com
7. Liu Jian-guo, Huang Zheng-hong, Wu Wei-ping. Web Mining for Electronic Business Application.
8. Ping Zhou, Zhongjian Le. A Framework for Web Usage Mining in Electronic Government.
9. Xinjin Li, Sujing Zhang. Application of Web Usage Mining in E-learning Platform.
10. Dr. S.S. Gautam, Manish Kumar Tiwari. Web Mining — Concepts and its Applications.
11. Liu Jian-guo, Huang Zheng-hong, Wu Wei-ping. Web Mining for Electronic Business Application.
Deep Learning and Web Usage Mining Applications in E-Services
1
deep-learning-and-web-data-mining-applications-in-e-services-18a6730c468a
2018-01-09
2018-01-09 07:25:07
https://medium.com/s/story/deep-learning-and-web-data-mining-applications-in-e-services-18a6730c468a
false
5,936
null
null
null
null
null
null
null
null
null
Big Data
big-data
Big Data
24,602
Aile Oleghe
null
19cc50d653ef
aileoleghe
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-12
2018-06-12 10:30:10
2018-06-12
2018-06-12 11:19:47
7
false
en
2018-06-12
2018-06-12 11:19:47
1
18a9dcdfe830
3.642453
0
0
0
Artificial intelligence, specifically machine learning (ML), is quickly becoming essential for running smarter business operations. One of…
2
Where to apply machine learning for supply chain optimization

Artificial intelligence, specifically machine learning (ML), is quickly becoming essential for running smarter business operations. One of the greatest features of Dynamics 365 is its ability to incorporate ML capabilities within business applications, which provides predictive insights and helps businesses execute operations in a more effective manner. According to a recent study by the McKinsey Global Institute, advanced AI technologies have the potential to unlock a global economic impact of $10–15T across all industry segments. Sales, marketing, supply chain management, and manufacturing are major segments that could significantly benefit from machine and deep learning technologies in retail and CPG. Below are a few candidate scenarios for AI-enabled optimization in the retail and CPG verticals in particular. Later in the article, one use case is explained in detail using Microsoft business applications.

- ML-based demand and sales forecasting
- Personalized product recommendations
- Price and promotion recommendations to optimize markups and margins
- Inventory optimization with correct stock levels
- Logistics planning workbench and warehouse throughput optimization
- Building a 360° view of consumers
- Consumer insights (sentiment analysis/preferences/social listening) using cognitive services
- Shop-floor yield optimization
- Predictive equipment maintenance in factories
- Predictive lead scoring to improve lead qualification, prioritization, and acquisition

“61% of organizations picked machine learning as their company’s most significant data initiative for next year.” Source: Forbes.com

Dynamics 365 Operations and Azure Machine Learning Studio Demand Forecasting Use Case

Dynamics 365 for Finance and Operations allows you to integrate Azure Machine Learning into your Dynamics environment to predict demand more accurately by infusing more demand planning parameters and considering new statistical models.

1) Historical data: The first and most important step of the process is gathering and preparing the transactional data from Dynamics 365 and providing it to Azure Machine Learning Studio for training the model. Item allocation keys are used to bundle similar products for which the demand-forecasting algorithm should run on historical sales. Navigation to set up similar products in the historical data: Master Planning -> Setup -> Demand Forecasting -> Item Allocation Keys

2) Train your model: Once historical data is loaded, a model needs to be trained for accurate forecasting. In this example we use an R script; however, forecasting models can be built in Python as well. In this scenario, we are using a predefined model available in the Experiment Lab.

Is integration real-time between ML and ERP? Since Dynamics 365 and Azure are both in the Microsoft family, they are easily integrated to allow for real-time results. Navigation to enable this integration: Master Planning -> Setup -> Demand Forecasting -> Demand Forecasting Parameters. On the Azure Machine Learning FastTab, provide the web service key and endpoint received from Machine Learning Studio.

3) Generate a statistical baseline forecast: After completing the setup and configuring the demand forecasting parameters, we generate a statistical baseline forecast.
Navigation to generate the statistical baseline forecast: Master Planning -> Forecasting -> Demand Forecasting -> Statistical Baseline Forecast. Once you click OK, it looks for the best-fit model per the forecasting parameters and generates the forecast from the ML engine.

4) Adjustment and approval: We can now adjust and approve the forecast per business needs, taking into account other factors like market volatility. Master Planning -> Forecasting -> Demand Forecasting -> Adjusted Demand Forecast. Once we authorize the demand forecast, we can run master planning and create planned orders. Each planned order generated reflects master planning parameters, e.g. production process, minimum lead time, lowest unit price, and the safety stock calculation based on the coverage group assigned to the item. Hence, with a few steps, we were able to set up a predefined, AI-infused demand forecasting model in Dynamics 365 to generate forecasts; a simplified stand-alone sketch of what a statistical baseline forecast does is shown at the end of this post. With the Microsoft Dynamics 365 platform, all of this can be customized to fulfill your business needs. Microsoft has provided numerous ML models under Cognitive Services that can be used as a foundation and further trained for customer-specific scenarios. Visionet is a trusted Microsoft Gold partner and has 20+ years of experience delivering projects in retail and CPG. Source: Visionet Systems Blog
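The actual baseline forecast is produced by the Azure ML model configured above. As a rough stand-alone illustration of what a statistical baseline is, here is a seasonal-naive sketch in Python with pandas; the demand figures and the simple growth adjustment are invented for the example and are not the algorithm Dynamics 365 uses.

```python
import pandas as pd

# Illustrative monthly demand history for one item allocation key; in the
# Dynamics scenario this would be historical sales pulled from D365.
history = pd.Series(
    [120, 130, 150, 160, 170, 165, 180, 175, 190, 200, 220, 240],
    index=pd.date_range("2017-01-01", periods=12, freq="MS"),
)

# Seasonal-naive baseline: next year's forecast for each month is last
# year's actual, scaled by a crude recent-growth factor.
growth = history.tail(6).mean() / history.head(6).mean()
forecast = pd.Series(
    (history.values * growth).round(),
    index=history.index + pd.DateOffset(years=1),
)
print(forecast.head())
```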
Where to apply machine learning for supply chain optimization
0
where-to-apply-machine-learning-for-supply-chain-optimization-18a9dcdfe830
2018-06-12
2018-06-12 11:19:48
https://medium.com/s/story/where-to-apply-machine-learning-for-supply-chain-optimization-18a9dcdfe830
false
687
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Retail Technology Trends
Retail technology trends including ERP, Artificial Intelligence, Robotic Process Automation and future tech
94d8c3e06b7
retailtech
10
17
20,181,104
null
null
null
null
null
null
0
null
0
a7362b43f5e7
2018-04-12
2018-04-12 21:44:47
2018-05-14
2018-05-14 17:05:50
1
false
en
2018-05-14
2018-05-14 17:05:50
22
18a9de0bc9bb
3.837736
10
0
0
Here at Synapse, we’re not only committed to building great products but also fair and transparent ones. To that end, we’re working hard to…
4
Machine Learning and Transparency

Here at Synapse, we’re not only committed to building great products but also fair and transparent ones. To that end, we’re working hard to ensure our classification models are free of social biases and are favoring inclusion over exclusion. It goes without saying that ethical questions that are difficult for human beings to reach consensus on will not be easily solved by machine learning models. The difficulty involved, however, is not a reason to shy away from pursuing technical solutions. In fact, this difficulty indicates just how pressing and essential it is that more research and more direct applications are developed. While our solutions may not always be perfect, being transparent about what our models do and why they do it is a necessary first step. In the process, we can help to simplify complexity and to account for everything we build. In future posts, we’ll go into more detail about the technical approaches we are implementing to accomplish this goal. In the meantime, this post provides a brief overview of some recent libraries and theories we feel are particularly promising.

Interpretability

Christoph Molnar’s online book provides a useful overview of current approaches for explaining black box machine learning models. A number of libraries — most of which are open-source — have also been released over the last few years and tend to have relatively straightforward implementations for explaining the predictions of already trained models. LIME (Local Interpretable Model-Agnostic Explanations) is probably the most widely used interpretability library. As the name suggests, it is model-agnostic, meaning it can be used on anything from a polynomial regression model to a deep neural network. LIME’s approach is to perturb most of the features of a single prediction instance — essentially zeroing out these features — and then to test the resulting output. By running this process repeatedly, LIME is able to determine a linear decision boundary for each feature indicating its predictive importance (e.g. which pixels contributed the most to the classification of a specific image). The associated paper provides a more rigorous discussion of the approach, though this post by the authors is probably the best place to start. Skater, another open-source library, incorporates LIME’s local interpretations while also including global explanations. For instance, there is built-in functionality for producing marginal plots (showing the relationship between different pairs of variables) and partial dependence plots (showing the relationship between each variable and the model output). H2O, an open-source machine learning platform, includes various methods for model interpretability along with its more general ML implementations. The authors provide examples and documentation for approaches such as decision tree surrogate models, sensitivity analysis, and monotonic constraints (useful for instances in which you want to ensure that changes to a specific variable result in a continuous increase or decrease in the model’s output). They’ve also written a post describing different interpretability methods in depth. SHAP (SHapley Additive exPlanations) unifies multiple different interpretability methods (including LIME) into a single approach. It does this by mathematically defining a class of additive feature attribution methods, and demonstrates that six different interpretability methods currently in use fall within this class.
See the associated paper for more details.

Bayesian Deep Learning

Bayesian deep learning has emerged as another way to gain more insight into black box models. Rather than explaining individual feature importance for predictions, a Bayesian approach enables one to measure how confident a deep learning model is about its predictions. This is useful on multiple fronts, but one particularly beneficial result is that predictions that are output with a high degree of uncertainty can be set aside for closer manual analysis by a human being. A number of probabilistic programming languages have been released over the last few years, starting with Stan back in 2012. Since then there’s been PyMC3 (running Theano on the backend), Edward (running on TensorFlow), and Pyro (released just last November by Uber’s AI labs and running on PyTorch). Pyro was specifically designed for deep learning applications, and the Edward documentation provides a number of tutorials and videos for Bayesian deep learning, including an example of how to use dropout to approximate Bayesian probabilities. A key paper originally published back in 2015 demonstrates that dropout, a standard method of regularizing deep learning models to prevent overfitting, converges to a Gaussian process and hence can be used to measure model confidence (a minimal sketch of this idea appears at the end of this post).

Fair Machine Learning

Annual conferences such as FAT* (Conference on Fairness, Accountability, and Transparency) have helped bring increasing attention to the need to build more equitable models, while also drawing scholars, researchers, and practitioners from different fields into conversation. A number of open-source fairness libraries have also been released in recent years, though most of them are still in the early stages. Although there is no consensus yet on what measures are most conducive to producing fair outcomes (and of course, much depends on how we define fairness in the first place), a number of compelling criteria and definitions have been proposed. Fairness Measures, Reflections on Quantitative Fairness, and Attacking Discrimination with Smarter Machine Learning all provide useful resources for beginning to think through how best to approach the difficult but essential question of how to build fair models. When we speak of “bias” in machine learning we are usually referring to the mathematical assumptions built into the parameters of a model. It is becoming increasingly urgent, however, that we also consider the other definitions of “bias,” and with them, all the ways our models affect actual human beings, their lives as well as their livelihoods. As a banking platform, we are confident that optimizing our models for transparency and interpretability is not only the right thing to do but will also lead to better, more inclusive products.
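To make the dropout-as-Bayesian-approximation idea concrete, here is a minimal Monte Carlo dropout sketch in PyTorch. It is our own illustration, not code from any of the libraries above: dropout is kept active at inference time, and the spread across stochastic forward passes serves as an uncertainty signal. The architecture and sample count are arbitrary.

```python
import torch
import torch.nn as nn

# A tiny classifier with dropout; the architecture is illustrative only.
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 2),
)

def mc_dropout_predict(model, x, n_samples=50):
    """Run multiple stochastic forward passes with dropout kept active
    and return the mean prediction and its standard deviation."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        preds = torch.stack([
            torch.softmax(model(x), dim=-1) for _ in range(n_samples)
        ])
    return preds.mean(dim=0), preds.std(dim=0)

x = torch.randn(1, 10)
mean, std = mc_dropout_predict(model, x)
# A high std flags inputs the model is uncertain about, which can be
# routed for manual review by a human.
print(mean, std)
```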
Machine Learning and Transparency
150
machine-learning-and-transparency-18a9de0bc9bb
2018-05-17
2018-05-17 06:42:07
https://medium.com/s/story/machine-learning-and-transparency-18a9de0bc9bb
false
964
Operating System For Modern Banking
null
synapsepay
null
SynapseFI
hello@synapsefi.com
synapsefi
BANKING,FINANCE,AI,API
synapsefi
Ethics
ethics
Ethics
7,787
Matt Sims
AI Ethics Engineer @ SynapseFI
d7767e22953e
matt.sims
2
1
20,181,104
null
null
null
null
null
null
0
null
0
dab3b95726e2
2017-11-16
2017-11-16 01:59:24
2017-11-16
2017-11-16 18:58:18
3
false
en
2017-11-16
2017-11-16 18:58:18
3
18ac0b6bdb1f
3.723585
7
0
0
By Nick Kohut, CEO Dash Robotics
3
Why We Decided to Acquire Bots Alive

By Nick Kohut, CEO Dash Robotics

Toys that play with you (at a price everyone can afford)… At Dash Robotics, we’ve known for several years that toys are rapidly moving in a new direction. We see three shifts — every kid has a smartphone, the cost of robotics development has fallen sharply, and artificial intelligence means our toys can play with us. This has led to an explosion of “connected toys,” promising the best of hardware (traditional toys) and software (video games) for kids’ entertainment. At Dash Robotics, our mission is to bring children affordable connected toys that foster education and interactivity. To further that aim, we decided to acquire Bots Alive, a company with some of the most promising tech we’ve seen in the connected toy space. To understand today, you need to look at how the toy world is shifting… Today, over 83% of US homes with children have tablets, and AI is everywhere — from self-driving cars, to unlocking your smartphone, to ordering a pizza via the Amazon Echo. This has changed how kids are playing. Mattel and Sphero have already shipped toys that understand what children are saying, like Barbie Hello Dreamhouse ($299) and Sphero Spiderman ($149). Anki’s Cozmo ($179) uses computer vision to recognize faces and objects. Sadly though, despite this tech revolution, these toys are reserved for an incredibly small and wealthy audience, as all of them are priced at or above $149. That sticker price is out of reach for most Americans.

Bots Alive — A New Platform for AI Toys

At Dash Robotics, our mission is to build the backbone of connected toy technology to produce engaging, innovative and, most importantly, affordable toys. Our latest product, Kamigami Robots, is a perfect example: full of intelligent tech, produced in partnership with Mattel, and priced at $49 each. These principles are why we decided last summer that we had to acquire a company in Austin, TX called Bots Alive, which was breaking new ground in the connected toy space. Bots Alive aligns with the vision we’ve always had — toys that can play with us, play with each other, and amaze us in ways we’ve never seen before. Our platform already solves some of these puzzles, but Bots Alive gets us even closer to that dream. The technology platform of Bots Alive allows the physical world to be programmed like a video game. This magic factor dramatically increases a toy’s capability, personality, and fun, all without raising its per-unit cost. Some of the behaviors that can be built with Bots Alive include:

- A robot seeks out a “carrot” (digital or physical) and shares it with another robot
- Two or more robots autonomously play a game of soccer
- A robot drives too close to a “landmine” and spins out, lights up, and makes a sound
- A robot faces you, and talks to you (perhaps complimenting you on a job well done or chiding you for a mistake)
- A hero robot chases a villain robot
- A villain robot chases a hero robot (oh no!)
- A robot carefully guards a portion of your living room, wary of invaders
- …and many, many more.

How are all these behaviors built on a single system? A combination of computer vision, machine learning, and communication protocols like Bluetooth Low Energy. The platform itself is built on Unity (the world’s most popular game development software) and is compatible with both Android and iOS.
The Tech — Computer Vision, Machine Learning, and More

The Bots Alive computer vision system employs the camera of a smartphone or tablet to track over 15 objects at once, rapidly and accurately, whether the objects are moving or stationary. Using Bluetooth Low Energy, up to 8 robots can be commanded at one time. A 3D representation of the play area is built in the software by tracking the position and orientation of every tracked object. This means that robots are aware of their position and orientation relative to other robots, other objects, and the user. This is what enables robots to chase each other, face the user, or execute numerous other behaviors that make a toy feel “aware” (a toy geometry sketch of this idea appears at the end of this post). Machine learning is what gives the robot toys real personality. Robots in the Bots Alive system can learn from user interaction, and never react the same way to a situation twice. This allows certain robots to be aggressive, or timid, or selfish, all depending on the game they’re playing, how they’ve been treated in the past, and what the user has “taught” them. This opens up a huge world of character building and human-toy interaction that was never before possible.

The Future

It’s undoubtedly true that toys will get smarter and more engaging over time. New technology is giving birth to incredible creativity in the space. At Dash Robotics, we want to provide the best toolset for that creativity. We work with top partners to infuse personality, robotics, and magic into their brands and toys.

Nick Kohut
CEO, Dash Robotics, Inc.
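As a toy illustration of how tracked 2D poses translate into behaviors like chasing or facing the user, here is a small geometry sketch in Python. It is our own simplification, not Bots Alive code; the real system works from the vision system's full 3D representation.

```python
import math

def steer_towards(robot_pos, robot_heading, target_pos):
    """Given a robot's 2D position and heading (radians) and a target's
    position from the tracking system, return the turn angle and distance
    needed to chase the target. Units and coordinates are illustrative."""
    dx = target_pos[0] - robot_pos[0]
    dy = target_pos[1] - robot_pos[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx)
    # Normalize the turn to (-pi, pi] so the robot takes the shortest rotation.
    turn = (bearing - robot_heading + math.pi) % (2 * math.pi) - math.pi
    return turn, distance

turn, dist = steer_towards((0.0, 0.0), 0.0, (1.0, 1.0))
print(f"turn {math.degrees(turn):.0f} degrees, drive {dist:.2f} units")
```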
Why We Decided to Acquire Bots Alive
205
why-we-decided-to-acquire-bots-alive-18ac0b6bdb1f
2018-05-11
2018-05-11 00:04:46
https://medium.com/s/story/why-we-decided-to-acquire-bots-alive-18ac0b6bdb1f
false
841
Inside thoughts on the evolving world of toys and play.
null
DashRobotics
null
Dash Robotics
info@dashrobotics.com
dash-robotics
TOYS,TECH,ROBOTS,STEM,STARTUP
null
Robotics
robotics
Robotics
9,103
Wyatt Butler
null
fdd6f9c2469
wyattbutler
34
33
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-29
2018-04-29 14:02:21
2018-04-30
2018-04-30 08:40:04
9
false
en
2018-05-03
2018-05-03 11:21:44
6
18ad4c39ad56
2.558491
2
0
0
Recently we were informed that the Databricks platform is coming to Azure as a first-class citizen, which led me to come up with a new Big Data…
5
Reddit Comments — Anomalous Days

Recently we were informed that the Databricks platform is coming to Azure as a first-class citizen, which led me to come up with a new big data analytics hands-on challenge. After examining a number of ideas I decided to play with, explore, and investigate the Reddit Comments Data Set.

Mission: Can we answer the following questions:

- What is the day of the week with the highest average comments?
- Can we determine anomalous days, where the number of comments differs from the average by at least two standard deviations?
- Can we understand the reason behind those anomalous days?
- Can we predict the total comments for a specific day?

It seems that in order to answer the above questions we need a data processing pipeline over a Spark job, plus data exploration with basic clustering, anomaly detection, frequent-term analysis, and total-comment prediction using a Spark notebook.

Technology Stack (PaaS): Spark (Databricks), PowerBI (embedded), SQL Server, REST API, Active Directory.

Part 1 — (What I did so far): Basic Exploration Phase:

1. Downloaded the entire 2017 Reddit comments data set in JSON format.
2. Uploaded it to Azure Data Lake Store (ADLS) using Microsoft Azure Storage Explorer.
3. Invoked a new standard Databricks Workspace on Azure (be aware of standard vs premium features).
4. Read it as Spark DataFrames and wrote it back, this time in Parquet format, partitioned by a short date column (YYYY-MM-dd), with three columns added. [Screenshots: user-defined functions; partitioning by day and writing to ADLS as Parquet files; sample data]
5. Grouped comments by each day of 2017 and displayed the number of total comments.
6. Grouped results by day of week, calculated the median and standard deviation, and displayed total comments as a pie chart. [Screenshot: weekday pie chart]
7. Printed a sample of anomalous days, selected by the following criterion: factor * deviation(d) < abs(total_comments(d) - average(week_day(d))). Clarification: a given day d is considered an anomalous day when the total comments written on date d are much higher or lower than the average for that weekday (factor >= 2). [Screenshots: Spark DataFrame filters and transformations of anomalous days; samples of much-higher and much-lower dates]

A pandas sketch of the same anomaly criterion appears at the end of this post. The full Spark notebook code (Scala) can be found here.

Part 2 — Still needs to be implemented:

- Add data sets with holidays and special events and make the correlations.
- Try to better understand the reasons for those anomalous days.
- Display histograms and other interesting graphs with the PowerBI platform.
- Connect Active Directory.
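The notebook itself is in Scala, but the anomaly criterion is easy to sketch in pandas. The daily totals below are fabricated stand-ins for the aggregated Reddit data, and the injected spike simply demonstrates the filter.

```python
import pandas as pd

# Fabricated daily totals standing in for the aggregated Reddit data:
# a flat weekly pattern over 12 weeks, with one injected spike.
totals = [100, 110, 105, 98, 102, 95, 97] * 12
totals[44] = 400  # pretend a major event caused a comment surge
daily = pd.DataFrame({
    "date": pd.date_range("2017-01-01", periods=len(totals)),
    "total_comments": totals,
})
daily["weekday"] = daily["date"].dt.dayofweek

# Per-weekday average and standard deviation.
stats = daily.groupby("weekday")["total_comments"].agg(["mean", "std"])
daily = daily.join(stats, on="weekday")

# Anomalous day d: factor * deviation < |total_comments(d) - weekday average|.
FACTOR = 2
anomalies = daily[
    (daily["total_comments"] - daily["mean"]).abs() > FACTOR * daily["std"]
]
print(anomalies[["date", "total_comments"]])
```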
Reddit Comments — Anomalous Days
2
reddit-comments-anomalies-detection-18ad4c39ad56
2018-05-03
2018-05-03 11:21:45
https://medium.com/s/story/reddit-comments-anomalies-detection-18ad4c39ad56
false
360
null
null
null
null
null
null
null
null
null
Azure
azure
Azure
3,345
Avi Paz
Advanced Analytics and AI Architect | Big Data Engineer | Full-stack Software Developer
33971b79eb68
avraham.paz
37
36
20,181,104
null
null
null
null
null
null