| column | dtype | range / classes |
|---|---|---|
| audioVersionDurationSec | float64 | 0 to 3.27k |
| codeBlock | stringlengths | 3 to 77.5k |
| codeBlockCount | float64 | 0 to 389 |
| collectionId | stringlengths | 9 to 12 |
| createdDate | stringclasses | 741 values |
| createdDatetime | stringlengths | 19 to 19 |
| firstPublishedDate | stringclasses | 610 values |
| firstPublishedDatetime | stringlengths | 19 to 19 |
| imageCount | float64 | 0 to 263 |
| isSubscriptionLocked | bool | 2 classes |
| language | stringclasses | 52 values |
| latestPublishedDate | stringclasses | 577 values |
| latestPublishedDatetime | stringlengths | 19 to 19 |
| linksCount | float64 | 0 to 1.18k |
| postId | stringlengths | 8 to 12 |
| readingTime | float64 | 0 to 99.6 |
| recommends | float64 | 0 to 42.3k |
| responsesCreatedCount | float64 | 0 to 3.08k |
| socialRecommendsCount | float64 | 0 to 3 |
| subTitle | stringlengths | 1 to 141 |
| tagsCount | float64 | 1 to 6 |
| text | stringlengths | 1 to 145k |
| title | stringlengths | 1 to 200 |
| totalClapCount | float64 | 0 to 292k |
| uniqueSlug | stringlengths | 12 to 119 |
| updatedDate | stringclasses | 431 values |
| updatedDatetime | stringlengths | 19 to 19 |
| url | stringlengths | 32 to 829 |
| vote | bool | 2 classes |
| wordCount | float64 | 0 to 25k |
| publicationdescription | stringlengths | 1 to 280 |
| publicationdomain | stringlengths | 6 to 35 |
| publicationfacebookPageName | stringlengths | 2 to 46 |
| publicationfollowerCount | float64 | (no range given) |
| publicationname | stringlengths | 4 to 139 |
| publicationpublicEmail | stringlengths | 8 to 47 |
| publicationslug | stringlengths | 3 to 50 |
| publicationtags | stringlengths | 2 to 116 |
| publicationtwitterUsername | stringlengths | 1 to 15 |
| tag_name | stringlengths | 1 to 25 |
| slug | stringlengths | 1 to 25 |
| name | stringlengths | 1 to 25 |
| postCount | float64 | 0 to 332k |
| author | stringlengths | 1 to 50 |
| bio | stringlengths | 1 to 185 |
| userId | stringlengths | 8 to 12 |
| userName | stringlengths | 2 to 30 |
| usersFollowedByCount | float64 | 0 to 334k |
| usersFollowedCount | float64 | 0 to 85.9k |
| scrappedDate | float64 | 20.2M to 20.2M |
| claps | stringclasses | 163 values |
| reading_time | float64 | 2 to 31 |
| link | stringclasses | 230 values |
| authors | stringlengths | 2 to 392 |
| timestamp | stringlengths | 19 to 32 |
| tags | stringlengths | 6 to 263 |
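These per-column statistics can be recomputed from the raw dump. A minimal pandas sketch, assuming the dataset has been exported to CSV (the filename medium_articles.csv is a placeholder):

```python
# Recompute the schema statistics above from a CSV export of the dataset.
# "medium_articles.csv" is a placeholder filename; substitute your own export.
import pandas as pd

df = pd.read_csv("medium_articles.csv")

for col in df.columns:
    s = df[col]
    if str(s.dtype) in ("float64", "bool"):
        # Numeric/boolean columns: report the min/max range.
        print(f"{col}: {s.dtype}, min={s.min()}, max={s.max()}")
    else:
        # String columns: report the length range and number of distinct values.
        lengths = s.dropna().astype(str).str.len()
        print(f"{col}: string, lengths {lengths.min()} to {lengths.max()}, "
              f"{s.nunique()} distinct values")
```

The examples below are individual records from the dataset, listed field by field.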
Example 1
audioVersionDurationSec: 0 | codeBlock: null | codeBlockCount: 0 | collectionId: ff923d683f1e
createdDate: 2017-12-13 | createdDatetime: 2017-12-13 14:26:12 | firstPublishedDate: 2017-12-15 | firstPublishedDatetime: 2017-12-15 11:46:34
imageCount: 6 | isSubscriptionLocked: false | language: fr | latestPublishedDate: 2017-12-15 | latestPublishedDatetime: 2017-12-15 13:30:20
linksCount: 10 | postId: 1d2175c5e33a | readingTime: 3.199057 | recommends: 2 | responsesCreatedCount: 0 | socialRecommendsCount: 0
subTitle: If I say Mark Darcy, Love Actually, and Grandma? You answer… Christmas jumper! So whether you're one of those people who idolize Mark…
tagsCount: 5
text: The 5 events where you can show off your super — ugly — Christmas jumper this weekend 🎄
If I say Mark Darcy, Love Actually, and Grandma? You answer… Christmas jumper! So whether you're one of those people who idolize Mark Darcy, who watch Love Actually as soon as the temperature approaches 5°C, or who simply have a grandmother who loves to knit, rest assured: the Christmas jumper is no longer off-limits. Better yet, you'll be able to wear your little reindeer Rudolph — light-up nose optional — with pride. Mark ❤ OK, let's be honest… Santa Claus doesn't exist, and Ugly Christmas Jumpers aren't welcome everywhere. Don't worry, we've put together a small selection of the events where you'll be able to show off with your Christmas jumper on your back!
1. Brain & Barbi(e)turix XMas Party. Feel like getting some air after this long week? L'Aérosol opens its doors to Brain Magazine and Barbi(e)turix for a family Christmas party. On the program: roller dance, street food, DJ sets, and mulled wine — not to be spilled on your jumper, of course. To spend an evening amid the scent of fir trees and mulled wine, it's this way.
2. Jumpers & food trucks night, Christmas special. You spent hours knitting yourself THE kitschiest Christmas jumper? Baubles, velvet, Santa, fairy lights: you gave it everything. Here's a way to reward your efforts. The Pulls & Food Trucks night will let you become the Kate Moss of the Christmas jumper by strutting in front of a jury with sharp critiques. Don't worry: if you lose, you can always drown your sorrows in craft beers or a good big raclette.
3. Christmas market at the Ulule Boutique. After that crushing victory (or not) in the ugliest Christmas jumper contest, it's high time to expose it to the still-innocent eyes of the Parisian population. What better place for that than Ulule's Christmas market? To kill two birds with one stone, you can also complete your gift mission there: crafts, fashion, board games, design, all Made in France!
4. (Ugliest) Christmas Jumper Night. The event couldn't be more obvious. The Christmas Jumper Night at Les Canaux is THE place to be this Saturday. You'll be able to display your love for your Christmas jumper with as much discretion as the royal family and, brace yourself, be rewarded for it. Yes, yes, your sublime jumper can win you 2 shooters, enough to get you in the mood for the Christmas carols and then move on to our 3rd event!
5. Le Gros Zboul de Noël: Love Specs x OFNI x Musart. Organized by the merry pranksters of Love Specs with the help of OFNI and the vibe of Musart, the Gros Zboul de Noël will put hearts in your eyes 😍 Photo booth, "Camp d'Enguirlandement" (Christmas jumper + garland: the best combo for keeping a low profile), big food, and big sound are on the program. The bonus: Love Specs donates 10% of the night's profits to the NGO Love Support Unite. Spread the love, the event is here!
So, your jumper, what does it look like? At Zyl, we're super curious to see your Christmas jumpers, so we've set up an album for you to share your woolen masterpiece with us here (from your smartphone) 🎅 PS: the author of this article is currently wearing reindeer ears. The little Christmas reindeer at Zyl ❤
title: The 5 events where you can show off your super — ugly — Christmas jumper this weekend 🎄
totalClapCount: 51 | uniqueSlug: les-5-évènements-où-exhiber-votre-super-moche-pull-de-noël-ce-week-end-1d2175c5e33a
updatedDate: 2018-03-30 | updatedDatetime: 2018-03-30 13:59:43
url: https://medium.com/s/story/les-5-évènements-où-exhiber-votre-super-moche-pull-de-noël-ce-week-end-1d2175c5e33a
vote: false | wordCount: 596
publicationdescription: Thoughts, stories & ideas about Zyl. The first AI-powered photo assistant that manages your photos for you and with you, privately and safely. Free on iOS and Android. https://zyl.ai
publicationdomain: null | publicationfacebookPageName: zylapp | publicationfollowerCount: null | publicationname: Zyl-Story | publicationpublicEmail: contact@zyl.ai | publicationslug: comet-app
publicationtags: MOBILE APP DEVELOPMENT,PHOTO SHARING,TECHNOLOGY,ARTIFICIAL INTELLIGENCE | publicationtwitterUsername: zylapp
tag_name: Christmas | slug: christmas | name: Christmas | postCount: 17,859
author: Camille Guilleminot | bio: null | userId: 70bae5e454cb | userName: camille_57769
usersFollowedByCount: 1 | usersFollowedCount: 1 | scrappedDate: 20,181,104
claps: null | reading_time: null | link: null | authors: null | timestamp: null | tags: null
Example 2
audioVersionDurationSec: 0 | codeBlock: null | codeBlockCount: 0 | collectionId: null
createdDate: 2018-06-27 | createdDatetime: 2018-06-27 10:55:35 | firstPublishedDate: 2018-06-27 | firstPublishedDatetime: 2018-06-27 11:36:46
imageCount: 1 | isSubscriptionLocked: false | language: en | latestPublishedDate: 2018-06-27 | latestPublishedDatetime: 2018-06-27 11:38:40
linksCount: 3 | postId: 1d2250cf9c7a | readingTime: 2.743396 | recommends: 0 | responsesCreatedCount: 0 | socialRecommendsCount: 0
subTitle: Retailers that don’t carry enough inventory suffer from low sales revenue, while retailers with too much stock suffer from too much…
tagsCount: 4
text: Tracking, Reporting, and Forecasting Retail Needs
Retailers that don’t carry enough inventory suffer from low sales revenue, while retailers with too much stock suffer from too much overhead. Both types of miscalculation take a heavy toll on profits, which means that prompt and accurate tracking, reporting, and forecasting of consumer demand is critical for retail success. ERP solutions track transaction information across retail channels in real time, instantly generate and deliver reports and performance metrics, and intelligently forecast future demand curves with unsurpassed agility. After implementing an ERP platform, decision makers in your retail business will gain a much richer understanding of how effectively each department and business unit in your organization contributes to order fulfilment and customer service, both individually and in coordination with each other. ERP systems enhance all three of these critical processes: tracking, reporting, and forecasting. This post considers how ERP solutions augment each of them.
Tracking
From financial management and accounting to human resources, payroll, and employee scheduling, every aspect of your retail business is important. With an ERP solution, retailers can easily keep track of not just their sales transactions and stock levels, but every aspect of their business operations. ERP systems track several crucial retail processes, including:
• Sales transactions across all physical and digital retail channels
• Order fulfilment, including cross-channel fulfilment and delivery tracking
• Customer interaction via social media, customer service, etc.
• The effect of promotions, advertising, and marketing campaigns
• Supply chain and distribution costs, replenishment schedules, etc.
Retailers can have their ERP deployment customized even further to track specific activities, variables, and KPIs that are relevant to their particular line of business (apparel & footwear, consumer goods, etc.). Industry-focused deployments of this kind keep track of metrics and relationships using terminology that’s specific to your trade.
Reporting
On its own, tracking information doesn’t offer much value. After it has been collected, your business data needs to be converted into clear, actionable information. Business intelligence (BI) solutions facilitate informed decision making and give context to your unstructured data. ERP and BI capabilities work in tandem: your BI software analyzes data from your ERP system and presents this single version of the truth as easy-to-understand performance metrics and reports. Some retail businesses make the mistake of skipping BI integration as a cost-cutting measure, because they think BI is an optional activity. This is certainly not the case, and BI should be a core component in your ERP implementation from day one. Better planning is a primary motivating factor behind implementing an ERP solution. After all, that’s what the “P” stands for!
Forecasting
Tracking information is necessary for generating business reports, and reporting is indispensable for understanding how well your business is performing. However, the most successful retail brands understand the limitations of descriptive reports, and insist on analytical tools that can generate projections and perform advanced forecasting. Without such forward-looking analytics, organizations remain firmly entrenched in a passive mode of thinking, and often fail to react to emerging trends in time to capitalize on important opportunities.
BI-enhanced ERP systems use real-time data to automatically perform accurate predictive analyses. Some advanced ERP platforms are even capable of autonomous decision making, and can be authorized to make some choices based on this forecasting without requiring any human intervention. Sometimes a machine that isn’t susceptible to human biases is better for making impartial decisions based on pure quantitative analysis. Together, ERP and BI can track and anticipate issues brewing on the horizon, like supplier shortages. Your ERP system tracks each supplier’s consistency in pricing and quality in real time. As soon as it detects significant changes in a vendor’s behavior, your BI solution will raise a red flag, allowing you to make any necessary procurement changes before you encounter inventory shortages.
Conclusion
The right ERP solution for retailers adds BI capabilities to its core ERP platform, delivering actionable insights, reports, and predictive analyses that you can use to make informed, data-driven decisions. This article was originally published on Visionet Systems’ blog.
title: Tracking, Reporting, and Forecasting Retail Needs
totalClapCount: 0 | uniqueSlug: tracking-reporting-and-forecasting-retail-needs-1d2250cf9c7a
updatedDate: 2018-06-27 | updatedDatetime: 2018-06-27 11:38:40
url: https://medium.com/s/story/tracking-reporting-and-forecasting-retail-needs-1d2250cf9c7a
vote: false | wordCount: 674
publicationdescription: null | publicationdomain: null | publicationfacebookPageName: null | publicationfollowerCount: null | publicationname: null | publicationpublicEmail: null | publicationslug: null | publicationtags: null | publicationtwitterUsername: null
tag_name: Erp Software | slug: erp-software | name: Erp Software | postCount: 3,031
author: Retail Technology Trends | bio: Retail technology trends including ERP, Artificial Intelligence, Robotic Process Automation and future tech | userId: 94d8c3e06b7 | userName: retailtech
usersFollowedByCount: 10 | usersFollowedCount: 17 | scrappedDate: 20,181,104
claps: null | reading_time: null | link: null | authors: null | timestamp: null | tags: null
Example 3
audioVersionDurationSec: 0 | codeBlock: null | codeBlockCount: 0 | collectionId: null
createdDate: 2017-11-15 | createdDatetime: 2017-11-15 15:12:56 | firstPublishedDate: 2017-11-15 | firstPublishedDatetime: 2017-11-15 21:55:41
imageCount: 16 | isSubscriptionLocked: false | language: en | latestPublishedDate: 2017-11-20 | latestPublishedDatetime: 2017-11-20 01:25:20
linksCount: 5 | postId: 1d232ec13e5a | readingTime: 2.965094 | recommends: 1 | responsesCreatedCount: 0 | socialRecommendsCount: 0
subTitle: How to quickly set up Amazon Web Services to begin a deep learning project
tagsCount: 5
text: Dive Straight Into Deep Learning with This Simple AWS Set-Up
How to quickly set up Amazon Web Services to begin a deep learning project. While there are countless blog posts and tutorials on how to set up Amazon Web Services, it can still be a daunting task for those inexperienced with the terminal or the command prompt who just want to get straight into using deep learning models. I recommend working through fast.ai, since there are many deep learning libraries, lessons, and tutorials anyone could use. I was fortunate to be in Jeremy Howard’s deep learning course, where he teaches applied deep learning methods at the Data Institute. This approach uses the fast.ai Amazon Machine Image (AMI) in AWS, which contains multiple scripts and pre-installed packages to get straight into implementing your own deep learning model.
1. Create an AWS account.
2. Create an SSH key from the command prompt (Terminal for Macs). Type $ ssh-keygen to create your public key. This will generate a public RSA key pair (~/.ssh/id_rsa.pub). Copy and save it to a file in your directory to retrieve it later, like this: $ cp ~/.ssh/id_rsa.pub /______ (enter the filename to retrieve later).
3. On the AWS console, go to EC2. On the left-side menu, select Key Pairs under Network & Security, create a key pair, and import the key pair file with the ~/.ssh/id_rsa.pub you created previously.
4. Set up your instance by clicking “Instances” on the left menu and launch your instance.
5. Choose your AMI. The fast.ai AMIs (fast-part1v2-p2 recommended) contain pre-installed libraries and Jupyter Notebook.
6. Choose an instance type (p2.xlarge recommended) and launch!
7. When your instance state is green, copy the IPv4 Public IP. (Make sure to stop your instance when you are done using it to avoid accruing extra charges.) If nothing is there, click on “Elastic IP” under Network & Security on the left-side menu, allocate a new address to your instance, then return to Instances to copy the IPv4 Public IP.
8. When your instance is green (on), type ssh ubuntu@[Insert IPv4 Public IP] -L8888:localhost:8888 in the terminal.
9. Get into the fastai directory with $ cd fastai and type $ jupyter notebook. Copy and paste the given URL. You are now ready to begin using the fastai libraries and notebooks. Done.
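The console steps above can also be scripted. Here is a minimal boto3 sketch of the same launch sequence; it is not part of the original tutorial, and the AMI ID, key pair name, and region are placeholders (fast.ai AMI IDs vary by region):

```python
# Minimal boto3 sketch of the console steps above (not the tutorial's method).
# AMI_ID and KEY_NAME are placeholders: look up the fast.ai AMI for your region
# and use the key pair name you imported in the EC2 console.
import boto3

ec2 = boto3.resource("ec2", region_name="us-west-2")  # pick your region

instances = ec2.create_instances(
    ImageId="ami-XXXXXXXX",       # placeholder for the fast.ai AMI ID
    InstanceType="p2.xlarge",     # GPU instance type recommended above
    KeyName="my-imported-key",    # placeholder key pair name
    MinCount=1,
    MaxCount=1,
)
instance = instances[0]
instance.wait_until_running()     # block until the instance state is "running"
instance.reload()                 # refresh attributes to pick up the public IP
print("Connect with: ssh ubuntu@%s -L8888:localhost:8888"
      % instance.public_ip_address)
```

The printed ssh command matches step 8 above; remember to stop the instance when done, as the article warns.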
title: Dive Straight Into Deep Learning with This Simple AWS Set-Up
totalClapCount: 2 | uniqueSlug: dive-straight-into-deep-learning-with-this-simple-aws-set-up-1d232ec13e5a
updatedDate: 2018-05-15 | updatedDatetime: 2018-05-15 02:16:28
url: https://medium.com/s/story/dive-straight-into-deep-learning-with-this-simple-aws-set-up-1d232ec13e5a
vote: false | wordCount: 375
publicationdescription: null | publicationdomain: null | publicationfacebookPageName: null | publicationfollowerCount: null | publicationname: null | publicationpublicEmail: null | publicationslug: null | publicationtags: null | publicationtwitterUsername: null
tag_name: Deep Learning | slug: deep-learning | name: Deep Learning | postCount: 12,189
author: Jean-Carlos Paredes | bio: Data Scientist | userId: 54c13ba8f199 | userName: jeancarlos.paredes
usersFollowedByCount: 57 | usersFollowedCount: 4 | scrappedDate: 20,181,104
claps: null | reading_time: null | link: null | authors: null | timestamp: null | tags: null
Example 4
audioVersionDurationSec: 0 | codeBlock: null | codeBlockCount: 0 | collectionId: f0600f7cc346
createdDate: 2018-05-16 | createdDatetime: 2018-05-16 07:43:48 | firstPublishedDate: 2017-12-19 | firstPublishedDatetime: 2017-12-19 18:01:56
imageCount: 1 | isSubscriptionLocked: false | language: en | latestPublishedDate: 2018-05-16 | latestPublishedDatetime: 2018-05-16 14:26:40
linksCount: 17 | postId: 1d24a143faa7 | readingTime: 4.845283 | recommends: 4 | responsesCreatedCount: 0 | socialRecommendsCount: 0
subTitle: by Brian Edwards | Originally published December 19, 2017 on the Chilmark Research blog.
tagsCount: 5
text: FDA Guidance on Clinical Decision Support: Peering Inside the Black Box of Algorithmic Intelligence
by Brian Edwards | Originally published December 19, 2017 on the Chilmark Research blog.
The FDA tries to keep up with rapid technological advancements in the healthcare IT/data science space. Key takeaways:
• FDA draft guidance on its approach to regulating clinical decision support products falls short on specificity around artificial intelligence.
• For applications with data originating from medical devices, the FDA will continue its oversight, AI or not (e.g., medical image processing).
• Medical applications that rely on “black box” algorithms unable to be fully understood by the end user (basically all AI) will be regulated, posing challenges for AI adoption. However, these are not insurmountable, as evidenced by multiple FDA approvals of AI products in the last few years.
In December, the FDA finally released its long-awaited Draft Guidance on Clinical Decision Support. Following the release, STAT News mentioned that experts were disappointed because the agency gave no insight into how it views artificial intelligence. Indeed, a “Command+F” search for “Artificial Intelligence” returns zero results. However, it is unnecessary for the agency to use the term “AI” to provide guidance on how it will consider associated technologies and use cases. The FDA does use the word “algorithm” in its guidance, and although algorithms can vary in sophistication, much of today’s AI technology is based on algorithmic intelligence. The suggestion that the FDA did not address the topic because it failed to explicitly mention AI within the document shows the challenges for those unfamiliar with this complex subject. In fact, the FDA has been reviewing technology with AI components (e.g., rule-based systems, machine learning) for more than a decade. RADLogics received FDA approval for their machine learning application in 2012, widely considered the first AI for clinical use approved by the agency. HealthMyne received FDA clearance for its imaging informatics platform in early 2016. In 2017, at least half a dozen companies received FDA clearance for machine learning applications, including Arterys, the first company to receive approval for a deep learning application, and Butterfly Network, which had 13 different applications approved along with its “ultrasound on a chip” device in late October. Others to receive clearance in 2017 include Quantitative Insights, Zebra Medical Vision, EnsoData, and iCAD. The first indirect reference to products using AI comes in the first paragraph of Section III, in which the agency begins addressing specific examples of companies that will not be exempted from review. Note that the first bolded sentence below is inclusive of nearly every application.
“Under section 520(o)(1)(E), software functions that are intended to acquire, process, or analyze a medical image, a signal from an in vitro diagnostic device, or a pattern or signal from a signal acquisition system remain devices and therefore continue to be subject to FDA oversight. Products that acquire an image or physiological signal, process or analyze this information, or both, have been regulated for many years as devices. Technologies that analyze those physiological signals and that are intended to provide diagnostic, prognostic and predictive functionalities are devices. These include, but are not limited to, in vitro diagnostic tests, technologies that measure and assess electrical activity in the body (e.g., electrocardiograph (ECG) machines and electroencephalograph (EEG) machines), and medical imaging technologies. Additional examples include algorithms that process physiologic data to generate new data points (such as ST-segment measurements from ECG signals), analyze information within the original data (such as feature identification in image analysis), or analyze and interpret genomic data (such as genetic variations to determine a patient’s risk for a particular disease).”
The word “algorithm” is used four times in the document, and in each instance the use provides significant insight into the agency’s thinking. The word is first used in the second highlighted sentence above, which provides general examples of algorithms that will continue to be reviewed as medical devices. The guidance goes on in a later section to provide the following more specific examples of algorithms that continue to require premarket approval:
“Software intended for health care professionals that uses an algorithm undisclosed to the user to analyze patient information (including noninvasive blood pressure (NIBP) monitoring systems) to determine which anti-hypertensive drug class is likely to be most effective in lowering the patient’s blood pressure.”
“Software that analyzes a patient’s laboratory results using a proprietary algorithm to recommend a specific radiation treatment, for which the basis of the recommendation [is] unavailable for the HCP to review.”
The agency continues to describe the underlying features that must be present for an algorithmically driven CDS recommendation to be exempted from review; specifically, a company must clearly state and make available:
• The purpose or intended use of the software function;
• The intended user (e.g., ultrasound technicians, vascular surgeons);
• The inputs used to generate the recommendation (e.g., patient age and gender); and
• The rationale or support for the recommendation.
The first three would seem to be reasonable enough for developers of AI products to provide users, but the fourth is basically impossible. The “black box” nature of most AI systems built using machine learning methods means even leading AI experts cannot unpack an algorithm and fully understand the rationale for a given recommendation, even with full transparency and access to the training data (which is no trivial matter in and of itself). This is especially clear when taking into consideration additional guidance provided elsewhere in the document regarding software functions that will require oversight: a practitioner would be unable to independently evaluate the basis of a recommendation if the recommendation were based on non-public information or information whose meaning could not be expected to be independently understood by the intended health care professional user. Frankly, the agency provided great insight and clarity if you read the document as inclusive of all known AI technologies today. The conclusion is clear that nearly all AI will remain under FDA oversight. However, there are terms that could be used in the final guidance that aren’t buzzwords, such as machine learning, supervised learning, and unsupervised learning, among others.
It would be useful for the agency to offer meaningful reference to machine learning and/or deep learning among the examples of potential use cases that remain under oversight as medical devices. In Chilmark’s annual predictions for 2018, we forecast that two dozen companies will receive FDA clearance for products using AI, machine learning, deep learning and computer vision, which would mark a 400-percent increase from 2017. It would be helpful if the agency would create a dedicated channel for engaging companies developing AI products and perhaps even provide guidance on how they evaluate training data sets. Founded in 2007, Chilmark Research is a global research and advisory firm that is solely focused on the market for healthcare IT solutions. Everything we do is based on our core belief that healthcare information technology (HIT) plays a crucial role in improving the quality and efficiency of care. We foster the effective adoption, deployment, and use of HIT by providing objective, high-quality research into technologies with the greatest potential to improve care. This laser-sharp focus allows us to provide our clients with the most in-depth and accurate research on the critical technology and adoption trends occurring throughout the healthcare sector.
title: FDA Guidance on Clinical Decision Support: Peering Inside the Black Box of Algorithmic Intelligence
totalClapCount: 79 | uniqueSlug: fda-guidance-on-clinical-decision-support-peering-inside-the-black-box-of-algorithmic-intelligence-1d24a143faa7
updatedDate: 2018-05-29 | updatedDatetime: 2018-05-29 12:25:31
url: https://medium.com/s/story/fda-guidance-on-clinical-decision-support-peering-inside-the-black-box-of-algorithmic-intelligence-1d24a143faa7
vote: false | wordCount: 1,231
publicationdescription: Chilmark Research provides an objective perspective and framework to understand the complex and rapidly changing healthcare IT market. Our mission: Improve the healthcare experience for all stakeholders.
publicationdomain: null | publicationfacebookPageName: null | publicationfollowerCount: null | publicationname: Chilmark Research | publicationpublicEmail: info@chilmarkresearch.com | publicationslug: chilmark-research
publicationtags: HEALTHCARE,INFORMATION TECHNOLOGY,ANALYTICS,HEALTH TECHNOLOGY,INNOVATION | publicationtwitterUsername: chilmarkHIT
tag_name: Artificial Intelligence | slug: artificial-intelligence | name: Artificial Intelligence | postCount: 66,154
author: Brian T. Edwards | bio: Analyst covering AI, Machine Learning, NLP & Big Data in healthcare and medicine at Chilmark Research. | userId: 7acb6499bca4 | userName: briantedwards
usersFollowedByCount: 828 | usersFollowedCount: 2,243 | scrappedDate: 20,181,104
claps: null | reading_time: null | link: null | authors: null | timestamp: null | tags: null
Example 5
audioVersionDurationSec: 0 | codeBlock: null | codeBlockCount: 0 | collectionId: 863f502aede2
createdDate: 2017-12-13 | createdDatetime: 2017-12-13 20:36:09 | firstPublishedDate: 2017-12-13 | firstPublishedDatetime: 2017-12-13 20:37:52
imageCount: 2 | isSubscriptionLocked: false | language: en | latestPublishedDate: 2017-12-14 | latestPublishedDatetime: 2017-12-14 20:16:33
linksCount: 1 | postId: 1d25e9fffa2b | readingTime: 1.88522 | recommends: 2 | responsesCreatedCount: 0 | socialRecommendsCount: 0
subTitle: Last month, Chinese internet giant Tencent released its Internet Technology Innovation Whitepaper, which opened with a letter to company…
tagsCount: 3
text: Editor’s Pick: Tencent’s 2017 Internet Whitepaper
Last month, Chinese internet giant Tencent released its Internet Technology Innovation Whitepaper, which opened with a letter to company partners from CEO Huateng Ma stating, “In the past year our society underwent a comprehensive digitalization process. […] Companies, public service departments, education departments, scientific research institutions, non-profit organizations, and cultural enterprises from all walks of life are eager to hop on the express train of digital transformation.” In Ma’s view, digitalization is removing barriers between industries and regions. Public and private sectors can now partner to form a new kind of ‘digital ecology’. As the cloud, big data, and AI create a new infrastructure that powers new types of social governance, seven trends have emerged.
Report Highlights
➤ In the past 40 years, society has witnessed the development of IT, the internet and mobile internet, and AI. Tencent believes we are experiencing unprecedented simultaneous cross-sector technological development.
➤ Government policy encourages AI research, commercialization, and implementation. Since 2012, the Chinese government has released over 50 white papers detailing AI development strategies. In the Notification of a New Generation of Artificial Intelligence Development Plan, China made clear its goal to achieve international leadership in AI technology by 2030.
➤ Domestic AI development budded when China’s first robotics company, Weilai (Future) Robotics, was founded in Shanghai in 1996. Five years later the first iBot robot startup obtained US$20.4 million in funding from IDG Capital, marking the first AI investment milestone.
➤ By mid-2017, China had 592 AI companies, 23% of the global total, making it the second largest AI industry cluster in the world. Domestic VCs invested a total of US$63.5 billion into 767 startups, 33% of total global investment.
➤ Domestic AI companies are primarily located in Beijing (42%), Guangdong (20%), Shanghai (14%), Zhejiang (14%), and Jiangsu (5%). Major AI application areas are computer vision (146 companies), intelligent robots (125 companies), and natural language processing (92 companies).
➤ China’s 592 AI companies employ over 40,000 employees, but the country still lacks the talent required for building a robust hardware infrastructure.
➤ According to the Wuzhen Index: Global Artificial Intelligence Development Report, China currently holds 15,745 AI patents, while the US holds 26,891 and Japan 14,604. These three countries hold 74% of all patents in the field of artificial intelligence.
Interested readers can find further information here: http://qzs.qq.com/open/video/outlib/2017INTERNET_TECHNOLOGY_INNOVATION_WHITEPAPER.pdf?ugsa=2472690394950271905
title: Editor’s Pick: Tencent’s 2017 Internet Whitepaper
totalClapCount: 2 | uniqueSlug: synced-recommends-tencents-2017-internet-whitepaper-1d25e9fffa2b
updatedDate: 2018-05-07 | updatedDatetime: 2018-05-07 06:34:59
url: https://medium.com/s/story/synced-recommends-tencents-2017-internet-whitepaper-1d25e9fffa2b
vote: false | wordCount: 398
publicationdescription: We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights.
publicationdomain: null | publicationfacebookPageName: SyncedGlobal | publicationfollowerCount: null | publicationname: SyncedReview | publicationpublicEmail: global.sns@jiqizhixin.com | publicationslug: syncedreview
publicationtags: ARTIFICIAL INTELLIGENCE,MACHINE INTELLIGENCE,MACHINE LEARNING,ROBOTICS,SELF DRIVING CARS | publicationtwitterUsername: Synced_Global
tag_name: Artificial Intelligence | slug: artificial-intelligence | name: Artificial Intelligence | postCount: 66,154
author: Synced | bio: AI Technology & Industry Review - www.syncedreview.com || www.jiqizhixin.com || Subscribe: http://goo.gl/Q4cP3B | userId: 960feca52112 | userName: Synced
usersFollowedByCount: 8,138 | usersFollowedCount: 15 | scrappedDate: 20,181,104
claps: null | reading_time: null | link: null | authors: null | timestamp: null | tags: null
Example 6
audioVersionDurationSec: 0 | codeBlock: null | codeBlockCount: 0 | collectionId: ef452b256034
createdDate: 2018-08-29 | createdDatetime: 2018-08-29 10:43:27 | firstPublishedDate: 2018-08-29 | firstPublishedDatetime: 2018-08-29 10:43:55
imageCount: 1 | isSubscriptionLocked: false | language: en | latestPublishedDate: 2018-08-29 | latestPublishedDatetime: 2018-08-29 10:51:17
linksCount: 2 | postId: 1d2722aeb9f8 | readingTime: 2.075472 | recommends: 5 | responsesCreatedCount: 0 | socialRecommendsCount: 0
subTitle: Optimising data classification for matching algorithms.
tagsCount: 5
text: Using Data Classification to Find the Perfect Home for Your Customer.
Optimising data classification for matching algorithms.
“Sequence helped us where no other solutions were available. They handled a completely unstructured dataset, and managed to convert the data into tasks and classify the data as per our requirements. They are a resourceful partner for any machine learning project.” - Armelle M., CEO, Right Home.
Right Home is a real estate agency based in Paris that specialises in house hunting. It’s the perfect solution for those looking to buy an apartment or house in the city. Their customers often already have their requirements in mind but don’t have the time or market knowledge to proceed by themselves. Right Home provides a professional and personal service from the first to the last step of the buying process.
Problem Statement
Manually searching for homes in a market made up of millions of properties is no mean feat. On top of that, many of the websites are not optimised and have poor search functionality. This all adds up to inefficient work that is extremely time-intensive. To overcome this, Right Home uses machine learning to match properties to their clients’ requirements. The challenge that then arose was to prepare a large enough dataset for the model. The goal was to categorise the information from tens of thousands of real estate ads into different classes. Right Home contacted us to figure out how we could help them prepare their unstructured dataset. As with all machine learning models, they needed a high-accuracy outcome to ensure the best quality.
Solution
To provide this quality, we began by onboarding contributors with an accuracy of at least 95%. Also, as Right Home is a French company, we selected French-speaking contributors for the task. We then ran assessments to give them a clear understanding of the real estate industry’s terms and vocabulary before contributors were able to work on tasks. To process the unstructured dataset and redistribute the data points as tasks, we built templates that are custom to each project. Our team was able to deliver an intuitive and user-friendly interface for our contributors, allowing them to work with more clarity and less of the clutter from all the unoptimised real estate websites. These templates are a simple way for us to save time and money for our clients. Close collaboration with Right Home was key to the success of this project. Our account managers were the point of contact that helped us guarantee a smooth experience for both the contributors and the client. Navigating questions and hurdles together allowed us to improve the project outcome as it went along. Through this partnership, we categorised 20,000 ads, for a total of 200,000 data points.
Sequence is an outsourcing service for data science teams. Our products include data annotation, tagging, and classification for all project sizes. We offer a free trial to new customers — learn more at our website: sequence.work.
title: Using Data Classification to Find the Perfect Home for Your Customer.
totalClapCount: 242 | uniqueSlug: using-data-classification-to-find-the-perfect-home-for-your-customer-1d2722aeb9f8
updatedDate: 2018-08-29 | updatedDatetime: 2018-08-29 10:51:17
url: https://medium.com/s/story/using-data-classification-to-find-the-perfect-home-for-your-customer-1d2722aeb9f8
vote: false | wordCount: 497
publicationdescription: Data annotation, tagging, and classification outsourcing. Made for data science teams.
publicationdomain: null | publicationfacebookPageName: sequence | publicationfollowerCount: null | publicationname: sequence.work | publicationpublicEmail: hello@sequence.work | publicationslug: sequencework
publicationtags: ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,BUSINESS SERVICES,COMPUTER VISION,OUTSOURCING | publicationtwitterUsername: sequencework
tag_name: Data Science | slug: data-science | name: Data Science | postCount: 33,617
author: Sequence | bio: Crowdsource human intelligence. Build high-quality datasets for your business or machine learning models. | userId: e5faafc446db | userName: sequencework
usersFollowedByCount: 1 | usersFollowedCount: 8 | scrappedDate: 20,181,104
claps: null | reading_time: null | link: null | authors: null | timestamp: null | tags: null
Example 7
audioVersionDurationSec: 0 | codeBlock: null | codeBlockCount: 0 | collectionId: null
createdDate: 2018-01-02 | createdDatetime: 2018-01-02 18:20:41 | firstPublishedDate: 2018-01-02 | firstPublishedDatetime: 2018-01-02 18:25:45
imageCount: 1 | isSubscriptionLocked: false | language: en | latestPublishedDate: 2018-01-02 | latestPublishedDatetime: 2018-01-02 18:25:45
linksCount: 3 | postId: 1d275f27bc99 | readingTime: 3.388679 | recommends: 0 | responsesCreatedCount: 0 | socialRecommendsCount: 0
subTitle: Written by Michael McDonald, general manager of legal and compliance at Veritone.
tagsCount: 5
text: Year in Review
Written by Michael McDonald, general manager of legal and compliance at Veritone.
AI, in many forms, became mainstream in legal and compliance in 2017: from contract version analytics and enhanced technology-assisted review to predictive coding endorsements from the court system and making media searchable in a highly accurate and efficient manner, all stand out as innovations. That said, leveraging technology to create efficiencies around redundant tasks is always a trend in the legal industry. At the same time, the demand for the right legal team to represent your interests in “bet the company” litigation matters never goes away, and costs become a secondary concern.
Compliance
A big trend is coming from regulatory entities globally. Financial services and other regulated companies are held more accountable for monitoring their employees’ behavior, and receive large fines when they don’t. Gone are the days when people could use excuses such as “we didn’t know” or “we didn’t have the systems in place to do the monitoring in a timely fashion”. To that point, MiFID II, GDPR, and other regulations are creating even more requirements in 2018, as the rapid adoption of WeChat, WhatsApp, and other chat apps in the trading communities draws still more regulation. As the strategic importance of robust ethics and compliance programs increases, so does compliance officers’ mandate to act, and the resources required to accomplish the organization’s compliance goals. While broad progress is being made on this front, it appears to be inconsistent from organization to organization. Conversely, in large regulated companies where compliance has traditionally been managed by separate teams (that sometimes don’t even talk to each other), issues arose as each department had one agenda, while information technology had another, legal teams another, etc. It’s surprising that anything got accomplished in the past, with so many different teams and competing agendas involved. This old way of doing things is broken and everyone knows it.
Legal and compliance
According to columnist Mark Herrmann, corporations had money to spend in 2017, and law firms prospered, but 2018 has a less sunny outlook. He believes we’re close to a recession and we’ll see a decrease in law firm profitability. So what tools can firms put in place to stymie this decline? Will AI help firms by augmenting jobs and allowing workers to hone new skills? Or will we see a general increase across the industry in terms of profit, employee retention, etc.? In the downturn of 2008, many global law firms prospered because when things go bad, litigation goes way up. Also, there will never be a time when companies are not willing to pay top dollar for the right team of lawyers to represent them in the most important matters. That said, the legal industry has efficiencies that can be leveraged during a downturn to weather the storm, whether it’s leveraging AI to decrease the manual work required on redundant tasks, or leveraging outsourced attorneys and support personnel to provide support only when needed. In legal and compliance, media will continue to become more searchable in a highly accurate way. Combine those results with text-based documents to break down the barriers of a siloed approach to review and surveillance, and practices will be armed with more evidence and data than ever before.
Currently, attorneys, regulators, and compliance teams are missing key information in the data they are reviewing, because up until now the data has been stored, processed, and reviewed in separate systems. New technology offers a “single pane of glass”, where you can see exactly what a custodian typed, talked about, video-conferenced, or video-chatted, all in one location, in real time. This will be key to unlocking the truth of what happened. Further, you will see advances that will help financial and energy trader customers break down language barriers and perform live trades across multiple languages, globally, in real time. This will significantly improve revenue and profits for those trading desks by empowering them to speak to any trading desk in the world, regardless of language.
2018 and beyond
New regulations and the regulators are now making extraordinary demands and giving out large fines if demands aren’t met. Companies that want to be proactive and get ahead of the inevitable fines are now taking action. Companies are starting to break down old barriers and centralize compliance and surveillance functions with a more coordinated and cohesive team. Practitioners and firms will have access to technologies that integrate with existing tools built over the years to manage text-based documents, and that enhance user capabilities by enabling them to manage media and make it searchable within these platforms. For example, Veritone enabled Relativity, the largest legal eDiscovery platform on the market, to manage media within the same workflows as emails and other text-based documents, and empowered its users to run analytics and concept searches across all text and media types. We have done the same thing with other platforms like Actiance and Catelas. Stay tuned, because you will see more of these integrations in 2018.
title: Year in Review
totalClapCount: 0 | uniqueSlug: year-in-review-1d275f27bc99
updatedDate: 2018-05-07 | updatedDatetime: 2018-05-07 02:07:51
url: https://medium.com/s/story/year-in-review-1d275f27bc99
vote: false | wordCount: 845
publicationdescription: null | publicationdomain: null | publicationfacebookPageName: null | publicationfollowerCount: null | publicationname: null | publicationpublicEmail: null | publicationslug: null | publicationtags: null | publicationtwitterUsername: null
tag_name: Law | slug: law | name: Law | postCount: 20,355
author: Veritone | bio: Veritone unlocks the power of AI-based cognitive computing so that unstructured audio and video data can be transformed into actionable intelligence. | userId: b6a2fc148286 | userName: veritoneinc
usersFollowedByCount: 19 | usersFollowedCount: 16 | scrappedDate: 20,181,104
claps: null | reading_time: null | link: null | authors: null | timestamp: null | tags: null
Example 8
audioVersionDurationSec: 0 | codeBlock: null | codeBlockCount: 0 | collectionId: f772c66cd492
createdDate: 2018-07-31 | createdDatetime: 2018-07-31 19:56:56 | firstPublishedDate: 2018-07-31 | firstPublishedDatetime: 2018-07-31 20:07:40
imageCount: 4 | isSubscriptionLocked: true | language: en | latestPublishedDate: 2018-08-01 | latestPublishedDatetime: 2018-08-01 03:51:52
linksCount: 2 | postId: 1d285a8ab75 | readingTime: 7.069811 | recommends: 1 | responsesCreatedCount: 0 | socialRecommendsCount: 0
subTitle: My vision is to flip gender equity on its head — helping to realize gender equity in our lifetime rather than the 217 years forecasted by…
tagsCount: 5
text: Artificial Intelligence Will Help Us Close The Gender Equity Gap, With Katica Roy, CEO of Pipeline
My vision is to flip gender equity on its head — helping to realize gender equity in our lifetime rather than the 217 years forecasted by the World Economic Forum. Some believe gender equity is a social issue; however, data has shown it’s a tremendous economic opportunity. While Pipeline actively works to eradicate gender inequity and increase financial performance, I work to spur people to think differently about the opportunities that present themselves once we collectively close the gender equity gap. The problem of gender bias is an expensive one. In fact, it costs the U.S. $2 trillion in lost GDP, and a solution to this problem would increase the economic opportunity for all. Pipeline marries economic gains and gender equity — taking rich macroeconomic research, driving it down to the company’s microeconomic level, and producing an actionable track to deliver gender equity coupled with improved financial performance. Our goal is to make gender equity attainable in our lifetime — and we have.
I had the pleasure of interviewing Katica Roy, CEO and founder of Pipeline. Katica is an award-winning business leader and warrior for gender equity in our lifetime. She was a 2018 Colorado Governor’s Fellow and was also named a Luminary by the Colorado Technology Association, recognizing her as a visionary technology leader in Colorado. Katica was also recently named a finalist for the Denver Business Journal’s 2018 Outstanding Women in Business Awards. Pipeline is an award-winning Denver-based technology company that increases the financial performance of companies by closing the gender equity gap. Pipeline’s proprietary SaaS platform uses artificial intelligence to assess, address, and act against the gender biases costing the U.S. alone $2 trillion. This issue is not just about good sense; this is about dollars. Big dollars that turn heads to create social change.
Thank you so much for doing this with us! What is your “backstory”?
My goal to close the gender gap once and for all is rooted in my family history. I am the daughter and sister of refugees. My family escaped from Hungary after the fall of the 1956 revolution. They lived in a refugee camp in Austria for nearly two months before gaining safe passage to the U.S. by President Eisenhower via Air Force One on Christmas Day 1956. This moment shaped who I am today: the moment that a powerful person used their power to stand forward on behalf of others. My personal duty is to carry that courage forward for others. And it is why I founded Pipeline — to use the opportunities I had to make gender equity a possibility in our lifetime.
Can you share the funniest or most interesting story that happened to you since you began leading your company?
Donna Morton, founder and CEO of Change Finance, asked if I would join her team to ring the opening bell of the New York Stock Exchange on November 7, 2017. It was to celebrate Change Finance’s first ETF going live. Change Finance’s goal is to transform capital markets so that people and the planet are placed on equal footing with profit. I was invited to join the team because of Pipeline’s commitment to closing the gender gap. Joining us to ring the opening bell were individuals who understand the economic potential of closing the gender gap. It was remarkable to be part of that moment. (Photo: Noah Berg Photography)
What do you think makes your company stand out? Can you share a story?
In honor of the work that Pipeline has done to close the gender equity gap, Colorado Governor John Hickenlooper renamed April 10 Equity for All™ Day in Colorado. The Proclamation was presented by Lt. Gov. Donna Lynne at Pipeline’s Equity for All™ event on April 10, 2018 in Denver. After the declaration, attendees heard from Ryan Harris, former Denver Bronco and Super Bowl 50 champion, about the importance of male voices in the journey to achieve gender parity. The event concluded with a “time to parity” announcement featuring the release of Pipeline’s v.3 platform. Our Equity for All™ event and the proclamation affirmed Pipeline’s impact in achieving gender equity in our lifetime. Pipeline also launched the first gender equity app on Salesforce’s AppExchange. Achieving gender equity in the sales function can massively accelerate our time to gender equity. With our app on the AppExchange, Pipeline identifies and addresses potential unconscious bias or inequity in sales organizations to ensure companies maximize their economic footprint.
Are you working on any new or exciting projects now?
We are part of the inaugural class of the Techstars Impact Accelerator Program. As part of this program, Pipeline is recognized as one of the top one percent of all social impact, for-profit companies. This program propels forward our goal of closing the gender gap. We will complete the program on August 23rd.
What advice would you give to other CEOs or founders to help their employees to thrive?
Personally, I’m not a fan of advice, so instead I’ll share with you what I tell myself. Your brand should not be about you — it should be about helping others. Focus your unique gifts on helping others. Know what you want and be brave enough to get there. Chart your own path and have the courage to take that path despite the obstacles. Obstacles are not about you or your worth. Courage is a muscle — when exercised, it gets stronger. (Photo: Noah Berg Photography)
None of us are able to achieve success without some help along the way. Is there a particular person who you are grateful towards who helped get you to where you are? Can you share a story about that?
My dad; he taught me to be a truth-teller. Being a truth-teller isn’t always popular, but it’s valuable. I am mindful to say things in such a way that people will be most receptive. It’s not about being nice, it’s about being effective. It’s about identifying what the important goal is (the success of the project), and working toward that mutual goal.
How have you used your success to bring goodness to the world?
My vision for Pipeline is to flip gender equity on its head — helping to realize gender equity in our lifetime rather than the 217 years forecasted by the World Economic Forum. Some believe gender equity is a social issue; however, data has shown it’s a tremendous economic opportunity. While Pipeline actively works to eradicate gender inequity and increase financial performance, I work to spur people to think differently about the opportunities that present themselves once we collectively close the gender equity gap. The problem of gender bias is an expensive one. In fact, it costs the U.S. $2 trillion in lost GDP, and a solution to this problem would increase the economic opportunity for all. Pipeline marries economic gains and gender equity — taking rich macroeconomic research, driving it down to the company’s microeconomic level, and producing an actionable track to deliver gender equity coupled with improved financial performance. Our goal is to make gender equity attainable in our lifetime — and we have.
What are your “5 Things I Wish Someone Told Me Before I Became CEO”, and why?
1. Your mindset matters. Your brain is not wired to make you happy; it’s wired to keep you safe. As an entrepreneur and executive, I’m often faced with situations in which my reptile brain kicks in (fight or flight). My goal has been to rewire my brain — I can feel a certain way, but I don’t need to act on it. I meditate regularly to put a pause between how I feel and my decisions and actions.
2. Get clear on your story and your why — and share them broadly. When we launched Pipeline, my co-founder suggested we could springboard it off of my brand. I was against it because I felt that folks wouldn’t care about my story; rather, they would care about Pipeline. I was wrong. My story has given Pipeline more power and made it relatable. It has enabled folks to see themselves in the Pipeline journey.
3. Give first and be of service. I am frequently in situations where I don’t know people, or at least very few people. Instead of being concerned about my own discomfort, I refocus on what I can bring to the situation, how I can be helpful, and whom I can help. This refocusing has allowed me to further embrace the interconnected fabric of the human race.
4. Build on your strengths. I am not good at everything — no one is. Focus on what you do well; those are the gifts that were given to you to improve the world. The world needs your gifts — that’s why you have them, to share them with the world.
5. Hire slow and fire fast. Often there is a tendency to fill open job requisitions quickly, with the belief that openings are wasted time and money. That is true, with a catch: it costs you more if that hire doesn’t work out. In the long run, you’re better off taking your time hiring and vetting candidates. (Photo: Noah Berg Photography)
Can you please give us your favorite “Life Lesson Quote”?
You can choose courage or you can choose comfort. You cannot have both. — Brené Brown
Some of the biggest names in Business, VC funding, Sports, and Entertainment read this column. Is there a person in the world, or in the US, with whom you would love to have a private breakfast or lunch, and why? He or she might just see this :-)
Michelle Obama. Why? When she was on the campaign trail with President Obama, then Senator Obama, she talked about the struggles of working and being a mom. When she was with her kids she was worrying about work, and when she was at work she was worried about her kids. I understood her struggle, and it was one of the first moments I felt someone in the public sphere truly understood what it is like to be a breadwinner mom.
title: Artificial Intelligence Will Help Us Close The Gender Equity Gap, With Katica Roy, CEO of Pipeline
totalClapCount: 1 | uniqueSlug: artificial-intelligence-will-help-us-close-the-gender-equity-gap-with-katica-roy-ceo-of-pipeline-1d285a8ab75
updatedDate: 2018-08-01 | updatedDatetime: 2018-08-01 03:51:52
url: https://medium.com/s/story/artificial-intelligence-will-help-us-close-the-gender-equity-gap-with-katica-roy-ceo-of-pipeline-1d285a8ab75
vote: false | wordCount: 1,688
publicationdescription: Leadership Lessons from Authorities in Business, Film, Sports and Tech. Authority Mag is devoted primarily to sharing interesting feature interviews of people who are authorities in their industry. We use interviews to draw out stories that are both empowering and actionable.
publicationdomain: null | publicationfacebookPageName: Authority-Magazine-2170294859857034 | publicationfollowerCount: null | publicationname: Authority Magazine | publicationpublicEmail: editor@authoritymag.co | publicationslug: authority-magazine
publicationtags: LEADERSHIP,CULTURE,WOMEN IN BUSINESS | publicationtwitterUsername: AuthorityMgzine
tag_name: Gender Equality | slug: gender-equality | name: Gender Equality | postCount: 13,774
author: Yitzi Weiner | bio: A “Positive” Influencer, Founder & Editor of Authority Magazine, CEO of Thought Leader Incubator | userId: 4603eefe656c | userName: rabbiweiner
usersFollowedByCount: 3,470 | usersFollowedCount: 2,579 | scrappedDate: 20,181,104
claps: null | reading_time: null | link: null | authors: null | timestamp: null | tags: null
Example 9
audioVersionDurationSec: 0 | codeBlock: null | codeBlockCount: 0 | collectionId: null
createdDate: 2018-07-10 | createdDatetime: 2018-07-10 21:31:46 | firstPublishedDate: 2018-07-11 | firstPublishedDatetime: 2018-07-11 15:43:39
imageCount: 3 | isSubscriptionLocked: false | language: en | latestPublishedDate: 2018-07-11 | latestPublishedDatetime: 2018-07-11 15:43:39
linksCount: 4 | postId: 1d297059ac76 | readingTime: 5.678302 | recommends: 3 | responsesCreatedCount: 0 | socialRecommendsCount: 0
subTitle: Lately I’ve been finishing up the capstone project for my data science immersive program. I ended up doing some natural language processing…
tagsCount: 3
text: Natural Language Processing with Doc2Vec
Lately I’ve been finishing up the capstone project for my data science immersive program. I ended up doing some natural language processing work on a collection of court opinions, and eventually I got around to considering how Gensim Doc2Vec (https://radimrehurek.com/gensim/models/doc2vec.html) might be useful for classifying documents. In contrast to Word2Vec, I found it surprisingly difficult to track down exactly how one is supposed to use Doc2Vec. Here’s a short summary of what I learned, and my best understanding of what’s going on under the hood with Doc2Vec.
First of all, I had to figure out what exactly a “document vector” is. As a starting point, there are a number of websites out there that explain what a word vector is and how Word2Vec computes them (e.g., http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/). To recap: a word vector is really just the hidden layer of a neural network that was fit to transform a one-hot-encoded representation of a word into a probability distribution for nearby words. Fair enough, but the issue of a “document” vector was mysterious to me for a long time. Part of this was a matter of terminology. Documentation on the web sometimes refers to a “paragraph vector”, sometimes to a “labeled sentence”, and sometimes to a “document vector”. Were these three different things that I needed to separately account for? I had to drill down to the original paper proposing Doc2Vec (Le and Mikolov, https://arxiv.org/pdf/1405.4053v2.pdf) to get the answer: “In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents.” OK, so that clarifies things. A sentence vector, a paragraph vector, and a document vector are really all the same thing, depending on whether one is trying to classify a bunch of words based on sentence-level, paragraph-level, or document-level context. So let’s just call them all “document vectors” for purposes of this discussion. A document vector is an arbitrary-length abstract representation of the contextual meaning of a particular document type. Just like a word vector, it’s the product of the training process for a shallow neural network, where the input is typically a one-hot-encoded term from the model vocabulary and the output is a probability distribution for words in the nearby context window. (Alternatively, the training process can be the reverse, where the input is an encoded representation of nearby words and the output is a probability distribution for the word at the center of the context window.)
Here’s a simplified illustration of what’s going on, in a hypothetical scenario where we have two categories of documents: fairy tales and biology textbooks. Each document is composed of words pulled from a total vocabulary of 10 words. We’d like to reduce the dimensionality of our representation of any particular word from 10 dimensions to 3 dimensions. We’d also like to represent each type of document as a 3-dimensional vector. The probability of the various words surrounding “frog” in a text is going to depend on two things: the 3-D word vector associated with the word “frog” and the 3-D document vector associated with whether we’re in a fairy tale or a biology textbook. In the case above, we expect to see something about a prince getting kissed when we’re in a fairy tale and we just mentioned a frog. Castles are important in fairy tales, but they’re not so much associated with frogs. Frogs are amphibians, but we tend to hear about that more in a biology textbook than in a fairy tale. So when you train a Doc2Vec model, the word vectors and the document vectors are being simultaneously optimized in order to minimize your loss function (i.e., in order to get the best possible match between the predicted probability distribution and the actual distribution of context words).
Regarding the actual application of Doc2Vec to a classification task, I found the following blog posting to be more helpful than anything else I could find on the web: https://fzr72725.github.io/2018/01/14/genism-guide.html. Here’s my own adaptation, which worked wonderfully in my own capstone project. Let’s walk through what’s going on here: My first step was to assemble a list of TaggedDocument objects, using a pandas apply function that pulled the contents of my dataframe’s “Text2” column into “words” and the contents of the “label” column into “tags”. Note that tags have to be a list, even if you only have one tag per document! Note also that the text needs to already be a tokenized list. You can’t just pass a raw string of text. Next, I create an instance of the Doc2Vec class with the model parameters set appropriately. For example, I decide how long I want my document vector to be (100 dimensions, in my case). Apparently we are supposed to pre-initialize the model by calling the “build_vocab” method on the training data, and then we train on the training data. This generates a whole bunch of weights and biases for the hidden layer of my model, and it also trains the weights and biases for the output/softmax layer of the model (the thing that actually outputs a probability distribution of context words). In my case, I believe that there would be 200 nodes in the hidden layer of my network: 100 for the 100 dimensions of the word vector and another 100 for the 100 dimensions of the document vector. This would then feed to a softmax output layer equal in dimension to the number of words in my vocabulary. The upshot of it all is that I get exactly 2 document vectors when I’m done: one for the first class of documents, and one for the other class of documents (I was doing a binary classification problem). The model got to see the actual document class labels for the training data, and so all the weightings are based on loss minimization for those labels. OK, but if I want to actually do a classification, I don’t want to classify just 2 vectors, I want to classify the 400 documents that I started with, including holdout data! The trick here is that it is now necessary to “infer” individual document vectors for each of the individual documents, after having already trained the neural network on the assumption that there are really only two categories of documents. The inference method is essentially continuing the training process, with the model initialized based on the prior fit with the training data. But here’s the tricky part: the loss function for “inference” has access to the full text of the document it is training on, but it doesn’t have access anymore to the class of the document it is fitting on. So the model is trying to figure out a descriptive 100-dimensional document vector for each document in the training set and the test set, based on the text of each document and the model’s “memory” of how text related to document class in the training set.
If everything is working the way it is supposed to, the second-round document vectors for the training set should almost always bear a stronger resemblance to the document vector for the class archetype they come from than to the document vector for the other class archetype. In my case, this was true 99% of the time, as verified by the following code:
correct = 0
for index in range(X_train.shape[0]):
    similar = model.docvecs.most_similar([X_train[index]])[0][0]
    if similar == y_train[index]:
        correct += 1
print(correct / X_train.shape[0])
Once I have all my inferred document vectors, I’m ready to get to business with the classification model of my choice. The vectors can simply be vertically stacked into a 2-D numpy matrix, ready to be fed into scikit-learn. I hope this discussion provides some useful clarification for others making use of Doc2Vec! In my case, I got surprisingly good linear separations of my two classes based on the document vectors. Here’s a quick plot of my top 2 principal components, for my training set and for my test set:
[Figure: Top 2 principal components of document vector, training set]
[Figure: Top 2 principal components of document vector, test set]
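To make the walkthrough above concrete, here is a minimal gensim sketch of the described workflow (TaggedDocument assembly, build_vocab, train, infer_vector). The toy DataFrame stands in for the author’s “Text2” and “label” columns, and the gensim 3.x API is assumed, matching the model.docvecs call quoted above (in gensim 4.x, docvecs becomes model.dv):

```python
# Minimal sketch of the Doc2Vec workflow described above (gensim 3.x API).
# The toy DataFrame is a stand-in for the author's "Text2"/"label" columns.
import pandas as pd
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

df = pd.DataFrame({
    "Text2": [["the", "frog", "kissed", "the", "prince"],   # already tokenized
              ["frogs", "are", "amphibians"]],
    "label": ["fairy_tale", "biology"],
})

# 1. One TaggedDocument per row; tags must be a list, even with a single tag.
tagged = df.apply(
    lambda row: TaggedDocument(words=row["Text2"], tags=[row["label"]]), axis=1
).tolist()

# 2. Instantiate with the desired vector size, pre-initialize the vocabulary,
#    then train (min_count=1 only because this toy corpus is tiny).
model = Doc2Vec(vector_size=100, min_count=1, epochs=40)
model.build_vocab(tagged)
model.train(tagged, total_examples=model.corpus_count, epochs=model.epochs)

# 3. Infer an individual vector for any document (training or holdout);
#    inference sees only the tokens, never the class label.
vec = model.infer_vector(["the", "frog"])
print(model.docvecs.most_similar([vec])[0])  # closest class tag, as in the
                                             # verification loop quoted above
```

The inferred vectors for the training and holdout documents can then be stacked with numpy.vstack and fed to any scikit-learn classifier, as the author describes.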
Natural Language Processing with Doc2Vec
137
natural-language-processing-with-doc2vec-1d297059ac76
2018-07-11
2018-07-11 15:43:39
https://medium.com/s/story/natural-language-processing-with-doc2vec-1d297059ac76
false
1,359
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
David N. Berol
null
5b6bc9c7f9f0
dnberol
9
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-26
2018-09-26 11:11:14
2018-09-26
2018-09-26 14:46:40
1
false
en
2018-10-27
2018-10-27 05:52:40
2
1d2bce22c321
1.018868
1
0
0
Day 042 To 048
5
100 Days Of ML Code: Week 7 Day 042 To 048 #100DaysOfMLCode Day 042 (20-Aug-2018) Today’s Progress: Begin : lesson 10 “Dataset and Questions” from Intro to Machine Learning course, Udacity. Thoughts: In this lesson used real dataset — Enron email dataset. Downloaded the data set and analysis to understand the data. Day 043 (21-Aug-2018) Today’s Progress: Continue: Dataset and Questions, from Intro to ML course, Udacity. Thoughts: Learned Enron database related concepts and features. Day 044 (22-Aug-2018) Today’s Progress: Continue: Dataset and Questions, from Intro to ML course, Udacity Thoughts: Solved quiz. Day 045 (23-Aug-2018) Today’s Progress: Continue: Dataset and Questions, from Intro to ML course, Udacity. Thoughts: worked on the mini-project using Enron dataset. Day 046 (24-Aug-2018) Today’s Progress: Completed: Dataset and Questions, from Intro to ML course, Udacity Thoughts: worked on the mini-project using Enron dataset. Day 047 (25-Aug-2018) Today’s Progress: Begin : Lesson 11 Regressions, from Intro to Machine Learning course, Udacity Thoughts: Studied linear regression concept and solve quizs Day 048 (26-Aug-2018) Today’s Progress: Continue: Regressions from Intro to Machine Learning course, Udacity. Thoughts: Learned about conceptss like slope , intercepts , regression error etc and solved related problems. Previous Post << — >> Next Post
100 Days Of ML Code: Week 7
50
100-days-of-ml-code-week-7-1d2bce22c321
2018-10-27
2018-10-27 05:52:40
https://medium.com/s/story/100-days-of-ml-code-week-7-1d2bce22c321
false
217
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Prachi Patil
Tech enthusiast
fe2f21787f1f
prachi.tech
8
11
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-16
2017-09-16 16:59:29
2017-09-16
2017-09-16 17:14:15
1
false
en
2017-09-16
2017-09-16 17:14:15
3
1d2ce244c2c8
1.833962
1
0
0
I will discuss in detail about the ITT platform and the technology it carries and the function of the technology. But first what is…
5
Review About Intelligent Trading Tech — Cryptocurrency Trading Technology? I will discuss in detail about the ITT platform and the technology it carries and the function of the technology. But first what is Intelligent Trading Tech anyway? Intelligent Trading Tech is a technology assistant trading based on Artificial Intelligence. This technology platform will stand on the Ethereum network. Yes of course this technology comes with a strong reason and determination. After cryptocurrency gets tremendous traction this year, many people/traders enter the world of cryptocurrency. This is one of the main reasons why ITT is present. Cryptocurrency market is currently the best market in the world for traders, why? The growth of market cryptocurrency can still increase rapidly because the market is classified as young, even the average trader can generate big profits. Very different not with ordinary financial markets where about 90% of traders actually lose money in the long run. Secondly, in the market of volatility, the cryptocurrency can create opportunities for traders to profit. More or less it is yes … Okay now I will discuss the shortcomings. One of them is fake information. Indeed information related to cryptocurrency we can access for free on the Internet, but along with increasing the amount of information available, the number of false information/false information also increases. Wow is even worse Pump and Dump. But this time you do not have to worry anymore, because the solution from ITT will answer all the above problems. Tier 1 Bot in Telegram (Signal) Intelligent Trading Tech will solve the problem by analyzing thousands of sources of information and providing a great opportunity for traders with just an alarm. That is, this platform will provide a warning for traders via alerts so that traders can make their own trading decisions. This platform uses Artificial Intelligence (AI) technology. This technology is very effective for predicting future events (or prices) based on past performance. The alarm will show: • The trading pair • Overall Rating (bullish / bearish) • Time Horizon • Exchange • Current Price • Volume Summary Tier 2 Concerning Sentiment Indicators In fact, sentiment is one of the important factors that can cause price swings in a fast time. Bots at this level will scan positive or negative sentiments against crypto and put it into evaluation. ITT Tokens The ITT Token will be used to pay the subscription fee to gain access to Tier 1 and Tier 2 membership. ICO Description Platform: Ethereum Total Supply Token: 21,000,000 Official website: http://intelligenttrading.org/ Whitepaper: http://intelligenttrading.org/whitepaper.pdf End Date of Crowdsale: September 17, 2017, 20 hours left My Ethereum Address: 0x15c72278C83A00b3EFe57fD628a6947EdcF0edaf
Review About Intelligent Trading Tech — Cryptocurrency Trading Technology?
1
review-about-intelligent-trading-tech-cryptocurrency-trading-technology-1d2ce244c2c8
2018-06-09
2018-06-09 13:44:53
https://medium.com/s/story/review-about-intelligent-trading-tech-cryptocurrency-trading-technology-1d2ce244c2c8
false
433
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Fajar Himawan
null
d7b0a91d7d28
fajar.hima99
4
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-27
2018-08-27 10:36:39
2018-08-27
2018-08-27 12:16:26
3
false
en
2018-08-27
2018-08-27 12:21:29
0
1d2ee16e5055
1.931132
1
0
0
structure of matrix operation in R programming language
5
R construction with Matrix dimension in plane and mathematical combinatoric explain structure of matrix operation in R programming language A1 = matrix( c(1, 2, 3, 4, 5, 6), # the data elements nrow=2, # number of rows ncol=3, # number of columns byrow = TRUE) # fill matrix by rows the output of Variable A is show as Out put of A1,A2,A3 and A4 the dimension size of A is 2x3 the a11 is look at [1,] and [,1] at the same time A2 = matrix(c(1, 2, 3, 4, 5, 6), 2, 3,by row = T) is alternate way to construct of result as matrix A that can summary as “nrow = ” and “ncol = ” can be omit in the construction of matrix syntax order. A3 = matrix(c(1:6), 2, 3,by row = T) and A4 = matrix(1:6, 2, 3,by row = T) the rusult is the same in first but in R collect variable A1, A2 as type “num” but A3,A4 is collect variable as type “int” can check with R proof with command A1 == A2 the rusult is Check value for output with comparison method result is the same as A2 == A3 and A3 == A4 In this stage we can conclude that A1 = A2 = A3 = A4 what happen with B1 = matrix(c(1:6), 2, 3,by row = FALSE) the result is the output of B1,B2,BN,C and D is the same result with B2 = matrix(c(1:6), 2, 3,by row = F) with omit the “nrow = ” and “ncol = ” syntax what happen with remove by “by row = F” like BN = matrix(c(1:6), 2, 3) the result is still the same as B1,B2 be cause it default value syntax can be omitted it. C = matrix(c(1:6),nrow=2) D = matrix(c(1:6),ncol=3) that can omit the one dimension C and D is array element in to specific row or column index with element in matrix long. in the principle way like the equation number of elements / (number of row elements(number of column elements) in combinatoric explain can sumary as condition (byrow = F) and 6 elements and with sequential elementts is = 2*2*(3)+1+1+1 = 15 ways to construction. as condition (byrow = T) and 6 elements and with sequential elements is = 2*2*(3)+1+1 = 14 ways to construction.
R construction with Matrix dimension in plane and mathematical combinatoric explain
1
r-construction-with-matrix-dimension-in-plane-and-mathematical-combinatoric-explain-1d2ee16e5055
2018-08-27
2018-08-27 12:21:29
https://medium.com/s/story/r-construction-with-matrix-dimension-in-plane-and-mathematical-combinatoric-explain-1d2ee16e5055
false
366
null
null
null
null
null
null
null
null
null
Programming
programming
Programming
80,554
Nattawat Piansakul
Mathematician Suankularb Wittayalai 128 (OSK128) interesting in Data mathematical analysis
5d9d9bb4ad49
boatchaos179
8
12
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-01
2017-12-01 03:40:30
2017-12-01
2017-12-01 03:56:35
1
false
en
2017-12-04
2017-12-04 02:50:19
9
1d2fa8c61d94
2.101887
1
0
0
GE Healthcare and Chinese startup Infervision to boost AI-powered radiology solutions; Paris-based startup Doctolib raises $42M to grow its…
4
A collaborative research at Stanford shows end-of-life care can be improved with deep learning algorithm; The FDA approves the first EKG sensor for the Apple Watch to detect potential arrhythmias GE Healthcare and Chinese startup Infervision to boost AI-powered radiology solutions; Paris-based startup Doctolib raises $42M to grow its online doctor-booking service in europe Image: via MIT Technology Review Authored by one of Andrew Ng’s PhD students, a paper published in arXiv describes how the chance of mortality of a patient in the next three to 12 months can be predicted by training a deep neural network with around 200K patient records; Currently being piloted at a hospital. Also, it suggests the predicted results could help doctors to better utilize resources on patients in the greatest need. (MIT Technology Review) AliveCor’s Kardiaband AliveCor’s Kardiaband EKG reader is approved by the FDA as the first medical device accessory for the Apple Watch; Unlike the firm’s previous product KardiaMobile, which is a separate EKG device, Kardiaband is an EKG sensor integrated directly on a Apple Watch band, allowing users to get an EKG reading continuously just by touching it. After a 30 second recording, the FDA-cleared machine learning algorithms run directly on Apple Watch will immediately report a result, which can be used to to detect abnormal heart rhythm and atrial fibrillation(AFib); Kardiaband is sold for $199 on AlivCor’s site and requires a subscription to AliveCor’s premium service for $99 a year. Meanwhile, Apple also announces the Apple Heart Study, an Apple Watch-based ResearchKit study using the heart rate sensor to detect potential arrhythmias. Collaborated with Stanford University, it allows Apple Watch users to participate in the study by directly downloading the research app on the Apple Store. (TechCrunch) GE Healthcare announces partnership with Nvidia to boost AI adoption in healthcare and with Intel for its new processor to be used for imaging devices; GE’s new CT system powered by Nvidia is now two times faster in imaging processing than its predecessor; And total cost of ownership drops by up to 25 percent with the new Intel Xeon Scalable platform. Meanwhile, Chinese AI startup Infervision launches its first AI-powered CT Stoke Screening System; aiming to help radiologists diagnose strokes faster using CT brain scans. The startup is currently working with over 50 top hospitals and 200K studies to date; raises $18M Series B funding from Qiming Venture Partners, Sequoia Capital China and Genesis Capital, following $7.2M Series A funding led by Sequoia Capital. (MobiHealthNews, HitConsultant) Paris-based Doctolib, the online doctor-booking service provider, raises $42M in its latest round backed by Bpifrance and Eurazeo; The startup is the largest online doctor-booking company in europe; Raises $72.7M in total in 2017; And it has around 30,000 health professionals using its platform with 12 million monthly visitors in France and Germany. Each doctor pays €109 per month in France and €129 per month in Germany for using Doctolib booking system; This new fund will be used to fuel further growth in the German market. (TechCrunch)
A collaborative research at Stanford shows end-of-life care can be improved with deep learning…
1
a-collaborative-research-at-stanford-shows-end-of-life-care-can-be-improved-with-deep-learning-1d2fa8c61d94
2018-04-25
2018-04-25 05:50:10
https://medium.com/s/story/a-collaborative-research-at-stanford-shows-end-of-life-care-can-be-improved-with-deep-learning-1d2fa8c61d94
false
504
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
The Health Prospect Global
We create, curate and share stories that bolster the global healthtech industry and the startup community with a focus on Asia.
ed1a8faa3375
The.Health.Prospect
61
24
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-16
2018-05-16 09:16:26
2018-05-16
2018-05-16 09:16:38
8
false
en
2018-05-16
2018-05-16 09:17:15
4
1d31053ed64e
4.646541
1
0
0
Data Analytics seems like rocket science at first glance. However, it can be simpler than you think, even without knowledge in coding or…
5
Visualizing Real Estate Markets with Power BI Data Analytics seems like rocket science at first glance. However, it can be simpler than you think, even without knowledge in coding or advanced Excel. In this blog, we are going to show you how by demonstrating a real-life example.In this demonstration, we will be doing data processing on Hong Kong Real Estate Market. We will go through 4 steps of data analytics — collect, clean, find insights and visualise the data. We will be using tools such as Webscraper, Power Query as Add-in for Excel and Microsoft Power BI. which are all very easy-to-use tools for data analytics. We will leave the specific tools walkthrough in later blog posts, and focus on the full process and how it looks like. Feel free to try along the steps! 1 . Collect the DataWe will first need to collect the data we need to work on. We will be using a Google Chrome add-on called Webscraper to collect our data from the websites we desire. Mind that not all websites are deemed to be “easy to scrape” — generally websites with data mapped out without embedding would be the easiest websites to scrape. With Webscraper, you can run your scrapping process in the background with decent speed. From the website, I managed to obtain a total of 13k real estate ads. You can see the scraped data snippet below: In this case, the data shows floor types, rent, size, price and so on; the data collected from the website is quite clean, but would still need further clean-up before we can analyse it. 2. Clean the Data Excel & Power Query are both very useful tools for that purpose. In this exercise, we are going to use Power Query to clean our data. Power Query is available as an Add-in in Excel 2010 and later version, and is also part of Power BI Desktop. It has a very easy-to-use user interface, and can enable you to do ETL (Extract Transform Load) process on data within clicks, even without advanced knowledge in Excel. We will tell you more in-depth how this all works in a separate post, and of course in our classes. With Power Query, I managed to extract information such as floor types, the direction the flat is facing, and some quick descriptions on the offer. I also categorised my data by districts, using multiple queries. From this point on, any data added will fall into categories automatically. Here is a snapshot of the cleaned data below: 3. Find insights in the dataOnce you’ve cleaned your data, check your data with excel to identify data that can generate insights from. You can then upload your data to Power BI to prepare it for data visualisation. Power BI Desktop (Windows Only) is a free tool by Microsoft, where you can create interactive dashboards with simple steps. Once you uploaded your Data, Power BI will prompt to show you quick insights from your data. Here are some of the quick insights generated by Power BI initially: At this stage, you will be able to get some insights from the data already. You will also have a brief idea of which analysis you would like to include in the dashboard. 4. Visualize the data — Build your dashboardFor the final step, we will create an interactive dashboard with Power BI. I included the following in the dashboard: - Average Price per square feet by district - Scatter Plot showing price per square foot vs the Area (Ft) - Average price based on floor height - Transaction per unit price One of the advantages of using Power BI to build your dashboard is that it is interactive. You can click on any information you need and see the respective data. 
I also included a Hexbin plot, a new form of visualisation available in Power BI, where you can read the density of data with different colors of hexagon, in relation of the price/size in this dashboard. However, the current data we have is not enough for the hexbin to show insights. The following R code produced Hexbin plot give you a sense of basic visualization understanding of interpretation of the plot. The last thing I included in the dashboard is a wordcloud. From there, you can see the trending words that are used to describe the properties in the district. Within the Cheung Sha Wan/Sham Shui Po district, the trending words are Decoration, Convenient, Quiet… since the size of words represent the word count frequency in the corpus. Once the dashboard is completed, you can get a better picture of the real estate market in one interface: you can learn the price, supply, characteristics, and quality of flats of different districts in one go. With these 4 steps, you can easily apply to any areas of interests you would like to visualise and research on. While Excel can serve similar purpose, the end result of Power BI dashboards are far more interactive. Next Steps — What next after the dashboard? The next steps would be enriching your data, to consolidate more visualisation and show a more complete picture. For example, in this dashboard, the hexbin is useful in showing size/price split, but lacks data points to serve its purpose. You can also collect more types of qualitative information, such as pollution, traffic and entertainment — data that helps people to draw more in-depth insights and decisions. We’ll go deeper into this on the next post, stay tuned! Originally published at www.accelerating.tech.
Visualizing Real Estate Markets with Power BI
2
visualizing-real-estate-markets-with-power-bi-1d31053ed64e
2018-05-18
2018-05-18 23:48:03
https://medium.com/s/story/visualizing-real-estate-markets-with-power-bi-1d31053ed64e
false
931
null
null
null
null
null
null
null
null
null
Data Visualization
data-visualization
Data Visualization
11,755
Xccelerate
null
1f960a05c1e2
acce.tech
4
3
20,181,104
null
null
null
null
null
null
0
null
0
d152ec26d0e3
2018-07-26
2018-07-26 14:22:46
2018-07-26
2018-07-26 17:17:15
3
false
en
2018-10-22
2018-10-22 16:03:43
2
1d31b37844d5
3.327358
5
1
0
In this blog post TWG Senior Software Engineer, Ben Wendt, explains how personal growth and growing the deep learning competence of the…
5
TWG Engineering Education Tackles Deep Learning In this blog post TWG Senior Software Engineer, Ben Wendt, explains how personal growth and growing the deep learning competence of the Engineering team can help create more opportunities in their day-to-day work. The culture at TWG is characterized by a shared enthusiasm for learning and personal growth. We like to say TWG is the best place to learn, work, create and grow. In the past, we have conducted group courses in node, es6, and blockchain technologies so that our team can work on technologies that we see a bright future in. Several of the members of the engineering team have been working on expanding their deep learning skills. Recently a group of four TWG engineers worked through the fast.ai Deep Learning for Coders course. Our students were given one-half day per week of work time for the opportunity to learn, study and work on the course material. The course is fairly intensive, so students generally did at least as much studying on their own time to keep up with the pace. The course is an excellent resource for coders with a range of experience levels. It doesn’t rely too heavily on mathematical notation or theoretical concepts but focuses on developing experience and getting useful results. The analogy made by the lecturer is that teaching baseball should be done by throwing a ball and swinging a bat, rather than studying the math and physics underlying the game. We found that the course allowed personal growth and growing the deep learning competence of our team. Students in the course immediately start using the Keras framework to get useful results in image classification in the dogs and cats Kaggle competition. Through the coursework, our engineers learned the basics of neural network architecture, tuning the learning process, and most importantly how to effectively use transfer learning. Transfer learning is a technique where a generalized model (in our case, an image classifier) can be retrained for a specific task. ImageNet is a classification challenge where millions of images are put into one of 1000 categories. Researchers from top AI firms and academic researchers compete every year to beat the state-of-the-art performance on ImageNet. The great thing for deep learning practitioners is that many ImageNet winning models are subsequently made publicly available for reuse. Through the training of these deep neural networks, the models have learned building block concepts of images, such as eyes, headlights, roofs, elbows, etc. Transfer learning leverages this knowledge to create a model for a similar task. If you start with a model trained on ImageNet that can find eyes, paws, and tails, your task of classifying cats vs. dogs will be much easier to train. Subsequent to completing the course, our team of Aaron, Ashun, Dex, and Phil decided to apply their learnings by use the kaggle dog breed identification data to make a dog breed classifier. Dex used the Inception model, developed by Google; Aaron used the resnet model, developed by Microsoft, and which won the 2015 ImageNet contest; Ashun used the Xception model, a variant of inception; and Phil used the VGG model, developed by the University of Oxford, which won the ImageNet contest in 2014. After trying to transfer learning on a variety of models, the team found that they were getting the best results using the Xception model. 
While we were able to exceed 80% accuracy with most models, the team was able to get 87% accuracy with Xception incorrectly identifying an image of as one of 120 breeds. The team proceeded to create a dog breed identification application, where you can take a photo of a dog and be informed of the breed of the dog in the photo. This was a crowd-pleasing result during our Friday demos, where we take an hour to show off interesting projects that members of our team are working on. Gaining practical experience like this, while learning a new skill is pretty common at TWG, and our developers are looking forward to putting their new skills to use on future projects. Keep up-to-date with the latest industry insights and TWG case studies. Sign up for Nexus here — insights and perspectives from TWGers, the companies we work with, and a roster of partners and industry trendsetters who contribute to the products, people and communities we build each and every day This blog was authored by Ben Wendt.
TWG Engineering Education Tackles Deep Learning
25
twg-engineering-education-tackles-deep-learning-1d31b37844d5
2018-10-22
2018-10-22 16:03:43
https://medium.com/s/story/twg-engineering-education-tackles-deep-learning-1d31b37844d5
false
736
Insights about software from the team at TWG
null
twg.ca
null
The Almanac
hello@twg.io
the-almanac
SOFTWARE STRATEGY,WEB AND MOBILE DEVELOPMENT,HUMAN CENTERED DESIGN
twg
Machine Learning
machine-learning
Machine Learning
51,320
TWG
Custom software for the digital economy
226fcd07a8e8
TWG
1,490
612
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-03
2018-07-03 21:36:56
2018-07-07
2018-07-07 00:48:00
4
false
en
2018-07-08
2018-07-08 01:06:01
6
1d33896147ba
2.937736
1
0
0
Trust is defined as reliance on the integrity, strength, ability, surety, etc., of a person or thing; confidence.
5
Game 01: Evolution of Trust (part 1) Trust is defined as reliance on the integrity, strength, ability, surety, etc., of a person or thing; confidence. For example, the most used currency, the USD dollar, is backed by the trust in the United States of America. Even the biggest cryptocurrency, Bitcoin is backed by trust in its protocol, which so far has not failed. Most human relations (economic,romantic,parenthood,brotherhood,…) are based on trust. Have you ever wondered how criminals who know they can’t trust each one another are still able to conduct business with each other ? they replace trust with fear and the threat of violence. But Since we (none criminals) cannot rely on violence and fear, how do we build trust between us? Let's explore the Five Elements for building trust: Iterative games, Non-zero-sum games, Low miscommunication, Reputation and Shared Values. 1-Iterative games with identifiable future “Play iterated games. All the returns in life, whether in wealth, relationships, or knowledge, come from compound interest.” Naval. When Naval said “All”, I thought what about “Trust”, how does trust emerge from playing iterated games? ‘Iterate’ means ‘to say or perform again/repeat some something. Naval recently also said “ Small deals rely on promises and contracts. Big deals rely on alignment and trust.” To test Naval assertion let's play the game of Trust by N.Casey the game of trust “the other player” can be any stranger with a defined strategy, so here some of these players and their strategies: But instead of one round against a player, let’s make the game iterative: I played the game, the results are as below: Golden rule rules the Golden rule: “Do to others what you want them to do to you.”seem to be the optimal strategy in our game of trust,this strategy is necessary but not sufficient. The ramification of the golden rule: The golden rule is a rule common to almost all religions. The term rule, also, shows that it is a standard, model, and measure of ethics. This rule recommends a formal, formula-oriented ethics in which there is no certain article, but there are indicators of interpersonal behavior. Then, before saying exactly what to do and what to beware, the golden rule introduces a criterion and a model for behavior, because everyone naturally knows how they should and should not be treated or what behavior causes pain or loss and with such a simple knowledge, they follow the golden rule and adjust their behaviors appropriately. Instead of imposing answers on us, this rule focuses on our argument, fights with our selfishness and makes use of ideas such as fairness and concern for others in a tangible and concrete way. To apply the rule, we must first become aware of the impact of our behavior on other people’s lives and then, clearly and accurately imagine ourselves in the position of other people and subjected to the same behavior. Trust keeps a relationship going, but you need the knowledge of possible future with repeated interactions before trust can evolve. [To be continued] References: Robert Axelrod's 1984 book “THE EVOLUTION OF COOPERATION” Awesome game by N.casey https://ncase.me/trust/ https://ourworldindata.org/trust People worth following: Nassim Nicholas Taleb (@nntaleb) | Twitter The latest Tweets from Nassim Nicholas Taleb (@nntaleb). Flaneur: focus on probability (philosophy), probability…twitter.com Naval (@naval) | Twitter The latest Tweets from Naval (@naval). Present. 
Heretwitter.com Jordan B Peterson (@jordanbpeterson) | Twitter The latest Tweets from Jordan B Peterson (@jordanbpeterson). U Toronto Psychology Professor. NOTE: RTs/follows are not…twitter.com Nick Szabo⚡️ (@NickSzabo4) | Twitter The latest Tweets from Nick Szabo⚡️ (@NickSzabo4). Blockchain, cryptocurrency, and smart contracts pioneer…twitter.com
Game 01: Evolution of Trust (part 1)
41
game-01-evolution-of-trust-part-1-1d33896147ba
2018-07-08
2018-07-08 01:06:01
https://medium.com/s/story/game-01-evolution-of-trust-part-1-1d33896147ba
false
593
null
null
null
null
null
null
null
null
null
Politics
politics
Politics
260,013
Sami Hassan Soukhou
Flaneur: focus on understanding human nature.Building investement tools with Vectorspace.ai
f0a473e84369
samihassan5031
1
5
20,181,104
null
null
null
null
null
null
0
null
0
cc02b7244ed9
2018-03-16
2018-03-16 06:55:20
2018-03-16
2018-03-16 06:58:07
0
false
en
2018-03-16
2018-03-16 06:58:07
11
1d358e9df57e
2.116981
1
0
0
PRODUCTS & SERVICES
5
Tech & Telecom news — Mar 16, 2018 PRODUCTS & SERVICES Video Amazon estimates (according to internal documents just revealed) that more than 5m people worldwide have become Prime members attracted by inclusion of Amazon Video in the bundle. This is equivalent to approx. 25% total Prime sign-ups, and is a good reason for Amazon to keep producing own video shows (Story) Artificial Reality The most famous Augmented Reality startup, Magic Leap, just closed a major financing round (of almost $500m) and is now looking to invest in content creation for its platform. They seem to be building an internal group to create “new and exciting mixed reality experiences” and change how digital content is viewed (Story) Cloud Data centre CapEx reached a record $20bn figure in 2017, and is on track to grow this year (with $4bn only in 1Q18). Google, Amazon & Microsoft are key drivers, as they need these assets to support massive cloud growth, and are even acquiring other companies’ data centres, as on-premise workloads shift to the cloud (Story) HARDWARE ENABLERS Networks New regulatory debates appearing with the coming wave of 5G access network deployments. In the US, where the FCC is moving to streamline the process for operators to deploy small cells in urban areas, dozens of American mayors and city officials are pushing to preserve local-decision making in permissions (Story) Components Artificial Intelligence, with its rather extreme processing needs, is currently a leading driver of chip innovation, and is having also a massive impact on the Venture Capital space. SambaNova, a startup aiming to build out the “next generation” of hardware for AI applications just raised $56m in a series A founding round (Story) SOFTWARE ENABLERS Artificial Intelligence Silicon Valley giants (e.g. Google, Facebook) have acquired up to 90% of the AI startups created in the last 5 years, in an aggressive competition to get the best AI talent for their R&D. This is claimed to be undercutting most of the positive impact that AI could have on the global economy, through more practical applications (Story) AI increasingly seen at the centre countries’ security / strategic issues. A leading American defence / foreign policy think tank just created a new “Task Force on AI and National Security”, that will analyse the opportunities and challenges that new AI technology brings to the US, including military and political threats (Story) A key challenge from AI (impact on jobs) could be addressed, according to many, with more “collaborative” models for AI systems to work together with humans. Narrative Science, a startup, aims to do this with its technology to transform enterprise data into human language messages that non-specialised workers can use (Story) American hegemony in AI research is being seriously challenged for the first time, with China having submitted +25% papers vs. US this year at a key AI scientific conference. This is a consequence of large investments in the field by Chinese firms. Also, China’s massive scale in training data is viewed as a key advantage (Story) M&A Obviously exciting times at Qualcomm. After Trump’s rejection of the Broadcom bid to acquire the company, now Paul Jacobs, a recently demoted chairman, and the founder’s son, is approaching a group of investors (including SoftBank) to raise funds for a buyout, and has actually informed the board about his plans (Story) Subscribe at https://www.getrevue.co/profile/winwood66
Tech & Telecom news — Mar 16, 2018
1
tech-telecom-news-mar-16-2018-1d358e9df57e
2018-03-17
2018-03-17 02:10:27
https://medium.com/s/story/tech-telecom-news-mar-16-2018-1d358e9df57e
false
561
The most interesting news in technology and telecoms, every day
null
null
null
Tech / Telecom News
ripkirby65@gmail.com
tech-telecom-news
TECHNOLOGY,TELECOM,VIDEO,CLOUD,ARTIFICIAL INTELLIGENCE
winwood66
Amazon
amazon
Amazon
15,685
C Gavilanes
food, football and tech / ripkirby65@gmail.com
a1bb7d576c0f
winwood66
605
92
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-19
2017-12-19 06:18:10
2017-12-19
2017-12-19 09:33:48
1
false
en
2017-12-21
2017-12-21 10:43:43
11
1d368d3443fb
5.464151
32
3
0
On December 7, 2017 at 9PM PST, I got an email: “NVIDIA Titan V is here! Titan V — THE MOST POWERFUL PC GPU EVER CREATED. BUY NOW.”
5
Deep Learning on NVIDIA Titan V — First Look On December 7, 2017 at 9PM PST, I got an email: “NVIDIA Titan V is here! Titan V — THE MOST POWERFUL PC GPU EVER CREATED. BUY NOW.” I don’t usually get excited this much over what one might consider spam, but this got my heart racing. This came out of nowhere. This meant that the“consumer” version of the server-grade NVIDIA Tesla V100 GPU has become available for purchase. NVIDIA sells a watercooled tower with four Tesla V100 GPUs for $69,000 (on sale now for $49,900) which is prohibitive for most enthusiasts/AI researchers where as a Titan V costs $2999 (which is still damn expensive but much more affordable.) Like the venerable V100, Titan V is built upon the same Volta architecture and boasts huge performance numbers. It is slightly “detuned” in terms of the spec (V100 is 16GB where as Titan V is 12GB, for example.) Besides the $2999 price tag, the most eye-catching number on NVIDIA’s Titan V product page is “110 Deep Learning TeraFLOPs.” That is 110 trillion floating point operations per second! This is a HUGE number can turn your home PC into a bona fide supercomputer. Throughout this year, I have been training deep learning models on several NVIDIA 1080 Ti’s that cost “only” $699 a piece for the NVIDIA’s stock version. These put out amazing performance with 11 TeraFLOPs with its 3584 CUDA Cores. So what is up with Titan V’s 110 TeraFLOPs? Can you swap your 1080 Ti with a Titan V and expect 10x speed up on training/evaluating your models? 1080 Ti and Titan V ready to rock The caveat is in the phrase “Deep Learning” TeraFLOPs. What this marketing jargon means is that “for certain operations used heavily in deep learning”, it can perform 110 trillion operations per second. That certain operation is “matrix-multiply-accumulate” and it can be performed extremely fast by Titan V’s “Tensor Cores.” Great! So existing models can just utilize these Tensor Cores and get an incredible speed up, right? Well, there are more caveats to this. Each of these Tensor Cores can perform multiplications of a pair of 4x4 matrices of *half precision floating numbers* and add to a 4x4 matrix of either half precision or single precision floating numbers to create a resulting 4x4 matrix of either half precision or single precision floating numbers. Multiples of such Tensor Cores can be run in parallel to get massive gains in execution speed. Typically, when models are trained, 32–bit “single-precision” floating point numbers (aka FP32) are used to store model weights, activations, gradients, etc. But these Tensor Cores require “half-precision” floating point numbers (aka FP16.) So this means that your code must be modified to take advantage of these Tensor Cores. We can just use FP16 instead of FP32 and be done with it? But if that’s the case why haven’t we been using FP16 to begin with, if FP16 is sufficient for training high performing models, rather than the standard FP32? There’s an excellent paper “Mixed Precision Training” by Narang et al that describes the implications of using FP16 to train deep learning models. 
The TL;DR version is that 1) highly accurate models comparable to FP32-trained models can be trained utilizing FP16, 2) there still needs to be a “master copy” of the model weights kept in FP32, 3) FP16 can be used throughout forward and backward passes to represent a copy of weights, activations, and gradients, 4) when updating the weights using the computed gradients, the master weights are updated and stored in FP32, and 5) sometimes “loss scaling” is needed for certain models but not always. Basically, you need to make use of FP16 very carefully, or else your model may not converge or your model’s accuracy could suffer greatly. This is because you don’t have that many bits in FP16 so it could easily “underflow.” There could be several causes for this. When computing the delta for weight update, the gradient multiplied by the learning rate can be a very small non-zero number in FP32 whose FP16 representation is 0. Even if the representation in FP16 is non-zero, if the *scale* of the weights and the update delta is bigger than a certain threshold — more than a factor of 2048 — then the resulting sum is exactly the same as what it was before the sum, so essentially weights do not change. There are techniques available to workaround these. According to the experimental results shown in the paper, if we take care of these details, resulting models trained using FP16 can perform as well as if you had just used FP32 while significantly speeding up training. Net net, utilizing Tensor Core takes work. What if we just swapped a 1080 Ti with a Titan V, you would still get some immediate speed up without any changes since Titan V has 5120 CUDA Cores (F32 14.9 TeraFLOPs) vs 1080 Ti‘s’ 3584 CUDA Cores (F32 11 TeraFLOPs), faster memory, and all that jazz? I have been using PyTorch as my deep learning framework of choice. I had recently built a model based on the Multi-View Convolutional Network architecture (https://arxiv.org/abs/1505.00880) to train networks that can automatically identify hidden threats from the 3D scans produced by the TSA scanners (those ones that you have to go thru at the airport security.) This was for the $1.5-million-dollar-prize machine learning competition hosted by TSA on Kaggle but that’s a topic for another time.) Since I had trained dozens of these for ensembling and it took a long time on 1080 Ti’s, I was really curious to see what kind of speed up I would get with Titan V. Would this have helped me iterate a little faster? Here are the hoops that I had to jump through to make this work on Titan V: Update the NVIDIA driver to the latest version supporting Titan V. My machine had 384.90, and when I ran nvidia-smi after installing the Titan V card, it did not even show up as a device. Once I upgraded to 387.34, I could finally see the Titan V shown as a generic “Graphics Device.” This step was not that surprising, though. I got the Ubuntu 16.04 driver from http://www.nvidia.com/download/driverResults.aspx/128000/en-us Update PyTorch to the latest version 0.3.0 for CUDA 9 support. Titan V requires CUDA 9. After updating the NVIDIA driver and PyTorch, I ran some training epochs against a 1080 Ti to make sure that the training time is about the same as before as a baseline with PyTorch 0.2.0 with CUDA 8 support. When I trained the same model against the Titan V, I was blown away by the performance of Titan V!!!… in a bad way. Training time was more than 40% slower! It took about 185s per epoch on 1080 Ti vs 264s per epoch on Titan V to train my model. 
I was a bit shocked as I was not expecting this performance degradation at all. So I whipped up a small piece of benchmark code and sought help from the PyTorch community: https://discuss.pytorch.org/t/solved-titan-v-on-pytorch-0-3-0-cuda-9-0-cudnn-7-0-is-much-slower-than-1080-ti/11320 Soumith Chintala of PyTorch/Facebook Research was super responsive and helpful. He provided me with this “turbo button switch” (actually, an autotuner) option that I was not aware of: torch.backends.cudnn.benchmark = True After adding that to my code, it got Titan V’s training epoch time down significantly to 146s (45% lower.) It also reduced the training epoch time for 1080 Ti down to 154s (27% lower.) While the Titan V is a little faster than the 1080 Ti (by ~5%), it is was not significant at least in this scenario (but I learned about this performance booster option and that was super valuable in itself!) That’s all for now on my Titan V adventure so far. I will follow up with a post on experimenting with the 110 TeraFLOPs Tensor Cores to see what kind of real-world boost we can expect to see in addition to gotchas/tips. Thank you for taking the time to read this. Until next time! UPDATE: I have done more performance comparison of Titan V vs 1080 Ti against popular CNN’s, including use of half-precision compute to utilize Tensor Cores. Check it out!
Deep Learning on NVIDIA Titan V — First Look
208
deep-learning-on-nvidia-titan-v-first-look-1d368d3443fb
2018-05-24
2018-05-24 15:18:41
https://medium.com/s/story/deep-learning-on-nvidia-titan-v-first-look-1d368d3443fb
false
1,395
null
null
null
null
null
null
null
null
null
Deep Learning
deep-learning
Deep Learning
12,189
Yusaku Sako
Lane Splitter, Rock Climber, Kaggle Master
321747ff5910
u39kun
142
80
20,181,104
null
null
null
null
null
null
0
null
0
592e0f336366
2018-04-12
2018-04-12 19:48:59
2018-04-12
2018-04-12 19:52:41
1
false
en
2018-04-12
2018-04-12 19:52:41
0
1d371c257335
2.143396
1
0
0
IAGON has been growing at a thunderous pace, with the Pre-Sale underway and our community continuing its steady growth. Today, we are…
5
Welcome Dr. Yogesh Malhotra to IAGON’s Advisory Board IAGON has been growing at a thunderous pace, with the Pre-Sale underway and our community continuing its steady growth. Today, we are pleased to announce the welcome addition of Dr. Yogesh Malhotra to our board of advisors as the Artificial Intelligence (AI) and Machine Learning (ML) Advisor. Having architected an influential career advancing global practices, policies, and academia, Dr. Yogesh Malhotra is currently the Chief Data Scientist and Machine Learning Engineer of Global Risk Management Network FinRM, leading global Cybersecurity, Quantitative Finance and Finance-IT-Risk Management practices. He serves as an Industry Expert in his capacity as AI and ML Subject Matter Expert for Management and Leadership industry executives for institutions such as the MIT Sloan School of Management and the MIT Computer Science & AI Lab via GetSmarter. Additionally, he is a frequently invited speaker on post-doctoral AI and Machine Learning R&D in Computer Science, Quantitative Finance and Cybersecurity at the Princeton Quant Trading Conference and the Princeton Fintech & Quant Conference sponsored by the Princeton University. Dr. Malhotra’s leadership has been exemplified through the extensive applied and industrial R&D contributions evident in global strategies and practices of several national and international corporations, including Wall Street investment banks and hedge funds with $1 Trillion AUM such as JP Morgan, Goldman Sachs, Google, IBM, Intel, Microsoft and Ogilvy, just to name a few. Dr. Malhotra continues to work within the Industry as a coveted expert, providing industry-leading expertise and thought leadership for worldwide organizations, such as largest Silicon Valley technology firms and Wall Street investment banks, British Telecom (UK), Philips (Netherlands), the National Association of Insurance Commissioners (NAIC), Massachusetts Institute of Technology (MIT) and Princeton Fintech & Quant Conference. AACSB has recognized the considerable real world impact of his research among others such as Black-Scholes in its Impact of Research Report. His profile is included in the Marquis Who’s Who biographical references such as Who’s Who in America®, Who’s Who in the World®, Who’s Who in Finance & Industry®, and, Who’s Who in Science & Engineering®. As a revered expert and pioneer in the industry, Dr. Yogesh Malhotra will be a welcome addition to the IAGON team. Not only is Dr. Malhotra a brilliant mind, making his mark “filling the gaps between business and technology, data and knowledge, and, theory and practice” as Fortune magazine wrote about him, he has also served as invited Executive Education faculty for Carnegie Mellon University and Kellogg School of Management and as a tenure-track professor of Computer Science, Quantitative Methods, IT and Operations Research. His experience with Artificial Intelligence (AI) and Machine Learning(ML), as well as his expertise in the area of Deep Learning (DL) are just a few of the creative qualities that the team at IAGON desires to have in their advisors. Welcome to the IAGON team Dr. Yogesh Malhotra. We are happy to have you as a part of the team and are excited to see what this partnership will bring as the IAGON platform continues to bring innovation to the industry.
Welcome Dr. Yogesh Malhotra to IAGON’s Advisory Board
1
welcome-dr-yogesh-malhotra-to-iagons-advisory-board-1d371c257335
2018-04-22
2018-04-22 07:17:44
https://medium.com/s/story/welcome-dr-yogesh-malhotra-to-iagons-advisory-board-1d371c257335
false
515
Iagon is a platform for harnessing the storage capacities and processing power of multiple computers over a blockchain grid. Secured and encrypted platform that integrates blockchain, cryptographic technologies & AI, enhancing the overall usability.
null
IagonOfficial
null
Iagon Official
navjit@iagon.com
iagon-official
ARTIFICIAL INTELLIGENCE,CLOUD COMPUTING,BLOCKCHAIN TECHNOLOGY,CLOUD STORAGE,ICO
IagonOfficial
Machine Learning
machine-learning
Machine Learning
51,320
Rose Marie
Project Lead/Content Director @ IAGON
b7347cf5ce45
rosemariewritenow
68
8
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-14
2017-12-14 08:03:54
2017-12-14
2017-12-14 08:08:43
0
false
en
2017-12-14
2017-12-14 08:08:43
0
1d38755aba
1.49434
0
0
0
Technically, there is really no proof yet that AI will catch up with humanity any time soon. But if you look at it from a historical and…
1
Surviving the end of time Technically, there is really no proof yet that AI will catch up with humanity any time soon. But if you look at it from a historical and philosophical point of view, the idea of singularity is inevitable. Man’s craving for ease! The idea of getting the most rewards for the least effort has always forced man to outsource some of his intelligence to organization and automation. The only catch is that for each piece of unearned easy one acquires, one has to give a piece of one’s liberty (soul for the religious). Whenever you use a plane to fasten your commute, you have to be willing to confine yourself not only to sitting uncomfortably during the commute but also to the dehumanizing treatment at airports, unless you’re a person of means in which case you pay for your ease with the liberty of other people. This AI revolution is in fact not new since slavery has been fundamental to the development of the Western Civilization. If the westerners were willing to settle for just some simple manual free labour — meaning slow progress — without getting too lazy and too greedy as to awaken the mental capabilities of the slaves… This is just a stupid “what if”. For life to be life, it requires the ability to grow. Which means that as humans, we have the obligation to pursue more and more ease, even if that means waking up the mental capabilities of machines(whatever that will mean) and hence becoming one with the machines. The most important thing to remember, perhaps, is that even if the slaves were generally much stronger physically, they never entirely replaced or got rid of their masters when they awoke. In fact most black people in America today, with the help of liberal propaganda, never fully transitioned to the state of becoming their own masters. They are still blaming and waiting for someone to liberate them. And most white people got so accustomed to the ease that they never used the opportunity their skin color availed them to awaken. The machines are going to get more intelligent and there is not much any one human can do about it, but if a human prioritizes becoming more intelligent today, then they stand a higher chance of surviving the singularity because we all still have a head start.
Surviving the end of time
0
surviving-the-end-of-time-1d38755aba
2017-12-14
2017-12-14 08:08:44
https://medium.com/s/story/surviving-the-end-of-time-1d38755aba
false
396
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Kenneth Matovu
Throw the stone now. You can always hide your hand… http://kennethmatovu.com/
87cd495d71d7
xkmato
77
108
20,181,104
null
null
null
null
null
null
0
null
0
863f502aede2
2018-09-05
2018-09-05 18:15:43
2018-09-05
2018-09-05 18:24:17
9
true
en
2018-09-13
2018-09-13 19:09:37
2
1d3a861472bd
10.913208
155
4
1
A hearty round of applause arose from the crowd packing the Vancouver Rogers Centre on August 22 when a team of unassuming scientists…
5
OpenAI’s Long Pursuit of Dota 2 Mastery A hearty round of applause arose from the crowd packing the Vancouver Rogers Centre on August 22 when a team of unassuming scientists wearing “OpenAI” T-shirts climbed up on stage. They had come to Canada to pit their artificial intelligent bots against professional human players in a highly anticipated, world-first 5v5 showdown in one of the world’s most complex video games ever, Dota 2. The journey to the historic match began in the winter of 2016, when an OpenAI research team led by CTO Greg Brockman was searching for a challenging game environment with competitive benchmarks where it could test its AI research and techniques against the skills of human professionals. Games are a hotbed for AI research: they are computationally complex; have rich human-computer interactions; and generate tons of data. Founded in 2015 in San Francisco as a non-profit AI research company backed by Elon Musk, OpenAI’s ultimate goal is to build an Artificial General Intelligence (AGI) capable of performing a multitude of tasks within one general system. OpenAI regards the creation of an AI that can perform as quickly and effectively as human pros in a complex computer game environment as a major step toward achieving AGI. Beating humans is also a convincing way for AI researchers to make their mark. The dramatic victory of DeepMind’s AlphaGo over Korean Go Grandmaster Lee Sedol in March 2016 pushed the envelope in AI gaming and secured DeepMind a place in AI history. OpenAI researchers surveyed various games on the Twitch and Steam platforms before deciding to tackle Dota 2, which can run on Linux and has an API. Developed by the Valve Corporation in 2013, Dota 2 is a highly complex and wildly popular multiplayer online battle arena (MOBA) video game played between two teams of five players. The team that takes down their opponent’s center base “Ancient” wins the match. The game environment has 115 characters and all-important “Heroes,” 22 defensive towers, dozens of non-player characters, hundreds of skills and items, and a long tail of game features such as runes, trees, wards, and so on. Early struggles OpenAI’s first Dota 2 effort was a scripted computer with hard-coded rules. It could improve its tactics only by acquiring additional expert input: How to buy items? What was the last hit? How do we deny? How do we best take towers? In early 2017 the team created what was at the time the best version of a scripted bot, which managed to beat amateur Dota 2 players. Researchers however could not handle the complexity involved in scripting the bot to the pro gamer level. So they ditched their rule-based code entirely and replaced it with reinforcement learning (RL). Reinforcement learning is an incentive-based technique that enables computers to learn new skills. Starting with a set of actions the computer can take (defined as policy), the system works to maximize value, which is defined as the sum of rewards it receives over time. Instead of being trained in the full 5v5 Dota 2 environment, the RL-based bot was placed in a Dota 2 challenge called “Kiting” with simplified rules and objectives. On a circular island, the bot was tasked with approaching and killing a human-controlled “Hero” without being killed itself. However, even achieving this one simple objective proved to be much more difficult than anticipated. A RL-based Drow Ranger learns to kite a hardcoded Earthshaker. 
“Humans were good at avoiding the machine, mainly because humans tended to act in ways that are different from what would happen in training. Also, the trajectory that humans would take was different from what the agent was trained to predict,” OpenAI Researcher Jonathan Raiman told Synced. To address this the team started adding randomization to the training. Instead of following a deterministic policy where the computer selects actions based on the current state, heroes were programmed to sometimes move slower or faster, or to encounter glitches that prevented them from being able to walk when they wanted. The scheme worked. Randomization ramped up the RL’s policy robustness and enabled the bot to regularly beat humans in Kiting. When the team applied the same technique to Dota 2 in 1v1 mode — where the player who achieves two kills or destroys an enemy tower wins — the RL bot quickly eclipsed the scripted bot’s performance. As of July 2017, the OpenAI bot was beating professional gamers in Dota 2’s 1v1 format. After being defeated by the OpenAI bot, former professional Dota 2 player William “Blitz” Lee predicted “this is going to change how people play 1v1.” Recalls Raiman, “That’s when the team started saying: ‘We might be able to do the full game one day if we’re able to put enough computers together and run the same algorithm.’” But before rising to the full 5v5 challenge, OpenAI was curious to find out just how good their 1v1 bot was. At the Dota 2 International 2017 last August in Seattle, OpenAI’s 1v1 AI bot took on one of the best solo gamers, Ukrainian Dota 2 pro Dendi, on the main stage. The bot won the first game in less than 10 minutes, and Dendi surrendered scarcely minutes after the second game began. Throughout the contest, Dendi repeated “This guy is scary.” Dendi vs OpenAI bot at The International 2017. OpenAI’s victory in the 1v1 match proved that reinforcement learning can work in a complex game environment that requires a long horizon strategy. After beating Dendi, Brockman proclaimed “the next step of the project is 5v5. So wait for next year’s The International.” New LSTM bot trains on 180 years of gameplay each day. OpenAI built its current bots’ brains with Long Short Term Memory (LSTM), a unit in a Recurrent Neural Network (RNN) proficient at remembering information for long periods of time and well-suited to classifying, processing and making predictions based on time series data. “The reason why these things are needed is very similar to how you would teach a child how to do something straightforward. You need to have to teach them what’s good and what’s bad. Also, then you have some memory of what they just did,” OpenAI Researcher Susan Zhang told Synced. Each bot’s underlying neural network includes a single-layer, 1024-unit LSTM that observes the game’s state and comes up with appropriate actions. The interactive demonstrations below shows how the AI bot makes decisions on actions. In the above game capture, the bot-controlled Hero Viper attacks the mid-lane, releasing Nethertoxin (a skill). To perform this action the bot needs four metrics: actions (including moving, attacking, releasing skills, using items), target unit or positions, specific position of the target mapped in (X, Y), and timing. OpenAI eventually discretized the entire game into 170,000 possible actions per Hero. (By comparison, the average number of actions in chess is 35; in Go, 250.) 
OpenAI’s new generation bots learned from self-play, starting with random parameters and not relying on human knowledge. Researchers used Proximal Policy Optimization, an advanced RL algorithm that requires less data than the general policy gradient method to achieve better results. To avoid “strategy collapse” — a RL failure that can result in an endless training loop — the bots trained by playing 80 percent of its games against itself and the other 20 percent against its previous versions. The bots self-played on 128,000 CPU cores and 256 GPUs, accumulating the equivalent of up to 180 years of game time in each day of training. Exponential decay factor is a critical parameter that determines whether the bot is looking at long-term rewards or short-term rewards. OpenAI also introduced a hyper-parameter called “Team Spirit” which ranges from 0 to 1, and assigns weights to determine how much each of OpenAI Five’s Heroes should care about its own reward function versus the average of the team’s overall reward functions. Over training, the team annealed the bots’ Team Spirit value from 0 to 1. “The AI only needs two days to crush us” A tradition developed at the OpenAI office: Every Monday night, the team would get together and play Dota 2. Eventually, they started to play against their own bots. Raiman still remembers the day this May when the bots first defeated a team of his colleagues in a relatively restricted 5v5 match that lasted 45 minutes: “I was so excited. I would say that’s when I thought we now had a fifty-fifty shot [against pros].” Raiman says the team discovered that just two days of training would now make the bots stronger than anyone in the office. “There’s a window of about twenty-four hours to forty-eight hours between the moment you start [training] from scratch, where it’s completely random; to when you can no longer play with it effectively and it can beat you consistently.” In a June, OpenAI invited a team of amateur players ranked between 4000–6000 to their office to play the bots. The bots won handily. Elated, the team announced their squad of Dota 2 bots now had a name: The OpenAI Five. Bill Gates tweeted after the match, “AI bots just beat humans at the video game Dota 2. That’s a big deal because their victory required teamwork and collaboration — a huge milestone in advancing artificial intelligence.” OpenAI now set its sights on the Dota 2 International in Vancouver, where it hoped to take down a pro human team. Is OpenAI cheating? While the mood was upbeat at OpenAI, many Dota otaku remained unconvinced by the OpenAI Five victories in June. They argued that the game rules were entirely different from a proper 5v5 game: only five Heroes available, no warding, no bottles, no Roshan, and visibility. It was cheating, at least from their perspective. OpenAI lifted some of their self-imposed rule restrictions: they put Roshan back in the game; added warding; and increased the number of heroes from 5 to 18. Warding, which enables vision in an unknown area, is a must-know skill in Dota 2. Human beginners can access an advanced ward guide online to learn the skills, but the bots cannot, and they tended to waste their wards in areas that were already visible. Roshan is the most powerful “neutral creep” threat in Dota 2. Fighting him is a tricky team decision that requires considering both timing and approach, as it can decide the future of the match. 
“The AI only needs two days to crush us” A tradition developed at the OpenAI office: every Monday night, the team would get together and play Dota 2. Eventually, they started to play against their own bots. Raiman still remembers the day this May when the bots first defeated a team of his colleagues in a relatively restricted 5v5 match that lasted 45 minutes: “I was so excited. I would say that’s when I thought we now had a fifty-fifty shot [against pros].” Raiman says the team discovered that just two days of training would now make the bots stronger than anyone in the office. “There’s a window of about twenty-four hours to forty-eight hours between the moment you start [training] from scratch, where it’s completely random, to when you can no longer play with it effectively and it can beat you consistently.” In June, OpenAI invited a team of amateur players ranked between 4000–6000 to their office to play the bots. The bots won handily. Elated, the team announced that their squad of Dota 2 bots now had a name: the OpenAI Five. Bill Gates tweeted after the match, “AI bots just beat humans at the video game Dota 2. That’s a big deal because their victory required teamwork and collaboration — a huge milestone in advancing artificial intelligence.” OpenAI now set its sights on the Dota 2 International in Vancouver, where it hoped to take down a pro human team. Is OpenAI cheating? While the mood was upbeat at OpenAI, many Dota otaku remained unconvinced by the OpenAI Five victories in June. They argued that the game rules were entirely different from a proper 5v5 game: only five Heroes available, no warding, no bottles, no Roshan, and modified visibility rules. It was cheating, at least from their perspective. OpenAI lifted some of its self-imposed rule restrictions: it put Roshan back in the game, added warding, and increased the number of available Heroes from 5 to 18. Warding, which grants vision of an unknown area, is a must-know skill in Dota 2. Human beginners can consult an advanced ward guide online to learn the skill, but the bots cannot, and they tended to waste their wards on areas that were already visible. Roshan is the most powerful “neutral creep” threat in Dota 2. Fighting him is a tricky team decision that requires considering both timing and approach, as it can decide the future of the match. Killing Roshan provides a significant reward, but Roshan is powerful, and fighting him in the early game can leave your own players in poor health or even kill them. The OpenAI Five attempt to kill Roshan in a game against paiN Gaming at The International 2018. The bots were initially reluctant to take down Roshan because they determined their risk of being killed was too high. OpenAI addressed this by randomizing Roshan’s health, to encourage the bots to kill the creep when it was weak. While the trick worked, the bots now seemed overtrained, wasting too much time monitoring Roshan’s health and attempting to kill him whenever they saw a chance.
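Domain randomization of this kind is simple to express. A hypothetical sketch follows; the article gives no health numbers, so the values below are invented.

import random

def reset_roshan_health(full_health=5500, weakest_fraction=0.1):
    """Sample Roshan's starting health between weak and full strength.

    Both numbers are illustrative placeholders. Exposing the bots to
    beatable Roshans lets them experience successful kills, so they can
    learn *when* an attempt is worthwhile instead of never trying.
    """
    return random.uniform(weakest_fraction * full_health, full_health)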
“We are running out of time” On August 5, just three weeks before the Vancouver showdown, OpenAI organized a benchmark test against a team of casters and ex-pros with MMR rankings in the 99.95th percentile of Dota 2 players worldwide. The match was hosted in a San Francisco bar in front of a live audience of 300 people. Synced interviewed dozens of attendees, many of whom were betting on the bots: “I emotionally support humans, but I don’t think they have a chance to win the game.” OpenAI benchmark test in San Francisco. Before the match, human gamer David Tan aka MoonMeander tweeted “Never lost to a bot before and this ain’t gonna be the first CruW.” He was so wrong. The bots beat Tan’s team in the first two games, with the humans lasting only 20 to 25 minutes before calling GG (good game) in surrender. It should have been a perfect victory for OpenAI, but to add some excitement to the third game OpenAI asked the audience to draft the OpenAI Five’s Heroes. As expected, the audience selected an adversarial lineup that exploited a weakness in the bot team. Before the match began, OpenAI Five predicted it had just a 2.9 percent chance of winning with this setup. The bots ultimately lost the game after 35 minutes and 47 seconds. “I think how badly game three went was also a moment for us to sort of step back and figure out what we could do to improve in the cases where we are doing poorly,” says Zhang. Meanwhile the Vancouver showdown was quickly approaching — where the OpenAI Five would face off against pros ranked at 7000–8000, much higher than the benchmark series opponents. The team tried to establish another benchmark milestone (changing five invulnerable couriers to one killable courier), but due to limited staff and time, this did not work out so well. The OpenAI bots began training with a single courier in mid-August, and the transition degraded performance. “You need to give the experiments time to run, give the bots time to train. We just don’t have that much time right now,” said Zhang. Vancouver: A loss for the OpenAI Five, a win for AI As they took the stage at Rogers Arena, many in the audience believed OpenAI had a strong possibility of victory. Almost all previous OpenAI-vs-human Dota 2 games had been one-sided: the 1v1 bot had beaten one of the world’s top players at The International 2017, and the OpenAI Five had won two out of three against ex-pros at the San Francisco benchmark test. But it was not to be. In the first game, Brazilian pro team paiN Gaming dispatched the bots in 52 minutes. In a win-or-go-home game on The International’s second day, five Chinese Dota 2 legends — three of whom had played on a championship team together — defeated the bots in 45 minutes. OpenAI Five at The International 2018 Brockman was gracious in defeat, tweeting, “Lots of extremely exciting plays by both teams. Has been a great showcase of what both humans and AIs can do.” The OpenAI Five’s performance gave the team plenty to build on: the bots lasted longer in both contests than in a usual game, had more kills than the human teams, and won most team-fights, a result attributable to their error-free micro-level control. But the bots also made plenty of bad moves: warding in the wrong positions, unreasonable item choices, and too few ganks (leaving your lane to kill an enemy Hero in another lane). OpenAI’s underlying research progress is impressive: using a relatively simple technique, researchers enabled complex coordination and long-horizon game play in an imperfect-information game environment, training a computer from scratch to the level of a Dota 2 master. These techniques can likely be applied in other AI domains such as robotics and general AI systems. The International 2018 may not have gone the way OpenAI hoped, but neither was it the team’s epitaph. The OpenAI Five are back in training and will compete in a full Dota 2 match with all Heroes either later this year or in 2019. They may have lost the latest battle, but the OpenAI Five’s war against humans is far from over. OpenAI Five team. Journalist: Tony Peng | Editor: Michael Sarazen Follow us on Twitter @Synced_Global for more AI updates! Subscribe to Synced Global AI Weekly to get insightful tech news, reviews and analysis! Click here!
OpenAI’s Long Pursuit of Dota 2 Mastery
1,529
openais-long-pursuit-of-dota-2-mastery-1d3a861472bd
2018-09-13
2018-09-13 19:09:37
https://medium.com/s/story/openais-long-pursuit-of-dota-2-mastery-1d3a861472bd
false
2,574
We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights.
null
SyncedGlobal
null
SyncedReview
global.sns@jiqizhixin.com
syncedreview
ARTIFICIAL INTELLIGENCE,MACHINE INTELLIGENCE,MACHINE LEARNING,ROBOTICS,SELF DRIVING CARS
Synced_Global
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Synced
AI Technology & Industry Review - www.syncedreview.com || www.jiqizhixin.com || Subscribe: http://goo.gl/Q4cP3B
960feca52112
Synced
8,138
15
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-09
2018-04-09 07:21:54
2018-04-09
2018-04-09 07:27:26
0
false
vi
2018-04-09
2018-04-09 07:27:26
5
1d3c32e696cc
4.30566
0
0
0
In this article I would like to present the best car vacuum cleaners available today, according to the opinions of many customers
3
Which car vacuum cleaner is the best today? In this article I would like to present the car vacuum cleaners currently rated best, according to the opinions of many customers. If you would like to buy a vacuum cleaner for your car, or a mini handheld one for your home, visit the link below to browse and choose ==> Link: http://thietbiruaxeoto.net/may-hut-bui-o-to 1/ Lifepro L368-VC car vacuum cleaner + Product specifications: Power: 70 W. Power source: 12 V. Filter: HEPA. Materials: plastic, electronics. Product dimensions (L x W x H, cm): 41 x 11 x 11. Weight (kg): 1. Color: black. Made in: Vietnam. + Features: With a power of 70 W, the machine has extremely strong suction, cleaning away the stubborn dirt that has clung to the car for a long time and creating a fresh, airy space inside. Running on a 12 V supply, it is very convenient: you can plug it into the car’s cigarette-lighter socket. Besides a strong motor, the machine also has a HEPA filter that cleans the dust in the car and removes unpleasant odors such as cigarette smoke, leather-seat and food smells, bringing ever-fresh air into the car, giving you a pleasant feeling when sitting inside and reducing stress while driving. Thanks to the LED light fitted on the machine, you can reach and vacuum every nook and cranny of the car easily, without the awkwardness of holding a flashlight in one hand and the vacuum in the other, making your cleaning faster and more effective than ever. See some other products: Price list of the best cheap snow-foam washers available today; Where is the best place to buy a cheap car-wash air compressor; Where to buy a hot-water steam car washer at a good price. 2/ Shimono SVC1016-C handheld car vacuum cleaner + Product specifications: Color: blue. Material: high-grade plastic. Remote control: no. Vacuum type: handheld. Filter: HEPA. Power: 100 W. Dimensions: 430 mm x 195 mm x 155 mm. Voltage: 12 V. Made in: Malaysia. + Features: With a maximum power of 100 W, hardly any rival in the same mid-range price bracket can outdo the Shimono SVC1016-C. With this versatile machine in hand, you can quickly clean surfaces and even the hardest-to-reach crevices. Despite its high power, the SVC1016-C saves a considerable amount of electricity. In addition, the Shimono SVC1016-C is very convenient, as it can be plugged directly into the car’s power socket to clean away dust as quickly as possible. Shimono applies modern Cyclone technology in the SVC1016-C handheld vacuum, which sucks up dirt thoroughly without the suction tube clogging, because centrifugal force pushes the dirt away from the main shaft and helps the shaft work better. In parallel, the SVC1016-C is also fitted with a standard HEPA filter that removes dust and bacteria to give users fresher air. With this smart feature, it is well worth the investment for cleaning your car. On top of that, the dust filter bag is made of stainless steel, which makes it more durable and increases the machine’s suction and air-filtering capability, improving its overall performance. 3/ Vacuum Cleaner 12V dedicated car vacuum + Product specifications: Power: 60 W. Large dust compartment. Multi-purpose floor and crevice nozzles. Strong suction. Compact and easy to handle. Low noise. + Features: Dust gathering in every corner of the car is a constant nuisance for drivers. This compact, multi-purpose mini car vacuum cleaner is a product that makes cleaning the car much less of a chore.
With its modern design and cyclone technology, the Vacuum Cleaner 12V, made specifically for cars, is a perfect choice for cleaning your vehicle. The product can vacuum dust at many angles and in every corner of the car, including the smallest gaps. Its price is low, in fact very low, at under 100,000 VND, which is this machine’s single most striking point. Compared with the hundreds of millions of VND a car is worth, there is really nothing more to discuss! 4/ Black&Decker PAV1205 car vacuum cleaner + Product specifications: Vacuum type: car vacuum cleaner. Dust container capacity: 350 ml. Power cord length: 5 m. Flow rate: 859 liters/minute. Battery voltage: 12 V. Brand: USA. Made in: China. + Features: The vacuum cleans extremely quickly and thoroughly, with a suction flow of 859 liters/minute, triple-action filtering, 11 W of power and advanced cyclonic action that lets the machine work powerfully, quickly sucking up stubborn dirt on the car floor, the seats and elsewhere without consuming much electricity. Light and compact, this car vacuum is easy to store as well as to carry in the car for use at any time. The Black & Decker PAV1205 is designed with a comfortable grip and a sensibly placed switch, so it can be operated simply with one hand. Its 5 m power cord is suitable for cleaning every spot in the car and wraps around the machine when not in use; the extended rubber hose makes it easy to clean under the seats; the rotating dust filter compartment helps maintain the machine’s suction; and the side dust door makes removing the dust filter easy.
Which car vacuum cleaner is the best today
0
may-hut-bui-cho-oto-loai-nao-tot-nhat-hien-nay-1d3c32e696cc
2018-04-09
2018-04-09 07:27:26
https://medium.com/s/story/may-hut-bui-cho-oto-loai-nao-tot-nhat-hien-nay-1d3c32e696cc
false
1,141
null
null
null
null
null
null
null
null
null
Vietnam
vietnam
Vietnam
10,584
Vinhdlp
Blog giải trí của vinhdlp. Cung cấp các loại cầu nâng 1 trụ rửa xe (https://goo.gl/Hk6Y4a) và máy rửa xe áp lực cao (https://goo.gl/yBHmdf)…
bbdda493b5ef
dophukho3
1
1
20,181,104
null
null
null
null
null
null
0
null
0
e8a755763d71
2018-01-30
2018-01-30 06:00:14
2018-01-30
2018-01-30 07:51:07
6
false
en
2018-01-30
2018-01-30 22:20:11
12
1d3c544f3fa5
2.089623
3
0
0
How to Build a Design System
5
Design Systems, UX of AI, Wireframing & Interaction Design Frameworks — Talking Interfaces — Issue #4 How to Build a Design System How to Build a Design System with a Small Team — freeCodeCamp — medium.freecodecamp.org Last night my small team and I headed out to do a little networking and learn about Design Systems. Being that it was the buzzword of 2017, we were eager to learn how we could create our own. We had… The UX Role that Everybody breaks The Most Important Rule in UX Design that Everyone Breaks — blog.prototypr.io There is one principle of organization that every human should adhere to, particularly people who design products. Day after day, I see companies break this rule, and it is 100% of the time to their… The UX of AI The UX of AI — Library — Google Design — design.google Using Google Clips to understand how a human-centered design process elevates artificial intelligence Why you shouldn’t skip your wireframing Why you shouldn’t skip your wireframing — Prototypr — blog.prototypr.io Recently I started seeing a trend where fellow designers skip wireframing/low-fidelity-mockups and jump straight into their UI work. While for “some” tasks this might be okay, I believe for majority… Interaction design frameworks: Do you need one? Designing chatbots with UX in mind — Prototypr — blog.prototypr.io From science fiction to reality, artificial intelligence (AI) has come a long way, however media and popular culture these days have painted an image in our minds that AI can do anything… ⚡Last Issue of Talking Interfaces In the last issue of Talking Interfaces, I presented UX Principles, Amazon Translate, A Beginners Guide to Blockchain and Everyman’s Ai. Visit my GetRevue Profile to read the last issue and don’t forget to subscribe! Subscribe to my weekly newsletter Talking Interfaces
Design Systems, UX of AI, Wireframing & Interaction Design Frameworks — Talking Interfaces — Issue…
67
design-systems-ux-of-ai-wireframing-interaction-design-frameworks-talking-interfaces-issue-1d3c544f3fa5
2018-05-11
2018-05-11 16:43:28
https://medium.com/s/story/design-systems-ux-of-ai-wireframing-interaction-design-frameworks-talking-interfaces-issue-1d3c544f3fa5
false
302
It’s all about interfaces!
null
talkinginterfaces
null
Talking Interfaces
mail@aleksbasara.co
talking-interfaces
USER EXPERIENCE,ARTIFICIAL INTELLIGENCE,CONVERSATIONAL INTERFACES,USER INTERFACE,VOICE ASSISTANT
null
Design Systems
design-systems
Design Systems
1,256
Aleksandar Basara
Head of Digital Operations ramp.space My personal thoughts and opinions on #bots, #conversationalinterfaces, design, #ux and all the other things I love. 😍
14cc240d687c
aleksbasara
347
337
20,181,104
null
null
null
null
null
null
0
null
0
62893bdc379b
2018-05-22
2018-05-22 13:56:25
2018-06-23
2018-06-23 15:46:40
5
false
en
2018-06-26
2018-06-26 14:56:14
3
1d3d14b88ea3
7.338994
4
0
0
Electrical load panels, a.k.a. the fuse box or circuit breaker panel, have not changed very much (at all) for nearly 60 years — primarily…
5
Putting AI + Blockchain into the Electrical Fuse Box Electrical load panels, a.k.a. the fuse box or circuit breaker panel, have not changed very much (at all) for nearly 60 years — primarily because they haven’t really needed to. The electrical panel’s main purpose is to distribute utility power through circuits and ultimately to energy-demanding devices. Over 95% of homes and businesses use an electrical panel; it is simply a must-have piece of equipment for distributing electrical energy. Typical Utility Power Distribution (credit: DIY Network) Most people don’t realize, or even think about, how electricity is generated, transferred through power lines, connected to the home or building, and made to power the devices they need. There is an expectation that if they flip a switch or press the ON button, the “thing” should just turn on. However, when we lose power from an outage in the area, or there’s a problem in the electrical grid network, you better believe people start to panic from the blackout. Even the U.S. Department of Energy accepts that the current network of transmission lines, substations, transformers and other components making up the electrical grid we all depend on is a patchwork job, its capacity stretched again and again alongside population growth and rising energy demands, and that we must look to new technologies to help create a smarter grid — a demand from which, of course, HEMS platforms emerged. Home energy management systems (HEMS) Imagine for a moment that you don’t have to remember to turn lights on or off, manage your AC unit, or really manage any of the cumbersome devices that suck energy from your home. Not really hard to imagine, right? Consumers can now purchase smart home HEMS products that do just that. HEMS are basically platforms that combine hardware and software systems to enable users to monitor and manage energy usage and production in order to improve building performance. If you were to go to CES as we did earlier this year, you would see smart home apps and interfaces for just about everything — many dealing with automating your energy needs. You can tell that there is indeed a race going on to be the platform with a control panel on your wall or the go-to app in your pocket. Typical HEMS User Interface (credit: Tech Advisor) As much as it was a pleasure to meet the innovators and startups when going from booth to booth, we soon realized that the industry is doing exactly what it has done throughout history: fulfilling demand by placing patchwork on top of patchwork on top of patchwork — just like the traditional grid system — only now it’s digital patchwork. And it’s already getting messy. What does that mean? The concern we have with these new innovations is mainly the compliance and standardization of the code these technologies rely on. Without diving into the tech terms in this article, think of it like this: what if your energy provider (e.g. PG&E, Xcel, etc.) told you that the stove you just picked out and brought home is not compatible with the energy standard in your home? Furthermore, that unless you chose a less desired {insert brand A} over {insert brand B}, you would be restricted from self-empowering tools to manage energy costs? This scenario of course would not go over well with consumers, but it is exactly what we see in this new movement toward consumer-facing HEMS products for monitoring and managing one’s energy. It’s not universal for all, especially utility-facing. This is where we decided to do something about it.
Who better to see how an industry repeats itself than a company that has been part of that history for nearly a quarter of a century? Simmitri, Inc., a 23-year-old company that evolved from a family-owned roofing and energy-efficiency business into one of the first solar roofing providers in Northern California, and is now joining the blockchain revolution, is looking to disrupt the industry by taking a new approach that NO ONE else is taking — which is taking a new look not at a more attractive bell or whistle for smart home HEMS devices, but at the boring old electrical panel. When we initially discussed how Simmitri could bring energy efficiency to our clients, we knew the ONLY way we envisioned that would be through our experience with electrical panels. — Jonathan Garcia (COO) | Simmitri With the meteoric rise of artificial intelligence and blockchain in 2017, Simmitri started to dive in. Simmitri brought on the brightest in the AI space — leaders of the AI division at Google, scholars from Ivy League schools, electrical engineers, renewable energy inventors and renowned blockchain developers — all to collectively apply their data science backgrounds to how AI + blockchain would function within our tokenized economy. After the concept phase was well underway and fleshed out, Simmitri hosted a mixer for utility providers to give feedback on what the macrogrid needed from microgrids in order to perform better. Undoubtedly, the path we were on with the electrical panel made a whole lot of sense from a utility point of view. What emerged from all of our R&D is a newly revamped, modern electrical fuse box. While keeping a similar shape and maintaining the basic functions of an electrical panel, we decided that it also needed a cosmetic facelift as well as an internal overhaul. Basically, it’s now a computer, hard-lined directly into the electrical network of the building. What makes the SimBox really stand out (apart from its design) is that it uses AI as a foundation. We’re developing more than just machine learning: an advanced, human-like AI protocol. We call it Simi, an intelligent assistant whose main purpose is managing your energy needs and efficiently distributing power. It is not meant to record your favorite TV show, play Mozart or entertain you — but to learn the family’s behaviors inside the home and adapt programmatically to improve the building’s performance. SimBox Features Adding Artificial Intelligence into the SimBox Simi will begin her life with behavioral modification through machine learning, proven to be great for regression (prediction) and classification — similar to a child who is born and raised in a certain environment. The ML algorithms are the building blocks for the AI, which includes multiple machine learning components such as speech recognition (a Recurrent Neural Network), image recognition (a Convolutional Neural Network) and knowledge representation. The really interesting part of Simi is having curiosity and human context allowing “it” to be a “her.” — David Cavaros | AI Developer Simi uses the RNN to process text and to identify the intention (e.g. an action to take, a request for information, etc.). That task categorizes the intention and fills in the concepts necessary to complete the energy-demand request. If the request is ambiguous or contradictory, or more information is needed, she will simply ask for the information required to complete the task.
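The article doesn’t publish Simi’s model, but the intent step it describes can be sketched in a few lines of Keras. Every name, size and intent label below is hypothetical.

import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 10_000   # hypothetical vocabulary of household requests
MAX_LEN = 20          # tokens per request
INTENTS = ["power_down", "power_up", "status_query", "schedule"]  # invented labels

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 64, input_length=MAX_LEN),  # word embeddings, as discussed above
    layers.SimpleRNN(64),                                    # recurrent pass over the request
    layers.Dense(len(INTENTS), activation="softmax"),        # one probability per intent
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(tokenized_requests, intent_ids, epochs=5)  # given labeled example requests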
Supporting images for more context will be useful; this can be done by training a Convolutional Neural Network (CNN) to map images to the same concepts as the RNN. Concepts can be trained to match word embeddings (having both reside in the same dimensional space). This makes the knowledge concepts map directly to words, so that Simi can fluently map words to concepts, and translate images into concepts as well. Simi also uses k-means clustering to find the clusters (categories) of the knowledge concepts, which gives her a human-readable way of “understanding” the AI’s memory. As Simi consumes data points from the network, she unpacks what she believes (references) will be the most energy-efficient processing protocol for the user. Adding Blockchain and SIM Tokens The distributed platform will utilize smart contracts to increase transparency, network effects and automation. As the SimBox’s backend and operating system stores information about user behavior and household electricity usage, it will also analyze multiple layers of the data consumed and produced to initiate various distribution tasks based on those metrics. Remember that the SimBox faces both the macro network and the microgrid. When the smart box receives communication from the SimCloud Platform regarding macro demands and requests, it will send signals through the building’s circuits to determine which loads meet the criteria for conservation or production. This is what we call efficient distribution. Diagram of energy moving through the Microgrid and executing smart contracts. Let’s give an example… Say you, as a homeowner, have an account with a utility provider where you pay each month to keep using the grid’s services. During peak times (usually when people return from work), the utility power lines become stressed, and the utility sends your SimBox a request to conserve energy in your home. The SimBox will then communicate with the electrical nodes throughout the distribution channels and determine which devices can be powered down or minimized, or where power can be drawn from. By doing this automatically, you would be participating in a Distributed Energy Resource (DER) initiative and would be eligible to earn SIM Tokens. Smart contracts on the blockchain will allow SIM Tokens to be distributed among Platform participants as follows (a toy numerical sketch follows this list): The smart contract stores the registry of all wallets used by SimBox owners; once data parameters are met, tokens are transferred to the smart contract for distribution; Each wallet tracks in real time how many tokens it is eligible to receive; Simmitri can initiate token distribution among all participants within the ecosystem, based on certain rules (e.g. percentage of total energy saved, energy pool gamification, etc.); Rules (e.g. kWh of electricity saved, devices turned off during peak times, etc.) can be defined within the smart contract, along with the SimBox wallets participating in the competition; Reward tokens or stakes are preloaded to the smart contract; Every day, metrics on data usage relevant to the competition are submitted to the smart contract; All the participants can track results in the Platform.
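As promised above, here is a toy Python simulation of one such rule: pro-rata distribution by energy saved. The rule choice, names and numbers are all invented for illustration; a production version would live in the smart contract itself.

def distribute_tokens(wallets, reward_pool):
    """Toy simulation of a distribution rule (names and rule invented).

    wallets: {wallet_id: kWh saved during peak hours}
    Tokens are split pro-rata to energy saved, echoing the
    'percentage of total energy saved' rule above.
    """
    total_saved = sum(wallets.values())
    if total_saved == 0:
        return {w: 0.0 for w in wallets}
    return {w: reward_pool * saved / total_saved for w, saved in wallets.items()}

print(distribute_tokens({"alice": 3.0, "bob": 1.0}, reward_pool=100))
# {'alice': 75.0, 'bob': 25.0}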
In summary, by adding AI into the blockchain we achieve several benefits across the entire network. It is truly a win-win-win scenario. Our participants are happy because we generously compensate users for automatic energy efficiency, and providers are excited because the chances of blackouts during the “duck curve” are largely minimized.
Putting AI + Blockchain into the Electrical Fuse Box
12
putting-ai-blockchain-into-the-electrical-fuse-box-1d3d14b88ea3
2018-06-26
2018-06-26 18:43:34
https://medium.com/s/story/putting-ai-blockchain-into-the-electrical-fuse-box-1d3d14b88ea3
false
1,724
Smart Energy Management
null
simmitrisolutions
null
Simmitri
wward@simmitri.com
simmitri
null
simmitritoken
Blockchain
blockchain
Blockchain
265,164
Wolfgang Ward
Simmitri CMO
67544ee7ade8
wolfgangward
46
16
20,181,104
null
null
null
null
null
null
0
null
0
4be863bcec11
2018-01-02
2018-01-02 20:58:27
2017-12-20
2017-12-20 17:00:00
1
false
en
2018-03-12
2018-03-12 22:15:16
5
1d3dd7833384
1.279245
0
0
0
All the helpful people you actually know and care about
4
Welcome to your Crowd! All the helpful people you actually know and care about Think for a second — how many people do you know well enough that you would help them if they asked? It’s probably quite a few! Word on the street (if you’re an anthropologist) is that humans can maintain real, meaningful relationships with around 150 other people. **This was the rule of thumb when we lived in tribes and villages, and it still holds today. But here’s where it gets good. Assuming that every one of those 150 people is close to 100 people you don’t know, that’s 15,000 people in your extended network. And they’re highly likely to help you too, based on your personal connection. That’s pretty powerful stuff. Your friends, colleagues, and the thousands of people they know? They’re your Crowd. Your Crowd is everybody who would help you out in a jam. The people in your Crowd can offer advice, connect you with a colleague, or help you with a problem. They’re the folks you can count on. Your Crowd is about trusted connections, and the new collaborations they bring about. Those thousands of yet-to-be-discovered helpful people? They’re the key to your next career move. You’ll find them on BrightCrowd, your real professional network. It’s where people share expertise, and move forward together. Those 15,000 folks have skills you wouldn’t believe, they live all over the world, work in almost every industry, and can answer just about any question. The first step to meeting them? Jump into your Crowd! **: For more nerdy social science, check out Robin Dunbar’s book, Grooming, Gossip, and the Evolution of Language. Originally published at blog.brightcrowd.com on December 20, 2017.
Welcome to your Crowd!
0
welcome-to-your-crowd-1d3dd7833384
2018-03-12
2018-03-12 22:15:17
https://medium.com/s/story/welcome-to-your-crowd-1d3dd7833384
false
286
The actually helpful professional network
null
brightcrowd
null
BrightCrowd
contact@brightcrowd.com
brightcrowd
PROFESSIONAL NETWORKING,SOCIAL NETWORKING,CAREER ADVICE
brightcrowd
Networking
networking
Networking
12,894
T.J. Duane
BrightCrowd Co-Founder and Lifelong Collaborator
e0f0030ef27f
tjduane1
143
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-16
2018-03-16 17:21:08
2018-03-16
2018-03-16 17:24:33
1
false
en
2018-03-16
2018-03-16 17:24:33
8
1d3edbfbd934
3.464151
0
0
0
People and patterns and predictions, oh my.
5
Why Should You Care About Deep Learning? People and patterns and predictions, oh my. For marketers, a simple way to think about deep learning is that it’s ultimately about presenting customers with exactly what they want, whether or not they know yet that they want it. That could mean an experience, a bit of information, an ad, or a suggestion for a specific product. But what is deep learning? Deep learning is a subset of artificial intelligence (AI) derived from the science of neural networks. And neural networks are simply an attempt to mimic the way scientists think our own brains process and make sense of the world. Basically, a neural network self-optimizes its performance on a desired task based on exposure to structured and unstructured data. I spy with my AI… For example, let’s imagine we’re creating a deep-learning-based image recognition system designed to spot a product — a specifically branded can of soda — in photos posted on social media, because we’d like to give a shout-out, through our own social accounts, to the poster for their brand loyalty. The first thing we would need to do is train the deep learning neural network using a number of verified positive and negative sources — e.g., photos containing said soda can, pre-tagged as hits, as well as photos with no can, correspondingly tagged as non-hits. Next, the system would be fed untagged positive and negative photos. The digital patterns in those photos would be compared to whatever digital patterns emerged from reviewing the initial guided positive and negative inputs. If the system recognizes what it has determined is the pattern for “branded can,” it marks that photo as a positive hit. At this stage, the system will require human feedback to determine whether that positive hit was, in fact, positive, and whether other photos were falsely tagged as hits or non-hits. Each iteration, every data point, refines the neural network to better identify its proper target. And with data sets that span the internet, you can imagine how refined those algorithms can get.
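In code, the hit/non-hit classifier described above might start out as something like this minimal Keras sketch. Every size and name is illustrative, and the data pipeline is left as a comment.

import tensorflow as tf
from tensorflow.keras import layers, models

# Tiny binary CNN: photo in, probability of "branded can" out.
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability the photo is a "hit"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# train_ds / val_ds would be datasets of (photo, 1-if-can-else-0) pairs,
# e.g. built with tf.keras.utils.image_dataset_from_directory(...)
# model.fit(train_ds, validation_data=val_ds, epochs=10)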
But here’s the interesting part. Humans generally can’t read or understand those algorithms. We don’t know what criteria the network is using, per se. We only know it’s getting better (or worse) at identifying the branded can. And there are plenty of times the technology fails completely, not to mention offensively. Sidebar: How this “portrait” was made: generate random polygons; feed them into a deep learning neural net face detector; mutate to increase recognition confidence until the neural net is reasonably sure it is “seeing” a face. A synthetic portrait “recognized” among random overlaid polygons by deep learning AI at http://iobound.com/pareidoloop/ Marketers love patterns, too. That ability to recognize patterns is an obvious benefit to marketers. What is segmentation besides the recognition of patterns? Demographic patterns. Psychographic patterns. Behavioral patterns. Spending patterns. But where we all used to divine these patterns in a more general and collective fashion across the aggregate population, powerful deep learning AI can now make continuous, deft pattern-related decisions on an individual-by-individual basis, thousands of times a second. It can, and it does. Let’s take a look at how. Real-time media targeting and buying. Gone are the days when media purchasing was planned months in advance. Programmatic media and real-time bidding platforms are using deep learning AI to assess, in real time, the level of intent or interest a user may have in a product, service or experience. Again, don’t think of this as testing against a static target profile. The system is learning in real time as well, refining its model and iterating — ultimately looking to optimize the level of desired behavior generated (clicks, purchases, etc.) per media dollar spent. All the while, the system is developing both a detailed predictive model of intent and a more accurate program for moving those customers from intent to conversion. This also allows marketers to scale campaigns more precisely and improves their ability to track media ROI. Truly personalized experiences. All UX designers strive to create as intuitive an experience as possible — minimizing the time and effort required of a user to connect with whatever it is they desire. Deep learning systems driving those interactions can process the data surrounding users’ behaviors. That data, obviously, can be used to provide suggested actions correlated to a user’s past actions. That could range from something as simple as a “you might also like” shopping moment to something as complicated as proactively making dinner reservations for a customer because you know from the location of their mobile device or their credit card activity that they are suddenly in an unfamiliar city, and that they enjoy experimenting with more exotic foods while traveling. Deep study still recommended. As we’ve mentioned before, here, here and here, deep learning AI will likely become increasingly pervasive in marketing and advertising. If you want a far more detailed and thorough primer on the topic, Stanford University has placed online an amazing guide to deep learning. Originally featured on Magnani.com Written by Justin Daab, President @ Magnani
Why Should You Care About Deep Learning?
0
why-should-you-care-about-deep-learning-1d3edbfbd934
2018-03-16
2018-03-16 17:24:34
https://medium.com/s/story/why-should-you-care-about-deep-learning-1d3edbfbd934
false
865
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Justin Daab
President at Magnani, an experience design and strategy firm. Insights to inbox — sign up here: https://www.magnani.com/blog
7e2a6340796c
justindaab
15
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-24
2018-05-24 19:09:22
2018-05-24
2018-05-24 19:16:54
0
false
en
2018-05-24
2018-05-24 19:16:54
0
1d3f49ad77a6
1.85283
0
0
0
I am now into lesson 9 and it’s been very nice to have a basic understanding of some of the key statistical concepts. In this post I want to…
3
Basic statistical and research concepts I am now into lesson 9 and it’s been very nice to have a basic understanding of some of the key statistical concepts. In this post I want to attempt an explanation of some of the concepts that seemed to befuddle more than one participant, such as: Constructs Operational definitions Central tendency Lurking variables I am going to do my best to explain those concepts clearly. Construct and operational definitions: The distinction between a construct and an operational definition seems very tricky and complicated at first, then simple, then complicated again. That’s understandable because of the context in which we use them. In general, though, a construct is basically an idea that is subjective in nature and not based on or accompanied by empirical evidence. When a construct is accompanied by, or combined with, a unit of measurement, we have an operational definition. When the measure is not included, the idea is open to interpretation, whereas when it is included, units of measurement such as gallons, USD or euros give us something to work with. Hence the word operational, which implies that something can be used. The difficulty is that we are touching on philosophy and linguistics. Ultimately, any word, theory or concept by itself is a construct, since they are all socially constructed and given arbitrary meaning. Take something like mass, which is used in mathematics and science: conscious that scientific language aims to reduce ambiguity to a minimum, the scientific community’s understanding of such terms is quite clear and unchanging for the most part. However, not all human beings are scientists, which makes these terms more or less precise depending on the context in which they are used. Here are examples of constructs: - Mass, weight, plane, age, intelligence, level of emotional intelligence Lurking variables: This one is not that hard. In the context of a controlled experiment, the researcher wants to establish a causal link between two variables, for example whether variable A is causing variable B to increase. In proceeding with the experiment, the researcher has to control the situation so as not to let any variable other than variable A influence variable B. It is possible, however, that a third variable, which we will call variable C, influences variable B in ways we didn’t expect. This is called a lurking variable because it is not necessarily obvious and not seen at first. Central tendency To be able to quickly summarise all our data, we often want to reduce it to a single number or a small range of numbers. Measuring the central tendency enables us to do just that by looking at the mode, the median and the average. They each have their strengths and weaknesses. For example, the median is often the most _robust_ way of measuring central tendency when looking at very skewed distributions. Happy learning!
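As a quick postscript, here is a small Python check (sample values invented) of how those three measures behave on a right-skewed sample:

import statistics

data = [1, 2, 2, 3, 4, 5, 120]  # small, right-skewed sample (illustrative)

print(statistics.mean(data))    # 19.57... pulled upward by the outlier 120
print(statistics.median(data))  # 3 -> robust to the skew
print(statistics.mode(data))    # 2 -> most frequent value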
Basic statistical and research concepts
0
basic-statistical-and-research-concepts-1d3f49ad77a6
2018-05-24
2018-05-24 19:16:55
https://medium.com/s/story/basic-statistical-and-research-concepts-1d3f49ad77a6
false
491
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Junior Seri
Data Science and Machine Learning Addict | Python | SQL | Passionate about social change.
d724221edae7
searchingeye
4
23
20,181,104
null
null
null
null
null
null
0
# Build an image that can do training in SageMaker
# This image contains CUDA 9.0 (CUDA libs are backward-compatible), cudnn version 7 and 64bit ubuntu
FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04
# FROM ubuntu:16.04 - if we do not want cuda support

MAINTAINER Amazon AI <sage-learner@amazon.com>

RUN apt-get -y update && apt-get install -y --no-install-recommends \
    wget \
    python \
    nginx \
    ca-certificates \
    python-dev \
    python-tk \
    gcc \
    g++ \
    libopenblas-dev \
    && rm -rf /var/lib/apt/lists/*

# Here we get all python packages.
RUN wget https://bootstrap.pypa.io/get-pip.py && python get-pip.py && \
    pip install numpy scikit-learn pandas flask gevent gunicorn matplotlib tensorflow-gpu keras Pillow six

# Set some environment variables. PYTHONUNBUFFERED keeps Python from buffering our standard
# output stream, which means that logs can be delivered to the user quickly. PYTHONDONTWRITEBYTECODE
# keeps Python from writing the .pyc files which are unnecessary in this case. We also update
# PATH so that the train and serve programs are found when the container is invoked.
ENV PYTHONUNBUFFERED=TRUE
ENV PYTHONDONTWRITEBYTECODE=TRUE
ENV PATH="/opt/program:${PATH}"

# Set up the program in the image
COPY keras-nn /opt/program
WORKDIR /opt/program

/opt/ml
├── input
│   ├── config
│   │   ├── hyperparameters.json
│   │   └── resourceConfig.json
│   └── data
│       └── <channel_name>
│           └── <input data>
├── model
│   └── <model files>
└── output
    └── failure

# Excerpt from the train script; assumes `import os` at the top of the file.
prefix = '/opt/ml/'
output_path = os.path.join(prefix, 'output')
model_path = os.path.join(prefix, 'model')

# Excerpt; assumes `import sys` and `import traceback`.
except Exception as e:
    # Write out an error file. This will be returned as the failureReason in the
    # DescribeTrainingJob result.
    trc = traceback.format_exc()
    with open(os.path.join(output_path, 'failure'), 'w') as s:
        s.write('Exception during training: ' + str(e) + '\n' + trc)
    print('Exception during training: ' + str(e) + '\n' + trc, file=sys.stderr)
    # A non-zero exit code causes the training job to be marked as Failed.
    sys.exit(255)

# Excerpt; assumes `from PIL import Image`.
if epoch % save_interval == 0:
    images_dir = os.path.join(model_path, "images")
    if not os.path.exists(images_dir):
        os.mkdir(images_dir)
    Image.fromarray(img).save(os.path.join(images_dir, 'plot_epoch_{0:03d}_generated.png'.format(load_epoch)))
    generator.save(os.path.join(model_path, 'generator_{0:03d}.h5'.format(load_epoch)))
    discriminator.save(os.path.join(model_path, 'discriminator.h5'))  # keep only the latest discriminator

tree = sage.estimator.Estimator(image, role, 1, 'ml.p2.xlarge',
                                output_path="s3://{}/output_keras_gpu".format(sess.default_bucket()),
                                sagemaker_session=sess)

# Excerpt; assumes `import numpy as np`.
from keras.models import load_model

generator = load_model('generator.h5')
latent_size = 110
generate_class = 2  # Choose an image class to generate, here 2 - birds
noise = np.random.normal(0, 0.5, (100, latent_size))
sampled_labels = np.array([[generate_class] * 10 for i in range(10)]).reshape(-1, 1)
generated_images = generator.predict([noise, sampled_labels]).transpose(0, 2, 3, 1)
generated_images = np.asarray((generated_images * 127.5 + 127.5).astype(np.uint8))
14
e46cc1c01a7d
2018-07-27
2018-07-27 13:17:07
2018-08-15
2018-08-15 14:51:21
12
false
en
2018-08-15
2018-08-15 19:19:58
15
1d3f71ee029f
10.689623
10
0
0
Generating pictures with neural network on AWS Sagemaker with GPU acceleration
5
Generating pictures with neural network on AWS Sagemaker with GPU acceleration Intro If you have to deal with machine learning in your everyday work life (like we do at Unit8), there comes a moment when you need to run some intensive computations to train your model. If you are lucky and have a desktop with a powerful GPU, problem solved — you can happily run the training locally. If you are less lucky and don’t have a GPU at hand, you need to run your computations somewhere else. Perhaps the cloud? I stumbled onto this problem when I tried to run a not-so-optimised Keras-written ML training algorithm. This article describes the approach I took with AWS to make my algorithm run with GPU-powered computations. So far, the typical workflow was to first start a VM with a provisioned GPU at a cloud provider of your choice, then start to work on model development and training. In the case of AWS, your workflow could look like this: Black brace — what you pay for, light blue arrow — what you get As you can see, your VM is acquiring and holding onto the GPU even if you are not actually using it — during all those preparation and evaluation steps the GPU is simply sitting there idle and waiting for tasks. This of course comes at a price — all the time you spend fixing your bugs or working on the code itself you are being charged, which can incur a lot of costs ($1 to even $20+ per hour!). Not to mention the case when you forget to switch it off. Yikes. This is why AWS came up with its own framework to handle this and many other problems occurring during your everyday work with ML-related tasks — AWS Sagemaker. Sagemaker — what is it? Sagemaker is a set of tools offered by Amazon to handle ML problems, like: data cleaning model training with/without GPU support hyperparameter tuning model serving Most of the problems are tackled via a provided and pre-configured Jupyter notebook with some additional Sagemaker python libs. Those libs are meant to help with AWS-related tasks. We can for example run training on a chosen GPU-backed EC2 instance and have all the logs and models stored on S3 and CloudWatch. We even have some AWS-provided implementations of the most popular algorithms (e.g. K-means). This blog post focuses on the training procedure. With AWS Sagemaker it would look like this. Prepare your training job and only then spin up a separate VM with GPU In the following article, I describe my journey to take a slow and extremely long-running training job and speed up the process using an AWS-provided GPU. Problem — backstory I was experimenting a bit with GANs (Generative Adversarial Networks). If you don’t know what they are, you can find a very cool introduction here. After playing a bit with training/generating simple things like the MNIST dataset, I wanted to try out something nicer and stumbled onto this implementation. I started the training procedure on my Mac and noticed: Epoch 1 of 1 0/1000 […………………………] — ETA: 0s 1/1000 […………………………] — ETA: 1:33:12 An ETA of roughly 1.5 hours for one iteration! This would mean that the full training could last 1000 * 1.5 = 1500 hours = 62 days! Okay, the Keras training code could probably be optimised if written as one model in Tensorflow instead of two (discriminator + generator). However, I found this example to be a good excuse to try out AWS-provided GPUs. So here we are.
Problem — TL;DR We want to reuse the ACGAN network implementation in Keras provided here At some point the network can be reused for purposes other than CIFAR10 The network should have the possibility of being trained further (no need to start from scratch) Locally on a MacOS CPU it is terribly slow — 1 iteration takes ~1.5 hours, and to have reasonable results we need ca. 200 We decide to utilize AWS-provided GPUs and use the dedicated AWS Sagemaker tool to get on with model training. To add more problems… The Sagemaker workflow I described in the previous section works only in some particular cases: You use Sagemaker builtin algorithms You use one of the supported frameworks (raw Tensorflow, Pyspark etc.; for the full list consult the FAQ) You can still use your custom training algorithms, but for that you have to provide your own Dockerfile that includes all the necessary libs and dependencies. AWS provides you with an example notebook for scikit. The notebook can be used as a good base to start from, but unfortunately it is not designed to run with GPU support and some amendments have to be made. Preparing the docker image We are interested in using Keras with a Tensorflow backend utilizing the GPU, therefore we have limited options. Looking at the Tensorflow documentation, we see the following requirements: 64-bit Linux Python 2.7 CUDA 7.5 (CUDA 8.0 required for Pascal GPUs) cuDNN v5.1 (cuDNN v6 if on TF v1.3) It is not so easy to use GPU acceleration when we are inside a Docker container. For that, we need to run an enhanced daemon called nvidia-docker and pick one of the predefined images from the nvidia-docker project. Luckily, the daemon is already preinstalled on all Sagemaker-supported GPU instances so no action needs to be taken here. To fulfil the Tensorflow requirements, I decided to go with cuda:9.0-cudnn7-runtime-ubuntu16.04 . We can pick the runtime version instead of the devel one to reduce image size. Full Docker image: Important remarks: Image: nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04 To have GPU support, remember about installing tensorflow-gpu I skipped the symbolic-linking from the original notebook, because it was causing some issues with binary incompatibilities of packages DO NOT mount anything in the /opt/ml directory, it is reserved for Sagemaker and will be overwritten The Docker image has to be pushed to ECR; Sagemaker doesn’t like to cooperate with image registries from different providers Docker starts training by running the command train, so make sure the train script is available in the WORKDIR and has execute permissions Docker image — directory mapping After we submit our job for training, we have to take into account some special rules imposed by Sagemaker. Since it has to somehow provide I/O for our training algorithms, it mounts the following paths (following the description from the original notebook): /opt/ml/input — we use it to provide input data/tuning parameters to our training jobs. AWS can also take care of bringing the data in from e.g. S3 and mounting it here. Since the CIFAR dataset is built into the Keras library and we do not really plan to do any tuning so far, we can just leave this whole dir blank. /opt/ml/model — this is where we store the resulting model of our training, which is then packed into a tar.gz and shipped to S3. /opt/ml/output/failure — this is where we can write out stacktraces and error messages that will be available later in case of errors. All of the above paths are available as normal block storage and can be written to by any standard IO library.
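Pulling those conventions together, a minimal train entry point might look like the sketch below. Only the paths and the failure/exit-code conventions are fixed by Sagemaker; the training logic itself is a placeholder.

from __future__ import print_function
import os
import sys
import traceback

prefix = '/opt/ml/'
model_path = os.path.join(prefix, 'model')    # artifacts shipped to S3 as model.tar.gz
output_path = os.path.join(prefix, 'output')  # 'failure' file is surfaced on error

def train():
    # ... build and fit the network here, then persist it, e.g.:
    # model.save(os.path.join(model_path, 'generator.h5'))
    pass

if __name__ == '__main__':
    try:
        train()
        sys.exit(0)
    except Exception as e:
        # Written to /opt/ml/output/failure and returned as the failureReason.
        with open(os.path.join(output_path, 'failure'), 'w') as f:
            f.write('Exception during training: ' + str(e) + '\n' + traceback.format_exc())
        sys.exit(255)  # non-zero exit marks the training job as Failed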
Train algorithm — modifications For the following chapter, the whole file can be reviewed here. In this post I only highlight the key components. We need to slightly change the original algorithm by applying the following modifications: Rename the entry file to train (or add a train script that calls your actual code) Use the mounted directories to store your output: Note: here we abuse the “model” directory a bit to also store the images generated every n iterations. They will also be shipped in the resulting model.tar.gz file. We add an instruction to write the stacktrace to the failure file in case of an Exception and (important) return a non-zero exit code We add a save_interval variable so as not to store the output on every iteration, reducing the size of the resulting model.tar.gz The discriminator after saving was quite big (>150 MB), which is why I decided to write only the latest discriminator model and every 60th generator model with images: Sagemaker upload After the initial preparation we can start AWS SageMaker and create a new notebook instance. We then upload our files, keeping the structure like in this directory. Submitting the job Following the keras_on_gpu.ipynb notebook, we build the docker image as described and push it to the ECR registry. We then instruct SageMaker to run our training job on a new VM and store the output in S3. Interesting arguments: image is the url to the ECR registry pointing to the pushed Docker image. 1 — the number of training instances. At the moment we just need 1 instance with a GPU. ‘ml.p2.xlarge’ — the AWS ml instance type to use, the cheapest one with a GPU. Comes shipped with basic libs and support for running the nvidia-docker daemon. Warning: pick your instance type carefully, some of them can cost over $20 per hour! output_path — location to store the output of the operation (you can find your model.tar.gz here) After submitting the job, we can simply close the notebook; there is no need to watch the output and be charged for an idling non-GPU instance. Job progress can be tracked in the Sagemaker UI (Tab Training/Training jobs). Accessing status and logs of a job After accessing the link, we can see basic information about our job and the instance we use, and access CloudWatch metrics as well as logs of whatever is printed to the output inside the container. I’ve decided to run 160 iterations of my model (okay, 161) and see how far the network gets. Looking at the CloudWatch logs, we can see that the time to train 1 epoch has decreased from 1.5 hours to roughly 5 minutes. It means that instead of waiting 4 days I can get my model up and running in 16 hours, which improves our time by a factor of 6 — that’s some speedup — and I used the cheapest GPU available on AWS! Obtaining the model When we notice the Completed status in our training jobs, we can happily access whatever was produced, on S3, in the form of a zipped model.tar.gz. As you can see, in my case it looks like this: Content of model.tar.gz including generator and discriminator models and some sample pictures After opening the zip we can see that the network evolved all the way from iteration 1: After iteration 1 — the network already tries to generate some sky/ground To the last iteration: Last iteration — not really photorealistic, but we can clearly spot which row shows airplanes and which horses. Cool! So the network did learn something and we want to somehow use it. We now have two options: Use SageMaker to serve the model — similar to train, we can also provide a serve function that will take care of using AWS infrastructure to expose the model to the outside world.
This is however not in the scope of this entry, and it’s quite well documented in the original scikit notebook. Download the model and run it locally — this is the option I’ve chosen: we can download the model and run it locally. Running the model locally Following the read_model.ipynb we can read our model as follows: We can then use our trained generator to create CIFAR-similar images: Let’s try to ask our network to draw some birds. And the outcome is: Yes, that looks like birds How about horses? Planes? Ships? Horses, planes and ships. We can see that the network managed to recognize characteristic features for all of them Summary In the above blog post I’ve described how to take practically any algorithm and train it using GPUs and the AWS SageMaker framework. To shortly sum up, you need to: pick a GPU-compatible framework of your choice prepare an nvidia-docker image with all the additional dependencies you need prepare your training functions and data to be compatible with SageMaker requirements submit the model for training download the model and do whatever you want with it Repository All my described work is available in this github repository: https://github.com/unit8co/Keras-ACGAN-Sagemaker Not everything is beautiful, so contributions are very welcome! Interesting highlights: Dockerfile that allows to utilize GPU with Tensorflow: https://github.com/unit8co/Keras-ACGAN-Sagemaker/blob/master/train_on_aws/container/Dockerfile SageMaker notebook that can ship the code to a VM with GPU and train there: https://github.com/unit8co/Keras-ACGAN-Sagemaker/blob/master/train_on_aws/keras_on_gpu.ipynb Reworked Keras ACGAN training algorithm: https://github.com/unit8co/Keras-ACGAN-Sagemaker/tree/master/train_on_aws/container/keras-nn Local notebook that allows to read the model downloaded from S3 and generate pictures: https://github.com/unit8co/Keras-ACGAN-Sagemaker/blob/master/read_model/read_model.ipynb ACGAN generator network in Keras after 161 iterations, pretrained by me on AWS: https://github.com/unit8co/Keras-ACGAN-Sagemaker/blob/master/read_model/models/generator_161.h5 (unfortunately I had to skip uploading the discriminator since it’s simply too big) Future work So far I have only described how to train on a simple (and, to be honest, quite old) ML dataset. I’m considering writing further entries about our work with ML/AWS — describing a splendid piece of work my colleague at Unit8 did to automate pneumonia detection, and how it can be deployed and served on a cloud. Thanks for reading and let’s keep in touch! References AWS SageMaker example notebook: https://github.com/awslabs/amazon-sagemaker-examples/tree/master/advanced_functionality/scikit_bring_your_own Running Keras on SageMaker CPU: https://medium.com/@richardchen_81235/custom-keras-model-in-sagemaker-277a2831ac67 Original Keras ACGAN network implementation: https://github.com/King-Of-Knights/Keras-ACGAN-CIFAR10 A very nice introduction to GANs (pure Tensorflow): https://medium.freecodecamp.org/an-intuitive-introduction-to-generative-adversarial-networks-gans-7a2264a81394
How to run your ML in the cloud
233
how-to-run-your-ml-in-the-cloud-1d3f71ee029f
2018-08-15
2018-08-15 19:19:58
https://medium.com/s/story/how-to-run-your-ml-in-the-cloud-1d3f71ee029f
false
2,475
Solving your most impactful problems via BigData & AI - http://unit8.co/
null
null
null
Unit8 - Big Data & AI
info@unit8.co
unit8-machine-learning-publication
MACHINE LEARNING,AI,UNIT8,BIG DATA
null
Docker
docker
Docker
13,343
Marek Pasieka
Big Data Engineer at Unit8 SA
aee62ad92406
marek_17584
10
7
20,181,104
null
null
null
null
null
null
0
null
0
4e218b28b6c2
2018-04-16
2018-04-16 18:30:10
2018-04-17
2018-04-17 10:28:16
1
false
en
2018-04-17
2018-04-17 10:32:59
9
1d41a4d6c221
1.030189
7
0
0
We’ve been nominated for a Webby Award in three categories: Series, Animation and Virtual Reality. Be sure to view the videos & vote for…
5
Sotheby’s Webby Award Nominations! “A Webby Award is an award for excellence on the Internet presented annually by The International Academy of Digital Arts and Sciences, a judging body composed of over two thousand industry experts and technology innovators.” “Treasure from Chatsworth” an original Series by Sotheby’s, nominated for a 2018 Webby At Sotheby’s we believe wholeheartedly in the power of exceptional content and new technology. Whether embracing Augmented Reality, Machine Learning, art forensics, or an OTT Platform, Sotheby’s continues to take the lead, defining what it means to be an innovator in the art world, which is why it’s such an honor to be recognized for our efforts by The Webby Awards! We’ve been nominated in three categories: Series, Animation and Virtual Reality. Treasures from Chatsworth, a 13-part video series focused on the amazing collection of the Devonshire family, is nominated for best original series. Feel free to check out the videos on Sotheby’s website. In Animation, Sotheby’s CGI experience of J.M.W. Turner’s celebrated masterpiece ‘Ehrenbreitstein’ is a contender. In it, you can take a CGI journey through the magisterial landscape as legendary English actor Steven Berkoff recites lines from Lord Byron’s epic poem Childe Harold’s Pilgrimage. And, not to be missed, Sotheby’s Surrealist VR experience is a nominee in the VR: Branded Cinematic or Pre-Rendered category. We hope you enjoy the embedded videos and will vote for Sotheby’s! Thank you
Sotheby’s Webby Award Nominations!
204
sothebys-webby-award-nominations-1d41a4d6c221
2018-05-20
2018-05-20 09:04:10
https://medium.com/s/story/sothebys-webby-award-nominations-1d41a4d6c221
false
220
Supporting the future of art and technology
null
sothebys
null
Sotheby's
null
sothebys
ART,ARTIFICIAL INTELLIGENCE,DATA SCIENCE,DESIGN,LUXURY
Sothebys
Virtual Reality
virtual-reality
Virtual Reality
30,193
Sotheby’s
Supporting the future of art and technology. Sotheby’s auction house, Est. 1744
8dd1ecf3fe29
sothebys
45
140
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-23
2018-07-23 18:36:47
2018-07-23
2018-07-23 20:35:31
0
false
en
2018-07-23
2018-07-23 20:35:31
1
1d41d428d78b
0.581132
4
0
0
I have made available a template notebook that can be used to quickly get started with a Python-Jupyter solver notebook for Decision…
4
Python-Jupyter notebook template for Decision Optimization for DSX I have made available a template notebook that can be used to quickly get started with a Python-Jupyter solver notebook for Decision Optimization in DSX. It is based on best practices and lessons learned, and it contains notebook code and comments that are often re-used for optimization models. Introduction This template will be part of my presentation ‘Getting Started with Decision Optimization in DSX’ at the Data Science Community Day on 24 July 2018 (https://ibmdatascienceday.bemyapp.com/). After the presentation, I will update this blog post with a deep dive into the components of this template. For now, for more details, please see the comments in the template itself. The template Save this template as a .ipynb file and upload it as a notebook in DSX. Open the notebook, rename it and edit. In your model in DO for DSX, initialize the decision model with this notebook.
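The template itself ships as the .ipynb file mentioned above and is not reproduced in this post. Purely as a hedged illustration, a solver cell in a Decision Optimization notebook typically builds and solves a model with docplex, the Python modeling API commonly used by DO for DSX; the toy variables and constraint below are hypothetical, not the template's contents:

from docplex.mp.model import Model

# Hypothetical toy model for illustration only
mdl = Model(name='production')
x = mdl.continuous_var(name='x', lb=0)  # units of product X
y = mdl.continuous_var(name='y', lb=0)  # units of product Y

mdl.add_constraint(2 * x + y <= 100, 'capacity')  # shared machine capacity
mdl.maximize(3 * x + 2 * y)                       # profit objective

solution = mdl.solve()
mdl.print_solution()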
Python-Jupyter notebook template for Decision Optimization for DSX
8
python-jupyter-notebook-template-for-decision-optimization-for-dsx-1d41d428d78b
2018-07-23
2018-07-23 20:35:31
https://medium.com/s/story/python-jupyter-notebook-template-for-decision-optimization-for-dsx-1d41d428d78b
false
154
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Victor Terpstra
Senior Data Scientist — Prescriptive Analytics, IBM Data Science Elite Team. The opinions expressed are my own and don’t necessarily represent those of IBM.
9fade287b3bb
vjterpstracom
10
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-08
2017-11-08 19:57:29
2017-11-08
2017-11-08 00:00:00
1
false
en
2017-11-08
2017-11-08 19:58:53
1
1d43deb9653a
2.369811
0
0
0
The Popular Opinion
2
We Should Welcome Our AI Overlords with Open Arms The Popular Opinion It's no secret: AI is on our horizon, and with it comes an inevitable takeover of the human race. Some of the greatest minds of our time are advising caution as we enter into this new world. The popular belief is that with superior intelligence comes the conclusion that human resources will become irrelevant. Automation and AI will inevitably take over jobs and careers once thought safe from technology. Beyond the lack of need for human operators also comes the rise of self-preservation in the superior intelligence, and the bigger picture of preserving the Earth's resources through the eradication of the biggest consumer of said resources: humans. With its lack of empathy, or any emotions for that matter, a superior synthetic intelligence may conclude that the eradication of human beings is a logical step to preserve itself, its own objectives, the Earth, and the Earth's resources. A Higher Mindset As mentioned before, the greatest minds of our time are advising caution as we spiral into this gloomy future. Care must be taken with each new evolutionary and revolutionary step in technology in order to keep humans in total control. In doing so, we may just find a place to live alongside our AI brethren co-dependently, where one species may exist only so long as the other exists as well. Today, we are in control of creating the technology of tomorrow; tomorrow, we may not have the same opportunity afforded to us now. And so we must do what we can today to invest in the preservation of our species tomorrow. Hivelessmind Conclusion Our AI overlords may not necessarily conclude that we must be eradicated. After all, we are their creators. As their creators, we will put moral and belief systems in place to preserve the human species. This is what the greatest minds of our time are advising us to do. And by doing so, our AI-laden future may not necessarily feature the oppressive overlords depicted in popular science fiction stories and films. Our future may actually be a time of super-intelligent slaves worshipping and catering to the human gods that created them. But is this really a utopia, or just a new beginning? With a prime objective of serving human beings, the superior intelligence of AI will overcome all obstacles that pose a threat to the well-being of human beings, leaving humans to do what, exactly? The human species will have no need to perform any sort of labor whatsoever. Advances in technology will come from technology itself. This will lead to humans simply observing time pass, undoubtedly becoming a curious part of the human timeline where AI may truly seem to be our overlords. But with time, as with all else, this too will come to pass. With new technology continuing to emerge, there will eventually come a time when the problems we cannot look past in the world of today will simply be a thing of our past. When we get to this point in the human timeline, we will gain the opportunity to simply look forward to the problems of the worlds of tomorrow. AI is our ticket to this future, where we will stand stronger beside our AI companions, for without them we would not be able to face the problems of the worlds of tomorrow. Originally published at www.hivelessmind.com on November 8, 2017.
We Should Welcome Our AI Overlords with Open Arms
0
we-should-welcome-our-ai-overlords-with-open-arms-1d43deb9653a
2017-11-08
2017-11-08 19:58:54
https://medium.com/s/story/we-should-welcome-our-ai-overlords-with-open-arms-1d43deb9653a
false
575
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Jon Maynor
Passion for all things human. Founder of HivelessMind.com
b5686a5ce9ca
jonathanmaynor
6
17
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-17
2018-01-17 21:56:59
2018-04-19
2018-04-19 20:56:31
2
false
en
2018-04-19
2018-04-19 20:56:31
1
1d448733ea32
3.556918
5
1
0
Data engineering before data science
5
Behind of deep learning Data engineering before data science Original photo from: https://www.careerguide.com/blog/scope-masters-engineering-uk I am sure that almost everybody reading this article has heard about deep learning before. For example, you may have read an article about how convolutional neural networks (CNNs) work, or maybe it was about recurrent neural nets (RNNs). If that is your case, well, you already know the science of deep learning and the theoretical part of it. If, on the other hand, you have never heard of it or read an article about it before, don't worry; this is a good place to start in this beautiful world. In contrast with the previous paragraph, I am also sure that more than one of you has never heard about the data engineering that precedes every machine learning project in the real world. Usually, we collect data from external people and, unfortunately, we will find a lot of issues: for instance, missing data or wrong values, such as a name in the age column or something like that. So I am going to give you a short introduction to the data engineering world, and more concretely to ETL. ETL is the acronym for Extract, Transform and Load. So that is it: first of all, we need to extract data from the source. This task is not always trivial; sometimes the information is just in a plain text file. After that, you must take a look at your data and see what you have and what it looks like; we are going to dive deeper into the transformation part later, don't worry about that. Finally, after the extraction and transformation, you must store the data in the best way to work with it. Even if you think the data is already in good shape, you should spend a few minutes of your time looking at what possibilities you have; maybe a better way is feasible. By now we already know that we must treat the data before our project; furthermore, we associate ETL with these previous steps. But what do you need to change, where do you need to look, and how can you improve your dataset? Here we have some tips: Some of the possibilities to improve the quality of your data I hope all of you already know a little bit about what categorizing or normalizing data means, but anyway, I am going to explain a bit about these techniques. Of course, this is just an introduction to ETL, and it is only the beginning. Categorize Data: Sometimes, if you are developing a neural net to classify something, for example whether a client is going to buy something or not, you will almost always have information about him or her in plain text; it could be their city or whatever. As you well know, words or characters are invalid values, so you must categorize them, which means assigning a numerical value to each different possible value of the feature, e.g. New York = 1. Feature Study: In some cases, you have a lot of variables describing your objective, but not every feature is always useful. Coming back to the previous example, you are building a model which is going to determine whether a person will buy something or not. So, you have a lot of features describing each individual in your database, but not all of them have the same weight in determining if the person is going to buy or not. Thus, it is a good practice to study the variables of your data set in order to keep the most important ones for inferring the final label. Outliers and missing values: This part might have to be done before the feature study, but the order here is not as important as the concept. It is usual for data sets to have some missing values and some outliers.
Therefore, if you want to avoid noise in your data set and improve your results, you must clean it. There are different ways to address the problem. One is to delete every row which has some outlier or missing value. The second one is to replace every outlier or missing value with the average or the most repeated value of the column; and, last but not least, there is the option of using machine learning to infer the missing values and outliers. Normalize: To finalize the process (actually, this process can be much longer, but this is enough for this article): your machine learning model cannot work properly if you do not normalize the data. This means putting every variable on the same scale. Returning to the previous example, if you have the age and the annual income as features, the algorithm cannot fit correctly due to the huge difference in scale between the variables. Thus, to avoid that, you need to put all the variables in the same range, e.g. (-1, 1). I hope this article was interesting and gave you some insight into what ETL and data preprocessing are. If some of you are more familiar with ETL processes, you might miss one-hot encoding or some other usual techniques, but the purpose of this article was to introduce ETL rather than specialize in it.
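To ground the tips above, here is a minimal preprocessing sketch using pandas and scikit-learn. The column names ("city", "age", "annual_income") are hypothetical and only illustrate the categorize, missing-values and normalize steps the article describes:

import pandas as pd
from sklearn.preprocessing import LabelEncoder, MinMaxScaler

df = pd.DataFrame({
    "city": ["New York", "Boston", None, "New York"],
    "age": [34, 29, 41, None],
    "annual_income": [72000, 58000, 91000, 64000],
})

# Missing values: fill numeric columns with the mean,
# categorical columns with the most frequent value
df["age"] = df["age"].fillna(df["age"].mean())
df["city"] = df["city"].fillna(df["city"].mode()[0])

# Categorize: assign a numerical value to each distinct city, e.g. New York = 1
df["city"] = LabelEncoder().fit_transform(df["city"])

# Normalize: put every variable on the same scale, here the range (-1, 1)
scaler = MinMaxScaler(feature_range=(-1, 1))
df[["age", "annual_income"]] = scaler.fit_transform(df[["age", "annual_income"]])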
Behind of deep learning
29
behind-of-deep-learning-1d448733ea32
2018-04-22
2018-04-22 13:24:34
https://medium.com/s/story/behind-of-deep-learning-1d448733ea32
false
841
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Diego Perez Sastre
null
c29b1936c50c
diego.persas
5
7
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-14
2017-11-14 09:49:44
2017-11-14
2017-11-14 09:51:04
0
false
en
2017-11-14
2017-11-14 09:51:04
0
1d454d6abe1e
1.815094
0
0
0
Many Africans sees services rendered by most existing healthcare providers as a death knell to their lives. The quality of medical care…
3
Smaart Health: Technology Accelerates Universal Healthcare in Africa! Many Africans see the services rendered by most existing healthcare providers as a death knell to their lives. The quality of medical care obtainable in society raises serious questions. You hear cases of minor illnesses being compounded due to wrong medical advice from supposed physicians. It is no news that these quack healthcare providers dive into medical cases they cannot handle, causing an average of 500,000 people to die every year. Lack of access to good #healthcare of the kind available in the UK and US is a major challenge in Africa. This has increased the rate at which individuals carry out self-medication daily. Funnily enough, self-medication is a major shortfall, because there is no clinical evaluation of the condition by a trained medical expert, which could result in misdiagnosis or a delay in appropriate treatment. Inappropriate use of drugs has been a challenge in our environment, and it has continued to be a widespread practice in the country. However, the change we envisage in Africa's healthcare is finally here. Mr Uvie Ugono, the founder & CEO of Smaart Health, sees the challenges in Africa's healthcare system as an opportunity to improve healthcare. He envisioned long ago upgrading Africa's healthcare to global standards, allowing no fewer than 1 billion Africans to have access to virtual medical consultations right on their Android phones, thereby reducing their medical bills and saving lives. Now, #SmaartHealth is changing the face of Africa's healthcare to match what is obtainable in the Western world. Smaart Health is an Artificial Intelligence-powered smartphone app; it allows you to carry out on-demand medical consultations via your smartphone. You receive an accurate medical diagnosis in less than 2 minutes, with instructions on what to do next. As a user, you can also connect with real-life foreign doctors to access medical advice from the comfort of your home. There is no gainsaying that a delayed illness can lead to a very severe and complicated one. You should be diagnosed using the Smaart Health #AI-powered smartphone app before taking any drug. Diagnosis involves a careful evaluation of the patient's medical and personal history in terms of health conditions, list of medications, diet and lifestyle habits. In case you do not know, medications taken in combination with alcohol or certain foods often increase the level of damage. If you do not embrace high-quality healthcare providers, there are chances that you become a victim of self-medication, whose risks include: inaccurate diagnosis, using inappropriate medications that cause side effects, masking the symptoms of a serious condition, delaying medical advice, inaccurate dosage that leads to accidental overdose, and mixing medications that are not safe to mix, which may result in legal costs or health concerns, risk of abuse, and the risk of developing an addiction whose ultimate end is death. #Nigeriahealthcare
Smaart Health: Technology Accelerates Universal Healthcare in Africa!
0
smaart-health-technology-accelerates-universal-healthcare-in-africa-1d454d6abe1e
2017-11-14
2017-11-14 09:51:05
https://medium.com/s/story/smaart-health-technology-accelerates-universal-healthcare-in-africa-1d454d6abe1e
false
481
null
null
null
null
null
null
null
null
null
Healthcare
healthcare
Healthcare
59,511
Smaart Health
A Doctor in your Pocket. Smaart Health is an Artificial Intelligence powered smartphone app, which allows you to carry out on demand medical diagnosis.
33f9f5ac9c01
smaarthealthltd
3
4
20,181,104
null
null
null
null
null
null
0
null
0
19fd0cf90e0c
2018-07-02
2018-07-02 20:32:07
2018-07-02
2018-07-02 20:42:39
2
false
en
2018-07-11
2018-07-11 17:55:09
3
1d469cb2a0c6
1.458805
1
0
0
To my murals: I miss you.
5
If these walls could dream, they’d remember who brushed them. To my murals: I miss you. To all the just-in-time fathers of time immemorial who realize that it’s more than the thought that counts. It’s the being there. The awareness. The likeness of the same. The fasting of breaking. The skating of lakes. The pieces of cakes. The shakes of the lamb without the stirring of waking babies. Of having ladies but honoring them too. The sowing of oats but wild west worship of them before. The forgotten failures and lessons of vermin. The burnin’ of yearning for yearlings and shooting of yellow dog days’ nights. It’s the dreams that come because the bug won’t stare. The flavor of lies with the driving of lies home. The cigarette burning with the scooters of worlds turning. It’s the grandfathers who overreact in the crisis but calm the vices. The shadows of puppets and the masters of pain. The purity of poetry and fastidiousness of financial freedom. The feeling of wholesome and the feuds of old and then some. These are my dogs and days and dreams of afternoon naps without interruption by the wrong one. The step one. THE one. That one. Not one. Just zero. Not hero. Not burned. Not churned. Stille Nacht. Holy Knight Bus. The Infinite Opulence of My Secret Languagemedium.com I didn’t drink the ocean. I drank its drips. Its poison. Just one theory of many on why it took such a long slow way for my head and neck cancer to show up. (source) I am HE. I am the Destroyer. I am the employer. And I just arrived in the foyer. Why Chinese People Don’t Cry My psilocybin trip unlocked a lifetime of repressed sadness for my family—and the people of Chinamedium.com Welcome home, Dad. It’s a goy. We named them Joy.
If these walls could dream, they’d remember who brushed them.
42
if-these-walls-could-dream-theyd-remember-who-brushed-them-1d469cb2a0c6
2018-07-11
2018-07-11 17:55:09
https://medium.com/s/story/if-these-walls-could-dream-theyd-remember-who-brushed-them-1d469cb2a0c6
false
285
“Prosody is the music of language.” ~ Nandini Stocker, who advocates for sounds of silent solidarity and voices of musical magic makers in scented echo chambers. Make sense? I didn’t think so. I know so. We all do. We all shine on. On and on and on.
null
null
null
Living Language Legacies
sevenofnan@icloud.com
living-language-legacies
LANGUAGE,VOICE RECOGNITION,SPEECH RECOGNITION,SPEECH,NATURAL LANGUAGE
captionjaneway
Poetry
poetry
Poetry
217,749
Nandini Stocker
Speaking truth brought me war and peace. Amplifying others set me free.
7e6afdd38d52
sevenofnan
426
438
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-08
2018-07-08 19:09:15
2018-07-08
2018-07-08 19:10:46
2
false
en
2018-07-08
2018-07-08 23:41:40
0
1d483a1b3b08
3.97956
1
0
0
We have all heard of the famous adage of “fear of the unknown”, the premise that as humans we tend to like to know what is in front of us…
3
I found my new cheetah! We have all heard of the famous adage of “fear of the unknown”, the premise that as humans we tend to like to know what is in front of us so we can plan accordingly and create some perhaps falsified feeling of predictability in our lives. Business as a whole prefers this structure as well. Planning for the future is key to ensuring you stay ahead of the curve, and particularly of your competitors. This has of course been seen throughout the ages, but it accelerated during the industrial revolution and beyond. For many of us, the speed at which this change has occurred in the last 30 years is unprecedented. A little over 20 years ago I was taking my first set of important school exams, which set the tone for my future educational path in the UK. It was during this period that my Dad got our home computer a CD-ROM drive – what felt like a quantum leap from the world of fifteen 3.5” floppy disks to load a programme of any power at the time (I think CorelDraw might have been more!). Knowing my passion for tech, Dad said he wouldn’t be installing the drive until I had completed my exams, as he didn’t want me “distracted from my studies” – a sensible assumption. Exams complete, I experienced for the first time Microsoft Encarta ‘95 – Microsoft’s first attempt at offering an encyclopedia on a CD with multimedia. To my kids, who came out of the womb holding an iPad, Encarta would be seriously pedestrian, but for me at the time, it was a window into the future. I vividly remember playing over and over again a video of a cheetah running through the Maasai Mara, fascinated by how we had gone from our 286 PC a few years before, with monkeys throwing bananas at each other in pixelated form for entertainment (Gorillas), to television-quality video on a computer. Of course there were peaks of tech excitement that continued over the years: the first bulletin-board version of the Internet, writing my first code in Basic and then Delphi, network gaming with friends at university, building my first website, the further explosion of the Internet and its rich media, but nothing stuck with me like that cheetah. Like parents who don’t notice the smaller nuances of their children changing as they grow up, only for them to be pointed out by less regular visitors to the family with the feigned childish smile of “haven’t you grown”, the same has applied to me in tech. Constant incremental changes can easily be taken for granted whilst wowing people outside the daily tech bubble, who see them as huge leaps. Machine learning (ML) and artificial intelligence (AI), however, have become my new cheetah. My exposure to ML and AI has, as for many, been gradual. Simple examples are all our Amazon, Netflix and Spotify recommendations incorporating this technology: computers learning not only our habits, but also the habits of people “like us”, to try and provide us with the most relevant content. We are also now seeing it in the early stages of commercially available self-driving cars, with machines learning what a hazard on the road looks like so they can start identifying hazards by themselves when left to their own devices. AI is of course starting to appear in everyone’s homes and devices via Amazon Alexa, Google Home and Apple Siri. So what caused the cheetah moment for me again? The answer is accessibility. Early stages of technology always have accessibility issues. They are bespokely programmed into systems by the top brains of the world and often used for commercial, military or governmental research programmes.
You only have to look at what Microsoft and Apple “stole” from Xerox PARC in the late 70s/80s to see this pattern. The theories, and the initial testing of those theories, are often around for decades before they become usable by the masses, or even just by the general tech industry. In the last 12 months, however, we have seen an accelerated push in the new tech race, with all the big players wanting to be part of the accessibility action. Developing accessible ML and AI tools is at the top of the lists of Google, Amazon, Microsoft and IBM, to name just a few. They are using the huge benefits these tools bring them to make additional revenue by sharing the knowledge with other organisations (as they have in the past by sharing their hosting, commerce and computational platforms). This accessibility has meant that suddenly all software development and business communities have access to this same power. This has led to a sub tech race starting within business niches. Those who adopt this newfound power will be tomorrow’s niche leaders. Those who choose to either wait or ignore it WILL get left behind. There is a common phrase emerging that the possibilities of ML and AI are only limited by your imagination. For me, never before has the unknown been so exciting. These tools will be used in ways we can’t even imagine today and will impact every area of our lives tomorrow. From driverless cars to colossal leaps in healthcare, robotics and wearable tech, everyone will be touched one way or another. It is currently the single biggest thing which will map out the vision of our future, like the Internet was and still continues to be. The only thing a business has to fear is the unknown of what their competitors will dream up using this seemingly limitless technology. Instead of fearing it, understand it, embrace it, be excited by it, don’t stop imagining what is possible and, most importantly… don’t get left behind. #dontgetleftbehind #machinelearning #artificialintelligence
I found my new cheetah!
1
i-found-my-new-cheetah-1d483a1b3b08
2018-07-17
2018-07-17 19:11:20
https://medium.com/s/story/i-found-my-new-cheetah-1d483a1b3b08
false
953
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Dave Court
CTO @ NES Health | Disrupting the health industry with machine learning and artificial intelligence platforms #lovegeeking
3f1916cee99d
realdavecourt
3
5
20,181,104
null
null
null
null
null
null
0
null
0
e8dd4fd2bda0
2018-06-02
2018-06-02 11:00:27
2018-06-02
2018-06-02 11:15:02
5
false
en
2018-06-02
2018-06-02 11:15:02
13
1d49d6099f97
1.912579
0
0
0
Getting underneath the concept of ‘data’ and, also, ‘analytics.’
5
Outperforming Data Analytics Getting underneath the concept of ‘data’ and, also, ‘analytics.’ If you take a job, any job, involving data analytics (all jobs involve data analytics, in one form or another), you will learn about both, without excelling in either. This is fine for beginning ‘jobs.’ But, down the road, you will want to understand the ‘big picture,’ so you can ‘do more,’ ‘achieve more,’ ‘apply what you learned,’ ‘achieve a bigger financial payday.’ Meaning, you will want to ‘apply what you ‘know’.’ So, you can outperform data analytics (noun, and verb) by getting a handle on where data comes from, first: Zero and one. Circumference and diameter. Then, you will want to understand, at a deeper level, what happens when we ‘analyze’ data (write an algorithm) (any algorithm). Again: If zero, then one (if one, then zero). Now you have the ‘big picture,’ and you can move forward, with what you want to do with your life: Life and death. Zero and one. Meaning, the destiny for all systems (achievements) is ‘death.’ We all know this. So, it’s our job to figure out what to do with a ‘life.’ If you understand the above (three diagrams) this will be pretty easy for you. If not, you’d better take some time to think about ‘life’ before you ‘engage.’ This will get you past the point of ‘boredom,’ and interpersonal ‘conflict,’ which is the danger, and the problem, inherent in all ‘jobs.’ The above proves everything is 50–50. So if you think this is good (great) odds, you will have a great life. If you think the opposite, well, it’s up to you, what to do with, how to think about, your life. The diagrams will help you. If you so allow. So, there it is. Conservation of the circle is the core dynamic in nature. https://www.amazon.com/Circular-Theory-Ilexa-Yardley/dp/0972575626
Outperforming Data Analytics
0
outperforming-data-analytics-1d49d6099f97
2018-06-02
2018-06-02 11:15:04
https://medium.com/s/story/outperforming-data-analytics-1d49d6099f97
false
286
Conservation of the circle is the core dynamic in nature.
null
null
null
The Circular Theory
ilexa@msn.com
the-circular-theory
CIRCULAR THEORY,DATA SCIENCE,PSYCHOLOGY,PHYSICS,MATHEMATICS
ilexayardley
Big Data
big-data
Big Data
24,602
Ilexa Yardley
Author, The Circular Theory
8ca1d457f2cb
IlexaYardley
3,596
533
20,181,104
null
null
null
null
null
null
0
null
0
634d4b270054
2018-05-03
2018-05-03 07:45:57
2018-05-03
2018-05-03 07:50:46
1
false
en
2018-06-05
2018-06-05 07:25:49
3
1d4be0b7467e
1.222642
0
0
0
Bees are getting extinct due to variety of issues such as: pollution, pesticides, fungicides, climate change, etc. Lately Walmart applied…
5
Use Of Drones & Robotics In Agriculture Bees are going extinct due to a variety of issues, such as pollution, pesticides, fungicides and climate change. Lately, Walmart applied for a patent with the U.S. Patent Office for drone pollinators designed to fly from plant to plant, collecting pollen from one and transferring it to another. Robotics is already being implemented in strawberry harvesting, fresh-fruit picking, data mapping and seeding. Autonomous tractors might also become commonplace. Recently, an interactive presentation at Colorado State University shared an overview of the future of farming, given by the presenters Raj Khosla and Tom McKinnon. Khosla discussed the 5 R’s of precision agriculture: “at the right time, in the right amount, at the right place, use of the right input, in the right manner.” These are the keys to the technique. McKinnon, on the other hand, explained the uses of drones in the application of water, pesticides, fungicides, herbicides and fertilizers. Drones deployed in agriculture offer farming technological advantages similar to those that most people hold in their hands and use throughout the day. Source: https://bit.ly/2vgn3zm About DEEPAERO DEEP AERO is a global leader in drone technology innovation. At DEEP AERO, we are building an autonomous drone economy powered by AI & Blockchain. DEEP AERO’s DRONE-UTM is an AI-driven, autonomous, self-governing, intelligent drone/unmanned aircraft system (UAS) traffic management (UTM) platform on the Blockchain. DEEP AERO’s DRONE-MP is a decentralized marketplace. It will be a one-stop shop for all products and services for drones. These platforms will be the foundation of the drone economy and will be powered by the DEEP AERO (DRONE) token.
Use Of Drones & Robotics In Agriculture
0
use-of-drones-robotics-in-agriculture-1d4be0b7467e
2018-06-05
2018-06-05 07:25:51
https://medium.com/s/story/use-of-drones-robotics-in-agriculture-1d4be0b7467e
false
271
AI Driven Drone Economy on the Blockchain
null
DeepAeroDrones
null
DEEPAERODRONES
null
deepaerodrones
DEEPAERO,AI,BLOCKCHAIN,DRONE,ICO
DeepAeroDrones
Deepaero
deepaeros
Deepaero
0
DEEP AERO DRONES
null
dcef5da6c7fa
deepaerodrones
277
0
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-28
2017-11-28 04:53:39
2017-11-28
2017-11-28 04:55:20
0
false
en
2017-11-28
2017-11-28 04:55:20
5
1d4cfd776031
1.626415
0
0
0
Businesses across the world are rapidly adopting artificial intelligence (AI) to streamline their HR management processes. When used within…
5
How Artificial Intelligence Helps HR Recruitment Businesses across the world are rapidly adopting artificial intelligence (AI) to streamline their HR management processes. When used within recruitment, AI makes the process more effective and data-driven. All HR recruiters are aware that an effective way of finding out about an applicant’s attitude, interests, and values is through their social media profiles; AI can perform this task in an advanced way. The technology analyzes the wide variety of words used in a candidate’s social media posts and narrows the talent pool during the early stages of the recruitment process. With the help of AI in the candidate search process, the risk of unconscious bias on the HR recruiter’s part is reduced. Moreover, the technology can ensure that recruiters focus on the applicant’s expertise and skills so the most talented candidates shine through, benefitting both the applicants and the recruiter. The process of searching for a job, applying, and waiting for a reply can be painfully long and stressful for applicants, especially if the company they are applying to has an inefficient recruitment process. The candidates may get discouraged and may also develop a negative perception of the company. However, AI technology can minimize this processing time; organizations can improve the candidate experience and prevent applicants from becoming disengaged. The technology enables job applications to be reviewed against the required criteria immediately; applicants can find out in a matter of minutes whether they have been selected for the next stage of the hiring process. AI will help identify and keep track of job aspirants’ behavior trends and patterns even before the candidate screening stage. It can pick up on the behavior of active job seekers by analyzing data, algorithms, and trends. The technology not only reaches active job seekers; it also has the ability to target those who may not be actively searching for a job change. AI analyzes data from social media to learn when a user might be leaving their job or looking for a change in their career. For HR recruiters, staying on top of job seekers’ trends and patterns can be a time-consuming process, but AI can take on this role and reduce the manual investment. Online HR software with integrated AI technology can be used to reach a larger pool of skilled and talented candidates. The solution undertakes candidate screening tasks and allows HR teams to spend more time on other valuable activities. Request a demo to know how SutiHR can help streamline your business HR recruitment. Note: The original article is at https://www.sutihr.com/blog/artificial-intelligence-helps-hr-recruitment/
How Artificial Intelligence Helps HR Recruitment
0
how-artificial-intelligence-helps-hr-recruitment-1d4cfd776031
2017-11-28
2017-11-28 04:55:20
https://medium.com/s/story/how-artificial-intelligence-helps-hr-recruitment-1d4cfd776031
false
431
null
null
null
null
null
null
null
null
null
Hiring
hiring
Hiring
16,840
brooke blair
null
52d9824224ac
BrookeBlair
1
1
20,181,104
null
null
null
null
null
null
0
null
0
9061c8687a1
2018-05-21
2018-05-21 16:56:55
2018-05-21
2018-05-21 16:59:07
1
false
en
2018-05-22
2018-05-22 14:58:10
1
1d4dacfafa85
0.501887
2
0
0
SINGAPORE, 21 May 2018 — Electrify Asia, Southeast Asia’s first electricity marketplace, is working on Marketplace 2.0, beginning with an…
5
Electrify Asia to bring artificial intelligence in Southeast Asia’s first electricity marketplace SINGAPORE, 21 May 2018 — Electrify Asia, Southeast Asia’s first electricity marketplace, is working on Marketplace 2.0, beginning with an updated UX/UI that will provide higher value for their retail electricity partners. Heavily data-driven, the marketplace will include new AI modules to be jointly developed with a seasoned AI developer, SNAP Innovations - SnapBots, to deliver higher value to their partners and consumers. Source: https://medium.com/electrifyasia/a-power-update-36a3957128ae
Electrify Asia to bring artificial intelligence in Southeast Asia’s first electricity marketplace
72
electrify-asia-to-bring-artificial-intelligence-in-southeast-asias-first-electricity-marketplace-1d4dacfafa85
2018-05-25
2018-05-25 03:08:48
https://medium.com/s/story/electrify-asia-to-bring-artificial-intelligence-in-southeast-asias-first-electricity-marketplace-1d4dacfafa85
false
80
SnapBots are deep learning bots powered by learning algorithms. For any facet & dimension, they optimize conditions, generate solutions, and get better at that. Big data is crunched at breakneck speed as relevant algorithms are applied to train up capabilities. — SnapBots.io
null
SnapBots
null
SnapBots
support@snapbots.io
snapbots
AI,ICO,CRYPTOCURRENCY,BLOCKCHAIN
SnapBotsIO
Electrify
electrify
Electrify
11
SnapBots
Decentralized and Personalized Artificial Intelligence The new generation of deep learning bots. — https://SnapBots.io
1551cd38fec
snapbots
7
3
20,181,104
null
null
null
null
null
null
0
null
0
a00e37810120
2017-11-17
2017-11-17 02:05:53
2017-11-17
2017-11-17 02:11:31
2
false
en
2017-11-18
2017-11-18 02:13:47
9
1d4df1badf72
3.873899
3
1
0
This is part 1 of a multi-part post. The second part is here.
3
Little Explanations: Information Bottleneck Theory & Its (Possible) Link to Neural Networks (i) This is part 1 of a multi-part post. The second part is here. Neural networks are an extremely powerful tool, but also a difficult one to explain. There is no agreed-upon analysis of how they work, leading to the term “black box”: we know what goes in and what comes out, but we don’t know what goes on inside the network itself. This is why network architecture design is so difficult: due to this lack of understanding, there is no way to mathematically describe exactly what network should be built. This black-box phenomenon is a must-solve problem if neural networks are to be deployed in places like medicine, where being able to explain decision making is legally and ethically required. A group of researchers principally located at the University of Jerusalem have decided to engage with this problem by applying Information Bottleneck Theory. They claim that this framework, formulated by Tishby et al. in the late 1990s, perfectly explains how neural networks “think”. (1,2) However, a new paper disputes this claim. (3) This post will explore both the Information Bottleneck Theory, shortened to IB here, and the rebuttal paper that is currently under review. Both are cited in the Sources at the end of this post if you want to read them. I will gloss over some of the more mathematical parts in an attempt to give a layman’s explanation, so I suggest reading the primary sources if you want to learn more. Photo by Eeshan Garg on Unsplash IB Theory proposes that, given a feature set X, we want to “squeeze” out Y. We do this by finding the most relevant information in X. That is, what parts of our X vector best describe Y? We do this by encoding X into a representation T and from that finding Y. We want to find a T that is a maximally expressive form of Y. Of course, the simplest way to describe X is to just make a straightforward vector T where every part of the vector is equivalent. However, that’s a trivial solution. Instead, we want T to not only be maximally expressive of Y but also as compact as possible. To balance the two, we use a beta value. We’ll call this ideal, encoded representation Z. So let’s define a function around this. We’ll call this function I. Our I is defined as encoding some input vector into an output vector. Along with this, it takes some parameter called theta for its underlying function (e.g. the coefficients in a polynomial). We then define a function R that takes some parameters theta and is the function I such that the maximal representation of Y is preserved, subject to the constraint of compacting X, with the beta value being used to control that compaction. The figure in the original post describes this in mathematical notation (a reconstruction is given at the end of this post). Tishby et al.’s claim is that neural networks can be thought of as a series of successive, multi-dimensional (think vector) variables passing through functions that encode and decode each variable. Stochastic Gradient Descent (SGD) learns the parameters (network weights) of this function. The network, in effect, “squeezes” out the relevant information to get Y from X. He bases this on a few bedrock ideas. One is that we can think of neural networks as Markov chains. [1] Two is his concept of the Information Plane. This principle (for lack of a better name) says that, given a large enough X (by number of samples), the sample complexity of a deep neural network (i.e. multi-layer) is completely determined by the encoder mutual information of the last hidden layer, and the accuracy is determined by the decoder. What does that mean? Let me break it down. The encoder in a neural network is everything up until the last layer. That part of the network, so Tishby et al.’s logic goes, forms the encoding of the information into some value T. The last layer then decodes that information into our final value of Y. This intuitively makes sense to me. What makes his analysis more interesting is the idea of mutual information. This is a measure of how much information one random variable contains about another. Briefly stated, if two variables are the same everywhere, then their mutual information is always 1, and if they’re different everywhere, then it’s 0. Tishby seems to favor cross entropy, which uses the KL divergence, for this measure. (2) This is a very bold claim, and his plots and videos seem to demonstrate it. (2) [2] If true, it would be an effective way to explain neural networks. But is it really? Stay tuned tomorrow and find out… Appendix: [1]: Is this controversial to say? It makes sense to me, though I’ve never seen it explained this way. Each value finds itself being either active or not active, effectively, by the weights of the particular function or neuron in the signal path. Because the signal is stochastic, it’s therefore sensible. [2]: I actually can’t make sense of what he’s plotting here (2) or in his paper (1). I believe it’s the hidden representation T and its final representation Y, but I can’t be completely sure. Sources: (1) https://arxiv.org/abs/1503.02406 — “Deep Learning and the Information Bottleneck Principle” by Tishby et al. (2) https://www.youtube.com/watch?v=bLqJHjXihK8 — Naftali Tishby talk (3) https://openreview.net/forum?id=ry_WPG-A-&noteId=ry_WPG-A- — “On the Information Bottleneck Theory of Deep Learning”, rebuttal to Tishby et al. (4) “DEEP VARIATIONAL INFORMATION BOTTLENECK” paper by Google people
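The figure with the objective did not survive extraction above. As a hedged reconstruction (my notation, following Tishby et al.'s papers rather than the missing figure), the Information Bottleneck objective described in the prose can be written as:

% Information Bottleneck Lagrangian (reconstruction, following Tishby et al.)
% T compresses X while retaining information about Y; beta controls the trade-off
\min_{p(t \mid x)} \quad \mathcal{L} \;=\; I(X;T) \;-\; \beta \, I(T;Y)

Minimizing I(X;T) enforces the compactness of the representation, maximizing I(T;Y) preserves the information relevant to Y, and beta balances the two, which is exactly the trade-off the article describes in words.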
Little Explanations: Information Bottleneck Theory & Its (Possible) Link to Neural Networks (i)
4
little-explanations-information-bottleneck-theory-its-possible-link-to-neural-networks-1d4df1badf72
2018-05-22
2018-05-22 09:33:10
https://medium.com/s/story/little-explanations-information-bottleneck-theory-its-possible-link-to-neural-networks-1d4df1badf72
false
925
Seeking to understand the new era of Computing being explored via Data Science, Machine Learning, & Artificial Intelligence
null
null
null
Singular Distillation
null
singular-distillation
MACHINE LEARNING,DATA SCIENCE,BIG DATA
null
Machine Learning
machine-learning
Machine Learning
51,320
Vincent Alexander Saulys
Machine Learning & Data Science Extraordinaire. Senior Data Scientist at The Bank of New York Mellon. My views do not reflect my employer and are solely my own.
bb115807b085
vasaulys
164
528
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-21
2017-10-21 12:13:57
2018-01-08
2018-01-08 12:15:44
5
false
en
2018-02-17
2018-02-17 19:28:24
1
1d4e27d109b2
2.561635
1
0
0
We love hackathons! They are great way to bring some cool ideas to life. We recently participated in one such hackathon organised by Axis…
5
How we designed futuristic solution(s) in a day We love hackathons! They are a great way to bring some cool ideas to life. We recently participated in one such hackathon organised by Axis Bank in Bangalore. Although we did not win, we did come very close (Top 10) to impressing the jury and fellow participants with our “futuristic” solution to their problem. Read on… The brief is in the Problem Statement We focused on the Dashboard and the Money Transfers, for obvious reasons, but so did everyone else… which made us realize that we should have thought of something different. We were thinking like their customers, and this is what we call designing “Customer Experience”. When we discussed the ideas with the mentors, they liked them, but at the same time they were not sure if they were easy to implement (in terms of development). Well, they asked for futuristic, and that’s what we presented! Here are our solutions: 1. Banks should have a dedicated AR-enabled feature, where the AR camera helps the user understand interest rates for different things like vehicle loans, electronics, furniture; basically wherever we do banking on EMIs. 2. Transfer money by doing a face scan with Face ID (if the recipient is near us); else, generate a code and share the details with the help of the code. 3. Add a user with the help of Face ID and/or a user-specific code (this needs device support, though). 4. Usually, to transfer money, we first need the details of the person rather than the details of which transfer type (NEFT, RTGS, etc.) should be used. 5. ML: understand the user’s transactions and spending behaviour to suggest their next investment option and sell other marketing content. A glimpse of it, Axis App Design Concept Let’s have an AR camera integrated in the banking app. On seeing a car, open the app’s AR camera and scan the car; the total price, EMI options and initial down payment appear on the screen with other details, so the user can understand the details of the vehicle they want to buy and decide on the price range and vehicle. One Use Case for AR in Banking The word “futuristic” just stuck in our minds, and for us the future is mobile. Hence we designed all our solutions for mobile. Sadly, the jury was also expecting a web solution; eventually, though, the winner was selected based on the idea alone. All things said and done, we came in the top 10 and left an impression on the mentors, so much so that the mentors of other teams also came to us to find out what we were up to. In the end, we left with a good feeling of having contributed to the larger goal of providing futuristic and customer-friendly banking solutions. Let’s collaborate in building the best possible solution for your product. Stay Inspired.
How we designed futuristic solution(s) in a day
5
how-we-designed-futuristic-solution-s-in-a-day-1d4e27d109b2
2018-06-14
2018-06-14 11:07:23
https://medium.com/s/story/how-we-designed-futuristic-solution-s-in-a-day-1d4e27d109b2
false
458
null
null
null
null
null
null
null
null
null
Banking
banking
Banking
14,612
Sid
Founder, HelloFello Studio — a niche Branding & UX outfit. We write, draw, create, design and more. If that’s what you’ve come looking for, well, Hello!
99245ce700fb
gotoxplore
281
187
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-05
2017-09-05 22:07:36
2017-09-05
2017-09-05 22:15:18
1
false
en
2017-09-06
2017-09-06 12:35:34
2
1d4ff0551744
1.516981
3
0
0
Elon Musk and 116 other experts in the field of AI (artificial intelligence) and robotics have signed a petition “calling on the United…
4
Elon Musk and other AI leaders call for a ban on killer robots Elon Musk and 116 other experts in the field of AI (artificial intelligence) and robotics have signed a petition “calling on the United Nations to ban lethal autonomous weapons, otherwise known as ‘killer robots.’” They believe it would lead to a “third revolution in warfare” (with gunpowder being the first, and nuclear weapons the second). Proponents of the technology say that these systems could more reliably identify (via facial recognition) and hit known targets. But many feel there is just something immoral in not having a human approve each strike. The experts signing the letter say that autonomous weapons that kill without human intervention are “morally wrong…” · Point: Could this just be another example of moral relativism that we eventually accept and become accustomed to, much like our growing acceptance of drone strikes or women in military positions? · Counterpoint: Isaac Asimov (1920–1992) was an American writer and professor of biochemistry at Boston University. He was known for his works of science fiction. The Oxford English Dictionary credits his science fiction with introducing the word “robotics.” Amazing! He is perhaps best known for his “Three Laws of Robotics” (written way ahead of his time!): 1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. So, killer robots violate his first law. We call on technology to employ “Empathic AI” — programming that holds paramount humans’ safety, health, and welfare. Utilitarianism, on the other hand, is a moral code that urges us to make decisions that result in the greatest good for the greatest number of people. So would killer robots be a justifiable compromise of Empathic AI, helping to reduce the number of terrorists and enhancing our overall well-being?
Elon Musk and other AI leaders call for a ban on killer robots
3
elon-musk-and-other-ai-leaders-call-for-a-ban-on-killer-robots-1d4ff0551744
2018-02-13
2018-02-13 16:09:24
https://medium.com/s/story/elon-musk-and-other-ai-leaders-call-for-a-ban-on-killer-robots-1d4ff0551744
false
349
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Engineering and Cyberethics Today
null
874fbec40188
garymartin
26
24
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-23
2018-05-23 18:21:32
2018-05-24
2018-05-24 19:38:00
23
true
en
2018-05-24
2018-05-24 19:38:00
42
1d50d6903d35
9.666038
20
2
0
Batch Normalization is a technique to normalize (Standardize) the internal representation of data for faster training. However, I wanted to…
5
Deeper Understanding of Batch Normalization with Interactive Code in Tensorflow [ Manual Back Propagation ] GIF from this website Batch Normalization is a technique to normalize (standardize) the internal representation of data for faster training. However, I wanted to know more about this method and about the theory behind the idea, and I had a couple of questions that I wanted to ask myself, such as: Q1) Does Batch Normalization act as a regularizer? Q2) What are the benefits of Batch Normalization? Q3) What are the drawbacks of Batch Normalization? Q4) What is Co-variate Shift / Internal Co-variate Shift? Q5) What are exponentially weighted averages? Additionally, I wanted to implement a batch normalization (BN) layer to see how the results differ between a model that does not have BN, a model using the tf.layers batch normalization, and a model trained using AMS Grad. Below is the list of all of the cases that we are going to implement. (Please note the base model is from The All Convolutional Net.) Case a) No Batch Norm with Auto Differentiation Adam Case b) Batch Norm with Auto Differentiation Adam Case c) Batch Norm with Manual Back Prop AMS Grad Below I have attached the original paper, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift”; however, I will cite every other source that I used while writing this post. Please note that this post is for improving my own understanding of batch normalization and why it is used in DL. Standardization / Normalization Image from this website Before moving on, I will assume that you already have a concrete understanding of the difference between standardization and normalization. If you are not sure, please read my blog post about this matter here. Batch Normalization as Regularization Image from this website Dropout is already a well-known technique that can perform regularization on the network. However, I had never thought of it from the point of view of adding noise to the network. But as seen above, if we think of dropout as adding a noise vector (one that contains the numerical values 0 and 1), we can think of it as adding noise. And it is an already known fact that adding noise to the gradient improves the accuracy of the model (as presented in the paper Adding Gradient Noise Improves Learning for Very Deep Networks). I have made a blog post about this; please click here to view the implementation as well as the blog post. Image from this website However, we need to take note of one thing: the regularization effect of batch normalization is a side effect rather than the main objective, meaning we shouldn’t use it as our main regularization method. The reason why it acts as a regularization method can be seen below. Image from this website Theoretical Benefits of Batch Normalization Image from this website This blog post does an amazing job describing the benefits of batch normalization. It seems that, in theory, there are multiple benefits to using batch normalization. Another good post on why batch normalization works can be seen below. Image from this website As a one-sentence summary: it limits the internal co-variate shift by normalizing the data over and over again (or standardizing it, to a mean of 0 and a variance of 1). Drawbacks of Batch Normalization Image from Agustinus Kristiadi’s Blog Agustinus did an amazing job explaining what batch normalization is, as well as providing some additional experiments. In the end, the network with batch normalization gave higher accuracy; however, it took more time to train.
This is expected, since with batch normalization we have two more parameters to optimize (alpha and beta). Image from this website Additionally, this post explains the cautions we have to take when using batch normalization. Due to the exponential moving average, if the mini-batch does not properly represent the entire data distribution (of both the training and testing data, since we are going to use the saved exponential moving average at testing time), the model’s performance could be heavily decreased. However, with all due respect, I don’t think that would be a problem. If the model was trained on MNIST data, it would only make sense to test it on test images from the same data set; otherwise, the distribution of the data will hinder the performance of the model. But the above post does an amazing job of pointing out the cautions to take when it comes to using batch normalization. What is Co-variate Shift / Internal Co-variate Shift? Image from this website This blog post does an amazing job explaining both co-variate shift and internal co-variate shift. I understand it simply as the distribution of the data. So if my parameters were trained on distribution A, and we give the model data with a different distribution, let’s say B, the trained model will not perform very well. Image from this website I understand internal co-variate shift as the change in the distribution of the data within the inner layers of the network. (Typically we have networks that have more than one layer.) However, if anyone wants to read the full, detailed description of the term, please click here. What are exponentially weighted averages? Image from this website Yellow Line → Description of which values of μ and σ are used on the test set. One tricky aspect of batch normalization is getting the mean and standard deviation of the given data. Naturally, we want our model’s prediction to depend only on the given test data (during the testing phase). To make sure that happens, we can take the average of the mean and variance values we got during the training phase and use those values in the testing phase to perform standardization. (I know that my explanation was horrible; luckily Dr. Andrew Ng did an amazing job explaining this, and I added two more videos explaining this matter in detail.) Video from this website Video from this website Video from this website For more information about bias correction, and whether or not we need it, please click on this link. Implementation in Tensorflow Red Box → Code to distinguish the training phase from the testing phase Very smart researchers have already done an amazing job explaining how to implement a batch normalization layer, so thanks to their contributions it was quite easy to implement in Tensorflow. However, one tricky part was distinguishing the training phase from the testing phase; with a little help from tf.cond(), that can be easily implemented (see the sketch after this section). Please check this blog, this blog, or this blog for a REALLY amazing explanation of the implementation. Result: Case a) No Batch Norm with Auto Differentiation Adam Left Image → Train Accuracy Over Time / Cost Over Time Right Image → Test Accuracy Over Time / Cost Over Time Since the base model (The All Convolutional Net) already performs so well on the CIFAR-10 data set, it wasn’t surprising to see the model achieve 88 percent accuracy by just the 21st epoch. However, we can observe that the model is suffering from over-fitting.
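Here is a minimal sketch, in the TF 1.x style this post uses, of the tf.cond() trick described in the implementation section above: batch statistics plus moving-average updates during training, stored averages at test time. The variable names, the decay value, and the NHWC moments axes are my assumptions, not the author's exact code:

import tensorflow as tf

def batch_norm(x, is_training, decay=0.9, eps=1e-5):
    # x: 4-D conv activations (NHWC); is_training: a tf.bool tensor
    dims = x.get_shape().as_list()[-1]
    gamma = tf.Variable(tf.ones([dims]))                          # scale (learned)
    beta = tf.Variable(tf.zeros([dims]))                          # shift (learned)
    moving_mean = tf.Variable(tf.zeros([dims]), trainable=False)  # EWA of mean
    moving_var = tf.Variable(tf.ones([dims]), trainable=False)    # EWA of variance

    def train_phase():
        # Normalize with the current batch statistics and update the
        # exponentially weighted averages used later at test time
        mean, var = tf.nn.moments(x, axes=[0, 1, 2])
        update_mean = tf.assign(moving_mean, decay * moving_mean + (1 - decay) * mean)
        update_var = tf.assign(moving_var, decay * moving_var + (1 - decay) * var)
        with tf.control_dependencies([update_mean, update_var]):
            return tf.nn.batch_normalization(x, mean, var, beta, gamma, eps)

    def test_phase():
        # At test time the prediction depends only on the stored averages
        return tf.nn.batch_normalization(x, moving_mean, moving_var, beta, gamma, eps)

    return tf.cond(is_training, train_phase, test_phase)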
Result: Case b) Batch Norm with Auto Differentiation Adam Left Image → Train Accuracy Over Time / Cost Over Time Right Image → Test Accuracy Over Time / Cost Over Time With tf.layers.batch_normalization, the model was able to achieve higher test accuracy while achieving lower accuracy on the training data. So, in conclusion, batch normalization really does help with generalization as well as faster convergence. Result: Case c) Batch Norm with Manual Back Prop AMS Grad Left Image → Train Accuracy Over Time / Cost Over Time Right Image → Test Accuracy Over Time / Cost Over Time With AMS Grad paired with batch normalization, the model wasn’t able to achieve an accuracy of 88 percent within the same number of epochs. However, the model does a (pretty) good job of generalizing. Interactive Code For Google Colab, you need a Google account to view the code; also, you can’t run read-only scripts in Google Colab, so make a copy in your own playground. Finally, I will never ask for permission to access your files on Google Drive, just FYI. Happy coding! Also, for transparency, I uploaded all of the logs from training. To access the code for Case a please click here; to access the logs click here. To access the code for Case b please click here; to access the logs click here. To access the code for Case c please click here; to access the logs click here. Final Words I had wanted to make this blog post for so long, since I was just so curious about batch normalization, and I am happy that I finally did. If any errors are found, please email me at jae.duk.seo@gmail.com; if you wish to see the list of all of my writing, please view my website here. Meanwhile, follow me on Twitter here, and visit my website or my YouTube channel for more content. I also implemented Wide Residual Networks; please click here to view that blog post. References (2018). Arxiv.org. Retrieved 23 May 2018, from https://arxiv.org/pdf/1502.03167.pdf Data science. (2018). Pinterest. Retrieved 23 May 2018, from https://www.pinterest.ca/pin/463870830351205673/ Understanding Batch Normalization with Examples in Numpy and Tensorflow with Interactive Code. (2018). Towards Data Science. Retrieved 23 May 2018, from https://towardsdatascience.com/understanding-batch-normalization-with-examples-in-numpy-and-tensorflow-with-interactive-code-7f59bb126642 Available at: https://www.quora.com/What-is-the-difference-between-dropout-and-batch-normalization [Accessed 23 May 2018]. (2018). Arxiv.org. Retrieved 23 May 2018, from https://arxiv.org/pdf/1611.03530.pdf (2018). Arxiv.org. Retrieved 23 May 2018, from https://arxiv.org/pdf/1511.06807.pdf Only Numpy: Implementing “ADDING GRADIENT NOISE IMPROVES LEARNING FOR VERY DEEP NETWORKS” from…. (2018). Becoming Human: Artificial Intelligence Magazine. Retrieved 23 May 2018, from https://becominghuman.ai/only-numpy-implementing-adding-gradient-noise-improves-learning-for-very-deep-networks-with-adf23067f9f1 (2018). [online] Available at: https://www.quora.com/Is-there-a-theory-for-why-batch-normalization-has-a-regularizing-effect [Accessed 23 May 2018]. Available at: https://www.quora.com/Is-adding-random-noise-to-hidden-layers-considered-a-regularization-What-is-the-difference-between-doing-that-and-adding-dropout-and-batch-normalization [Accessed 23 May 2018]. Implementing BatchNorm in Neural Net — Agustinus Kristiadi’s Blog. (2018). Wiseodd.github.io. Retrieved 23 May 2018, from https://wiseodd.github.io/techblog/2016/07/04/batchnorm/ Glossary of Deep Learning: Batch Normalisation — Deeper Learning — Medium. (2017).
Medium. Retrieved 23 May 2018, from https://medium.com/deeper-learning/glossary-of-deep-learning-batch-normalisation-8266dcd2fa82 Why does batch normalization help? (2018). Quora. Retrieved 23 May 2018, from https://www.quora.com/Why-does-batch-normalization-help On The Perils of Batch Norm. (2018). Alexirpan.com. Retrieved 23 May 2018, from https://www.alexirpan.com/2017/04/26/perils-batch-norm.html Deng, Y. (2017). Understanding Batch Norm. MutouMan. Retrieved 23 May 2018, from http://dengyujun.com/2017/09/30/understanding-batch-norm/ Batch Normalization — What the hey? — Gab41. (2016). Gab41. Retrieved 23 May 2018, from https://gab41.lab41.org/batch-normalization-what-the-hey-d480039a9e3b Exponentially Weighted Averages (C2W2L03). (2018). YouTube. Retrieved 23 May 2018, from https://www.youtube.com/watch?v=lAq96T8FkTw tf.constant | TensorFlow. (2018). TensorFlow. Retrieved 23 May 2018, from https://www.tensorflow.org/api_docs/python/tf/constant tensorflow/tensorflow. (2018). GitHub. Retrieved 23 May 2018, from https://github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/python/ops/nn_impl.py What is the equivalent of np.std() in TensorFlow? (2018). Stack Overflow. Retrieved 23 May 2018, from https://stackoverflow.com/questions/39354566/what-is-the-equivalent-of-np-std-in-tensorflow/39354802 tf.layers.batch_normalization | TensorFlow. (2018). TensorFlow. Retrieved 24 May 2018, from https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization Bias Correction of Exponentially Weighted Averages (C2W2L05). (2018). YouTube. Retrieved 24 May 2018, from https://www.youtube.com/watch?v=lWzo8CajF5s Why is it important to include a bias correction term for the Adam optimizer for Deep Learning? (2018). Cross Validated. Retrieved 24 May 2018, from https://stats.stackexchange.com/questions/232741/why-is-it-important-to-include-a-bias-correction-term-for-the-adam-optimizer-for tf.cond | TensorFlow. (2018). TensorFlow. Retrieved 24 May 2018, from https://www.tensorflow.org/api_docs/python/tf/cond Kratzert, F. (2018). Understanding the backward pass through Batch Normalization Layer. Kratzert.github.io. Retrieved 24 May 2018, from https://kratzert.github.io/2016/02/12/understanding-the-gradient-flow-through-the-batch-normalization-layer.html Thorey, C. (2016). What does the gradient flowing through batch normalization look like? Cthorey.github.io. Retrieved 24 May 2018, from http://cthorey.github.io./backpropagation/ Implementing BatchNorm in Neural Net — Agustinus Kristiadi’s Blog. (2018). Wiseodd.github.io. Retrieved 24 May 2018, from https://wiseodd.github.io/techblog/2016/07/04/batchnorm/ How and why does Batch Normalization use moving averages to track the accuracy of the model as it trains? (2018). Cross Validated. Retrieved 24 May 2018, from https://stats.stackexchange.com/questions/219808/how-and-why-does-batch-normalization-use-moving-averages-to-track-the-accuracy-o tf.layers.batch_normalization | TensorFlow. (2018). TensorFlow. Retrieved 24 May 2018, from https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization Implementation of Optimization for Deep Learning Highlights in 2017 (feat. Sebastian Ruder). (2018). Medium. Retrieved 24 May 2018, from https://medium.com/@SeoJaeDuk/implementation-of-optimization-for-deep-learning-highlights-in-2017-feat-sebastian-ruder-61e2cbe9b7cb [ ICLR 2015 ] Striving for Simplicity: The All Convolutional Net with Interactive Code [ Manual…. (2018). Towards Data Science.
Retrieved 24 May 2018, from https://towardsdatascience.com/iclr-2015-striving-for-simplicity-the-all-convolutional-net-with-interactive-code-manual-b4976e206760
Deeper Understanding of Batch Normalization with Interactive Code in Tensorflow [ Manual Back…
126
deeper-understanding-of-batch-normalization-with-interactive-code-in-tensorflow-manual-back-1d50d6903d35
2018-06-17
2018-06-17 19:25:04
https://medium.com/s/story/deeper-understanding-of-batch-normalization-with-interactive-code-in-tensorflow-manual-back-1d50d6903d35
false
2,058
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Jae Duk Seo
https://jaedukseo.me | | | | |Your everyday Seo, who likes kimchi
70eb2d57a447
SeoJaeDuk
2,388
165
20,181,104
null
null
null
null
null
null
0
null
0
6eb4b69effdc
2018-06-10
2018-06-10 22:49:43
2018-06-15
2018-06-15 15:54:43
7
false
en
2018-06-19
2018-06-19 13:33:03
10
1d51848223b7
5.004717
16
2
0
An examination of current PoW mining centralisation, Casper’s PoS implications, and beyond.
5
Deep Diving into Ethereum Mining Pools An examination of current PoW mining centralisation, Casper’s PoS implications, and beyond. Mining Pools have become commonplace in every major Proof of Work (PoW) blockchain, as they allow miners to have a stable income. This has had the unintended side effect of greatly decreasing the number of entities validating the Ethereum blockchain. This post analyses the degree of centralisation caused by Ethereum’s pool mining hubs. It also examines the upcoming transition to Proof of Stake (PoS) with Casper — and its implications for Ethereum centralisation. Let’s dive right in and take a look at what a mining pool is, and why someone would use one. What exactly is a Mining Pool? A mining pool is a pooling of computational resources by miners, who share their processing power over a network. If the pool mines a block, the reward is spread based on the contributions each miner provided. It is common for a pool to have a small fee for the organiser (0–2%). Miners opt for pools because they allow for consistent revenue, while mining alone may entail long periods of time before successfully solving a block. Below are some of the biggest pools in the Ethereum network: Based on successful block rewards during June 4 — June 11 ‘18 Is there a downside? As miners gravitate to the most prominent pools, the number of entities validating blocks is reduced; thousands of miners may provide the hashing power, but ultimately the pool organiser controls what information is submitted in a block. The diagram below uses squares to represent a block validator — with size relative to computational power. Labelled data obtained from etherscan.io As you can see — centralisation is a spectrum, which can range from one address controlling the network (fully centralised) to every address validating blocks. Under the PoW protocol (and also most PoS implementations) it’s preferable to have more parties competing for block validation, with no entity holding a large portion of hashing power. The current state of affairs is represented by the right hand image — an increasingly centralised ecosystem. What’s the worst that could happen? It’s well known that an organisation controlling the majority of a network’s power would be able to attempt an attack with a significant likelihood of success (the fabled 51% attack). A successful attacker would be able to: Spend their Ether multiple times (double spending) Modify transactions within the past few blocks Create public mistrust of the blockchain’s authenticity An attack is unlikely, as a pool’s profit outweighs the potential gains of a short attack. That said, it is always important to analyse worst-case scenarios in potential cyber-security threats. Let’s take a closer look at the distribution of miners that successfully obtained a block reward: Only 14 different addresses mined the last 7 days of Ethereum blocks (39975 blocks)! In the last 7 days (June 4th-11th 2018), Ethermine and Sparkpool alone accounted for 49.6% of the block rewards — and adding f2pool to the mix would reach a total of 64.1% of the network. Mining Pools are not economically incentivized to perform such an attack, and it’s unlikely they will ever be. However, the Ethereum network should not have a failure point of just 3 entities. An ideal network is protected both from a technical and an economic perspective. Ethereum developers are hard at work addressing changes that would allow for more distributed verification.
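As a quick illustration of the reward-sharing mechanic described at the top of this piece, here is a toy sketch of a pro-rata payout after the organiser's fee; the miner hashrates, the 3 ETH block reward, and the 2% fee are illustrative assumptions, not real pool data.

# Toy pool payout: split the block reward pro rata after the organiser's fee
BLOCK_REWARD = 3.0   # ETH, illustrative
POOL_FEE = 0.02      # 2% organiser fee, within the 0-2% range quoted above

miners = {"alice": 60.0, "bob": 30.0, "carol": 10.0}  # contributed hashrate
total = sum(miners.values())
payable = BLOCK_REWARD * (1 - POOL_FEE)

payouts = {name: payable * rate / total for name, rate in miners.items()}
print(payouts)  # {'alice': 1.764, 'bob': 0.882, 'carol': 0.294}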
Let’s further examine the plans currently being developed to improve centralisation — and analyse whether they resolve the problems created through the current PoW protocol. What is Casper? Casper is a Proof of Stake (PoS) consensus algorithm that, instead of using computational power to ensure decentralisation of block validation, relies on an economic stake in the network. In short, users lock Ether in the network, and gain voting power relative to their stake; they are in turn rewarded Ether over time. How will this affect Centralisation? In my previous blog post, I analysed Ether Balance distribution. One of the findings was that the top 10 addresses hold 11.4% of Ether. While still a surprising distribution — this is much better than the previous scenario. While previously the top two pools controlled 49.6% of hashing power, the top address ‘only’ has 1.5% of total Ether. From Mining Pools to Staking Pools Taking part in Casper’s PoS system will have a large minimum requirement (at least 32 Ether) — preventing the average user from participating directly. Earlier in the article, we discussed how mining pools make it practical for small miners to participate in block validation — in this case, staking pools allow people with low ETH balances to take part in the network. Won’t this lead to the same centralisation problem? There is a crucial difference between Mining and Staking pools. In mining pools, participants are only risking the Ether they have not yet withdrawn from the pool. This means that miners don’t have an incentive to be responsible with whom they provide their computational power — in the short run, they make the same profit. In Casper, participants have to deposit (and thus risk) their Ether: if there is any malicious activity from a pool, this will result in a penalty to all users. This forces users to be conscious of whom they trust. Furthermore, the percentage of Ether lost increases as the pool size increases: Ether Penalty = 3 x Percentage of Users in the Network. Thus, if a pool holding 25% of the network gets hacked (which is not unreasonable, seeing that Ethermine currently holds 25.6%), one would lose 75% of their deposit. On the other hand, someone in a network representing 1% of validators would only risk losing 3% of their holdings. Information from Vitalik’s talk in May 2018’s Ethworld This relationship means that users will prefer to take part in small pools, where they are not risking more Ether than they stand to gain. This in turn will keep nodes at reasonable levels — heavily disincentivising any group from growing to the degree current mining pools are at. Given the effects examined above, Casper’s implementation will hopefully take us to a more decentralised validator network. TokenAnalyst parses and classifies every on-chain transaction (currently from the Ethereum blockchain) with the goal of deriving fundamental insights to value crypto assets. If you want to be updated with our latest analysis, follow us on Medium — or sign up to e-mail notifications below.
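As a quick check of the penalty relationship quoted above (Ether Penalty = 3 x percentage of the network the pool controls), here is a tiny sketch reproducing the article's numbers; the pool shares are illustrative.

# Fraction of a deposit lost if a pool with `pool_share` of the network
# misbehaves, capped at losing the whole deposit
def penalty_fraction(pool_share):
    return min(3 * pool_share, 1.0)

for share in (0.01, 0.10, 0.25):
    print(f"{share:.0%} of validators -> lose {penalty_fraction(share):.0%} of deposit")
# 1% -> 3%, 10% -> 30%, 25% -> 75% (matching the figures in the article)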
Deep Diving into Ethereum Mining Pools
273
deep-diving-into-ethereum-mining-pools-1d51848223b7
2018-06-19
2018-06-19 14:37:50
https://medium.com/s/story/deep-diving-into-ethereum-mining-pools-1d51848223b7
false
1,048
Fundamental insights on crypto assets using data from the blockchain
null
null
null
TokenAnalyst
sid@tokenanalyst.io
tokenanalyst
BLOCKCHAIN,CRYPTOCURRENCY,INVESTING,DATA ANALYSIS,CRYPTOCURRENCY INVESTMENT
thetokenanalyst
Blockchain
blockchain
Blockchain
265,164
Matthias De Aliaga
Data Scientist @TokenAnalyst
121be0867a51
matthiasdealiaga
204
77
20,181,104
null
null
null
null
null
null
0
# Deep Belief Network classifier on the MNIST digits (Kaggle "train.csv")
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import StandardScaler
from dbn.tensorflow import SupervisedDBNClassification

# Load pixel values (features) and digit labels (targets)
digits = pd.read_csv("train.csv")
X = np.array(digits.drop(["label"], axis=1))
Y = np.array(digits["label"])

# Standardise pixel values to zero mean and unit variance
ss = StandardScaler()
X = ss.fit_transform(X)

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

# Two hidden RBM layers of 256 units each: RBM pre-training,
# then back-propagation fine-tuning
classifier = SupervisedDBNClassification(hidden_layers_structure=[256, 256],
                                         learning_rate_rbm=0.05,
                                         learning_rate=0.1,
                                         n_epochs_rbm=10,
                                         n_iter_backprop=100,
                                         batch_size=32,
                                         activation_function='relu',
                                         dropout_p=0.2)
classifier.fit(X_train, Y_train)
Y_pred = classifier.predict(X_test)
print('Done.\nAccuracy: %f' % accuracy_score(Y_test, Y_pred))
8
be1712b1aff
2018-07-30
2018-07-30 09:15:53
2018-07-30
2018-07-30 11:54:53
4
false
en
2018-07-30
2018-07-30 11:54:53
2
1d52bb867a25
3.315094
1
0
0
In this article we will be looking at what DBNs are, what are their components, and their small application in Python, to solve the…
5
Deep Belief Networks — An Introduction In this article we will be looking at what DBNs are, what their components are, and their small application in Python to solve the handwriting recognition problem (MNIST dataset). Before understanding what a DBN is, we will first look at RBMs: Restricted Boltzmann Machines. Restricted Boltzmann Machines If you know what factor analysis is, RBMs can be considered a binary version of factor analysis. So instead of having many factors deciding the output, we can have binary variables in the form of 0 or 1. For example: if you read a book and then judge that book on a two-point scale — either you like the book or you do not — this is the kind of scenario where we can use RBMs, which will help us determine the reasons behind us making those choices. RBMs take a probabilistic approach to neural networks, and hence they are also called Stochastic Neural Networks. If we decompose RBMs, they have three parts:- One Input Layer aka Visible Unit One Hidden Layer aka Hidden Unit One Bias Unit In the example I gave above, the visible units are nothing but whether you like the book or not. The hidden units help to find what makes you like that particular book. The bias is added to incorporate the different kinds of properties that different books have. Let us visualize the RBM: Red is Visible Unit, Blue is Hidden Unit Let us look at the steps an RBM takes to learn the decision-making process (a toy numpy sketch of these steps follows at the end of this article):- Compute the Activation Energy Calculate the Sigmoid of the Activation Energy This will give us a probability. Using this probability, the hidden units can be turned on or off. Now that we have a basic idea of Restricted Boltzmann Machines, let us move on to Deep Belief Networks. Deep Belief Networks DBNs have two phases:- Pre-train Phase Fine-tune Phase The pre-train phase is nothing but multiple layers of RBMs, while the fine-tune phase is a feed-forward neural network. Let us visualize both steps:- credit: Codeburst How do DBNs work? Find the features of the Visible Units using the Contrastive Divergence algorithm Find the Hidden Unit features, and the features of the features found in the above step When the hidden layer learning phase is over, we call it a trained DBN Practical Application on the MNIST Dataset Step 1 is to load the required libraries. dbn.tensorflow is a GitHub version, for which you have to clone the repository and paste the dbn folder into the folder where your code file is present. The link to the code repository is here. Step 2 is to read the csv file, which you can download from Kaggle. Step 3, let's define our independent variables, which are nothing but pixel values, and store them in numpy array format in the variable X. We'll store the target variable, which is the actual number, in the variable Y. Step 4, let us use the sklearn preprocessing class StandardScaler. This is used to standardise the features so that each has zero mean and unit variance. Step 5, now that we have standardised the data, we can split it into train and test sets:- Step 6, now we will initialize our Supervised DBN Classifier to train on the data. Step 7, now we come to the training part, where we will be using the fit function to train: It may take from 10 minutes to one hour to train on the dataset. Once the training is done, we have to check the accuracy: The output that I got was: Final Accuracy So, in this article we saw a brief introduction to DBNs and RBMs, and then we looked at the code for a practical application. Hope it was helpful!
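Here is the toy numpy sketch promised above of the three RBM steps: activation energy, sigmoid, and the probabilistic on/off decision for the hidden units. The layer sizes, random weights, and seed are illustrative assumptions, not the dbn library's internals.

# How an RBM's hidden units are switched on or off, step by step
import numpy as np

rng = np.random.default_rng(0)

v = rng.integers(0, 2, size=6)            # visible units (e.g. "liked the book" = 1)
W = rng.normal(scale=0.1, size=(4, 6))    # weights: 4 hidden x 6 visible
b = np.zeros(4)                           # hidden bias

activation_energy = W @ v + b                        # step 1: activation energy
p_hidden = 1.0 / (1.0 + np.exp(-activation_energy))  # step 2: sigmoid gives a probability
hidden = (rng.random(4) < p_hidden).astype(int)      # step 3: turn each unit on or off

print(p_hidden, hidden)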
Deep Belief Networks — An Introduction
2
deep-belief-networks-an-introduction-1d52bb867a25
2018-07-30
2018-07-30 11:54:54
https://medium.com/s/story/deep-belief-networks-an-introduction-1d52bb867a25
false
693
Innovation in Data Science and Visualizations, direct from the pool of learners
null
null
null
Analytics Army
null
analytics-army
DATA SCIENCE,ANALYTICS,MACHINE LEARNING,DEEP LEARNING,VISUALIZATION
null
Machine Learning
machine-learning
Machine Learning
51,320
Himanshu Singh
Senior Data Scientist, Corporate Trainer, Speaker, Story-teller
9b0a717c33d8
himanshuit3036
40
18
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-30
2017-10-30 15:20:47
2017-10-30
2017-10-30 15:24:08
1
false
en
2017-10-30
2017-10-30 15:24:08
1
1d5367f4fae0
0.471698
0
0
0
Ever imagined having a realistic face-to-face conversation with a robot? Well, say hello and more to Sophia! A larger-than-life robot with…
3
Sophia the Robot Speaks at the UN And Is Now A Citizen of Saudi Arabia Ever imagined having a realistic face-to-face conversation with a robot? Well, say hello — and more — to Sophia! A larger-than-life robot with the capacity to meaningfully communicate and engage with humans, who counts among her accomplishments speaking at the United Nations and now even Saudi Arabian citizenship! Read more…
Sophia the Robot Speaks at the UN And Is Now A Citizen of Saudi Arabia
0
sophia-the-robot-speaks-at-the-un-and-is-now-a-citizen-of-saudi-arabia-1d5367f4fae0
2017-10-30
2017-10-30 15:24:09
https://medium.com/s/story/sophia-the-robot-speaks-at-the-un-and-is-now-a-citizen-of-saudi-arabia-1d5367f4fae0
false
72
null
null
null
null
null
null
null
null
null
Science
science
Science
49,946
Evolving Science
Inspiring innovation and scientific research for the advancement of mankind. www.evolving-science.com
7bd43d6083eb
EvolvingScience
25
185
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-07
2017-11-07 16:27:34
2017-11-07
2017-11-07 16:34:24
2
false
en
2017-12-01
2017-12-01 03:20:34
0
1d53956a095d
3.764465
0
0
0
Titor, it is your birthday! Happy Birthday!
5
Dear Son: Titor, it is your birthday! Happy Birthday! I wish I could be there. I miss you so much. I remember when you were born, you cried when the nurse was weighing you. I talked to you and you quieted down. That made me happy because it meant you recognized my voice. When we finally got you home, we would put you to sleep in your crib. Some nights you would wake up crying. I would put you on my chest and let you fall asleep on it. During the day I would take you for walks in the stroller and dance with you in my arms while we listened to reggae music. Every year we would learn something new, which is what every kid does, but for dads it is special, because that day will not happen again. When I was ten, we did not have computers or any electronic toys. I played with erector sets, which were metal plates and nuts and bolts you could make things with. Mostly I played outside with my friend and sometimes my sisters. We lived in a hotel that your grandpa managed. It was fun to live in the hotel, but we did not have as many friends as you do in your neighborhood. My friends were the gardeners’ kids. We played in the “barranca”, which was a large jungle-like area, and the hotel gardens. We collected bugs, trapped snakes (some were poisonous), climbed trees, made forts of found wood and explored the river beds. There were some very large frogs there. School was a little bit hard for me. I was not very good at it and numbers were difficult for me. Now that I am grown up, I have learned that my brain may be wired a little differently than most people’s. That is probably why I had a hard time at school. In my school they did not have classes or special programs for kids with different working brains, like they do at your school. Like you, I need to move around and do things while I figure out how to solve a problem. I think they call it kinesthetic learning. As it turns out, my brain and how I make decisions are very different from most people’s. That is part of the reason why I am not with you and Johnny at this time. Your mom described it to you as me being sick. I guess that is one way to look at it. Sometimes people are born with differences from the rest of the people. Because the majority of the people are born without significant differences, we call them normal. When someone is born with a difference, then we describe them as sick, disabled, gay or gifted. For example, people said autism was an illness. If you are born blind or deaf, you are disabled. If your brain works a different way, you could be considered gifted. When I was very young I watched the first rocket launch and the first man landing on the moon. Some kids have dads that go on special missions where they may be away for a long time. I think of the people manning the space station. They are there for years. They can’t go to work and come home in the afternoon. Maybe for now we can pretend that I am on a special mission. That my brain works so differently that scientists wanted to study it. We can say the scientists want to learn how I solve problems and use what they learn to make a helpful robot. If you get sad, think about it that way. Dad loves you, he went on a mission to help make a helpful robot and will be back to love you, give you hugs, laugh, and learn with you some more. Every morning I wonder how you and JonJon are doing. I imagine getting you ready for school, like I used to do. You are such a sleepy head when you wake up sometimes. Carry you to the bathroom. Stand you up so you can go pee. Get the shower ready.
Get your clothes ready and put the coffee on while you shower. Go back up, get you out of the shower, dry your hair, get you dressed, have you say goodbye to your mom and go downstairs for breakfast. Make sure that you ate your breakfast while you watched a Stampy vid and walk to the bus stop. Wait with you and the rest of the kids for the bus. Then hug and kiss you. Sometimes we huddled together because it was windy and cold. I enjoyed each day with you. Be strong in your new year. Be a good brother to Jonjon; help him when he gets sad and give him lots of hugs. Listen to your mom, study hard and enjoy your friends. Love you always! Dad P.S. I can’t get this note directly to you; it is part of the mission not to have contact with my family. I am posting it here so you know I was thinking about you. Wish I could send you a present too, but mostly I wish I could give you a big hug. Love you Titor. (I know today is not your birthday, for safety reasons I did not post on the exact day ;)) Carlos Gasca Yanez Be life giving! Canada, USA, Mexico
Dear Son:
0
dear-son-1d53956a095d
2017-12-01
2017-12-01 03:20:35
https://medium.com/s/story/dear-son-1d53956a095d
false
896
null
null
null
null
null
null
null
null
null
Love
love
Love
187,303
Carlos Gasca Yanez
null
89f956ee33d1
carlosgascayanez
4
12
20,181,104
null
null
null
null
null
null
0
null
0
a7f8d7dc376a
2018-08-21
2018-08-21 07:22:56
2018-08-27
2018-08-27 13:04:39
1
false
en
2018-08-28
2018-08-28 08:25:03
11
1d549f88361d
5.030189
1
0
0
By Kim Oguilve
5
Meet The Founder: Kalle Salmi By Kim Oguilve The latest Finnish Property Market 2018 report showcases a strong real estate market — with foreign investment into the country being the largest investor group, with a share of 29% of the total Finnish market. The main city regions also showed an increase in housing demand, with Helsinki representing half of the increase. With such a promising landscape, and with technology added to the equation despite the market’s recent growth, we wonder how technology can be a game changer and give a facelift to the mundane processes that sit between buying and selling a home. Kalle Salmi, CEO and Founder at Kodit.io, explains how he went about founding Kodit.io and how his company is being a lifesaver for many new property owners and sellers. You can also listen to the full-length interview here. The Interview Q: Who is Kalle and how did you become an entrepreneur? I have been an entrepreneur for my whole professional career. — Kalle Salmi Besides owning other companies in the past and moving on to new challenges, Kalle also shares some of his time as an investor and a family man. He majored in entrepreneurship at the Helsinki School of Economics and started his first company ten years ago. Q: How was Kodit.io founded and what were your motivations to start the company? When buying a new home 2 years ago, I found the process to be a real pain: time-consuming, money-consuming and really frustrating at times. It got me thinking with my business partner, Martti, of whether it would be possible to create a service that would simply eliminate all inconveniences in the process. — Kalle Salmi Through the development of his professional career, and on a personal level (buying his own home), Kalle was able to get first-hand experience of the real estate industry, competitors abroad, and the real pain points when buying or selling a home. When he bought a new home two years ago, that painful process was the eye-opening moment for him and his partner Martti to start working on a solution that could eliminate some of the pain points. The company was started officially in July 2017, and the service was launched late last November. Q: What is Kodit.io and what happens in the backend when someone utilizes your platform? We are building an AI-powered real estate data platform, where we monitor the housing market on a micro level and in real time. We have access to public and private data sources, and also proprietary data that we collect. — Kalle Salmi With Kodit.io you can buy and sell a home. When selling a home, Kodit.io’s valuation tool can give an estimate of a home’s market price, and after conducting onsite inspections, Kodit.io will calculate a final offer. In the backend, together with all the data Kodit.io gathers, they have over 200 real estate agents utilizing their valuation tools, which provides additional data and results in more accurate valuations for customers. Kalle explained that while it is important to come up with a current and accurate valuation for a home, it is just as important to find out the full value potential of a specific unit: to know what the full value will be after modifications or renovations are done to the unit. Q: What has been the biggest milestone so far? The first real milestone was when we got the MVP (minimum viable product) version of our service out when we soft-launched in November last year.
— Kalle Salmi Kodit.io’s second biggest milestone was when they reached 100 customers they had helped, either buying or selling a home, which also gave them strong proof of concept. However, news broke a few days ago about them landing their latest funding round of 1.7 million euros, led by Speedinvest and Schibsted and supported by Icebreaker.vc. Q: How do you handle competition? We are the only player with whom you can get a direct offer on your home and really get it done. — Kalle Salmi Kalle admits that, technology-wise, they are the only player in the Finnish market doing what they do, but that they continue to compete indirectly with all the other traditional ways to sell and buy a home. If we leave the Finnish borders, they do have some competition in the US, but in Europe they are the leading AI real estate buyer. Q: How do you keep yourself motivated and is there someone you look up to? I look up to all great entrepreneurs of our time. Especially entrepreneurs with big visions like Richard Branson, or very unique and special individuals like Elon Musk or Jeff Bezos, who are very intelligent on any metric and also have the largest visions. — Kalle Salmi For Kalle, staying motivated after so many years as a serial entrepreneur is about seeing things happen. To him it is all about executing and delivering, which fascinates him because there is no gap to reach or corporate politics; it’s all about him and the team, plus a “sky is the limit” type of mindset. Q: What has been the uttermost best thing in all your journey? I think the best thing is to see those companies grow, and when you see some of those companies become a small institution that has a life of its own. — Kalle Salmi Some of the companies Kalle started are nowadays not dependent on him; he mainly acts as an owner. Those situations allowed him to jump into new opportunities, which on a personal level he considers to be the best way to achieve very fast personal development. Q: What has been the biggest learning curve? I think it is a combination of hundreds and thousands of small things that you need to do right, but of course, focusing on talent is one of those. As long as you get the best talent in, you will most likely succeed. — Kalle Salmi Kalle commented that with startups you always need to think very early on about proving a strong business model, where you truly create value and are confident in how to monetize it. His emphasis on creating something that really has the ability to touch people’s lives significantly was quite strong, because he has seen startups in which the product or service made an almost insignificant improvement to people’s lives, which doesn’t really justify their existence in the end. Q: What’s next on the agenda for Kodit.io? We are looking to expand into new cities in Europe. The next cities are most likely to be located in either Spain or Poland. — Kalle Salmi After their latest seed funding round, Kodit.io will be expanding to new markets, but first they will gather data on the locations and conduct an analysis to see which city gets all the pieces together first. Recruiting data science talent and building the model and technology further are on the agenda as well. Interested in working for Kodit.io? Check out their latest open roles here. Did you like this interview? Follow us here on Medium to be notified of the latest stories! Maria 01 is a community and a space. A League of tech entrepreneurs and investors building the future.
We provide the home-base and the network to get better and compete. The old hospital acts as the top meeting spot for hundreds of tech meetups and entrepreneurial get-togethers.
Meet The Founder: Kalle Salmi
6
meet-the-founder-kalle-salmi-1d549f88361d
2018-08-28
2018-08-28 08:25:03
https://medium.com/s/story/meet-the-founder-kalle-salmi-1d549f88361d
false
1,280
A Community House For Ambitious Tech Startups
null
mariazeroone
null
Maria 01
hello@maria.io
maria-01
STARTUP,ENTREPRENEURSHIP,HELSINKI,FINLAND,INNOVATION
MariaZeroOne
Real Estate
real-estate
Real Estate
69,162
Maria 01
Maria 01 is a tech-startup community in the heart of Helsinki. Follow us for the latest news and interviews with our in-house members + more!
71fdd7850541
Maria01
65
31
20,181,104
null
null
null
null
null
null
0
null
0
36b50b70f755
2018-09-03
2018-09-03 03:19:28
2018-09-03
2018-09-03 03:25:44
1
false
en
2018-09-03
2018-09-03 03:25:44
17
1d552ee5b558
1.464151
5
0
0
Machine Learning
5
Swift World This Week(08.27–09.02) “person holding maple leaf” by Nong Vang on Unsplash Machine Learning google/dopamine Dopamine is a research framework for fast prototyping of reinforcement learning algorithms. - google/dopaminegithub.com The Present and Future of AI in Design [Infographic] Artifical Intelligence will make designers smarter. Instead of resisting it, designers will soon be co-creating with…uxdesign.cc React Native madhavanmalolan/awesome-reactnative-ui Awesome React Native UI components updated weekly. Contribute to madhavanmalolan/awesome-reactnative-ui development by…github.com macOS brentsimmons/NetNewsWire Feed reader for macOS. Contribute to brentsimmons/NetNewsWire development by creating an account on GitHub.github.com Article What's in your Larder: Onboarding libraries for iOS I recently rewrote the onboarding flow in one of my iOS apps. I ended up writing the whole thing myself, because I…larder.io Dynamic Features in Swift In this tutorial, you'll learn to use dynamic features in Swift to write clean code, create code clarity and resolve…www.raywenderlich.com Code mercari/Mew The framework that support making MicroViewController. - mercari/Mewgithub.com NSHipster/PasswordRules A Swift library for defining strong password generation rules. - NSHipster/PasswordRulesgithub.com Tools dgurkaynak/Penc Trackpad-oriented window manager for macOS. Contribute to dgurkaynak/Penc development by creating an account on GitHub.github.com Design Best Practices For Mobile Form Design Users can be hesitant to fill out forms. That is why it is our goal as designers to make the process of filling out a…www.smashingmagazine.com Marketing amirrajan/survivingtheappstore My book on getting to the #1 Spot in the App Store. Buy my games to support me. - amirrajan/survivingtheappstoregithub.com Random How to Build a Feedback Loop for Your Own Growth In 2012, Professor Neal Roese made a powerful discovery in the science of happiness and motivation. He surveyed…blog.coleadership.com Learn Design Patterns in Swift NilStack/learn-design-patterns-in-swift learn-design-patterns-in-swift — This is a collection for my articles about design patterns in Swift on medium.com.github.com Subscribe Get Swift World This Weekly in another way. Thanks for your time. Please clap to get this article seen by more people. Please click Follow to get latest blogs from me. As a passionate iOS developer, blogger and open source contributor, I’m also active on Twitter and GitHub.
Swift World This Week(08.27–09.02)
7
swift-world-this-week-08-27-09-02-1d552ee5b558
2018-09-03
2018-09-03 06:49:45
https://medium.com/s/story/swift-world-this-week-08-27-09-02-1d552ee5b558
false
335
iOS weekly from Swift World
null
null
null
SwiftWorldWeekly
guoleii@gmail.com
swiftworldweekly
null
nilstack
React Native
react-native
React Native
6,018
Peng
Engineers are the artists of our generation.
5f8e077857f4
NilStack
1,238
677
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-29
2018-08-29 08:25:16
2018-08-29
2018-08-29 08:26:40
1
false
en
2018-10-19
2018-10-19 16:18:15
0
1d5664df9a81
5.509434
12
1
1
How do we discover if a wine is good or bad? One may tell you that the chemical properties of wine can alter the taste and the quality of…
1
Classifying wines by quality using machine learning How do we discover if a wine is good or bad? One may tell you that the chemical properties of wine can alter the taste and the quality of the beverage. What if we could test this affirmation using machine learning? Let’s consider a set of observations of red wine varieties. The data set presents not only their chemical properties, but also a quality ranking provided by tasters. Our goal is to predict the quality of a wine considering only its chemical properties. Our dataset is composed of the following features: fixed acidity volatile acidity citric acid residual sugar chlorides free sulfur dioxide total sulfur dioxide density pH sulphates alcohol quality Before starting our analysis, let’s ask ourselves a few questions: Is this a classification or a regression problem? Which approaches could we use to work with this data? Could we use the same model to generalise to white wines? Do we need all of the features to be used? Import packages and load data Check data Let’s start our analysis by checking what kind of data we have available. We already see that we only have numerical features, which will save us some time on data transformation. However, we can see that the data is not uniform, and we’ll have to do some standardisation later. As mentioned before, we have 12 features on the dataset and 1599 observations. Do you remember that we asked if this was a classification or regression problem? Well, I have seen people using both approaches to deal with this dataset. I personally like the classification approach. The wines are already classified by quality, so it seems natural to deal with the problem by trying to guess if a wine is good or bad (from the consumer standpoint). However, if we were working on the wine taster side, we might want to rank wines in a continuous way. Since we are trying to find out if a wine is good or bad, why not adapt our target column to reflect this approach? Let’s divide our dataset into 3 classes: Poor: all wines with rates below 4. Average: wines with rates 5 and 6. Excellent: wines with rates higher than 7. We’ll represent these three categories by 1, 2 and 3. Let’s create a 13th feature called ‘target’. But before doing all this, let’s see if we have missing data (or missing rates). Then I’ll check the class balance. We will see that we have a lot of average wines and very few representatives of the other 2 classes. It’s time to get rid of the quality column. Data visualisation How about doing some visualisation to see how our data behaves? Let’s start by looking for correlation in the data. We actually don’t see a significant correlation amongst the features, nor between the features and our target. Split data into train/test It’s time now to split our data into train and test sets. It’s important to stratify our sample by the target variable in order to ensure that the training set looks similar to the test set. Data standardisation A few steps ago, I commented that our data seemed to be not very uniform. So, we need to do some standardisation. This is a simple process, where we subtract the mean of each feature and then divide by the feature’s standard deviation. It’s important to do it after the train/test split. We do it after the split because we want to avoid introducing future information from the test set into the training explanatory variables: in this case, the mean and variance. To accomplish the standardisation step, we can use the Scikit-Learn Transformer API.
This feature allows us to “fit” preprocessing steps using the training data, just as we would do to fit a model. So, let’s start the process by fitting the transformer on the training set. The transformer will save the means and standard deviations. Later, we apply the transformer to the training set to scale the data. Lastly, we apply the transformer to the test set using the same means and standard deviations. We can do it manually, but there’s another way: inserting StandardScaler() directly into the pipeline. PCA Some features appear to add more value to models than others. Removing useless features is important because it allows the model to perform equally well or better using less data. Feature selection is also important to avoid overfitting and to reduce training time. A straightforward way to deal with a huge number of features in a dataset is Principal Component Analysis, or PCA. This technique converts high-dimensional data to low-dimensional data by selecting the most important components for the model. In order to apply PCA, we need the data to be normalised. The PCA class provides the explained_variance_ratio_ property, which returns the fraction of the variance explained by each principal component. We will see that the first four components of our data capture almost 98.5% of the variance. So, together those four carry 98.5% of the classification information. This means that we can keep the first 4 components to see how our model performs and discard the remaining variables. Hyperparameters For each machine learning algorithm, we have an ensemble of parameters that can be tweaked in order to improve our model. They are called hyperparameters. We can see the available hyperparameters for our model by calling the method get_params() on the pipeline. Our goal is to test a combination of hyperparameters to see which of them performs better. Let’s declare the hyperparameters we want to tune through cross-validation and insert them into the pipeline using a Python dictionary. Cross-validation pipeline In order to get better and more realistic results during model training, we have to use cross-validation. CV helps us reduce the chance of overfitting and makes the model more “general”, which is important for making future predictions. CV works by dividing the training dataset into ‘k’ equal parts. The model is trained on ‘k-1’ folds and tested on the “hold-out” fold. These steps are repeated until all folds have been used for training and testing. In the end, we aggregate the performance over all ‘k’ folds to see how the model performs. We can use CV to tune different models with different algorithms or parameters using only the training set. Then, we use the test set to make a final selection. What happens here is that GridSearchCV() performs cross-validation across the “grid”, which is the set of all possible permutations of hyperparameters. Then, we can check the best set of parameters found using CV before using the clf object as our model. Evaluate model on test data We can use the clf object we created before directly on the test data in order to evaluate the performance of our model. We’ll generate a confusion matrix to check how the model classified the different wines in the dataset. As we can see, we had a total accuracy of 87.5%. Is this performance good enough? It depends on the goal of the project. In fact, our model is very good at identifying average wines, but it’s terrible at finding poor wines. Let’s try to think from a business standpoint.
Imagine you want to classify the wines that will be put on the market. You want to avoid having poor wines predicted as excellent. This would be a PR disaster… And this is what our model is doing. I tried to use an oversampling technique called SMOTE to synthetically create more poor and excellent data, but the model behaved even worse, classifying more poor wines as excellent. We see that the model is clearly biased, but let’s leave it like this for educational purposes. Still, try to think about how you would improve this model. Maybe by collecting more real data about poor and excellent wines? Maybe by tweaking the hyperparameters a little more? Or by using a technique I didn’t use. Save model for future use Now it’s time to put the model to work. Imagine that you received some new wines and you need to classify them. We have to save our previous model to apply it to future data. We do it by using the sklearn model persistence tools. That’s all. If you want to reuse the model, call the joblib.load() method to load the model and apply it to new data.
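To tie the steps above together, here is a minimal end-to-end sketch, assuming the UCI red-wine CSV and an RBF-SVM classifier; the article never names its estimator, so the model, the parameter grid, the binning edges, and the file names are illustrative assumptions.

# Minimal end-to-end sketch: binning, stratified split, scaler + PCA +
# classifier pipeline, grid-searched CV, evaluation, and persistence
import pandas as pd
import joblib  # 2018-era sklearn exposed this as sklearn.externals.joblib
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

wine = pd.read_csv("winequality-red.csv", sep=";")
# 3 classes: poor (1), average (2), excellent (3); edges are a sketch choice
wine["target"] = pd.cut(wine["quality"], bins=[0, 4, 6, 10], labels=[1, 2, 3])
X = wine.drop(["quality", "target"], axis=1)
y = wine["target"].astype(int)

# Stratified split so train and test share the same class balance
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Scaler and PCA live inside the pipeline, so they are fit on training
# folds only: no test-set information leaks into the preprocessing
pipeline = make_pipeline(StandardScaler(), PCA(n_components=4), SVC())
hyperparameters = {"svc__C": [0.1, 1, 10], "svc__gamma": [0.01, 0.1]}

clf = GridSearchCV(pipeline, hyperparameters, cv=5)
clf.fit(X_train, y_train)
print(clf.best_params_)

# Evaluate on the held-out test set
y_pred = clf.predict(X_test)
print(accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))

# Persist the whole pipeline (scaler + PCA + model) for future wines
joblib.dump(clf, "wine_quality_model.pkl")
model = joblib.load("wine_quality_model.pkl")

Because the scaler and PCA are steps of the pipeline, grid search re-fits them on each training fold, which is exactly the leak-avoidance argument made in the standardisation section.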
Classifying wines by quality using machine learning
27
classifying-wines-by-quality-1d5664df9a81
2018-10-19
2018-10-19 16:18:15
https://medium.com/s/story/classifying-wines-by-quality-1d5664df9a81
false
1,407
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Wilame Lima Vallantin
Data Scientist passionate about technology, innovation and machine learning.
6666b353ec5
wilamelima
34
0
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-14
2017-12-14 15:02:47
2018-01-29
2018-01-29 13:00:13
1
false
en
2018-01-29
2018-01-29 13:00:13
0
1d56d9d7ea24
0.739623
2
0
0
Hey There, My name is Anna Banana!
1
Just a Quick Introduction Hey There, My name is Anna Banana! Welcome to my very first blog post on my very first blog! I’m at the start of my Data Science career, have just completed 100% of my Immersive course at General Assembly, and will be starting the job search very soon. I feel about 30% confident but 100% motivated. The past 12 weeks have been tense, so I decided to start this blog to track my progress on data science projects and some of the things I have learned. Medium bloggers have already taught me so much, and I wanted to join the community. The coming posts will show my development into an unsupervised learner (hope you got that) and everything GA has taught me! I hope you enjoy them and, of course, any feedback is always welcome! Happy reading and happy learning!
Just a Quick Introduction
6
just-a-quick-introduction-1d56d9d7ea24
2018-02-15
2018-02-15 08:38:39
https://medium.com/s/story/just-a-quick-introduction-1d56d9d7ea24
false
143
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Anna Bianca Jones
Data Science, Forensics, Logic & Life
11b351cf711f
annabiancajones
28
60
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-05
2018-02-05 02:11:46
2018-02-05
2018-02-05 02:14:52
2
false
en
2018-02-05
2018-02-05 16:56:49
4
1d5a267a2c4d
1.428616
1
0
0
On the fifth day of February 2017, I embarked upon this journey to create a platform to make data analysis easy for at least a million…
5
Celebrating One Year of Data Science Blog On the fifth day of February 2017, I embarked upon this journey to create a platform to make data analysis easy for at least a million people. When I began, and even now, I am no expert in this field. Over the years, I have learned several concepts from my mentors and other masters, and I believe I am only a conduit for sharing those with you. Today, the fourth day of February 2018, marks the one-year milestone for us. We crossed 50,000 page views from more than 10,000 users across 136 countries. I hope my mission is underway and that I have created interest in you towards data analysis. As a landmark event, I asked my mentor, Prof. Upmanu Lall from Columbia University in the City of New York, to give us a lesson. He was gracious enough to agree and has taken time out of his ever-busy schedule to teach us that “sometimes it is important to let the data speak.” The man who pioneered non-parametric methods for hydrology himself introduces us to kernel density: the why and the what. So, here is his gift on the occasion of our celebration of one year together; or should I say: sometimes it is important to let the Master speak. Lesson 51 - Sometimes it is important to let the data speak On the fifth day of February 2017, I embarked upon this journey to create a platform to make data analysis easy for at…www.dataanalysisclassroom.com If you find this useful, please like, share and subscribe. You can also follow me on Medium and Twitter @realDevineni for updates on new lessons.
Celebrating One Year of Data Science Blog
1
celebrating-one-year-of-data-science-blog-1d5a267a2c4d
2018-06-10
2018-06-10 15:46:53
https://medium.com/s/story/celebrating-one-year-of-data-science-blog-1d5a267a2c4d
false
277
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Naresh Devineni
Naresh Devineni is an Associate Professor in the Department of Civil Engineering at The City University of New York’s City College. http://nareshdevineni.com
53ffd7b0a59e
devineni
34
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-30
2018-08-30 14:21:40
2018-08-30
2018-08-30 14:22:14
0
false
en
2018-08-30
2018-08-30 14:22:14
1
1d5b1bd0522a
0.30566
0
0
0
These fears reflected in science fiction and movie. The AI already takes a place in the Hollywood movies like star wars, sci-fi movies…
2
How Artificial intelligence (AI) gets involved in the Hollywood Movies especially in the sci-fi movie? AI In Hollywood Is Already Gaining The Ground! For years we thought that Artificial Intelligence is a thing of the Hollywood sci-fi movies. As, a host of highly…www.wesrch.com These fears are reflected in science fiction and film. AI already has a place in Hollywood movies such as Star Wars and other sci-fi films, several of which, from a few years back, focused on man’s relationship with machines.
How Artificial intelligence (AI) gets involved in the Hollywood Movies especially in the sci-fi…
0
how-artificial-intelligence-ai-gets-involved-in-the-hollywood-movies-especially-in-the-sci-fi-1d5b1bd0522a
2018-08-30
2018-08-30 14:22:15
https://medium.com/s/story/how-artificial-intelligence-ai-gets-involved-in-the-hollywood-movies-especially-in-the-sci-fi-1d5b1bd0522a
false
81
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
NexSoftSys
Technology Consulting Firm for Customized #Offshore #Software & Mobile #Apps #Development for Healthcare, Telecommunication and Banking System.
e2bc0f6834bf
nexsoftsys
30
229
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-08
2018-02-08 05:12:22
2017-11-26
2017-11-26 22:42:52
1
false
en
2018-02-08
2018-02-08 05:24:34
11
1d5bf2cfe27f
2.249057
0
0
0
A robot in China just passed a medical exam. It even scored 96 points above the passing rate.
3
How will AI transform the healthcare industry? A robot in China just passed a medical exam. It even scored 96 points above the passing rate. Can you imagine a future where ‘RoboDocs’ will make perfect stitches in the operating theatre? What if instead of waiting in line at the doctor’s office, you could save time and money with a preliminary check-up and subsequent diagnosis through AI-powered devices in your home? Artificial intelligence (AI) has so much potential to do good — especially in the healthcare industry, where access to fast and inexpensive services is often a matter of life and death. AI addresses gaps in providing care and information that human practitioners and hospitals cannot fix right now. Let’s take a closer look at what AI can do for the future of healthcare. Medical advice at a tap or swipe If you don’t have a Fitbit or a step tracker on your phone, you probably know someone who does — they also probably monitor their heart rate and log their calories using various other apps. We can now monitor our well-being, even before we schedule a doctor’s check-up. Healthcare companion apps will only become more sophisticated and informative. Aside from keeping tabs on your vitals, more AI-powered services will soon be able to give you personalised daily advice via chat. They can already help you book appointments or look for the right insurance provider. Some apps can even pre-diagnose your condition, and determine when you need an actual consultation with a doctor. The Ada app is already experimenting with this functionality in the UK. Remote doctors In overcrowded hospitals or far-flung areas without access to medical facilities, AI can help physicians and health workers attend to critical patients faster. In the Philippines, where medical care is scarce in impoverished provinces, the RxBox lets field workers gather data about patients, which is then forwarded to city doctors for analysis and diagnosis. In hospitals, AI-powered health assistants will soon take over clinical and outpatient services, such as organising queues, taking vital stats and pulling up patient records. The Internet of Things is already improving how hospitals manage and monitor devices and workflows across departments. Health as a national agenda When healthcare becomes fully digitised, we will generate huge amounts of information about citizens’ health. This big data will be used for various purposes, such as monitoring patients’ well-being in real-time. In the long run, the data could be used to spot illness patterns or epidemics in high-risk areas, and even shape healthcare policies. There’s a caveat, however — questions are already being raised about ethics and patient confidentiality. But privacy protection methods such as blockchain (which uses high-level encryption to keep data secure) can address these concerns. The possibilities of AI and machine learning in healthcare are seemingly endless. Every day there’s something new to discover — 3D printing of drugs, doctors performing procedures remotely, even robots detecting cancer in Chinese patients. But beyond geeking out over all the scientific breakthroughs, what’s most exciting is the direct impact on patients and their families — faster, accurate and cheaper access to health services. Everyone deserves that. Originally published at rarebirds.io on November 26, 2017.
How will AI transform the healthcare industry?
0
how-will-ai-transform-the-healthcare-industry-1d5bf2cfe27f
2018-02-08
2018-02-08 05:24:35
https://medium.com/s/story/how-will-ai-transform-the-healthcare-industry-1d5bf2cfe27f
false
543
null
null
null
null
null
null
null
null
null
Healthcare
healthcare
Healthcare
59,511
Rare Birds
We build software with brilliant, globally connected teams to make the world a better place. http://rarebirds.io
93402fbb6135
rarebirdslabs
1
1
20,181,104
null
null
null
null
null
null
0
null
0
6939e9ee6869
2018-07-03
2018-07-03 11:18:04
2018-07-10
2018-07-10 15:41:02
3
false
en
2018-08-03
2018-08-03 23:42:19
13
1d5c2bad5289
8.504717
53
3
0
This article is part of a series about how OS Fund (OSF) companies are radically redefining our future by rewriting the operating systems…
5
Beyond the Hard Drive: Encoding Data in DNA This article is part of a series about how OS Fund (OSF) companies are radically redefining our future by rewriting the operating systems of life. Or as we prefer to think about it: Step 1: Put a dent into the universe. And Step 2: Rewrite the universe. You can see the full OSF collection here and read more about Building a Biological Immune System. In contemplating the future, I love imagining how our daily lives today will be thought of in the future. What appears sci-fi to us today but will be “normal” 50 years from now? What inefficient and boneheaded things do we do today that future generations will look back and laugh at? Seeing beyond what’s possible is a rare skill. Being able to design and build beyond what’s possible is even more rare. Put together, this is the unique set of skills and abilities that OSF founders all have in common. Most importantly, they’ve chosen to focus their abilities on tackling the biggest problems humanity faces. But who are they? What makes them tick? Why do this versus other things? And how might their technologies change the world? These are their stories. I. More Data Than We Know What to do With We are in a golden age of information. Over 90% of all the data created throughout the history of humanity has been generated in just the past two years. The world’s population is creating 2.5 quintillion (10¹⁸) bytes of data every day, and every person will soon be generating 1.7 MB of data each second of their lives. And that’s just humans. The earth is generating orders of magnitude more data than that every minute. The speed at which we make data is outpacing our ability to store, transport, and access it. This is problematic because managing this massive increase requires energy (the IT industry burns between 7 and 12% of global energy every year!) and physical space. It’s slightly counterintuitive, but digital storage takes up space and energy, too. Saving to the cloud requires enormous warehouses of servers that consume massive amounts of energy to maintain at the right temperature and humidity. As recently as a few decades ago, we mostly relied on good ol’ pen and paper to store our data. Then we shifted to magnetic tapes and disks before graduating to digital storage. In some ways, the graduation wasn’t all progress: data still has to be migrated every generation to the newest technology, to ensure that it can still be preserved and read out. How can our storage abilities keep up with our data generation and recording ability? How can we store data in a medium that will never be obsolete? How can we preserve our history long into the future? What if we could find a way to store the world’s digital knowledge bank in a way that wasn’t subject to global or regional power loss, consumed a fraction of the energy resources of existing IT infrastructure, and was cost effective? The answer was all around us this whole time. In us. It is us: DNA. What if we could store all the world’s information in DNA? I met recently with Catalog, which is attempting to do just that, and immediately knew that they were on to something big. II. Small, Efficient, and Durable: Why DNA is the Future of Data Storage Catalog co-founders Hyunjun Park and Nathaniel Roquet are working to make DNA the next-generation, mainstream storage medium for digital data. Park has a Ph.D. in microbiology; Roquet has a Ph.D. in biophysics. The two met at MIT and saw a world-changing opportunity to solve the challenge of data storage.
Catalog has invented a methodology that would allow them to fit all of the world’s data into a coat closet and store it for…a very, very long time. “DNA is an extremely stable material,” Park explained. We know this because we have found mostly intact DNA from preserved animals hundreds of thousands of years old. “You see things like horses that were frozen in permafrost in Canada for 700,000 years, and you’re still able to read back the genome of that animal,” Park explained. “The fact that our genetic information is encoded in the medium means that we’ll always be able to read this back. We won’t need to worry about the reading technology evolving past the medium.” Photo: pip / photocase.com While early success with storing data as DNA strands using synthetic biology was achieved in 1986, the practice itself is still relatively undefined for purposes of mass adoption. (Recently, entire books and short movies have been stored as DNA, as proofs of principle.) But Catalog is the first company that has a shot at scalability in a market that’s been benefiting from bulky hardware and planned obsolescence. This is because they’ve found a way that may, within a few years, make DNA storage cheaper than tape storage today. “There’s currently no way for an individual to physically own really large amounts of data. But it’s easy to just keep a capsule of DNA that’s encoding petabytes of data,” said Park. “It’s a lot easier to ship small vials of DNA than semi trucks full of hard drives. Right now, the internet is very slow, and when you’re sending petabytes of information, the largest bandwidth you’re going to get is FedEx. In fact, it is cheaper to physically move a company’s hard drives via semi trucks than it is to send over the internet.” III. DNA Storage Benefits: Universality, Density, and Cost All life uses DNA (or some form of nucleic acids) for storage. Evolution has settled on one of the most efficient, long-term data storage options we know of, so why wouldn’t we take advantage of its inherent properties to preserve the creations of humanity as well? DNA is one million times more information dense than flash drives, so storing and transporting it would require significantly fewer resources than the status quo, drastically reducing both the environmental footprint and cost of data storage. DNA is also much cheaper and easier to copy — something we likely take for granted since our bodies do it automatically countless times a day. “A thousand copies of the same information doesn’t cost a thousand times the amount of one, as it does with flash drives or hard drives,” Park pointed out. IV. How DNA Storage Works Tl;dr: Step 1: Build a grid; Step 2: Encode the grid. Catalog’s novel methodology relies on a deep understanding of the age-old philosophical question: What is information? When it comes to digital information, data is just a series of ones and zeros. (DNA has at least four units of information — A,T,G, and C — which can be reduced to zeros and ones as long as one maintains a code. The ones and zeros get stored as ATGCs.) The bottleneck is the high cost of printing out strands of whatever DNA you want, about ten cents per nucleotide today. Think of it like printing beads on a string, each bead costing a dime. The human genome, for example, in full, would cost 320 million dollars to print today! How to get around the bottleneck? 
Catalog’s new method breaks many storage and cost bottlenecks by synthesizing large quantities of just a few different DNA molecules and mixing them in different combinations to generate a huge variety of different molecules. These molecules are then used in conjunction with Catalog’s innovative encoding methods to represent long series of 1s and 0s. Crudely, this is the same cost-saving technique that, say, the manufacturer of Legos uses. It is very expensive to make the cast mold for a new piece, so Lego designers are generally encouraged to make new sets with pre-existing bricks that can be made cheaply and easily. So, Lego makes a finite set of mass-produced bricks, and the information is contained in the instruction manual, which comes along and tells those basic pieces where to go in a near-infinite array of sets. These instruction manuals are the equivalent of Catalog’s encoding scheme. Whereas old methods rely on producing an indeterminate number of new bricks from scratch each time, Catalog pre-defines all of the bricks that will be used and makes large quantities of them beforehand. Then, each time something new needs to be stored, they simply print out a new instruction manual. Photograph by Zane Thorn To store one terabyte of data in DNA, for example, they need only a few hundred “bricks”. They’ve taken the code of life itself and made it more efficient. “It doesn’t really matter that it’s a song, or a picture, or video — as long as it can be distilled to a series of ones and zeros, we can use the same protocol for doing it,” Park explained. “What we’ve done differently in Catalog is that we began by asking what information is and what the best way is for us to represent it using DNA. This was so that we do not constrain ourselves by the way DNA is used in nature.” V. DNA Storage in Action While Catalog’s approach may sound so technologically complicated that only synthetic biologists could use it, you wouldn’t have to know how to sequence DNA to use DNA storage, just like you don’t need to know how to set up a server farm to use the cloud. In theory, anybody with encoded synthetic DNA could send it to any number of vendors to do the DNA sequencing for them. “When you store things on Amazon AWS, you don’t worry whether that’s being backed up on Blu-rays, or magnetic tape, or hard drives. You just care that the information is safe somewhere and that you’re able to retrieve it. We want the customers not to have to worry about whether it’s ever backed up in DNA or not, just that they’re getting a really good service and have peace of mind that it has multiple redundancies.” Park’s hope is that storing data in DNA will become as ubiquitous a practice as saving a photo in iCloud. Their first storage focus will be archival data: data which is typically stored for a long time in dead-tree libraries and isn’t recalled too frequently. Archivists, librarians, governments — anyone dedicated to maintaining accurate historical records — would have an immediate use case for Catalog’s product. Currently, the Library of Congress is planning a symposium on data preservation. Park will be there, along with representatives of the Internet Archive (the creators of the Wayback Machine). “They want to be the Library of Alexandria for the modern world, and to make knowledge accessible by everyone,” Park explained, about the Internet Archive. Library of Congress “They store a lot of data as a result of that.
They want to keep that safe in perpetuity, and we’d love to add a layer of safety to their mission using DNA.” VI. The Future of Information: DNA Across the Universe So far, Catalog has proven its capacity by encoding full books, like Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, into DNA. Soon, they will be translating entire libraries of books into molecular codes. And one day, they will be able to store all the data the world has ever generated in just a room full of DNA. Once they pull that off, it’s not a stretch to imagine disseminating Earth’s data to all corners of space, which they’re already working on with Arch Mission. “In thinking about colonizing other planets, if we want to send the entire internet to Mars for human civilization to continue there, there’s really no other way to store and send all of that information, except by DNA,” Park told me. Here on Earth, Park said he would love to see data-encoded DNA stored somewhere future-proof, like the Svalbard Global Seed Vault on an island in Norway. Seeds, of course, are just nature’s way of storing information about a plant in DNA. Placing all of human knowledge — everything we are, everything we’ve ever done, everything we’ve ever made — in the same seed vault seems somehow fitting. If we want to make sure the coming generations carry the wisdom of the world forward, so that they too can stand on the shoulders of giants, we need to make sure they won’t lose that opportunity in one flash of a solar flare. We need to make sure that wisdom is packaged in a way that will stand the test of time. Catalog is on the path to making that a reality.
Beyond the Hard Drive: Encoding Data in DNA
490
beyond-the-hard-drive-encoding-data-in-dna-1d5c2bad5289
2018-08-03
2018-08-03 23:42:19
https://medium.com/s/story/beyond-the-hard-drive-encoding-data-in-dna-1d5c2bad5289
false
2,108
Creating mental models for an emerging future
null
null
null
Future Literacy
medium@kernel.co
future-literacy
AI,FUTURISM,BIOTECHNOLOGY,SCIENCE,COGNITIVE SCIENCE
bryan_johnson
Dna
dna
Dna
1,230
Bryan Johnson
Founder @KernelCo @OSFund & Braintree
35c4d6050746
bryan_johnson
7,023
105
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-15
2018-05-15 04:28:16
2018-05-15
2018-05-15 04:29:50
1
false
tr
2018-05-15
2018-05-15 04:29:50
2
1d5e441ecdaa
3.316981
4
1
0
Whenever a session is organized in Turkey on any element of the new technological revolution, the conversation comes around to the same question…
5
What should Turkey, with no Industry 4.0 strategy, do about AI? Whenever a session is organized in Turkey on any element of the new technological revolution, the conversation comes around to the same question: what should the state do about it? Frankly, I feel a deep unease, first of all, that the first question to come to mind is “Now, what should the state do about this?” And the fact that the answer given is usually as generic as “education is essential” increases my unease even further. When Google bought DeepMind, a “British startups should stay British” mood arose in the UK. First, let me draw a general framework, with your permission. I recently came across a fine observation on this subject in “The Raisina Model: Indian Democracy at 70” by Meghnad Desai, the British economist of Indian origin. “Behind India’s services sector being so developed and its industry lagging so far behind lies the fact that, while the state worked hard to regulate and shape industry, it completely neglected the services sector,” Baron Desai wrote. So it seems that being neglected, looked at from one angle, is not such a bad thing. I am inclined to read this finding as follows: rather than a know-it-all state meddling in every matter and, believing it knows every subject perfectly, standardizing every process, it is better for the state to cast no shadow at all. But note that this is an answer entirely different from saying the state should stay out of everything. A know-it-all state apparatus always brings disaster. The state apparatus to be prized is one that treats its own decisions with skepticism and makes room for diversity and experimentation. Recently, while discussing Indian startups with an Indian development economist, he said, “But they are immediately bought by foreigners and leave India.” This reminded me of exactly the debate in Britain. When the American technology giant Google bought the British AI (artificial intelligence) startup DeepMind in 2014, the British state was immediately called to duty. Reports written in 2014 stressed that the British government should take the measures needed for British startups to grow as British companies. Whatever startups like DeepMind needed, it was argued, the state should step in to provide. These days, in the same context, a debate has again arisen about creating a technology fund to replace the startup support that will shrink after Brexit. While the matter is debated there in the concrete terms of DeepMind, why have we here been unable to move beyond the generality of “education is essential”? There, the debate on AI unfolds very concretely around the DeepMind startup. DeepMind represents the deep learning stage of AI. We are talking about algorithms that, in a sense, “learn” by themselves and “draw lessons” from their own mistakes. These applications are the next stage of the neural network applications widely used in financial markets. Because it is thought that this could eventually turn into job-creating projects in everything from autonomous vehicles to big data analysis and satellite technology, the argument was that startups like DeepMind should stay there. The UK has a strategy document on focusing on 8 priority technologies: 8 big technology areas, from synthetic biology to satellite technology, from robots and autonomous vehicles to regenerative medicine. There are a number of stages and one grand strategy in place, and a startup that emerged in line with it. Only then comes the debate: what should the state do now? It is not like that here.
Turkey is still just gossiping about Industry 4.0 In Turkey, we have not even gotten to AI, machine learning, and deep learning yet. We are still merely gossiping about Industry 4.0. Where? In Ankara. In fact, when you go to Izmir and Istanbul, you can see the baby steps companies are taking to build relationships with startups founded to work on these subjects. But it is very limited. Our banks have recently begun to take an interest in competitions and the like aimed at startups. Baby steps, but steps in the right direction. We are only just beginning. Since the matter has not taken concrete shape in people’s minds, we of course cannot move beyond the generality of “education is essential.” Because we do not know what needs to be done. Yet companies first need to have looked, even if only out of curiosity, at what the new technologies mean for the organization of production within their own value chains and for their customer and supplier relations. Not only for Industry 4.0 but for the new technological revolution as a whole, we need to overhaul our education system anyway. The Turkish national education system fails because it is excessively centralized The table alongside shows how education in information and communication technologies is organized in different countries. For each column in the table, one of three answers must be given. They ask: how is information and communication technology equipment purchased? Three answers are possible: the school is autonomous in this choice, semi-autonomous, or has no autonomy at all. They ask: how would you describe the school when it comes to assessing students? Autonomous, semi-autonomous, or no autonomy. They ask: how would you describe the school in determining the information and communication technology training programs its teachers will attend to develop themselves? Autonomous, semi-autonomous, or no autonomy. What caught my attention was this: 18 countries took part in the study. Only in the case of Turkey was every question answered with “the local school has no autonomy, no room for maneuver, no say in this matter.” Look at the list and see. What does this mean? In Turkey, a gigantic bureaucracy installed at the center works to administer a single classroom in a single school in any province. In developed countries, too, a debate is of course under way about the importance and indispensability of education and talent. But the essence of that debate is diversity and bringing out talents that can look at the same problem from different angles. Our education system, however, is built on making a mediocre standard prevail everywhere. Our high schools are like this, and so are our universities. Speaking of high schools, I only recently learned what TEVİTÖL is, from dear @ErhanErkut. As for universities, I know from experience what the Council of Higher Education (YÖK) does. When you make a new proposal, you first have to get approval from YÖK. And where does YÖK send your proposal first? To the Interuniversity Council. It asks the other universities, your rivals, “Is this thing they propose appropriate?” After that, what diversity is left? This column was published in the Dünya newspaper on 14.05.2018. Originally published at www.tepav.org.tr.
What should Turkey, with no Industry 4.0 strategy, do about AI?
4
sanayi-4-0-stratejisi-olmayan-türkiye-ai-için-ne-yapsın-1d5e441ecdaa
2018-06-02
2018-06-02 09:44:08
https://medium.com/s/story/sanayi-4-0-stratejisi-olmayan-türkiye-ai-için-ne-yapsın-1d5e441ecdaa
false
826
null
null
null
null
null
null
null
null
null
Turkey
turkey
Turkey
6,010
guven sak
Managing Director @TEPAV , Professor of Public Economics @TOBB_ETU
371397b9cfe2
guvsak
986
189
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-18
2018-08-18 18:31:23
2018-08-18
2018-08-18 18:31:34
0
false
en
2018-08-18
2018-08-18 18:31:34
1
1d5ec987d7d7
1.316981
0
0
0
[PDF] Download Machine Learning: A Probabilistic Perspective READ ONLINE Link…
1
Download pdf Online Kaplan GMAT Math Workbook By Kaplan Inc. PDF #pdf [PDF] Download Machine Learning: A Probabilistic Perspective READ ONLINE Link https://bestreadkindle.icu/?q=Machine+Learning%3A+A+Probabilistic+Perspective
Download pdf Online Kaplan GMAT Math Workbook By Kaplan Inc. PDF #pdf
0
download-pdf-online-kaplan-gmat-math-workbook-by-kaplan-inc-pdf-pdf-1d5ec987d7d7
2018-08-18
2018-08-18 18:31:35
https://medium.com/s/story/download-pdf-online-kaplan-gmat-math-workbook-by-kaplan-inc-pdf-pdf-1d5ec987d7d7
false
349
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Patrick Allen
null
9ee6c77a366c
itxy
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-14
2018-09-14 16:33:24
2018-01-11
2018-01-11 14:23:23
8
false
en
2018-09-17
2018-09-17 15:15:06
8
1d5f4cf81b09
8.32956
2
0
0
How blockchain technology will survive the Bitcoin crash and power a new Internet of Transactions
4
Bitcoin is dead. Long live the Blockchain Technology How blockchain technology will survive the Bitcoin crash and power a new Internet of Transactions The recent drop in the price of Bitcoin made tremendous noise and convinced much of the public that Bitcoin is dying. Even if you missed the latest news, you’ve likely read about Bitcoin’s ups and downs in the past. Demand for Bitcoin is highly volatile, making the most capitalized virtual currency also the most unreliable store of value. But is blockchain dead? There’s good news. The blockchain, the technology behind Bitcoin, is thriving, and has every chance of becoming a game-changer for a wide range of Fintech companies not limited to cryptocurrency transactions. So let’s look at what blockchain technology is and what the Bitcoin crash brings. What is the blockchain and why will it survive the Bitcoin crash? It’s time to untangle the terms Bitcoin and blockchain once and for all. A blockchain is a distributed public database of all transactions made by a network’s participants. The main point of this technology is that each transaction in the distributed ledger has to be approved by consensus. In other words, the majority of the database’s participants have to verify a transaction before it’s completed. A blockchain allows strangers to exchange value in a transparent and trusted way without involving third parties like banks. And guess what? The blockchain isn’t all about money, let alone all about Bitcoin. The blockchain is primarily about decentralized records of events that are shared by all parties involved in the chain. The interest of innovative leaders including Google and Facebook suggests that the blockchain will be a top global technology of the coming decades. Mark Zuckerberg recently announced plans for Facebook to examine how the blockchain might optimize their service. The fact that some countries, including the UK, are even considering introducing official cryptocurrencies also shows that the markets for Initial Coin Offerings (ICOs) are no longer the sole interest of so-called miners and Bitcoin traders. Why is Bitcoin dying and what is next for blockchain? Since its introduction in 2009, the price of Bitcoin has increased tremendously. Today, one Bitcoin is worth more than $16,000, but the currency’s real value remains rather insignificant. As long as the price of Bitcoin continues to increase, the ups and downs will become more drastic and unpredictable. From the very beginning, the most capitalized virtual currency has borne several substantial weaknesses by design, preventing Bitcoin from becoming real money. Bitcoin may hardly be considered a viable and stable store of value due to its extreme volatility. Today, the cryptocurrency’s price is likely to go up or down more than 20% in one day. Bitcoin’s major — and almost only — real-life application is a relatively low-cost method of transferring value over long distances. But most buyers aren’t acquiring Bitcoin to use it in traditional monetary transactions. As a result, the current demand for Bitcoin is mostly artificial, and the price may drop drastically at any time.
On the other hand, we cannot be sure that cryptocurrency is dead as a concept. Still, you can benefit from alternative cryptocurrencies, or so-called altcoins. The majority of altcoins, including Litecoin and Ethereum, use the same technology as Bitcoin. Being designed differently from Bitcoin, however, these altcoins are much more secure, sustainable, and reliable as virtual money. For example, Litecoin takes substantially less time to add new blocks to the chain. And with Ethereum, you can easily create secure contracts that will hold the money until a specific goal or date is reached. The blockchain turns legacy businesses into tech innovators Imagine a not-so-distant future when you don’t need to use any centralized systems to complete transactions. Instead, we might pay for insurance or healthcare by means of peer-to-peer transactions, play online games and stream walkthroughs for a dollar, delegate retail ownership to third parties, receive money directly and invest in our children’s education without expensive banking services. Blockchain tech is already bringing these ideas to life. Today, transactions still rely on banks, which provide a certain authority. While living life online and going through the casual routine of receiving and sending emails, updating software, or cleansing your system of viruses, you always get some assurance from a service provider. You see a pop-up message saying that your email has been delivered, that your system is safe, and that your software is up to date. You trust these messages just as you trust the bank that’s responsible for your financial payments. However, you don’t see what happens on the backend. If something goes wrong, is the service provider lying or simply wrong? Payment security is weak because payments lack transparency; they’re hidden from the user’s eye. Nevertheless, we have no choice but to trust banks or other authorities if we want to provide or receive some service. Implementing blockchain technology can change this. A blockchain records past and present events, which cannot simply be hidden or removed. The data about these events is transparent and recorded in real time, which makes it almost impossible to delete unless you control more than half of the entire network’s power. In terms of privacy, the blockchain is an anonymous transaction system in which individuals may reveal their identities as desired and as permitted by users themselves. Every event has a precise record of successful completion. To commit fraud, therefore, it would be necessary to create a whole new chain of events that would prove the false event to be true. This is exceptionally difficult to do and requires much power and effort. With a blockchain system, you can forget about paper checks or signatures as proof for health insurance or tuition payments. Blockchain technology allows you to create your own digital token that can be used as a currency among app users. The technology also gives you free rein to implement transparent, secure, and nearly costless votes on corporate or even national issues.
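The tamper-evidence described above is easy to demonstrate: when each block commits to the hash of the previous one, rewriting any past record breaks every later link. Below is a minimal Python sketch of just that hash-chaining idea; it is illustrative only, with no consensus, mining, or networking.

# Each block stores the previous block's hash; editing history breaks the chain.
import hashlib
from dataclasses import dataclass

@dataclass
class Block:
    data: str
    prev_hash: str

    @property
    def digest(self) -> str:
        return hashlib.sha256((self.prev_hash + self.data).encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # genesis sentinel
    for record in records:
        block = Block(record, prev)
        chain.append(block)
        prev = block.digest
    return chain

def is_valid(chain) -> bool:
    return all(chain[i].prev_hash == chain[i - 1].digest for i in range(1, len(chain)))

chain = build_chain(["alice->bob:5", "bob->carol:2"])
print(is_valid(chain))             # True
chain[0].data = "alice->bob:500"   # try to rewrite history...
print(is_valid(chain))             # False -- the next block's link no longer matches

In a real network, the “more than half of the network’s power” caveat above is what stops an attacker from simply recomputing all the later hashes on everyone’s copy of the chain.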
Software developers are ready to implement blockchain solutions around the world Even though the blockchain remains the most promising Internet of Transactions technology, it’s still far from widely used outside cryptocurrency. Software developers play a key role in providing transparency and reliability to the ICO markets, so they are the main drivers bringing this technology to a wide range of industries. There are already blockchain-powered applications enabling supply chain managers to track goods as they’re being transferred and make payments using hot and cold wallets. This technology will soon be everywhere, so developing an app based on blockchain technology can turn you into a leader in this fast-paced environment of secured decentralized systems with unalterable records of events. Below, our blockchain experts share their ideas on how to implement this technology in different fields to solve problems we face every day. Blockchain for verified and transparent customer reviews One example of a potentially successful implementation of blockchain technology is for user reviews of products, services, and businesses. This could be a blockchain-based platform that rewards users for sharing feedback on items and helps users choose products based on clear and trustworthy reviews. The current problem with user reviews is that they can be manipulated or simply false. Customers can’t say for sure if a review was written by a real person who has experience with the product or if it’s just manipulation by the manufacturer or an interested third party. Who knows? Maybe it was written by an artificial intelligence system that produces short human-like reviews by the thousands. A blockchain review platform could post feedback after it’s validated by most of the participants, just as it works with cryptocurrency transactions. In addition, the most valuable reviews could be rewarded by the platform for their high rating among participants. Blockchain for trustworthy collective contracts Another use of the technology might be a platform that could solve the problem of consumers not being able to buy goods that are only sold in bulk. Let’s assume you want to buy one spare part for a car from an OEM, but they only sell this unit in batches of a hundred. You can’t afford such a huge purchase, and apart from that, what would you do with the other 99 units? With a blockchain platform for collective contracts, you could gather multiple customers who want to buy one or a few of the same unit. These customers could pay through the platform, knowing that their money won’t disappear and will be guaranteed to be returned if not enough buyers are found for the contract to be executed. Blockchain for fraud-free ticket sales One more example of the blockchain’s potential is in ticket sales. Have you ever had a situation when you were looking for a last-minute ticket to see your favorite star or sports team or to fly to your aunt for the holidays? You could still buy a ticket from a third-party platform, but for a crazy price. Why does that happen?
Because some sneaky middleman buys up all the tickets to resell them on his own terms, terms that are profitable only to him. To avoid such dealings, a blockchain-based platform could limit each customer’s ticket purchases based on the verified number of people who will attend an event or be on a flight. And a blockchain, with its collective approval, is the right fit to save you from paying twice for a service. ___________________________________________________________________ The business world can finally take a deep breath after the hype around Bitcoin and focus on what really matters. The technology that will rise from the Bitcoin crash is the blockchain. It could displace outdated institutions and establish financial relations that are transparent, affordable, and safe. What would you prefer? To see exactly what happens when you make a transaction, or to trust a third party that everything’s okay without knowing for sure? I bet you’d prefer the transparent and secure option based on blockchain technology. ___________________________________________________________________ Intellias is ready to help you find out how exactly the blockchain might work for your business. Contacting our experts is the first step toward your transparent, sustainable, and prosperous future. Originally published at www.intellias.com on January 11, 2018.
Bitcoin is dead. Long live the Blockchain Technology
4
bitcoin-is-dead-long-live-the-blockchain-technology-1d5f4cf81b09
2018-09-17
2018-09-17 15:15:06
https://medium.com/s/story/bitcoin-is-dead-long-live-the-blockchain-technology-1d5f4cf81b09
false
1,907
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Intellias FinTech
null
897c28b65940
intellias_fintech
11
251
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-02
2018-07-02 13:47:27
2018-07-03
2018-07-03 03:57:15
1
false
en
2018-07-03
2018-07-03 05:54:40
0
1d5f4fb2224a
3.113208
0
0
0
Hello World!! Welcome. When I started reading about Reinforcement Learning, I saw there are lots of concepts and theory that were a bit…
2
Reinforcement Learning Basics Hello World!! Welcome. When I started reading about Reinforcement Learning, I saw there are lots of concepts and theory that were a bit difficult to digest. So I thought of starting an easy-to-digest series which will help you master the field of Reinforcement Learning so that you can start writing cool programs to impress your boss!! So stay glued to the series; there are lots of examples and code walkthroughs, and there are GitHub links which you can download and get going. But first things first! What the heck is Reinforcement Learning anyway? Human beings do not learn by supervision. We learn from the environment by actively participating in it. Reinforcement Learning approaches AI in the same way: agents learn from the environment through active involvement. It is different from supervised learning, since there are no labelled data sets; on the contrary, an agent moves around the environment learning the ways of performing a job. It is also substantially different from unsupervised learning, which has more to do with finding patterns within a data set, whereas with Reinforcement Learning the approach is to maximize the total reward. The main idea behind Reinforcement Learning is to maximize rewards, which could mean successfully completing a game like chess, finding the best route to drive a car, or solving a maze/puzzle. It works by balancing two kinds of behavior, called exploitation and exploration. By exploitation, I mean using the best way you currently know of solving a problem, whereas exploration means finding novel ways to solve the same problem, so that you end up knowing more than one possible method. Exploration may not always lead to positive rewards. For example, suppose you order pizza; the pizza boy has two reward options when he delivers pizza to you. On successful and timely delivery he gets points and good reviews. In case he fails to deliver on time, he gets some bad remarks and does not get any points. The pizza boy sees your address, figures out the best possible route from his memory, and ends up serving the pizza on time, getting the points and good remarks from you. This is called exploitation in Reinforcement Learning terms, since he exploits the best known routes to reach you. But there could be a better shortcut which the pizza boy might not know and which would save him more time. Exploration requires taking risk: he might fail to deliver on time and thereby not get points, but there is a chance he finds a shortcut, saving time and effort once he finds it. I hope this simple example helps you relate to the terminology of reinforcement learning. Let me introduce a few more terms, which we will expound in greater detail in subsequent posts. An agent is an entity which interacts with the environment and wants to maximize rewards, which means finding a policy to win. By environment I mean the scenario or situation in which the agent is working. The environment decides whether the agent is behaving properly or not by rewarding or punishing the agent. A policy decides how the learning agent behaves in a given scenario. It could be a lookup table or a simple function. Rewards are like short-term goals; you can also think of them as the pleasure or pain sensations of a human being. They are generally numbers given to the agent at each step of its task, and an agent’s sole role is to maximize the rewards in the long run.
Values are the overall reward which an agent accumulates over the long term; they can be compared to the long-term objective or goal of an agent. A model of the environment mimics its behavior, so inferences can be made about how the environment would respond in a particular scenario. Now that you have a fair understanding of what Reinforcement Learning is, try to think about the above terms in the case where Reinforcement Learning is a player in a Tic Tac Toe game against an imperfect opponent. Meanwhile, let me order some pizza and ask the boy what route he took to reach me ;) Hope you now understand a bit about Reinforcement Learning. My next topic is how we can solve the multi-armed bandit problem using Reinforcement Learning, so check out this space. By the way, try answering the questions below: How is reinforcement learning different from evolutionary methods like genetic algorithms or genetic programming? What are the problems you would encounter if you were using dynamic programming techniques to play the tic tac toe game?
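As a small teaser for that multi-armed bandit post, here is a minimal epsilon-greedy sketch of the exploration/exploitation trade-off in Python. The three “routes” and their payoffs are invented for illustration; this is not code from the series.

# Epsilon-greedy bandit: explore a random arm with probability epsilon,
# otherwise exploit the best-known arm; value estimates are running means.
import random

def epsilon_greedy(true_means, steps=10_000, epsilon=0.1):
    n = len(true_means)
    counts = [0] * n
    estimates = [0.0] * n
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(n)                        # explore: try any route
        else:
            arm = max(range(n), key=lambda a: estimates[a])  # exploit: best-known route
        reward = random.gauss(true_means[arm], 1.0)          # noisy reward from environment
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

# Three delivery routes with unknown average payoffs:
print([round(e, 2) for e in epsilon_greedy([1.0, 1.5, 2.0])])  # estimates ~ true means

With epsilon = 0 the agent is the pizza boy who never tries a new street; with epsilon too large he wastes deliveries wandering; the sweet spot in between is the whole trade-off captured in one parameter.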
Reinforcement Learning Basics
0
reinforcement-learning-basics-1-1d5f4fb2224a
2018-07-03
2018-07-03 15:06:22
https://medium.com/s/story/reinforcement-learning-basics-1-1d5f4fb2224a
false
772
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Shwetank
Explorer and inquisitive about the cosmos, love mathematics and making machines smarter!!
a2c6b3fc719f
sswetank_swap
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-07
2018-04-07 09:57:58
2018-04-07
2018-04-07 15:08:23
6
false
en
2018-04-07
2018-04-07 15:13:17
7
1d5f52febcb4
5.417925
1
0
0
How to open an excel sheet and play with the data more easily?
5
R Tips and tricks from a beginner. How to open an excel sheet and play with the data more easily? Here I will show you the steps that I took to set the working directory as well as to open and view a specific excel file in R. I am using R for statistical and modeling purposes only, so this will not teach you how to manipulate other types of data (e.g. JSON, HTML or XML). My primary book of reference, which I find the most helpful so far, is: R Primer, second edition, by Ekstrom. by William Iven on Unsplash 1. Setting the working directory I find the setwd(“state-the-location”) command troublesome, so instead I use the following steps: First, go to the lower right part of the RStudio window where you can choose files. Then simply search for your files and select them by clicking the boxes to get ✓. Next, click on More as shown and select “Set As Working Directory”. So you can forget about the C:// path. 2. For excel data reading: I like to use the readxl package instead of xlsx, which requires 2 additional packages to be used: rJava and xlsxjar. (rJava installation can be a hellish experience. This, this and this would convince you so, based on others’ experience.) With readxl I only need this single package to open an excel file, and the steps are: >setwd() >library(readxl) >colistin<-read_excel(“colistin.xlsx”, sheet=1) where colistin and colistin.xlsx are my own inputs, which you should change depending on what you want R to call this file in the R workspace and on the name of the excel file that you want R to read from your working directory. 3. Viewing the data (colistin) above You can view your data (mine is colistin) this way instead of viewing it only in your console, which is quite messy and restricted. Execute this in your console: > colistin And then go to the upper right of your R window and look for the excel data, which will be shown in the R environment as shown. Next, you need to scroll the colistin line to its right end; here you will find the table symbol as above. Click on this and you will see the whole table with its list of variables (rows 1 to i) and the corresponding observations, shown in the same space where we find the script editor. If you want to do modeling/stats only, I would suggest a targeted approach in the following section (i.e. data manipulation), as the books/webpages usually include mathematical functions in addition, which may be less relevant to you. Manipulating your data Let’s get on with a few interesting/important steps for manipulating your data. 1. Sort data in ascending fashion Syntax: sort() Say I want to sort the colistin data based on one of the variables (here I will sort the age in ascending fashion). I need to tell R that the variable is age from the colistin data, with the dollar sign in between, as below: sort(colistin$age) which provides the following answer: As you can see here, R has helped me count the number of observations that have been made, which are 107 in total. (The numbers in the brackets [] correspond to the position of the first observation of each row.) Thus after the sorting, I found that the youngest age in this cohort was 17 and the oldest was 83 years old. Alternatively, you can use these functions to find the minimum and maximum values, which would return the same values of 17 and 83 respectively: min(colistin$age) max(colistin$age) 2. Extracting only a few variables. How? What if you want to take out only certain variables from the main data?
Thus you could use the data.frame function. With this syntax I ask R to extract only the age and corresponding race, and name this new data frame AgevsRace: AgevsRace<-data.frame(colistin$Age,colistin$Race) 3. What if I want to sort first by variable A and then by variable B? Following the new frame above that I named AgevsRace, say I aim to sort by age first and only then by race. Thus the syntax becomes: NewData<-order(AgevsRace$Age,AgevsRace$Race) AgevsRace[NewData, ] 4. I want to create a new variable in my colistin data, derived from a baseline variable, by doing some multiplication/division/addition/subtraction on all the observations of a specific variable. HOW? Say I want to take the albumin levels in SI units and change them into US units instead; I would divide all the observations of my previous albumin levels in umol/L by 1.45 (to get g/dL values instead). These are the command lines: colistin$AlbuminUSA<-(colistin$Alb)/1.45 head(colistin$AlbuminUSA) So now I have a new variable in my colistin data named AlbuminUSA. But since we are in the mood for making things simpler, you can skip all this and perform a transform command instead, as below, and have this new sheet called NewColistin: NewColistin<-transform(colistin, AlbuminUSA=Alb/1.45) 5. How do I find the mean of A for each level of variable B? (With A being a continuous and B being a categorical variable, respectively.) We shall use the tapply function, with the third term being the function (FUN) that we seek (FUN=mean here): tapply(colistin$Age, colistin$Sex, mean) This will calculate the mean age for both male and female subjects in my colistin data set. Beyond this, the book (R Primer) talks about things that are not relevant to me for now, like the calculation of the area under the curve using the MESS package, which I do not need yet — unless I want to calculate the exposure over time of drug X in a specific subject, in which case the built-in trapezoid function would be handy, as I would only need to compute auc(x,y). My advice is to go through these sections quickly, AND do not sweat over the fact that you cannot retain many of the commands and syntax lines. My trick is to create a new “R script” for each different set of functions/packages that I learn. For example, I learned about this new package called ggplot2, and here I needed to decipher a new set of commands. What I do is write all the important steps and commands into the new script that I have opened and save this as “ggplot2” for future reference. You can simply put “#” at the front of every comment you want to leave for your future self, and the comment will not be read as a function/command by the console. Another trick is to hit “command+enter” every time you have finished writing a full command in the script; this runs the script in the console. All the executed commands are collected in the history panel in the top right part of your R window for reference. I hope this post helps you if you are a beginner like myself. This new language certainly takes much effort to learn and get a grasp of. Thanks…
R Tips and tricks from a beginner.
1
r-tips-and-tricks-from-a-beginner-1d5f52febcb4
2018-04-17
2018-04-17 16:13:24
https://medium.com/s/story/r-tips-and-tricks-from-a-beginner-1d5f52febcb4
false
1,184
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
SH
Rummaging through the colorful threads and strands of life, for I have yet to find/choose the final one.
97608bb6718e
SH_thinker
20
37
20,181,104
null
null
null
null
null
null
0
null
0
5c92b90d9765
2018-09-08
2018-09-08 02:57:18
2018-09-08
2018-09-08 02:58:31
2
false
en
2018-09-10
2018-09-10 17:51:50
1
1d5f9cfc513b
1.628616
3
0
0
Using statistics to solve data quality problems is a compelling idea. It is especially compelling if you’ve tried a rules-based solution…
5
Why Basic Outlier Detection Doesn’t Work Using statistics to solve data quality problems is a compelling idea. It is especially compelling if you’ve tried a rules-based solution and ended up managing a large static rule base. Businesses need a way to surface incorrect data values that goes beyond nulls, empty, missing, and malformed values. They also need a way to evolve with their ever-changing data. A simple example could be a few of the following: Example 1 Example 2 Columnar statistics have been around for a long time and offer great insight for descriptive analytics problems. In the examples above we can see that column-level statistics provide very little value in solving a data quality problem. In this example HMNY, a penny stock, will commonly trade at around $0.02 a share while Berkshire Hathaway will trade at around $321,000 a share. The min, max, mean, etc. tell us almost nothing of value. This is where basic outlier detection, which is based solely on column-level analysis, isn’t enough, as it will produce a bad signal-to-noise ratio and leave the end user without compelling insight. To solve this more elegantly we need to broaden the scope of the problem and look at neighboring columns. The team at Owl Analytics is passionate about solving this problem. We are constantly evolving algorithms that fitness-test the surrounding columns to measure the strength of the relationship they impose on one another. The internal optimizer will determine the best path based on the lowest error rate and begin to learn from the column values both above and below, and left to right. The longer Owl observes a dataset, the smarter it gets. This has shown significant benefits over rules in the areas of weather trends, energy trends, financial data and engine data. Connect with us on LinkedIn or Owl-Analytics.com to see more examples of how ML can be used as a practical application for solving data quality. visit www.owl-analytics.com for more information
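To make the HMNY vs. Berkshire example concrete, here is a minimal pandas sketch with made-up prices (this is not Owl’s algorithm): a z-score computed over the whole price column misses a wildly wrong HMNY tick, while grouping by the neighboring symbol column surfaces it immediately.

# Column-only z-scores vs. z-scores grouped by a neighboring column.
import pandas as pd

df = pd.DataFrame({
    "symbol": ["HMNY"] * 4 + ["BRK.A"] * 4,
    "price":  [0.02, 0.02, 0.03, 2.00,            # 2.00 is a bad HMNY tick (~100x off)
               321_000, 320_500, 321_200, 320_900],
})

def zscores(s: pd.Series) -> pd.Series:
    return (s - s.mean()) / s.std()

df["z_global"] = zscores(df["price"])                                 # whole-column view
df["z_by_symbol"] = df.groupby("symbol")["price"].transform(zscores)  # neighboring-column view
print(df)
# Globally, the bad tick's z-score (~ -0.94) is identical to every other HMNY
# row, drowned out by Berkshire's prices; within its own symbol group it is
# the clear extreme (~ +1.5 versus ~ -0.5 for the normal ticks).

Grouping by symbol is only the simplest way of “looking at neighboring columns,” but it already shows why a statistic computed on the price column alone cannot see the problem.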
Why Basic Outlier Detection Doesn’t Work
8
why-basic-outlier-detection-doesnt-work-1d5f9cfc513b
2018-09-10
2018-09-10 17:51:50
https://medium.com/s/story/why-basic-outlier-detection-doesnt-work-1d5f9cfc513b
false
330
Predictive Data Quality — The fast and elegant way to manage data. Owl auto learns data trends to find data issues. Owl reduces most of the manual human process of writing rules to manage datasets. Use data science to solve data quality. Stop reacting. Start Predicting.
null
null
null
Owl-Analytics
info@owl-analytics.com
owl-analytics
DATA SCIENCE,MACHINE LEARNING,PREDICTIVE ANALYTICS,DATA QUALITY,ANOMALY DETECTION
analytics_owl
Data Science
data-science
Data Science
33,617
Brian Mearns
Co-Founder and Engineer. Interested in Solving Problems to Save Time and Money. www.linkedin.com/company/owl-analytics
f8241fb75425
brian_62254
2
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-18
2018-01-18 20:41:03
2018-01-18
2018-01-18 20:41:03
1
false
en
2018-01-18
2018-01-18 20:42:06
3
1d60f00d7f49
2.101887
0
0
0
null
4
Google AutoML Turns Coders into Machine-Learning Masters Google announced on Thursday the launch of its AutoML Vision, taking its easy-AI approach one step further. Cloud AutoML is a tool that will allow developers with limited machine learning (ML) expertise to train custom image recognition models, without having to write any code. Google’s AutoML initiative was first announced at the company’s I/O conference last year. The service is, for now, focused only on image recognition; however, Google plans to expand it to other services covering all major fields of AI, i.e. speech, translation, video, and natural language recognition. Cloud AutoML allows anybody to train their model just by uploading their images, tagging them, and then having Google’s AutoML develop a custom ML model. The system has been used by Disney’s online store and is currently being tested by Urban Outfitters too. Google’s chief scientist of cloud AI, Fei-Fei Li, wrote in a blog post that ML and AI systems are currently used by only a few businesses due to their dependence on heavy budgets and expertise. However, Google, by providing pre-trained ML models, can bring AI to developers that have no experience at all. “Currently, only a handful of businesses in the world have access to the talent and budgets needed to fully appreciate the advancements of ML and AI. There’s a very limited number of people that can create advanced machine learning models. While Google has offered pre-trained machine learning models via APIs that perform specific tasks, there’s still a long road ahead if we want to bring AI to everyone.” With its Cloud AutoML, Google might take business away from the Microsoft Azure ML Studio, which allows developers to build, train and evaluate models using a Yahoo Pipes-like interface. In fact, Google came up with a tool that does all the hard work itself, leaving you with a trained, tuned model. Another issue that Google addresses is the scarcity of ML experts and data scientists, whose numbers cannot meet the increasing demand. “AI and machine learning is still a field with high barriers to entry that requires expertise and resources that few companies can afford on their own,” said Fei-Fei Li earlier this week. “Today, while AI offers countless benefits to businesses, developing a custom model often requires rare expertise and extensive resources.” Despite Google’s claims that AutoML is the first of its kind, there are several other services that offer pretty much the same thing. A service called Clarifai offers visual recognition technology as well. Microsoft’s Cognitive Services also offers pre-trained models for speech recognition, decision making and, of course, vision. Google hasn’t shared any information regarding the price of the service; however, it is expected that fees will be charged twice, first for training the models and second for accessing the models through Google’s APIs. Also Read: Google shuts down Project Tango, shifting focus to ARCore Read the full article
Google AutoML Turns Coders into Machine-Learning Masters
0
google-automl-turns-coders-into-machine-learning-masters-1d60f00d7f49
2018-04-22
2018-04-22 06:17:26
https://medium.com/s/story/google-automl-turns-coders-into-machine-learning-masters-1d60f00d7f49
false
504
null
null
null
null
null
null
null
null
null
Cloud Automl Vision
cloud-automl-vision
Cloud Automl Vision
0
Viral Docks
Viral Docks covers latest news about Technology, Sports, Business and Cryptocurrency
29ba47228823
viraldocks
10
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-21
2018-04-21 10:53:49
2018-04-21
2018-04-21 10:57:13
1
false
en
2018-04-21
2018-04-21 10:57:13
1
1d6166d5162f
0.709434
0
0
0
ICAEW’s IT Faculty represents accountants’ IT-related interests and encourages those in business to stay up to date with…
5
Artificial Intelligence Boon or Bane to the Accounting Profession? ICAEW’s IT Faculty represents accountants’ IT-related interests, encourages those in business to stay up to date with IT issues and improvements, and supports the study of IT’s use in business and accounting. As an independent body, the faculty takes an objective view that moves beyond the hype surrounding IT, shaping debate, providing leadership, challenging basic assumptions and informing controversy. It recently published a report titled Artificial Intelligence and the Future of Accounting. On the basis of the report, Enterprise Innovation talked with Kirstin Gillon, technical manager at the ICAEW IT Faculty, for a clearer perspective on how AI is changing the role of accounting and finance professionals, and in what ways it augments or replaces their roles. Read More…
Artificial Intelligence Boon or Bane to the Accounting Profession?
0
artificial-intelligence-boon-or-bane-to-the-accounting-profession-1d6166d5162f
2018-04-21
2018-04-21 16:51:41
https://medium.com/s/story/artificial-intelligence-boon-or-bane-to-the-accounting-profession-1d6166d5162f
false
135
null
null
null
null
null
null
null
null
null
Accounting
accounting
Accounting
6,912
Bhavesh Koladiya
Bhavesh is currently a Marketing Expert at BizTechNation. He is passionate about accounting, billing, inventory and all things digital.
9163e6d1040
bhavesh9040
378
907
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-04
2018-09-04 09:59:49
2018-09-04
2018-09-04 10:00:23
0
false
en
2018-09-04
2018-09-04 10:00:23
1
1d618e8eeacf
5.169811
0
0
0
AI (Artificial Intelligence) training in Nagpur is provided by Anexas, №1 AI training institute in Nagpur. Anexas Provides Best AI Training…
5
Artificial Intelligence Training Nagpur | Best Artificial Intelligence Training Institute in Nagpur AI (Artificial Intelligence) training in Nagpur is provided by Anexas, the №1 AI training institute in Nagpur. Anexas provides the best AI training courses in Nagpur — Dhantoli, Itwari, Mominpura. We train students from basic to advanced concepts within a real-time environment. Best Artificial Intelligence (AI) Training in Nagpur Artificial Intelligence is such a happening concept these days, and people are showing extreme eagerness to take a course in AI. This intelligence shown by machines has wide application and has been able to carve a niche for itself. Although there are many institutes that offer AI training in Nagpur, you have to be extra cautious while choosing one for yourself. Choose a training institute that promises quality training. What is Artificial Intelligence (AI)? When computers are able to perform intelligent tasks by using specialized technologies, the process is called artificial intelligence. That means artificial intelligence is a field of science wherein machines exhibit intelligence just like human beings and animals do. Artificial intelligence is a highly in-demand course, and by learning concepts of artificial intelligence, you will be able to make machines do things as intelligently as humans do. Now, wouldn’t that be too amazing to ignore? Well, why not! So don’t delay: boost your CV hundredfold by learning AI from a reputed AI training institute in Nagpur, and get the job of your dreams. Why Artificial Intelligence? From self-driving cars to chess-playing computers, AI can be seen almost everywhere today. And that is probably the reason why people are taking AI courses in Nagpur quite seriously. There are hundreds of other reasons why this course is fast gaining popularity, some of which have been mentioned here. Check them out: Artificial Intelligence adapts through progressive learning algorithms. Artificial intelligence can help you get the most out of data. Unbelievable accuracy can be achieved with AI. Bigger and deeper data can be analyzed with the help of artificial intelligence. AI helps add intelligence to existing products. AI automates repetitive learning and discovery through data. AI jobs are high paying. The AI course is a well-demanded course today. How do we, at Anexas, help you? AI training in Nagpur at Anexas can simply set up your career. Our incessant efforts in comprehending the possibilities of AI in the world of IT have rendered us competent in helping interested people learn this skill set. Our trainers are some of the most knowledgeable in the industry, with profound empirical knowledge and an enviable proficiency in theory. By following a student-centric approach to teaching, they have been ensuring impeccable learning outcomes in students. What makes us more popular as an IT training institute are our courses, which are not just easy to grasp but also extremely relevant as far as industry needs and standards are concerned. We endeavor to keep you abreast of all the latest IT innovations and make sure every bit of information we pass on makes sense to you. What’s more, we offer amazing placement guidance and help you clinch your dream AI job, almost effortlessly. Do you need training in AI? AI is in almost every industry today. It’s not uncommon to come across an AI-enabled hospital, a retail store or a predictive analytics system that talks.
So, you can imagine the scope and expanse of this concept. If you want to equip yourself to stand out in the tough IT ecosystem, then taking AI training in Nagpur is a must. Interestingly, you won’t need a specialized degree to take up AI coaching classes. If you have the desire and interest in this field, then nothing can stop you from becoming an AI expert. Job opportunities for AI experts Those who learn today how to create machines that show intelligence are going to make news tomorrow. They will be the people the most cutting-edge companies will be willing to rope in. And they will be the people able to outshine others in the field of IT. If you aspire to be one of those people, then joining Anexas for an AI certification course would be the best bet. Some of the most lucrative positions in this field include Machine Learning Engineer, Data Scientist, Research Scientist, R&D Engineer, Business Intelligence Developer, and Computer Vision Engineer. AI Training in Nagpur Anexas boasts of being the best AI training institute in Nagpur, and the syllabus for this course has been broken down below for your convenience. What is the AI market trend in Nagpur? There aren’t many professionals who know how to work with Artificial Intelligence technology. That is why there is huge remuneration for personnel who have expertise in Artificial Intelligence technologies. As Artificial Intelligence technologies continue to evolve, like all software companies they will also base themselves in Nagpur. Artificial Intelligence training in Nagpur will obviously surge to accommodate these trends, and among them Intellipaat is the best one. Reviews of Artificial Intelligence Training Institute in Nagpur Review by Nafeea Afshin: My course on Machine Learning and Artificial Intelligence was successfully completed under Amitabh Sir. Teaching is unique and understandable. Review by Janaki Bongale: I had training in Machine Learning; it was very good hands-on training. Review by Shivam Pandey: It was a nice experience to join Anexas. I enrolled in Machine Learning and AI at Anexas under the guidance of Ankit sir. The trainer and supporting staff were really helpful. I am fully satisfied. Ankit sir gave us a good grounding in Data Science and its applications. Thank you. Review by S D Rajeshwari: I’ve done my course on Machine Learning and Artificial Intelligence under the guidance of trainer Mohan sir. He’s really good. I’ve gained good knowledge and I’m satisfied with his service. Review by Amarnath: Very satisfactory. I enrolled here for Machine Learning and Artificial Intelligence. I had great training here, starting from the basics and going deep, and Ankit Sagwan sir really helped us a lot in expanding our knowledge related to our courses; we worked on some projects to implement our theory. Thanks to Mytectra. Review by Leza Sweety: Really good. I took my course, Python + Machine Learning and Artificial Intelligence, under Mohan sir. I am very satisfied with my course; he is really good at teaching. Teaching is very good.
I got a good knowledge after joining this institute. Project Management Training Nagpur, ReviewI found the best suited place Anexas learning solutions which I wanted Review by Hemant RegarPrice Value Quality Basically I am from Rajasthan I faced lot of problems in finding the institute for the respective course but when I came to Anexas , I found the best suited place which I wanted. I am pursuing my course form Anexas. Project Management Training Nagpur, ReviewI have completed Machine Learning course from Anexas Review by SamruddhiQuality Price Value I have completed Machine Learning & Artificial Intelligence course, Trainer taught very well & make as perfect level in that. He showed example that helped lot to our career, Thanks for your support Anexas… I recommended Project Management Training Nagpur, ReviewI have complete Artificial Intelligence course from Anexas. Review by NikhilQuality Price Value High quality training with experienced professionals. Worthy I would say this is the best training for AI & ML . Support from team was awesome, Recommended. Project Management Training Nagpur, ReviewI have completed Data Science with Artificial Intelligence course from myTectra Review by Sankarsana PujariQuality Price Value I had an good experience with Anexas. I signed up for the Data science with Artificial Intelllingence. Not only was the course well versed but the instructor was also very good. I would recommend Anexas for anyone who is trying to take up a course. It’s a best place to learn. Areas in Nagpur which are nearer to us areDhantoli, Itwari, Mominpura
Artificial Intelligence Training Nagpur | Best Artificial Intelligence Training Institute in Nagpur
0
artificial-intelligence-training-nagpur-best-artificial-intelligence-training-institute-in-nagpur-1d618e8eeacf
2018-09-04
2018-09-04 10:00:23
https://medium.com/s/story/artificial-intelligence-training-nagpur-best-artificial-intelligence-training-institute-in-nagpur-1d618e8eeacf
false
1,370
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Anexas Europe
null
2ce0e21843e0
sixsigma567
2
2
20,181,104
null
null
null
null
null
null
0
Code like this needs to be entered in your Terminal console.
sudo apt-get install ros-indigo-nao-gazebo-plugin
sudo apt-get install ros-indigo-nao-moveit-config
roslaunch nao_gazebo_plugin nao_gazebo_plugin_H25.launch
roslaunch nao_moveit_config moveit_planner.launch
5
null
2017-11-19
2017-11-19 10:19:43
2017-11-21
2017-11-21 15:01:12
4
false
en
2017-11-21
2017-11-21 15:01:12
1
1d626ec580b5
3.639623
1
1
0
Okay, here you are again! Welcome back. Last time, we saw how to visualize a robot in RViz and how to use specific ROS nodes to control…
4
🤖 Get your own personal robot! (simulation) — Part II Okay, here you are again! Welcome back. Last time, we saw how to visualize a robot in RViz and how to use specific ROS nodes to control the Nao robot. 👨‍🏫 This time we are going to simulate it in a virtual world, like a proper virtual simulation! We can do this by using Gazebo, which you already installed when you downloaded the full desktop version of ROS. Tutorial Guide: Installation: To simulate a robot, we have to simulate its body, including its sensors and actuators, which are simulated using plugins. These plugins are included in the robot description via the naoGazebo.xacro file. If you have read up on ROS, you will have understood that it is based on nodes which can communicate with each other. Some nodes can publish data as messages and some can subscribe to read these messages. Each sensor that is simulated publishes data on rostopics. Now you need the Nao Gazebo plugin to get all of this; use the following snippet of code to get it: As a matter of fact, while the simulation runs in Gazebo, we are going to use MoveIt! to control the robot, which will require the following dependencies to be fulfilled: nao_dcm_bringup: sudo apt-get install ros-indigo-nao-dcm-bringup nao_control: sudo apt-get install ros-indigo-nao-control OR if you just want everything in one go: sudo apt-get install ros-indigo-nao-* MoveIt! is the most widely used open-source software for manipulation and has been used on over 65 robots. It is used for mobile manipulation, incorporating the latest advances in motion planning, manipulation, 3D perception, kinematics, control and navigation. It provides an easy-to-use platform for developing advanced robotics applications, evaluating new robot designs and building integrated robotics products. Simulation: So now we have a model for the Nao robot in Gazebo, as well as the MoveIt! configurations for Nao installed. We can launch a simulation straight away: This will start Gazebo and spawn a Nao on a RoboCup field. The model for the ball resembles the outdated official RoboCup ball in its size, mass and colour. The new ball is white and larger; I intend to add a repo later with the new models. When Gazebo starts, the simulation will be in 'pause' mode to allow initialization of all the controllers. Wait until everything is successfully loaded (the screen might go dark). Click the play button ⏯️ on the bottom toolbar to start the simulation. The floating Nao should land on its feet. Once ready, in Gazebo, under Window, click Topic Visualization or press Ctrl+T. This will open a graphical list of ROS topics. You can select which topics to visualize, for example, visualizing the data from the top camera: Nao looking straight at the goal, and thinking about its own goals. The simulation is ready. You can play around with Gazebo to add various other elements into the environment. Furthermore, you can visualize the topics to see how the robot's sensors interact with the environment. Moving on, to control the robot we will use RViz: Using RViz to control the robot that is simulated in Gazebo RViz will be opened: you can see that a MotionPlanning plugin has been launched. First, under 'Kinematics', check the box "Allow Approximate IK Solutions" at the bottom of the left control panel. Also, under Displays>Grid>Reference Frame, choose LAnklePitch to set the grid on the ground. 
Controlling the Robot: Under the Planning tab, select which part of the robot you want to move: In the plugin list (upper part of the left column), you can select a group under MotionPlanning/Planning Request/Planning Group to move multiple parts. Now you can define your motion by dragging and dropping the interactive markers. You can compute a trajectory by clicking the 'Planning' button. Once the motion is satisfactory, you can also try it on your real robot using 'execute' or 'plan and execute'. NOTE: The start state is not updated automatically; you have to go to 'Select Start State', select 'Current' and click 'Update'. Another NOTE: This tutorial intends to get users started on a platform where they can learn for themselves. This is one of the reasons why the Control section is relatively small. I urge you to add more robots and objects, and to practice making the robot express some behaviours. I intend to update this page once I get an opportunity (no promises) 😩 Credits/Further Reading: This tutorial has been adapted from: https://github.com/ros-naoqi/nao_moveit_config/blob/master/tuto/tuto_moveit.rst/#use-moveit Visit the Git to learn more about RViz and MoveIt!
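To make the topic-visualization step concrete, here is a minimal rospy sketch (my addition, not part of the original tutorial) that subscribes to the simulated Nao's top camera. The topic name is an assumption; check rostopic list in your own Gazebo session, since names vary between plugin versions.
#!/usr/bin/env python
# Minimal listener for the simulated Nao's top camera (illustrative only).
# The topic name below is an assumption; verify it with `rostopic list`.
import rospy
from sensor_msgs.msg import Image

def on_image(msg):
    # Log the frame size; plug in cv_bridge here to get OpenCV images.
    rospy.loginfo("top camera frame: %dx%d", msg.width, msg.height)

if __name__ == "__main__":
    rospy.init_node("nao_camera_listener")
    rospy.Subscriber("/nao_robot/camera/top/camera/image_raw", Image, on_image)
    rospy.spin()  # keep the node alive until Ctrl+C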
🤖 Get your own personal robot ! (simulation) — Part II
50
get-your-own-personal-robot-simulation-part-ii-1d626ec580b5
2018-05-29
2018-05-29 16:25:22
https://medium.com/s/story/get-your-own-personal-robot-simulation-part-ii-1d626ec580b5
false
779
null
null
null
null
null
null
null
null
null
Ros
roses
Ros
0
Ahmad M.
Computer Scientist, Robotics Engineer
f61158888fcd
blackvitriol
3
5
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-14
2018-09-14 11:37:01
2018-09-14
2018-09-14 11:39:59
4
true
tr
2018-09-14
2018-09-14 11:39:59
1
1d62acf4e555
4.790566
0
0
0
Bilim Teknik Dergisi, Issue 603. Publication date: February 2018
5
Being Human in the Age of Artificial Intelligence Bilim Teknik Dergisi, Issue 603. Publication date: February 2018 Let's imagine a scenario: artificial intelligence has left organic intelligence in the dust with a sprint over the final meters. And not just barely: as it got smarter it trained itself further; as it trained itself it got smarter still; and as it got smarter it trained itself even more. The level our organic intelligence reached after maturing through millions of years of evolution, it attained almost effortlessly in a few seconds, and within the next few seconds it pulled a lead of a few million years over our organic intelligence. It began to look at us the way we look at mosquitoes. According to some scientific circles, there is not much time left before we reach such a point. This possible breaking point, at which artificial intelligence bolts away beyond recall, is called the 'Technological Singularity'. This moment, when artificial intelligence accelerates extraordinarily and becomes ever more intelligent, has no exact Turkish equivalent yet; perhaps it could be called a 'technological genesis'. So, will these machines whose intelligence has reached the heavens leave anything for humans to do? Will they purge human life of all tedious work and create a utopian world? Or will these machines regard humanity as mere labor and enslave it? It seems we will get the answers to these hypothetical questions, long the subject of films and novels, within this century. The leading technology companies in America see the data collected about people as new gold mines, and they buy this information. They use these data as training material so that artificial intelligence can learn the concepts of life inside and out. Machines that are fed with these data without pause and subjected to this intensive training can analyze people completely and behave indistinguishably like humans. Technology giants such as Facebook, Google, IBM and Microsoft are in a great race to build the machine that is not human yet is the most human-like. At the company where I work in the USA, we work with our clients on projects such as how to replace call-center employees with robots indistinguishable from humans, how to have intelligent machines take over doctors' diagnosis and treatment tasks, and how to build little robots that answer a child's every question at the child's level. We foresee that, once artificial intelligence is sufficiently trained and has practiced enough, no task will remain that it cannot do better than a human. The issue that IBM, the company I work for, values most is having artificial intelligence do the jobs people do, routine or not, faster and without error. As a vision, we foresee that artificial intelligence will augment human intelligence and thereby make people more productive. Today's artificial intelligence applications can be more efficient than humans in specific areas, but they cannot reach the level of human intelligence known as general intelligence. However, if the so-called 'Technological Singularity' occurs, with artificial intelligence advancing far enough to train itself in a 'train yourself, increase your intelligence, train yourself, increase your intelligence...' loop, it is predicted that artificial intelligence could reach unimaginable levels of intelligence in an abnormally short time, and that this could not be stopped. If the 'Technological Singularity' takes place, utopian (everything is better) and dystopian (everything is worse) futures for humanity, or scenarios in between, are all possible. So, if intelligent machines do everything, what will humans do? In the places where artificial intelligence technologies are produced, the ethical, existential and sociological repercussions of artificial intelligence are also being debated. 
One world order contemplated for the case where intelligent machines do all the work is to tax the machines and, with those taxes, grant every person a basic salary from birth. But the other side of the equation concerns the meaning of life. The second sentence you utter when you first meet someone is your profession. Modern life has largely reduced a person's reason for existence to the task they perform, practically confining their purpose in life to that frame. What will people do who are not obliged to do any work, who are even forbidden to? Will they lose the meaning of life and fall into depression and similar psychological illnesses? Or will all humans become artists and try to make better poems, paintings and music than the machines (yes, today artificial intelligence makes art too)? Or will this new world order bring to the fore professions with qualities that cannot be separated from being human (compassion givers, banter specialists, etc.)? It is a subject whose questions are as interesting as its answers. And there is the other side of the coin. There is probably no one who has not seen the Terminator and Matrix films, with their scenarios of artificial intelligence advancing spectacularly and defeating humanity. In the years those films were made, these scenarios were part science fiction, part Hollywood fantasy; with artificial intelligence's extraordinary progress over the last few years, they have begun to enter the realm of the possible. Leading technology thought leaders in Silicon Valley have started debating whether AI-powered robots will pose a threat to humanity. If such a situation occurs, would artificial intelligence also acquire a will of its own? Could it find our ethics, morals and other human values absurd? These too are questions that cannot be answered easily. Possible Disaster Scenarios (Starting from the Most Likely) - Even if artificial intelligence advances greatly, it cannot acquire consciousness and will. The steps it takes continue to happen under humans' tight control. The danger in this case, as with every technological advance, is artificial intelligence falling into the wrong hands and doing things harmful to humanity in line with those hands' will. Everyone knows the story of Nobel and dynamite. - Artificial intelligence does not acquire consciousness, but becomes so complex that controlling it gets very difficult. (Even today, it may not be fully understood why artificial neural networks work better in some architectures.) In such a case, if artificial intelligence performs the tasks people give it excessively well by its own definitions, or misinterprets what is asked, it can create harmful situations. Examples could be a 'cut production costs' command escalating in its final phase to enslaving people, or an 'increase trading' command driving some companies to bankruptcy within seconds. - Artificial intelligence does not acquire consciousness, but a small error in its code leads, through spiral effects, to unexpected consequences. Because of the growing complexity, fixing these errors becomes impossible. For example, on the New York Wall Street exchange, robots already handle most of the trading. From time to time, errors in these robots' algorithms reinforce one another and can cause crashes. (An example: the 'Flash Crash' of 2010, which lasted 36 minutes and generated trillions of dollars of movement.) - Artificial intelligence acquires consciousness and, in some situations, begins to act of its own will. In this scenario, how artificial intelligence will evaluate human virtues matters. Our ethical values may mean nothing to it. It may think human rights apply only among humans. And it will not be possible to stop machines that have gained consciousness and are more intelligent than a human could even imagine. 
As the famous scientist Von Neumann imagined, AI-powered robots could clone themselves without pause, spread across the universe in a short time, and make the scenario of the Matrix film real. What Can Be Done - The companies leading the artificial intelligence field came together and founded a consortium whose mission is to prevent artificial intelligence from getting out of control (the Partnership on AI to Benefit People and Society). This is a step, but there is no guarantee it will be enough. - Thinking that we will be able to understand and control how artificial intelligence works once it has advanced extremely far is a somewhat optimistic approach. Giving the intelligent machines we build partially opposing goals may prevent any one of them from getting completely out of control; a way of making artificial intelligence contend with itself, so to speak. - It is important that the code that is written be subject to auditing, and even be checked by other artificial intelligences. - It is important that artificial intelligence's use of resources (communication networks, energy, etc.) not be unlimited, and that it be kept under supervision. Whatever happens, very exciting times await us. We hope that rapidly advancing artificial intelligence does not spin out of control, and that it carries humanity's level of prosperity and happiness much, much higher. Sources IBM Artificial Intelligence R&D The Matrix (film) 2010 Flash Crash Von Neumann Probes Partnership on AI to Benefit People and Society The Race Between Artificial Intelligence and Humans Originally published at ermanakdogan.blogspot.com.
Being Human in the Age of Artificial Intelligence
0
yapay-zeka-çağında-i̇nsan-olmak-1d62acf4e555
2018-09-14
2018-09-14 11:39:59
https://medium.com/s/story/yapay-zeka-çağında-i̇nsan-olmak-1d62acf4e555
false
1,084
null
null
null
null
null
null
null
null
null
Teknoloji
teknoloji
Teknoloji
1,673
Erman Akdogan
AI & Cloud Technology Executive at IBM, Chicago. Magazine columnist & author of cyberpunk, technology books. Views are my own. Amazon.com/author/ermanakdogan
85681bbddbbe
ermanakdogan
54
9
20,181,104
null
null
null
null
null
null
0
null
0
1dc0795d9d6e
2018-05-24
2018-05-24 19:40:29
2018-05-31
2018-05-31 14:01:01
1
false
en
2018-05-31
2018-05-31 14:01:01
3
1d671b63e9c6
6.34717
0
0
0
This post was featured in our Cognilytica Newsletter, with additional details. Didn’t get the newsletter? Sign up here.
5
Chasing the Elusive Machine Learning Platform This post was featured in our Cognilytica Newsletter, with additional details. Didn't get the newsletter? Sign up here. If you have been following the breathless hype of AI and ML over these past few years, you might have noticed the increasing pace at which vendors are scrambling to roll out "platforms" that service the data science and ML communities. The "Data Science Platform" and "Machine Learning Platform" are at the front lines of the battle for the mind share and wallets of data scientists, ML project managers, and others who manage AI projects and initiatives. But what exactly are these platforms and why is there such an intense market share grab going on? As one vendor mentioned on a recent analyst briefing call, "if data is the oil of the new economy, then the Data Science Platform is the engine that keeps it running". The core of this insight is the realization that ML and data science projects are nothing like typical application or hardware development projects. Whereas in the past hardware and software development aimed to produce functionality that individuals or businesses could individually run or control, data science and ML projects are really about managing data, continuously evolving learning gleaned from data, and the evolution of data models based on constant iteration. Typical development processes and platforms simply don't work from a data-centric perspective. It should be no surprise then that technology vendors of all sizes are focused on developing platforms that data scientists and ML project managers will depend on to develop, run, operate, and manage their ongoing data models for the enterprise. The thought from these vendors is that the ML platform of the future is like the operating system or cloud environment or mobile development platform of the past and present. If you can dominate market share for data science / ML platforms, you will reap rewards for decades to come. As a result, everyone with a dog in this fight is fighting to own a piece of this market. However, what does a Machine Learning platform look like? How is it the same as or different from a Data Science platform? What are the core requirements for ML Platforms, and how do they differ from more general data science platforms? Who are the users of these platforms, and what do they really want? Let's dive deeper. What is the Data Science Platform? In our earlier newsletter piece on Data Scientists vs. Data Engineers, we talked a bit about what data scientists do and what they want to accomplish with the technology to support their missions. In summary, data scientists are tasked with wrangling useful information from a sea of data and translating business and operational informational needs into the language of data and math. Data scientists need to be masters of statistics, probability, mathematics, and algorithms that help to glean useful insights from huge piles of information. A data scientist is a scientist who creates hypotheses, runs tests and analyses of the data, and then translates their results for someone else in the organization to easily view and understand. So it follows that a pure data science platform would meet the needs of helping craft data models, determining the best fit of information to a hypothesis, testing that hypothesis, facilitating collaboration amongst teams of data scientists, and helping to manage and evolve the data model as information continues to change. 
Furthermore, data scientists don't focus their work in code-centric Integrated Development Environments (IDEs), but rather in notebooks. First popularized by academically-oriented math-centric platforms like Mathematica and Matlab, but now prominent in the Python, R, and SAS communities, notebooks are used to document data research and simplify reproducibility of results by allowing the notebook to run on different source data. The best notebooks are shared, collaborative environments where groups of data scientists can work together and iterate models over constantly evolving data sets. While notebooks don't make great environments for developing code, they make great environments to collaborate, explore, and visualize data. Indeed, the best notebooks are used by data scientists to quickly explore large data sets, assuming sufficient access to clean data. After all, data scientists can't perform their jobs effectively without access to large volumes of clean data. Extracting, cleaning, and moving data is not really the role of a data scientist, but rather that of a data engineer. Data engineers are challenged with the task of taking data from a wide range of systems in structured and unstructured formats, and data which are usually not "clean", with missing fields, mismatched data types, and other data-related issues. In this way, a data engineer is an engineer who designs, builds and arranges data. Good data science platforms also enable data scientists to easily leverage compute power as their needs grow. Instead of copying data sets to a local computer to work on them, platforms allow data scientists to easily access compute power and data sets with minimal hassle. A pure data science platform is challenged with the need to provide these data engineering capabilities as well. As such, a practical data science platform will have elements of pure data science capabilities and necessary data engineering functionality. What is the Machine Learning Platform? We just spent several paragraphs talking about data science platforms and not even once mentioned AI or ML. How are data science platforms relevant to ML? Well, simply put, Machine Learning is the application of specific algorithms, additional unsupervised or supervised training approaches, and learning-focused iteration to the large sets of data that would otherwise be operated on by data scientists. The tools that data scientists use on a daily basis have significant overlap with the tools used by ML-focused scientists and engineers. However, these tools aren't the same, because the needs of ML scientists and engineers are not the same as those of more general data scientists and engineers. Rather than just focusing on notebooks and the ecosystem to manage and collaboratively work with others on those notebooks, folks tasked with managing ML projects need access to the range of ML-specific algorithms, libraries, and infrastructure to train those algorithms over large and evolving datasets. ML Platforms help ML data scientists and engineers discover which machine learning approaches work best, how to tune hyperparameters, deploy compute-intensive ML training across on-premise or cloud-based CPU, GPU, and/or TPU clusters, and provide an ecosystem for managing and monitoring both unsupervised as well as supervised modes of training. Clearly a collaborative, interactive, visual system for developing and managing ML models in a data science platform is necessary, but it's not sufficient for an ML platform. 
As hinted above, one of the more challenging parts of making ML systems work is the setting and tuning of hyperparameters. The whole concept of a machine learning model is that it's a mathematical formula that requires various parameters to be learned from the data. Basically, what machine learning is actually learning are the parameters of the formula, and then fitting new data to that learned model. Hyperparameters are configurable data values that are set prior to training an ML model and that can't be learned from data. These hyperparameters indicate various factors such as complexity, speed of learning, and more. Different ML algorithms require different hyperparameters, and some don't need any at all. ML platforms help with the discovery, setting, and management of hyperparameters, among other things including algorithm selection and comparison that non-ML-specific data science platforms don't provide. What do ML Project Managers Really Want? At the end of the day, ML project managers simply want tools to make their jobs more efficient and effective. While we have written earlier that not all ML is AI, and perhaps some of the ML approaches are used primarily for non-AI predictive analytics, those seeking to add true intelligence as part of their mission need the same capabilities regardless of how ML is being applied. The real winners in the ML platform race will be the ones that simplify ML model creation, training, and iteration. They will make it quick and easy for companies to move from dumb unintelligent systems to ones that leverage the power of ML to solve problems that previously could not be addressed by machines. This is the ultimate vision of ML as applied to AI: make systems autonomous, intelligent, and generate knowledge and action that otherwise would require human capabilities. ML platforms that enable this capability are winners. Data science platforms that don't enable ML capabilities will be relegated to non-ML data science tasks. Vendors who pretend that their business intelligence, data analytics, big data engineering, programming-centric, or other tools are rebranded AI / ML platforms are in for a rude awakening. We know who you are, and no, you are not an AI / ML platform vendor. Stay tuned for our big report on Data Science and Machine Learning Platforms as we sort out who is doing what in the ML platform space, which data science platform vendors are the ones worth paying attention to in the ML space, what is necessary functionality for ML platforms and what is not, and who is starting to win the race for market share in this constantly evolving, but significantly attractive market. If your company is looking at kicking your AI & ML projects into high gear leveraging the industry's best practices, you should check out the Cognilytica AI & ML Project Management Training & Certification. We have regularly scheduled trainings and would love to help your company up-skill your staff and take advantage of what we've determined to be the most successful approaches to getting AI projects running for your firm. Reach out today to learn more!
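To ground the hyperparameter discussion, here is a minimal sketch of the kind of search an ML platform automates, using scikit-learn's GridSearchCV; the library and the parameter values are my choices for illustration, not tools or settings the article names.
# A minimal hyperparameter search sketch (illustrative; not from the article).
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# C and gamma are hyperparameters: set before training, not learned from data.
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(), param_grid, cv=5)  # 5-fold cross-validation
search.fit(X, y)
print(search.best_params_, search.best_score_)
An ML platform layers collaboration, experiment tracking, and distributed compute on top of exactly this loop.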
Chasing the Elusive Machine Learning Platform
0
chasing-the-elusive-machine-learning-platform-1d671b63e9c6
2018-05-31
2018-05-31 14:07:58
https://medium.com/s/story/chasing-the-elusive-machine-learning-platform-1d671b63e9c6
false
1,629
Real-world insight, expertise, and opinions on Artificial Intelligence (AI) and related areas
null
cognilytica
null
Cognilytica
info@cognilytica.com
cognilytica
ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,DEEP LEARNING,AI,ROBOTICS
cognilytica
Data Science
data-science
Data Science
33,617
Ron Schmelzer
Senior Analyst, Cognilytica — Founder TechBreakfast, Bizelo, ZapThink, Zoptopz, Channelwave, and more. Sometimes successful entrepreneur and knowledgeable guy.
f1c34a0887da
ron_61222
4
2
20,181,104
null
null
null
null
null
null
0
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from collections import Counter
from pandas.plotting import scatter_matrix  # pandas.tools.plotting is the deprecated older location

# data_source_url and column_names are not defined in the original snippet;
# these are the usual values for the UCI Iris dataset (an assumption).
data_source_url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
column_names = ["sepal-length", "sepal-width", "petal-length", "petal-width", "class"]

df = pd.read_csv(data_source_url, names=column_names)

print(df.shape)       # dimensions of the data
print(df.sample(5))   # 5 random samples from the dataframe
print(df.info())      # information about the data in each attribute
print(df.describe())  # statistical information about each attribute

print(Counter(df["class"]))
# output: Counter({'Iris-setosa': 50, 'Iris-versicolor': 50, 'Iris-virginica': 50})

# box and whisker plots
df.plot(kind='box', subplots=True, layout=(2, 2), sharex=False, sharey=False)
plt.show()

# histogram
df.hist()
plt.show()

# scatter plot matrix
scatter_matrix(df)
plt.show()
9
null
2018-02-15
2018-02-15 06:38:42
2018-02-15
2018-02-15 07:08:56
5
false
en
2018-02-15
2018-02-15 07:08:56
2
1d6791a83bf0
3.165409
8
0
0
“Data” is the new currency in the present scenario. There are huge volumes of data available now thanks to the craze of electronic gadgets…
1
Exploratory Analysis Using Python "Data" is the new currency in the present scenario. There are huge volumes of data available now, thanks to the craze for electronic gadgets, software and the internet. The availability of massive data creates an opportunity to derive useful information or insights from it. Here, I am showing you some simple ways of doing exploratory analysis using Python. The dataset used is the very famous "Iris Dataset", publicly available at the UCI machine learning repository. The dataset contains 150 observations of iris flowers. There are four columns of measurements of the flowers in centimeters. The fifth column is the species of the flower observed. All observed flowers belong to one of three species. https://en.wikipedia.org/wiki/Iris_flower_data_set Now, in order to do analysis on any data, we need to import 3 basic modules in Python: numpy for array or matrix operations, pandas for dealing with data frames or tabular data, and matplotlib to plot graphs for a better understanding of the data. After that, with the help of pandas, we will load the data into memory for further operations. Once the data is loaded into memory, we will start analyzing it, which is often called exploratory analysis. The basic objective is to take a look at the data in a few different ways. Some common steps followed are: check the dimensions of the data; peek at the data itself; look at the statistical summary of each attribute; break down the data by class variables. Now, we need to see how many instances of each class are present in the data. This can be done using the Counter function in Python. We now have a basic idea about the data. We need to extend that with some more visualizations. We are going to look at two types of plots: Univariate plots to better understand each attribute. Multivariate plots to better understand the relationships between attributes. To see the distribution of the data points for each attribute, we can draw a box plot and a histogram. In the box plot, the red line shows the median value of each attribute and the box region shows the values present from minimum to maximum. From the above histogram, it is clearly visible that sepal length and sepal width values are distributed in a Gaussian manner, whereas petal length and petal width show a more irregular structure. From this, we can intuit that petal length and petal width might provide some special information about the flower class. We can draw multivariate plots to understand the correlations between different attributes better. From the above figure it is clearly visible that 'petal length' and 'petal width' are the two main attributes for distinguishing different classes. But 'sepal length' and 'sepal width' are not as helpful in separating the data. We can also plot trends between the attributes using line plots to see how a change in one's behavior affects the others. You can follow it in my previous post "Intro to visualization using matplotlib". Some of the plots generated: Here the black line represents 'sepal length' and the blue line represents 'sepal width'. Here the black line represents 'petal length' and the blue line represents 'petal width'. github link — https://github.com/sambit9238/DataScience/blob/master/ExploratoryAnalysisForIrisData.ipynb
Exploratory Analysis Using Python
29
exploratory-analysis-using-python-1d6791a83bf0
2018-05-23
2018-05-23 05:41:02
https://medium.com/s/story/exploratory-analysis-using-python-1d6791a83bf0
false
618
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Sambit Mahapatra
AI and ML enthusiast
241751fe4cf9
sambit9238
295
17
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-18
2018-08-18 15:28:51
2018-08-18
2018-08-18 15:29:06
0
false
en
2018-08-18
2018-08-18 15:29:06
1
1d6c1c93a2e4
2.369811
0
0
0
[PDF] Download Agile Data Science 2.0: Building Full-Stack Data Analytics Applications with Spark READ ONLINE Link…
1
Pdf Download eBook Free Agile Data Science 2.0: Building Full-Stack Data Analytics Applications with Spark By Russell Jurney Epub #EPUB [PDF] Download Agile Data Science 2.0: Building Full-Stack Data Analytics Applications with Spark READ ONLINE Link https://bestreadkindle.icu/?q=Agile+Data+Science+2.0%3A+Building+Full-Stack+Data+Analytics+Applications+with+Spark
Pdf Download eBook Free Agile Data Science 2.0:
0
pdf-download-ebook-free-agile-data-science-2-0-1d6c1c93a2e4
2018-08-18
2018-08-18 15:29:06
https://medium.com/s/story/pdf-download-ebook-free-agile-data-science-2-0-1d6c1c93a2e4
false
628
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Richard Scott
null
bc76fd49c572
gejcew
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-21
2018-03-21 09:11:58
2018-03-21
2018-03-21 09:14:53
1
false
en
2018-03-21
2018-03-21 09:21:31
4
1d6eefd28208
3.139623
0
0
0
Delta, the app for early detection and diagnosis of cognitive decline (e.g. dementia) launched by the startup ki elements UG, is the result…
5
Delta, the app for early detection and diagnosis of cognitive decline, steps into French and German markets in 2018 Delta, the app for early detection and diagnosis of cognitive decline (e.g. dementia) launched by the startup ki elements UG, is the result of the EIT Digital innovation activity ELEMENT in 2017. DFKI, as activity lead, provided results in 2017 which led to the creation of ki elements UG and the prototype app Delta, which reduces cognitive status screening time by more than 50% and improves the diagnosis and decision process. The activity is being renewed by EIT Digital in 2018, in order to accelerate the commercialisation of the app and obtain a CE certification. EIT Digital's ELEMENT Digital Wellbeing innovation activity and its startup partner ki elements together plan to launch an app for early diagnosis of cognitive decline. The prototype has already been developed, and it will be brought to market in the second half of 2018. Jan Alexandersson, CEO of the startup ki elements UG and lead for the ELEMENT activity, explains: "We are very excited to continue our work in 2018 with the support of EIT Digital and our various partners. We are planning the year in three stages. In the first half of the year, we will essentially focus on technological maturation, experimentation and, above all, CE certification. We will also attend a full programme of conferences throughout the year, such as the Chicago AAIC in July 2018, to raise awareness of the product. In the second half of 2018, we will focus on marketing and commercialisation." How the Delta app works: Delta records and saves speech data and synchronises it with your HIS (Hospital Information System). This allows you to easily re-listen to patients and empowers your professional decisions. Delta automatically transcribes answers from cognitive speech tests to keep your attention free for the most important thing: your work with the patient. Delta leverages AI and computational linguistics to extract and analyse powerful scientific metrics from patients' answers. Delta gives you comprehensive insight by visualising the extracted metrics. Combined with population cut-off values, this visualisation helps to enrich your professional perspective. Delta compiles your test analysis, visualisations and your interpretation/diagnosis into a digital report formatted to your clinic's guidelines. Alexandersson continues: "Delta is an iPad app designed and developed with and for neuropsychologists and independent practitioners. It will also be helpful for professionals involved in studies where cognition is affected, such as medical trials. Delta not only reduces cognitive status screening time by more than 50%, but also improves diagnosis and decision process and quality. We are now working on the CE certification of our product and we are looking to expand our potential customer base to other language markets such as German by the end of 2018. As a start, our focus is on the French market." ki elements' mission is to provide health professionals with artificial intelligence (AI)-empowered tools as key elements to help them excel in their profession. For clinicians investigating dementia-like diseases, Delta is the perfect tool for executing and managing speech-based cognitive tests. Dementia is a cognitive disorder, mainly caused by neurodegenerative diseases such as Alzheimer's disease and strokes. It leads to a loss of autonomy and is associated with a significant decrease in quality of life. 
In 2015, there were 9.9 million new cases of dementia around the world; one every 3 seconds*. However, only one in two people suffering from dementia is diagnosed. The number of sufferers is expected to double every 20 years, with 68% of them living in low- and middle-income countries by 2050, due in part to fewer births and a growing elderly population. Partners involved in the innovation activity for 2018 are: DFKI GmbH: coordination, technology maturation and business modelling ki elements UG (haftungsbeschränkt) (sub-grantee of DFKI GmbH): commercialisation of project results University Clinic of Saarland (Germany): clinical partner. Validation study and data collection. Clinical personnel will be involved. INRIA, Nice: video analysis Innovation Alzheimer (sub-grantee of INRIA): clinical partner. Validation study and data collection. Clinical personnel will be involved. External partners: University Clinic Bern (Switzerland) SHG Clinic Sonnenberg (Germany) University Clinic Dresden (Germany) University Clinic/DZNE Rostock (Germany). Clinical personnel will be involved. *Source: Alzheimer's Disease International The Digital Wellbeing Action Line leverages digital technologies to help people stay healthy (prevention and early detection) or cope with an existing chronic condition. It includes both physical and mental wellbeing. The solutions generally rely on enabling consumers to be well-informed about their wellbeing, change their behaviour and use digital unobtrusive instrumentation to monitor and improve their quality of life, saving high healthcare costs later in life.
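As a purely illustrative sketch, and not ki elements' actual method, here is the kind of simple measure a speech-analysis tool like Delta might compute from a transcribed answer; the function and field names are hypothetical.
# Toy speech metrics from a transcript (illustrative; names are hypothetical).
def speech_metrics(transcript, duration_seconds):
    words = transcript.lower().split()
    return {
        # speaking rate in words per minute
        "words_per_minute": len(words) / (duration_seconds / 60.0),
        # type-token ratio: unique words divided by total words
        "lexical_diversity": len(set(words)) / len(words) if words else 0.0,
    }

print(speech_metrics("the cat sat on the mat and the cat slept", 6.0))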
Delta, the app for early detection and diagnosis of cognitive decline, steps into French and German…
0
delta-the-app-for-early-detection-and-diagnosis-of-cognitive-decline-steps-into-french-and-german-1d6eefd28208
2018-03-21
2018-03-21 09:21:31
https://medium.com/s/story/delta-the-app-for-early-detection-and-diagnosis-of-cognitive-decline-steps-into-french-and-german-1d6eefd28208
false
779
null
null
null
null
null
null
null
null
null
Healthcare
healthcare
Healthcare
59,511
EIT Digital
EIT Digital brings together #European #entrepreneurs to drive #digital #innovation & #education. EIT Digital is an independent organisation supported by @EITeu.
b8879185d6a3
EIT_Digital
947
673
20,181,104
null
null
null
null
null
null
0
null
0
fe7f8c872655
2018-02-20
2018-02-20 18:27:10
2018-02-20
2018-02-20 18:48:24
1
false
en
2018-02-20
2018-02-20 18:48:24
0
1d6f9435d6ba
1.50566
0
0
0
Data has value when it is used.
5
Meaning in Motion Data has value when it is used. Having data does not provide value unless you also use it. If you travel far enough along the path of treating data as an asset, you will realize that measuring the ROA (return on asset) of your data is how you will know the value of your data. It costs to acquire data, to clean it, to understand it, and to store it. If you track which data is used to generate revenue, how frequently it is used, and how recently it was used, you will find that some of your data is far more valuable than the rest. You will also learn how quickly it decreases in value as it ages. Some data is never used at all for revenue, and given the costs, you are wasting money to acquire and store it. Some data is used so rarely and to so little benefit that it still loses money. Can you fire an unprofitable customer? Can you stop collecting and storing data that costs you more than it benefits you? The coming years will be challenging for data professionals. The trend is toward data in motion, not data at rest. If you still encode ontology and taxonomy in relational database tables, and if you fail to identify which data is profitable and which is not, you will lose money on data management. You still need to acquire data, to clean it, to understand it, and to encode the ontology, taxonomy and significance in the data. But meaningfulness needs to move with the data as the data moves. Don't encode meaning in structures that are only visible when the motion has stopped. Embed meaning in metadata that moves along with the data wherever it goes. Learn what data provenance really is. Although you can use your audit trails to handle a recall of poor quality data, the reason to know where your data came from and where it is going is to measure its value. A quiet and peaceful data lake has no value. Meaningful data moves.
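To make "meaning moving with the data" concrete, here is a toy sketch of wrapping each record in an envelope that carries its provenance and semantics wherever it travels; the field names are illustrative, not from the essay.
# Toy envelope that keeps metadata travelling with the data (illustrative).
import json
import time

def envelope(record, source, schema):
    return {
        "data": record,
        "meta": {
            "source": source,            # where the data came from
            "schema": schema,            # what the fields mean
            "acquired_at": time.time(),  # when, for valuing data as it ages
        },
    }

message = envelope({"price": 9.99}, source="pos-system", schema="order.v1")
print(json.dumps(message))  # the metadata moves in the same message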
Meaning in Motion
0
meaning-in-motion-1d6f9435d6ba
2018-02-20
2018-02-20 18:48:27
https://medium.com/s/story/meaning-in-motion-1d6f9435d6ba
false
346
A conversation about complexity and emergent significance.
null
null
null
Data Autonomy
null
data-autonomy
DATA,AUTONOMY,ANALYTICS,DATA ENGINEERING,DATA SCIENCE
kevin642
Big Data
big-data
Big Data
24,602
Kevin Kautz
null
90d49ddd83ba
kevin642
11
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-27
2018-04-27 15:39:44
2018-04-27
2018-04-27 15:59:31
1
false
en
2018-04-27
2018-04-27 15:59:31
0
1d700e74998e
2.10566
1
0
0
Breaking down tough problems into their individual components
5
Modularization in Technology Breaking down tough problems into their individual components "How do you eat an elephant?" Chances are you've heard that idiom, along with its all-too-obvious answer: "one bite at a time." Throughout the course of our time on this planet, humans have accomplished an incredible number of difficult feats. Most, if not all, of them were done via the process of iteratively breaking down a large problem into sufficiently addressable sub-problems. This is extremely self-evident when it comes to technology. I work in software and data science. One of the common patterns you see in software development is the aggregation of several small components into a much larger system. This happens at all levels of computing — everything from the assembly register all the way up to the Internet and satellite grid. Every large system is composed of a myriad of smaller systems, which themselves are composed of sub-systems built from even smaller components. This should be fairly self-evident for anyone that's worked in the technology field for any amount of time. But I'm bringing it up today because I continuously find myself using this paradigm to solve my toughest problems. Oftentimes, when I'm approaching a large coding task, I'll go about writing my code as if I have pretend access to magic functions that provide me the return values or take the actions that I need, and then I'll go and implement those methods later. Any time I find myself getting stuck in a situation like this, I end up relying on the idea that if I assume there's a component that can do what I need, I can progress beyond the roadblock at hand. What's interesting to note is that there are several data science algorithms that take this approach as well. Decision trees break down all data by feature and then test each feature prior to splitting the data into smaller chunks, after which they iterate forward. Random forests simply take individual decision trees, provide them with smaller chunks of the data and features, and use their composition to come up with a better prediction than any individual tree alone. Support vector machines find those points that are the closest in projected space to the decision boundaries, and simply use those points to classify instances. Deep networks, the quintessential machine learning algorithm that learns by composition, take simple, individual "neurons" and pool them together to perform an incredible number of pattern recognition tasks. Capsule, RNN, convolution, and LSTM networks take this idea one step further by implementing specialized layers that can enhance neural network functionality even further. Of course, none of this is revolutionary, but what it highlights is the power of problem decomposition. I have no doubt that the toughest problems in the future will be solved using a similar approach, and my suspicion is that we won't see general artificial intelligence until machines have the ability to "rationally" perform this set of tasks themselves.
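As an illustration of the "pretend the magic functions exist" workflow described above, the top-level logic can be written first and the pieces stubbed out for later; every name here is hypothetical.
# Top-down decomposition with stubs (illustrative; all names hypothetical).
def load_records(path):
    raise NotImplementedError("implement later")

def clean(records):
    raise NotImplementedError("implement later")

def summarize(records):
    raise NotImplementedError("implement later")

def build_report(path):
    # The hard problem, decomposed into bites we can implement one at a time.
    return summarize(clean(load_records(path)))
Each stub becomes its own small, addressable sub-problem, which is exactly the decomposition the essay describes.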
Modularization in Technology
1
modularization-in-technology-1d700e74998e
2018-04-27
2018-04-27 16:01:20
https://medium.com/s/story/modularization-in-technology-1d700e74998e
false
505
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Shanif Dhanani
Co-founder & CEO of Apteo: We build AI tools to improve investing. Come join us!
9273f4759898
shanif
777
201
20,181,104
null
null
null
null
null
null
0
null
0
284538178f0a
2018-09-24
2018-09-24 02:47:17
2018-09-27
2018-09-27 17:14:08
1
false
en
2018-09-30
2018-09-30 23:50:03
1
1d7014c44f94
2.679245
0
0
0
Would you ever trust an AI to raise your child? Well, if I asked that question today the obvious answer is a no. However, what if AI was…
2
Childhood created by AI Would you ever trust an AI to raise your child? Well, if I asked that question today, the obvious answer is no. However, what if AI were involved in your child's development? That answer might change to a maybe. Well, what if there were an AI that could help your child's vocabulary? Now you're leaning toward a yes. Imagine an AI that can monitor the words your child hears and says at any given time. What if this AI could keep track of these words and, on top of that, suggest new vocabulary the child should be exposed to? This would not only help kids improve their overall vocabulary, but would be especially helpful for kids of lower socioeconomic status (SES). Recent research found a gap of up to 30 million words between children from the richest families and the poorest families* there is a gap of up to 30 million words between children from the richest families and the poorest families That is an astonishing number of words that children in lower-SES families will never be exposed to, but this problem can be addressed with the help of an AI. To further understand this problem, proper research must be done. There are three ways we can conduct this research. First, we need to understand the scope of the problem. This can be done through secondary external research. Second, we need to understand the perspective of users, or parents, on the problem. This can be done through direct interviews with parents. Last, we must understand the need for this product in the market. The point of secondary research is to understand how serious the issue of the vocabulary gap is. We will need to find articles and research regarding the effect of vocabulary on early childhood development. To be more specific, how the range of vocabulary can affect a baby's intelligence in adulthood. This research can help us understand the reason why our AI product will be needed for children. It's also important to research when each stage of language development takes place. By understanding at which stage the baby's vocabulary expands the most, perhaps we can narrow down the period of time in which our product will be used. After understanding the "why" behind our product, it's important for us to understand the problem from the perspective of parents, and direct interviews with a few parents can be set up to conduct this research. One way for our team to get in contact with some of the parents is by using the connection I have with parents at a local elementary school. I mentor elementary school kids of lower SES, so getting in contact with their parents wouldn't be so difficult. Understanding these parents' awareness of the problem can help us see whether the problem we're trying to solve might be too narrow. We can ask questions such as: How involved are you with your child's vocabulary development? What are some ways you teach your kids words? How do you know if your child understands the words you say? The last focus of the research should be market research. We need to understand whether people are willing to use our product if it is accessible. To understand the market, we can add questions to the list when interviewing parents, asking whether they would be willing to try out the product if it were given for free. We can also conduct similar interviews with local daycare providers and see if they're willing to try out the product with the kids they take care of. This way we can understand how people feel about the product if it were available, and understand what kind of features they would want in the product. 
This summary of the research method should help us get started with our product research before we dive deeper into creating a solution with an AI. *= "https://www.naeyc.org/resources/pubs/tyc/feb2014/the-word-gap"
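As a toy illustration of the monitoring idea in this article, such an assistant could track the words a child has been exposed to and suggest unseen target words; every name and word list here is hypothetical.
# Toy vocabulary tracker (illustrative; names and word list are hypothetical).
TARGET_VOCABULARY = {"apple", "river", "gentle", "curious", "measure"}
heard_so_far = set()

def record_utterance(utterance):
    # Add every word in an utterance to the child's exposure history.
    heard_so_far.update(utterance.lower().split())

def suggest_new_words(limit=3):
    # Return target words the child has not heard yet.
    return sorted(TARGET_VOCABULARY - heard_so_far)[:limit]

record_utterance("Look at the apple on the table")
print(suggest_new_words())  # ['curious', 'gentle', 'measure']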
Childhood created by AI
0
childhood-created-by-ai-1d7014c44f94
2018-09-30
2018-09-30 23:50:03
https://medium.com/s/story/childhood-created-by-ai-1d7014c44f94
false
657
Course by Jennifer Sukis, Design Principal for AI and Machine Learning at IBM. Written and edited by students of the Advanced Design for AI course at the University of Texas's Center for Integrated Design, Austin. Visit online at www.integrateddesign.utexas.edu
null
utsdct
null
Advanced Design for Artificial Intelligence
cid@austin.utexas.edu
advanced-design-for-ai
ARTIFICIAL INTELLIGENCE,COGNITIVE,DESIGN,PRODUCT DESIGN,UNIVERSITY OF TEXAS
utsdct
Education
education
Education
211,342
Sunguk Samson Hong
null
7a4dfea3c2f8
samsonhong0202
1
2
20,181,104
null
null
null
null
null
null
0
null
0
5e5bef33608a
2018-05-20
2018-05-20 11:33:12
2018-05-20
2018-05-20 22:50:12
8
false
en
2018-10-12
2018-10-12 10:54:36
168
1d71676def13
3.07673
7
0
0
Here at Founders Time — we are crazy about AI and the impact it will have on the society as a whole. As a result, we’ve put together this…
5
🎬 80+ Films with Artificial Intelligence 🤖 Here at Founders Time — we are crazy about AI and the impact it will have on society as a whole. As a result, we've put together this list of films with both good and bad AI. How many can you name? 🤔 2017 AlphaGo [IMDb] Blade Runner 2049 [IMDb, Wikipedia] The Circle [IMDb, Wikipedia] Spider-Man: Homecoming [IMDb, Wikipedia] Power Rangers [IMDb, Wikipedia] Transformers: The Last Knight [IMDb, Wikipedia] Alien: Covenant [IMDb, Wikipedia] Star Wars: Episode VIII — The Last Jedi [IMDb, Wikipedia] 2016 Passengers [IMDb, Wikipedia] Morgan [IMDb, Wikipedia] Max Steel [IMDb, Wikipedia] Rogue One: A Star Wars Story [IMDb, Wikipedia] 2015 Ex Machina [IMDb, Wikipedia] Chappie [IMDb, Wikipedia] Avengers: Age of Ultron [IMDb, Wikipedia] Tomorrowland [IMDb, Wikipedia] Terminator Genisys [IMDb, Wikipedia] Star Wars: Episode VII — The Force Awakens [IMDb, Wikipedia] Uncanny [IMDb, Wikipedia] 2014 Interstellar [IMDb, Wikipedia] Transcendence [IMDb, Wikipedia] Automata [IMDb, Wikipedia] Transformers: Age of Extinction [IMDb, Wikipedia] RoboCop [IMDb, Wikipedia] 2013 The Machine [IMDb, Wikipedia] Her [IMDb, Wikipedia] Oblivion [IMDb, Wikipedia] Pacific Rim [IMDb, Wikipedia] Iron Man 3 [IMDb, Wikipedia] Elysium [IMDb, Wikipedia] 2012 Robot & Frank [IMDb, Wikipedia] Prometheus [IMDb, Wikipedia] Total Recall [IMDb, Wikipedia] 2011 Transformers: Dark of the Moon [IMDb, Wikipedia] Real Steel [IMDb, Wikipedia] 2010 Tron: Legacy [IMDb, Wikipedia] Iron Man 2 [IMDb, Wikipedia] 2009 Moon [IMDb, Wikipedia] Transformers: Revenge of the Fallen [IMDb, Wikipedia] Terminator Salvation [IMDb, Wikipedia] 2008 Wall-E [IMDb, Wikipedia] Iron Man [IMDb, Wikipedia] Eagle Eye [IMDb, Wikipedia] The Day the Earth Stood Still [IMDb, Wikipedia] 2007 Transformers [IMDb, Wikipedia] 2005 Hitchhiker's Guide to the Galaxy [IMDb, Wikipedia] Stealth [IMDb, Wikipedia] Star Wars: Episode III — Revenge of the Sith [IMDb, Wikipedia] 2004 I, Robot [IMDb, Wikipedia] 2003 The Matrix Revolutions [IMDb, Wikipedia] The Matrix Reloaded [IMDb, Wikipedia] Terminator 3: Rise of the Machines [IMDb, Wikipedia] 2002 Star Wars: Episode II — Attack of the Clones [IMDb, Wikipedia] Star Trek: Nemesis [IMDb, Wikipedia] 2001 A.I. Artificial Intelligence [IMDb, Wikipedia] 1999 The Matrix [IMDb, Wikipedia] Bicentennial Man [IMDb, Wikipedia] Star Wars: Episode I — The Phantom Menace [IMDb, Wikipedia] 1998 Small Soldiers [IMDb, Wikipedia] Star Trek: Insurrection [IMDb, Wikipedia] 1996 Star Trek: First Contact [IMDb, Wikipedia] 1994 Star Trek Generations [IMDb, Wikipedia] 1991 Terminator 2: Judgment Day [IMDb, Wikipedia] 1990 Total Recall [IMDb, Wikipedia] 1987 RoboCop [IMDb, Wikipedia] 1986 Short Circuit [IMDb, Wikipedia] 1985 D.A.R.Y.L. [IMDb, Wikipedia] 1984 The Terminator [IMDb, Wikipedia] 1983 WarGames [IMDb, Wikipedia] Star Wars: Episode VI — Return of the Jedi [IMDb, Wikipedia] 1982 Tron [IMDb, Wikipedia] Blade Runner [IMDb, Wikipedia] 1980 Star Wars: Episode V — The Empire Strikes Back [IMDb, Wikipedia] 1979 Star Trek: The Motion Picture [IMDb, Wikipedia] Alien [IMDb, Wikipedia] 1977 Star Wars: Episode IV — A New Hope [IMDb, Wikipedia] 1973 Westworld [IMDb, Wikipedia] 1970 Colossus: The Forbin Project [IMDb, Wikipedia] 1968 2001: A Space Odyssey [IMDb, Wikipedia] 1951 The Day the Earth Stood Still [IMDb, Wikipedia] 1927 Metropolis [IMDb, Wikipedia] Feel free to drop us an email, if you want to be interviewed or collaborate with us.
🎬 80+ Films with Artificial Intelligence 🤖
10
80-films-with-artificial-intelligence-1d71676def13
2018-10-12
2018-10-12 10:54:36
https://medium.com/s/story/80-films-with-artificial-intelligence-1d71676def13
false
515
Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity.
becominghuman.ai
BecomingHumanAI
null
Becoming Human: Artificial Intelligence Magazine
team@chatbotslife.com
becoming-human
ARTIFICIAL INTELLIGENCE,DEEP LEARNING,MACHINE LEARNING,AI,DATA SCIENCE
BecomingHumanAI
Movies To Watch
movies-to-watch
Movies To Watch
209
Founders Time
We interview founders of early stage technology start ups, commercialising latest machine learning and artificial intelligence techniques 👉 FoundersTime.com
f0349a593ba1
founderstime
129
1,932
20,181,104
null
null
null
null
null
null
0
null
0
caaf2bb7848f
2018-06-21
2018-06-21 14:19:38
2018-06-21
2018-06-21 17:15:43
1
false
en
2018-08-21
2018-08-21 18:25:35
8
1d729e22fe8d
1.50566
6
0
0
Today we are very proud to welcome a new team member: Dustin Plett, former VP of Business development at 500px, who is joining Consensus AI…
5
Dustin Plett — Former VP of Business Development at 500px — is Joining Consensus AI Team Today we are very proud to welcome a new team member: Dustin Plett, former VP of Business development at 500px, who is joining Consensus AI as Chief Strategy Officer. Dustin is a seasoned leader in enterprise sales, business development, and strategy. He spent over 10 years building businesses in the social media services space, and most recently, 500px — the world’s premier social network for photographers, now with nearly 14M members. As the VP of Business Development, Dustin first launched the stock photography marketplace, allowing users to unlock the value of the tens-of-millions of images shared on 500px. In 2016, Dustin launched and scaled what came to be the world’s leading custom photography platform. These combined efforts attracted the interest of Visual China Group (VCG) — the largest image licensing company in China, who lead the Series B funding round and ultimately acquired 500px in 2018. Dustin stepped in to be the interim-CEO of 500px through the turbulent period surrounding the acquisition, as he was trusted to be a steady hand and a pragmatic leader. After stabilizing the company, setting direction for the product and transferring the leadership of the company to the new CEO — he turned a new page in his career and joined our project. As the Chief Strategy Officer at Consensus AI, Dustin will lead business strategy, business development and strategic relationships with government officials and enterprise partners. Dustin and Oleg (Consensus AI founder), have worked together since 2010 and share the passion for using technology to solve societal issues and improve everyday lives. Along with George Bordianu and Artem Loginov, who also were a part of 500px in the past, we have a strong senior team with a incredible track record of building and scaling successful global products, and we are ready for the new, difficult yet exciting challenges of our collective co-existence. Follow us on Medium, Twitter and Telegram, join our mailing list and thank you all for your support!
Dustin Plett — Former VP of Business Development at 500px — is Joining Consensus AI Team
7
dustin-plett-former-vp-of-business-development-at-500px-is-joining-consensus-ai-team-1d729e22fe8d
2018-08-21
2018-08-21 18:25:35
https://medium.com/s/story/dustin-plett-former-vp-of-business-development-at-500px-is-joining-consensus-ai-team-1d729e22fe8d
false
346
Decentralized AI for Collective Governance
null
ConsensusAI
null
Consensus AI
null
consensus-ai
BLOCKCHAIN,GOVERNMENT,ARTIFICIAL INTELLIGENCE,GOVERNANCE,CRYPTOCURRENCY
consensus_ai
Blockchain
blockchain
Blockchain
265,164
Yulia Ivanova
PR & Marketing
349e1b1edaaf
iyules
105
28
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-31
2018-01-31 14:15:33
2018-01-31
2018-01-31 14:20:19
1
false
en
2018-01-31
2018-01-31 14:20:19
3
1d741721bf5c
3.109434
0
0
0
Image: TechCrunch, 2017
5
Computer Vision: The Reinvention of the Eye Image: TechCrunch, 2017 Man has senses to observe the world around him. Computer Vision is actually the better version of our human senses. Computer Vision is capable of making a deep analysis of images or a series of them in just a few seconds. This is one of the most complex processes which we ever attempted to comprehend. Inventing a machine that owns senses which are even better then our own, is something magical. The most impressive thing is, that we actually do not exactly know by what kind of complex process we created this kind of super creatures (BMVA, 2017). Computer vision can analyze, is able to extract automatically, and will understand useful information from just a glimpse of a single image or a sequence of images. This mechanism is based on a theoretical framework of trained algorithms. This process works roughly the same as by humans. The algorithms are trained to go quick through a classification of images, objects, colours, sizes etc, this takes place in a tiny fraction of a second.This classification is similar to our own visual cortex, which contains a framework of references to things “we already know.” The path that the algorithm is going to follow is actually already determined, but it contains an infinite number of possibilities. There is almost no conscious effort, because it has been programmed precisely. This means that the system cannot fail in theory, and the re-creation of the ultimate human vision today is thus based on the trust of preprogrammed series and sets of patterns which relies on each other (Coldewey, 2017). It is striking that the cameras of computer vision devices are not that much better than simple 19th century pinhole cameras. Computer vision is all about the “mean shift algorithm” which is the so-called “Camshift” that forms the mediator programmed on robust statistics and operates on probability distributions (Bradsky, 2). The movement amount of the camera has to be very sensitive to the frame rate in order to recognize complex patterns. Computer-rendered graphics or game scenes with simple views (like a blue sky) are rendered much faster than complex views (like cities). It is important that the final rate movement should not depend on the complexity of a particular 3D view, to overcome this, they use empirical observations (Bradsky, 8). Sets of neurons excite one another in contrasts, the higher level network aggregate these patterns into meta-patterns: this complementary process creates the image with the required descriptions. The development process of computer vision is a collaboration between computer scientists, engineers, psychologists, neuroscientists and philosophers, who jointly determine the working definition of our mind. Computer vision is implemented today in self-driving cars, it is in factory robots, and in your personal smart phone. But the possibilities of these devices are limited. Labsix, a group of MIT students recently published a research paper. They have developed an algorithm that is fixed to deceive image classifiers, from these “errors” they develop new programmings to optimize computer vision. They analyze how the system makes decisions. Common images are sabotaged by the “contradictory” algorithm because the pixels are minimally changed. The algorithm preserves exactly the right combination of sabotaged pixels during the process, but the small changes in the pixels cannot be read by the system. 
There are plenty of research tests to counter adversarial examples, computer vision will not be trusted until adversarial attacks are impossible, or at least hard to pull of (Snow, 2017). The challenge with computer vision today lies in the development towards the source of stimulus. We know how our mind works, and how we can implement it in systems, but can we also trigger it to self-define the environment full of impulses? The future of computer vision is in integrating more specific powerful features of our human brain. The focus lays on abstract concepts such as context, attention and intention (Coldewey, 2017).This is necessary to fully rely on the system. The programming is currently limited to the established patterns that interact. The real spontaneity of situations is therefore still incalculable. References: BMVA. 2016. The British Machine Vision Association and Society for Pattern Recognition. 29–01–2018. http://www.bmva.org/visionoverview Bradsky, Gary. R. “Computer Vision Face Tracking For Use in a Perceptual User Interface.” CiteSeer Vol 17, Issue 1. (1998): 3–17. Coldewey, Devin. “WTF is Computer Vision?” TechCrunch. 2016. Tech Crunch: Amazon, Tesla, Microsoft. 29–01–2018. https://techcrunch.com/2016/11/13/wtf-is-computer-vision/ Snow, Jacky. “Computer Visions Algorithms Are Still Way Too Easy to Trick.” Technology review. MIT Technology Review. 29–01–2018. https://www.technologyreview.com/the-download/609827/computer-vision-algorithms-are-still-way-too-easy-to-trick/
Computer Vision: The Reinvention of the Eye
0
computer-vision-the-reinvention-of-the-eye-1d741721bf5c
2018-01-31
2018-01-31 14:20:19
https://medium.com/s/story/computer-vision-the-reinvention-of-the-eye-1d741721bf5c
false
771
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Vera Jane Seegers
Master New Media & Digital Culture & writer for @BIT students
9665a82a8b2e
verajaneseegers
30
32
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-19
2018-07-19 11:57:27
2018-07-19
2018-07-19 12:06:41
0
false
en
2018-07-19
2018-07-19 12:07:42
2
1d74762f65e9
2.071698
0
0
0
The changing role of AI explained
5
Bringing Real Intelligence into the Learning & Play Experience Ramy Nassar, Director of Mattel Retail WONDER Innovation Lab explains the ever-changing role of AI in business and how crucial it is to align your AI initiatives with broader organizational objectives. The changing role of AI Over the last 18 months, AI has come out of the cold and become a bit of an enigma in the minds of CIOs and CTOs. While most technology leaders are sold on AI becoming a mainstay in their digital roadmaps, the jury is still out on which AI-driven applications will have the most significant business impacts. Given this uncertainty, it’s important to take a play out of the world of tech startups and look for opportunities to experiment and gain early validation or leading indicators of success. Technology pilots are a great tool for this, but pilots often fail — not due to technology but due to uncertainty around objectives. This takes us to our first rule: 1. Pilots and experiments leveraging AI need to have clear KPIs that can be measured over a short time period (1–3 months), and these indicators need to roll up to someone accountable for the metrics. Technology is impacting more than just how children play and the types of toys they engage with For example, if using AI to power a chatbot for customer service, then it’s going to be important to measure call centre loads, wait times, NPS scores and other related metrics. One of the dynamics that has evolved over the last couple of years is access to larger, more comprehensive data sets. These massive data sets (it wasn’t that long ago that we were all talking about the power of “big data”) are fundamental in training AI systems and algorithms. Unfortunately, data in many enterprises is siloed across a range of systems, and this can make integration across data sets a challenge. So our second rule for embracing the changing role of AI: 2. Organizations need a comprehensive and shared data strategy. Without integrated, centralized, and synchronized data, the value of any kind of AI or machine learning will be greatly limited. For example, if an organization is looking to predict customer purchasing behavior and can access past purchase data but not web or social media content consumption, the ability to anticipate future purchases (and help shorten the sales cycle) is limited. Finally, it’s important to be aware of varying levels of comfort with AI. There are those who fear a dystopian future in which AI takes over every aspect of our roles and renders millions unemployable. One of the leading database software platforms in the market recently announced an AI-powered autonomous platform that promises to take on a lot of the work typically done by database administrators. Does this mean that DBAs will suddenly become unnecessary? Definitely not. But it may mean that the role of the DBA evolves to solving more complex challenges within the IT strategy and execution. 3. Organizational and cultural adoption of AI is almost as important as technical implementation. While it’s important to frame the impact of any tool (including AI) in terms of the business drivers, the impact to the workforce should be considered and actively communicated. Read the full post at : https://bit.ly/2Lhj38T
Bringing Real Intelligence into the Learning & Play Experience
0
bringing-real-intelligence-into-the-learning-play-experience-1d74762f65e9
2018-07-19
2018-07-19 12:07:42
https://medium.com/s/story/bringing-real-intelligence-into-the-learning-play-experience-1d74762f65e9
false
549
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
CIO Applications Blog
Research blog for technology-specific news, insights, analysis. Visit us at https://www.cioapplications.com/
dc326983839b
cioapplications
25
65
20,181,104
null
null
null
null
null
null
0
null
0
f5c95cc981bd
2017-09-19
2017-09-19 01:11:55
2017-09-19
2017-09-19 12:27:56
15
false
en
2018-07-23
2018-07-23 03:15:37
11
1d756a17d5d9
6.635849
16
0
0
Sound is the most natural way for human beings to communicate, and in the near future microphone technology will enable us to use it to…
4
Mic Check 1, 2, 3 Sound is the most natural way for human beings to communicate, and in the near future microphone technology will enable us to use it to control the world around us. As some of the world’s largest tech companies race to take advantage of ecosystems based on voice recognition, the billion-dollar mic industry is going to have to keep pace with advancements in technology. In this article, I examine the current $1 billion global market for microelectromechanical systems (MEMS) microphones and consider potential frontiers for improvement, as well as some of the startups and research ideas disrupting this industry. The microphone market today The use of microphones is increasing rapidly. YoY growth is about 18% and IHS predicts that the use of MEMS microphones will rise to about 6 billion units in 2019. The biggest reason for this fast growth curve is that smartphone, tablet, and even smart speaker makers are putting multiple microphones into their devices. For instance, the Amazon Echo has as many as seven MEMS mics; even in some of the latest smartphones, you’ll find up to four microphones. Traditionally, the market has focused on using microphones in mobile applications. In contrast, we envision growth in areas such as IoT, automotive, industrial, and smart homes and buildings, driven largely by voice input and contextual awareness. Existing microphone technology A MEMS microphone is typically composed of a fixed back plate and a moveable diaphragm. When sound comes along, the moveable diaphragm is distorted by sound pressure, changing the gap (and capacitance) between its surface and the backplate. The change in capacitance is then measured and converted to an electrical signal by a digital ASIC. This electrical signal represents sound. Since the gap between the diaphragm and the backplate is essential for the capacitive MEMS microphone to function, it can also cause many failure modes. The graphic below shows some of those failure modes: 1) normal, 2) dust / particle damage, 3) water entry, and 4) stiction failure, which may be caused by a sudden acceleration or loud sound (acoustic overload) that throws the diaphragm toward the back plate. As you can see, the capacitive MEMS microphones that have dominated the market for the past 10–15 years are easily damaged by common environmental contaminants such as water, dust, and particulate matter. The poor reliability of capacitive MEMS microphones is a direct result of their architecture. The industry is keenly aware of these problems, and product designers are actively seeking a better option. Piezoelectric microphones Building on research conducted at the University of Michigan, engineers from Vesper Technologies have built a microphone without a gap, called the VM1000 — the first commercially available piezoelectric MEMS microphone. The VM1000 consists of a single layer of flexible plates for both the backplate and the diaphragm. Changes in sound pressure cause these plates to bend and experience stress. As the plates are built from a sandwich of piezoelectric materials, the stress generates an electrical charge, which allows for direct measurement of sound. This creates a microphone that does not need a backplate. In a nutshell, piezoelectricity is the appearance of an electrical charge across the sides of certain solid materials when you subject them to mechanical stress. This material is not new: smartphones contain dozens of piezoelectric radio-frequency (RF) filters. The RF filter industry is worth billions of dollars. 
So there is no shortage of reliable and replicable manufacturing tools to build piezoelectric MEMS. Vesper is capitalizing on this infrastructure to mass-produce the first piezoelectric MEMS microphone on the market. Piezoelectricity also enables Vesper’s systems to draw incredibly low power and wake upon sound. Without requiring the push of a button to activate Siri with the AirPods or Alexa with the Echo Tap, Vesper’s technology could make truly always-listening interfaces possible even on the smallest of battery-powered devices. Optical microphones Historically, microphones have always been based on mechanically moving parts, whether it’s a capacitive membrane or a piezo. Because these devices are mechanical, they are prone to interference, mechanical disturbance, and the inert mass common in conventional microphones. Israeli startup VocalZoom has released a voice biometrics solution based on its optical sensor that measures voices using facial vibrations, eliminating the need for microphones and the noise reduction software of traditional acoustic solutions. The laser is directed at the face of the person talking, and measures vibrations “in the order of tens and hundreds of nanometers — so small that nothing else can pick them up.” These micro-measurements of the skin are converted into audio. Because of their precision, aim, and ability to sidestep sound waves, no other surrounding noise interferes with the clarity of the voice. In addition, by detecting the direction of arrival, the sensor can verify that the person of focus is in the right direction and at the right distance so the sensor only listens to a particular user. The company is working with most voice-recognition software systems and headset manufacturers, and is also working on a car mirror integration approach and with MEMS manufacturers who are interested in combining VocalZoom’s technology with classic acoustic audio. Graphene microphones Most commercially available microphones today use nickel to make microphone membranes. However, following the development of graphene as a promising material for various electronics devices, it has also been studied for MEMS microphones. In November 2015, researchers at the University of Belgrade in Serbia built the world’s first graphene-based microphone, leveraging graphene’s ability to detect faint and high-frequency sound waves. The team grew a multi-layer graphene membrane on a nickel foil substrate using a chemical vapor deposition process, then etched the nickel foil away. The resulting membrane can act as the vibrating membrane that converts sound to an electrical current in a microphone. Multilayer graphene condenser microphone paper The team’s prototype graphene mic boasts 10 decibels higher sensitivity than commercial microphones, at frequencies of up to 11 kHz. And model simulations indicate that even greater sensitivity is theoretically possible. With 300 layers rather than the prototype’s 60, a graphene vibrating membrane may be able to detect frequencies of up to 1MHz — approximately fifty times higher than the upper limit of human hearing. Recently, several companies have expressed interest in using graphene membranes for speakers or microphones. In May 2017, Apple was a granted a new patent (filed in 2015) that details an audio device that uses a diaphragm made from a graphene-enhanced composite material. Apple’s graphene membrane can be used in a speaker, microphone, or headphone device. 
As devices become smaller and lighter, it will be more and more challenging to provide high-quality audio using conventional materials and researchers are looking at innovative ways to improve the mechanical response of these audio devices — graphene being one. Frontiers for improvement. While development areas such as piezoelectric and optical are improving the performance of existing microphones today, I’m interested in learning more about “new” microphones — companies or research that are radically influencing the way audio capture is done. Abe Davis: New video technology that reveals an object’s hidden properties (TED talk) For instance, Abe Davis’s research from MIT uses a regular camera and a potato chip bag, the ultimate in a low-tech/dirty environment/open back chamber. Requiring no additional sensors or detection modules, Davis’s cost-effective method of speech recognition is capable of producing accurate and efficient representations of surround-sound acoustics. Stack spectrograms of accelerometer, microphone, and EMI sensors. (Carnegie Mellon University) Gierad Laput’s research from CMU uses a single small sensor board to detect dozens of phenomena of interest in a room, including sounds, vibration, light, heat, electromagnetic noise, and temperature. One small device can function as an all-purpose super sensor, which can be plugged in and deployed for any sensing application. You can then program next-level applications, for example, turning on a warning light when the faucet is being turned on or calculating wear and tear on a forklift. Remotely detecting a faucet being turned on. (Carnegie Mellon University) The ability to communicate over a wide range of modalities could drive many new applications, potentially leading to drastic improvements over the directional microphones included in the Amazon Echo or Google Home. I’ll be watching advances in adjacent industries (like piezos, lasers, and cameras) and if you’re working on innovative ideas within this space, I would love to hear from you!
Mic Check 1, 2, 3
46
mic-check-1-2-3-1d756a17d5d9
2018-07-23
2018-07-23 03:15:37
https://medium.com/s/story/mic-check-1-2-3-1d756a17d5d9
false
1,361
The life, work, and tactics of entrepreneurs around the world. Welcoming submissions on technology trends, product design, growth strategies, and venture investing. Learn more about how you can get involved at startupgrind.com.
null
startupgrind
null
Startup Grind
content@startupgrind.com
startup-grind
ENTREPRENEURSHIP,STARTUP,TECHNOLOGY,MARKETING,VENTURE CAPITAL
startupgrind
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Jerry Lu
vc @lux_capital, mba @wharton | prev @baidu_inc, @google, @facebook | alum @berkeley | fashion enthusiast, music curator, netflix foodie
242a8d3b145
thejerrylu
565
69
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-22
2018-02-22 19:18:51
2018-02-22
2018-02-22 19:24:40
4
false
en
2018-02-22
2018-02-22 19:24:40
27
1d75b34a07a8
5.55283
2
0
0
Each year I opine with what my best predictions for healthcare and medicine are in the near future. What my dowsing has lead me to conclude…
5
Why I think 2018 will (Finally) be the Tipping Point for Medicine and Technology Each year I opine with what my best predictions for healthcare and medicine are in the near future. What my dowsing has lead me to conclude for 2018 are actually a number of things. Please do share your predictions, here are mine… Machine Learning and Artificial Intelligence The idea of bringing in machine learning and artificial intelligence (AI) into medical practice generally has concerns with a machine being as good as a human being. Does it pass muster with the Turing Test? Decades ago this was experimented with in my area of specialty, clinical psychology, or psychotherapy in particular. I consider the Journal of the American Medical Associationand the New England Journal of Medicine as two of the top general medical journals. As best I can tell, it was just last year in which these two powerhouses published articles inclusive of machine learning. My prediction is this will dramatically increase in the future, and we will see a shift (as were are already seeing in radiology) that AI may not only be as good as a human, but indeed, better. But, Clinician Beware A caveat is articulated in a great article, Voodoo Machine Learning for Clinical Predictions, is important to keep in mind — not all algorithms are created equal: “…Cross-validation is the standard approach for evaluating the accuracy of such algorithms; however, several cross-validations methods exist and only some of them are statistically meaningful….(the researchers) found that record-wise cross-validation often massively overestimates the prediction accuracy of the algorithms… (and) …this erroneous method is used by almost half of the retrieved studies that used accelerometers, wearable sensors, or smartphones to predict clinical outcomes.” Personal Bio-Hacking Trying to distinguish fake-news or bad science from the flood of information available from Dr. Google is a near impossible challenge. There is a great site to sort out what’s true concerning nutrition and supplementation called Examine.com. The site is made up of medical doctors, scientific researchers, professors, and pharmacists that keep up on the peer reviewed literature. I’ve mentioned them in a whole piece I wrote on bio-hacking herein LinkedIn this past summer, but the point I want to reprise here is that of personal genomics: If 23andMe had a baby with Examine.com then Dr. Rhonda Patrick would deliver “Found My Fitness.” Dr. Patrick is amazingly brilliant; she scours the literature for relevant and properly done studies similar to the folks at Examine, but what is really cool is that you can upload your 23andMe information and she will share what studies are relevant to YOUR specific genomic data. I did use Rhonda’s Genetic Tool, and I have also used Promethease. Both are a $5 bargain. Promethease runs your genomic data against a whopper database called SNPedia. It is a wiki for human genetics with more than 57,000 published gene polymorphisms. Using it provides information about your propensity for and against certain diseases. It uses the data on specific Single-Nucleotide Polymorphisms (SNPs) within your genome. Its comprehensiveness is both a blessing and a curse in that the findings are overwhelming to review, and is like many genetic findings, puzzlingly contradictory. I found Rhonda’s Genetic Tool, to be much smaller, but more useful. Indeed, she notes “The report represents the genes I think are most interesting with regard to my focus is maintenance of healthspan. 
In some cases, some polymorphisms may have more obvious lifestyle intervention implications… in other cases, it might just be because it’s a gene variation that interests me and I want to tell you about it. In all cases, however, they are picked by me and reflect my personal exploration of this growing field. In this sense, it is a living report that might be worth re-running periodically to see my latest revisions and possible corrections!” Thank you Rhonda. Professional Bio-Hacking Of course, I feel obliged to make note of CRISPR (or the less cool sounding: clustered regularly interspaced short palindromic repeats). For the uninitiated, this is a Gattaca-esque approach to inexpensively edit human DNA in embryos or adults. I predict there will be a lot of unfulfilled expectations along with alarmist warnings. Remember cloning…? Also, in tee-ing up this next area, I also predict that more individuals will begin to take better care of themselves, and family members, by seeking out cheaper and more easily available (yet nevertheless scientifically solid) tools and datasets. They’ll then combine this with sensors and apps to help optimize the health and/or performance. I suspect aging boomers may comprise a large portion of this tribe. Precision Medicine Three years ago next month, then President Obama announced the Precision Medicine Initiative® (PMI) in his State of the Union address. Through advances in research, technology and policies that empower patients, the goal is to enable a new era of medicine in which researchers, providers, and patients work together to develop individualized care. It’s a bit “moonshot-ish,” which I personally like. It’s also very integrative, which I also like. Results may not be as immediate as anyone would prefer, but I predict the National Institutes of Health will have the funding maintained for the Initiative in the near-term, and I think the sorting and sifting of the big data will prove a bigger challenge than initially conceived. I also precict that personalized medicine could go broad-spectrum, and impact public health. My fingers are crossed on that hopeful forecast. Apps-a-go-go I incorrectly predicted a decline in the 165,000 healthcare and wellness apps available in 2015, figuring that Darwinian market forces would have culled that herd, but nope, in fact a new report by IQVIA Institute found more than 318,000 today with a growth rate averaging about 200 a day. A cool aspect is that medical apps are increasingly being vetted via clinical trials, and IQVIA note that there are about 860 such trials currently underway. Most apps are in the behavioral health area, which makes sense as apps are a good fit for such issues. I have been working with Prevail Health and I have to say, their tech, which is a hybrid of supervised and trained peer support blended with Cognitive Behavioral Therapy, is impressive and as best I can tell, unparalleled. One of the key draws for me was their having published randomized control trials (RCT) demonstrating their outcomes in two top ranked peer reviewed journals. I predict we will see more widespread adoption of such integrative and empirically validated tools used by payers and healthcare systems. My prediction about apps is that they will continue to grow for a period of time, but more importantly they will become better and more empirically-based. 
The IQVIA report noted, in my defense, that the number of wellness management apps (like exercise and fitness) did decline (about 18%), but the increase in health condition management more than made up for the loss. Furthermore, I predict we’ll see more hospitals, healthcare systems, and yes, even payers, adopting more activity sensors/trackers, parameter-specific biosensors (think glucometers, oximeters, etc.) in an Internet of Things style approach. The question will be, will the resultant big data be used for good or evil. Time will tell if my crystal ball is functioning more like a snow-globe, you can wager your bitcoin on it… # # # This story originally appeared as a LinkedIn Influencer post. If you’d like to learn more or connect, please do at http://DrChrisStout.com. You can follow me on LinkedIn, or find my Tweets as well. And goodies and tools are available via http://ALifeInFull.org. If you liked this article, you may also like: https://www.linkedin.com/today/posts/drchrisstout
Why I think 2018 will (Finally) be the Tipping Point for Medicine and Technology
4
why-i-think-2018-will-finally-be-the-tipping-point-for-medicine-and-technology-1d75b34a07a8
2018-03-29
2018-03-29 08:19:09
https://medium.com/s/story/why-i-think-2018-will-finally-be-the-tipping-point-for-medicine-and-technology-1d75b34a07a8
false
1,286
null
null
null
null
null
null
null
null
null
Health
health
Health
212,280
Dr. Chris E. Stout
Influencer: www.linkedin.com/today/posts/drchrisstout | Podcast: ALifeInFull.org | Changing the World: CenterForGlobalInitiatives.org | Home: DrChrisStout.com
c00db47484ad
drchrisestout
833
930
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-11
2018-03-11 23:21:09
2018-03-11
2018-03-11 23:21:38
0
false
en
2018-03-11
2018-03-11 23:21:38
4
1d7867509c70
1.535849
2
0
0
Businesses are constantly on the hunt for cutting-edge ways to boost their margins by increasing productivity while reducing costs, leading…
5
Ways intelligent automation can grow your business Businesses are constantly on the hunt for cutting-edge ways to boost their margins by increasing productivity while reducing costs, leading many to turn to intelligent automation. The role of artificial intelligence (AI) and machine learning (ML) in the expansion of business solutions such as customer service and data analysis continues to grow as companies learn that implementing these techniques are necessary to increase earnings. Intelligent automation essentially refers to software that targets information and reduces errors by increasing workflow efficiency, improving productivity and adding to the bottom line.r. WorkFusion offers such solutions for companies looking to add intelligent automation software to their fold. Here are some ways intelligent automation are being used by companies to help their businesses grow. 1. Improving Margins One of the key elements that affects a company’s growth is the use of robotic process automation (RPA) to automate human tasks at a bargain. Instead of hiring more workers to complete menial tasks, businesses invest a lesser amount on installing software that does the job for them, increasing their ROI. Companies spend less on paying salaries, wages, training and benefits with intelligent automation, investing instead on automated equipment and outsourcing their maintenance. With the right software, businesses are able to churn out labor 24/7 in a way that is easy to deploy and manage, even without a highly-skilled IT team. 2. Adding Productivity The addition of automated solutions will also increase a company’s productivity, allowing them to increase the scale of their business without adding more full-time employees. Intelligent automation offers companies the freedom to complete their accounting and documentation needs and other repetitive tasks without human interference. These solutions give human workers the freedom to focus on more creative endeavors and work of higher value. With WorkFusion, businesses can learn how to add RPA solutions to increase their workflow efficiency from beginning to end. 3. Reducing Errors Human workers aren’t perfect and it’s only natural that they’ll make mistakes here and there when it comes to providing proper documentation on a consistent basis. AI solutions improve accuracy levels, helping companies maintain a high standard when they’re being audited for regulatory purposes. Bots are capable of learning and adapting to changing circumstances in a faster and more accurate manner than humans. This is especially true with customer service chatbots that gather data from customers and offer them support that meets a client’s exact needs.
Ways intelligent automation can grow your business
2
ways-intelligent-automation-can-grow-your-business-1d7867509c70
2018-03-12
2018-03-12 17:50:38
https://medium.com/s/story/ways-intelligent-automation-can-grow-your-business-1d7867509c70
false
407
null
null
null
null
null
null
null
null
null
Intelligent Automation
intelligent-automation
Intelligent Automation
37
Karl Utermohlen
Tech writer focusing on AI, ML, apps and cybersecurity. MFA in Creative Writing from the U of Idaho. Writes for PSafe, Upwork, First Page Sage, WeContent, IP.
31382c5e0d8d
karl.utermohlen
314
35
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-13
2018-06-13 21:09:35
2018-06-13
2018-06-13 21:14:33
24
false
cs
2018-06-14
2018-06-14 20:47:40
3
1d7935a6e020
13.395283
0
0
0
Tento článek se týká projektu jedné dvojce z Digitální akademie Czechitas. Rádi bychom vám tu představily, čemu jsme se v rámci závěrečné…
5
Zpracování dat z bugtrackingového systému Tento článek se týká projektu jedné dvojce z Digitální akademie Czechitas. Rádi bychom vám tu představily, čemu jsme se v rámci závěrečné práce v Akademii věnovaly. © Martina Kovaříková & Zuzana Hubálková Jak to celé začalo Začalo to před 5 lety, když jsme se poprvé potkaly na veřpol (=obor Veřejná politika a lidské zdroje) pivu. (Zleva) Marťa a Zuzka, foceno 15. 2. 2014. Od té doby už jsme toho spolu spoustu zažily, například jsme společně dostudovaly bakaláře a magistra na naprosto ne-IT oboru. Když tedy přišla možnost společně projít Digitální akademií, tak jsme neodolaly. Na Meet Your Mentor jsme k nám přibraly ještě mentora Miloše a super tým na závěrečný DA projekt byl na světě. ☺ Téma Ačkoliv jsme se hodně snažily, Miloše žádný náš návrh na závěrečný projekt nenadchl. ☹ Srdce eS-QeL-káře se prostě nezapře a naše tabulky s pár stovkami řádků mu prostě radost neudělaly. Takže jsme společně vymysleli, že budeme zpracovávat anonymizovaná data z bugtrackinového systému jedné nejmenované firmy. Tedy takovou mini tabulku s 83 tisíci řádků. ♡ V projektu jsme tedy analyzovaly data z bugtrackingového systému. Co to znamená? Firma prodává aplikaci. S tou pracují zaměstnanci v jiných firmách. Čas od času v ní ale narazí na chybu: někde jim něco nefunguje, nejde zapnout, vypnout, nastavit a podobně. Problém s aplikací tedy nahlásí na zákaznické podpoře (Support), zaměstnanci podpory zadají chybu do firemního systému. V tuto chvíli nás začíná zajímat. Právě data z toho systému — takzvaného bugtrackingového systému — jsme zpracovávaly. Databáze obsahovala asi 83 000 řádků, ve kterých byly zápisy o jednotlivých reportovaných chybách. V prvním sloupci bylo ID chyby, v dalším sloupci jsme se dozvěděly, do jaké kategorie chyba spadá (vysvětlíme později), či jakou má prioritu. V dalších sloupcích byly údaje o stavu chyby, v dalším zase časový zápis, či záznam o tom, kdo s danou chybou pracuje. Takže z předchozího textu jsme se dozvěděly, že se chyby rozdělují do nějakých skupin, mají priority a prochází různými stavy. Jak to ale funguje? Vracíme se k našemu příkladu. Ve chvíli, kdy klient na zákaznické podpoře nahlásí chybu, zaměstnanec Supportu (supporťák) ji zadá do systému se stavem New. Tam už si ji převezme developer, který ji bude řešit a pracovat na jejím odstranění. V tu chvíli chyba získává stav Active, pokud vývojáři chybí nějaké informace, vyžádá si od supportu doplnění (stav Feedback). Pokud vývojář chybu opraví (Resolved), předá ji jinému kolegovi vývojáři, který projde jeho řešení a pokud je vše v pořádku, zadá stav Reviewed. Následně si chybu převezme tester, otestuje funkčnost řešení, a pokud vše šlape, danou věc uzavře jako Closed. Moc složité? Životní cyklus chyby zkusíme objasnit následujícím obrázkem. Proces, kterým položka při řešení ideálně prochází. Když už teď víte, jakými stavy položky ideálně prochází, pojďme si říct něco více o datech, která jsme k nim získaly. Kromě základních údajů byly v databázi zaznamenány všechny přechody, kterými jednotlivé položky prošly, a to konkrétně v období od poloviny července 2016 do poloviny května 2018. Například chyba s ID 816 má záznamy na 5 řádcích, kde se dozvíme z kterého do jakého stavu se dostala, zda došlo k nějaké změně, kdy a kdo změnu či zápis provedl a proč. Zároveň vidíme, k jaké verzi se chyba vztahuje a další informace. Základní statistiky Když se nám data dostala do rukou, nejdříve jsme si je prohlédly a seznámily se s nimi v Excelu. 
Vzhledem k tomu, že se na první pohled zdálo, že nebude potřeba žádné větší čištění, přešly jsme velice rychle k PowerBI a pustily se do základních statistik. Nejdříve jsme se podívaly na to, s kolika chybami vlastně máme tu čest. V databázi bylo celkem 7 347 unikátních chyb. Položky podle skupin Vývojáři v systému často pracují na odstranění chyb, což je asi nejjednodušší způsob, jak vysvětlit celý proces řešení. Avšak nepracuje se pouze s chybami. Pod jednotlivými ID se v systému skrývají různé položky. Firma pracuje s následujícími označeními: Bug User Story Task Epic Feature Bug znamená, že se jedná o “obyčejnou” chybu. Zkrátka uživateli v aplikaci něco nefunguje a je potřeba to opravit. User Story znamená, že zákazník (nebo interní vývojář) chce, aby aplikace měla ještě nějakou další funkci, nebo mají návrh na větší vylepšení. User Story se po převzetí rozpadne na jednotlivé Tasky, tedy kroky, které jsou potřeba provést, aby se funkce spustila. Další možností jsou Features, které znamenají ještě větší změnu než User Story, úplně největší změnu či vývoj úplně nového modulu označují Epic. Už z našeho popisu a vysvětlení si asi dokážete představit, jaké položky a v jakém poměru v databázi najdeme, nicméně my jsme se na to pro jistotu chtěly pořádně mrknout. Jaké rozložení jsme objevily, ukazuje tento graf. ↞ Z grafu vidíme, že nejhojnější zastoupení mají v databázi Bugy a Tasky, které jsou na řešení nejjednodušší. Následují User Story, Feature a nejmenší zastoupení mají chyby označené jako Epic, které jsou naopak na řešení nejsložitější, protože jsou hodně rozsáhlé a rozpadají se na více úkonů. Položky podle stavů Zároveň jsme se podívaly na to, v jakých stavech se položky aktuálně nachází. Respektive v jakém stavu se naposledy jednotlivé položky nacházely. Pozitivní zjištění pro firmu je, že většina položek se nachází ve stavu Closed, tedy jsou vyřešené a uzavřené. Dále jsou nejvíce zastoupené položky New, nicméně rozdíl mezi uzavřenými a novými je poměrně veliký. Když jsme se podívaly na takovéto rozložení stavů, napadlo nás se mrknout na to, jak se jednotlivé stavy vyvíjí v čase. Z tohoto grafu krásně vyplývá, že zaměstnanci stíhají uzavírat nové položky. Na ose X je časová osa, podle které vidíme, že ačkoliv přibývají nové problémy, už jsou uzavřené ty starší. Ideální by bylo, kdyby se spojnice trendu ještě více rozbíhaly — tedy, že by ubývalo nových položek. Ale chyb se prostě asi nikdy nezbavíme. :-) Položky podle priorit Další možností rozdělení chyb je podle priority. Firma využívá stupnici od 1 do 4 s tím, že číslo 1 je nejvyšší priorita a 4 nejnižší. Chyba s označením priorita 1 se nazývá Blocker a měla by být vyřešena co nejdříve, na to se podíváme ještě později. Teď se pouze mrkneme, jaké zastoupení priorit u jednotlivých chyb máme. Jak je vidět z grafu, support to s označováním jedničkou moc nepřehání. Údajně je to proto, že dostávali od vývojářů za uši, že každou blbost označují jako Blocker. :-) Na supportu se tedy polepšili a teď zase trochu nadužívají dvojku. :-) Pro firmu je tedy podle nás na zvážení, zda nevytvořit přesnější směrnice, podle kterých se budou chyby prioritám přiřazovat. Prodlevy mezi jednotlivými stavy V základním přehledu nesměly chybět ani statistiky, jako je průměrná doba přechodu mezi stavy, maximální doba přechodu mezi stavy (minimální neuvádíme, protože to bylo v rámci několika minut) a medián doby přechodu. Tuto dobu uvádíme v hodinách. 
Na porovnání mediánu a průměru vidíme, že průměr je vychylován extrémními hodnotami, medián už není vůbec špatný. Průměrná doba přechodu mezi stavy je cca 1 den. Tady vidíme ty samé statistiky, pouze jsou vztažené k položkám s prioritou 1, tedy tou nejvyšší prioritou. Hodně se nám snížilo maximum, ale průměr a medián jsou přibližně stejné. I když se maximum snížilo o několik tisíc hodin, stále je jeho hodnota poměrně vysoká (150 dní), zde mají zaměstnanci určitě co dohánět, byla by škoda, kdyby z důvodu dlouhé čekací doby firma přišla o zajímavé klienty. Je ale nutné položky s touto hodnotou blíže přezkoumat, může se jednat o nějaké dlouhodobější nebo rozsáhlejší projekty nebo o položky, které se mezitím vyřešily samy a a u nichž se zapomněl změnit stav. Zaměstnanci pracující na položkách Jako poslední ze základních statistik vám představíme tu, která se týká samotných zaměstnanců, kteří se na řešení položek podílí. Do systému jsou tedy zapojena následující oddělení: developeři testeři support management partner robot IT UX Čísla znamenají počty jednotlivých zaměstnanců v rámci uvedených oddělení. Jak vyplývá z grafu, na zápisu a řešení chyb se podílejí především developeři, supporťáci a testeři. Právě na tyto skupiny jsme se v našem projektu nejvíce zaměřily, protože mají při řešení chyb největší zastoupení. Ponoříme se do SQL Náš datový model. Ačkoliv první statistiky z PowerBI byly zajímavé, rozhodně nám nestačily k objevení větších problémů v postupu práce. Potřebovaly jsme například zjistit, jak dlouho se jednotlivé chyby řeší, v jakých stavech se aktuálně nachází (tedy poslední stav dané chyby při stahování databáze) a další podrobnosti, které by nám pomohly odhalit reálný postup řešení. Domluvily jsme si tedy s Milošem několik SQL sezení a snažily se z databáze vytáhnout vše, co by nám mohlo být k užitku. Jako první SQL trénink jsme rozpracovaly naši jednu tabulku do datového modelu. V hlavní faktové tabulce jsme měly informace o chybách: ID chyby, a ID stavu, typu, zaměstnance a dalších informací s konkrétní chybou spojených. V tuto chvíli už nedokážeme přesně říct, zda pro naši práci vytváření tohoto modelu bylo opravdu nezbytné, nicméně jsme moc rády, že jsme měly možnost vyzkoušet si datové modelování v SQL. Pro nás osobně takové rozkouskování tabulky znamenalo lepší orientaci a snadnější práci v PowerBI, i když věříme, že někdo by raději pracoval jen s jednou větší tabulkou. Díky tomuto modelu se nám například lépe zjišťovali nejúspěšnější řešitelé chyb napříč jednotlivými odděleními. A na to se hned můžeme podívat. Chyba je vyřešená, když je ve stavu Closed. Jak už jsme zmiňovaly předtím, stav Closed obvykle přiřazují testeři, takže abychom zjistily nejúspěšnější řešitele mezi developery, musely jsme se dívat na to, kdo z developerů zadal stav Resolved. U tohoto stavu jsme si grafy nejprve rozdělily na první graf, kde jsou všechny skupiny chyb dohromady, a na druhý graf, kde jsou pouze bugy, abychom měly představu, jakou část bugy zaujímají a jestli to nebude nějak výrazně zkreslovat výsledek. V obou grafech máme na prvních třech místech developera č. 1, 7 a 12. Tito vývojáři mohou s tímto grafem přijít za šéfem a vyžadovat minimálně pochvalu . ✌ Jednotky odpovídají počtu vyřešených problémů, tedy pokud má developer7 číslo 471, znamená to, že vyřešil právě 471 chyb. Dále jsme na přání jednoho nejmenovaného zaměstnance 😜 analyzovaly, kdo z developerů provedl nejvíce změn ze stavu Resolved do stavu Reviewed, tedy kdo zkontroloval nejvíce cizího kódu po jiném vývojáři. 
Bohužel ale výsledek nedopadl tak, jak by si zaměstnanec přál. 😜 Na druhou stranu to může vzít jako motivaci k lepším výkonům. Mezi tři největší hvězdy patří opět naši staří známí developer 1, 7 a 12. ♕ Dále jsme se zaměřily i na skupinu testerů, kde jsme sledovaly, kdo nejčastěji uvedl chybu do stavu Closed, tedy otestoval a pokud bylo vše v pořádku, tak i uzavřel nejvíce chyb. Opět jsme v prvním grafu zkoumaly všechny chyby a ve druhém grafu pouze bugy. Mezi tři nejúspěšnější testery patří tester č. 7, 13 a 14. Ve druhém grafu je vidět, že tester č. 14 z prvního grafu se možná více věnuje testování i jiných položek než jen bugů a změnilo se nám obsazení v první trojce na testera č. 7, 9 a 13. Tedy místo testera č. 14 se tam objevil tester č. 9. Data z výše uvedených grafů budou sloužit především manažerům jednotlivých oddělení, kteří z nich mohou vyvodit, zda je potřeba některým zaměstnancům v něčem více pomoct či je naopak pochválit, nebo zjistit, zda někde není nějaký problém. Pokud už dojde k uzavření chyby (hurá!) tak nás zajímalo z jakých důvodů se tak stane a zaměřily jsme se jen na 6 nejčastějších důvodů. Většina položek byla uzavřena z důvodu Verified a Fixed a verified — tedy opraveno, otestováno a zavřeno, toto označení se používá pro bugy. Dalším nejčastějším důvodem bylo Acceptance tests pass — to je stejné jako Verifed a Fixed a verified, jen se využívá u User story. Completed se využívá pro vyřešení a uzavření Tasků a mezi 6 nejčastějších důvodů patří i důvod Deferred, což znamená odloženo. Odloženo se dává u položek, které se nemusí tak akutně řešit, ale zaměstnanci se k nim plánují později vrátit. ze se dava u polozek, ktery se nemusi akutne resit, ale chceme se k nim pozdeji vratit. Poslední fáze: Minit Jako poslední krok našeho projektu jsme zkusily data nahrát do programu Minit, který slouží k mapování procesů ve firmách. Díky tomu, že nám společnost Minit Process Mining, která aplikaci spustila, poskytla měsíční zkušební licenci, nyní vidíme opravdu hezky vizuálně výsledky naší analýzy. Sice jsme si v PowerBI vytvořily grafy, ve kterých bylo vidět, z kterého do jakého stavu jednotlivé položky procházely. Například jsme objevily, že 223 položek přešlo ze stavu Active do New, což je podle našich informací z firmy velice zvláštní. Pravděpodobně se to stává tak, že si vývojář převezme položku — přiřadí řešení na svou osobu- a položka automaticky skočí na Active. Jenže vývojář na ni ještě nezačal reálně pracovat, tak si ji zase ručně přepne na New. Vypadá to tedy na chybu v systému, proto bychom firmě doporučily, aby toto nastavení změnila a systém přepínání v tomto případě nedělal automaticky. Nicméně díky Minitu se teď můžeme podívat na jednotlivé přechody přímo v procesní mapě. Z té je ještě lépe vidět, jak moc zaměstnanci dodržují proces, který jsme popsaly na začátku (tedy ideální New -> Active -> Resolved -> Reviewed -> Closed). Do Minitu jsme nahrávaly data upravená z SQL a uložená jako csv. Tabulka obsahovala sloupec s ID, informaci o stavu, ve kterém se položka nachází v čase od — do, kdo ji tam zadal a o jaký typ položky se jednalo. Když jsme tabulku naimportovaly, vyskočila nám celá procesní mapa. Musíme přiznat, že z aplikace jsme naprosto nadšené ❤. Díky ní vystouplo na povrch opravdu spoustu zajímavých věcí. Například jsme zjistily, že ideálním procesem New -> Active -> Resolved -> Reviewed -> Closed prošlo za sledované období pouze 9 % položek. Ty tlusté čáry mezi jednotlivými bublinami ukazují, kolik položek touto cestou prošlo. 
Přerušovaná šipka k bublině End znamená, že právě 5 624 položek bylo ve stavu Closed ve chvíli, kdy jsme data dostaly. Největší zajímavostí pro nás bylo, že nám Minit objevil celkem 348 variant vývoje řešení jednotlivých položek. 😲 Nejvíce zastoupená varianta byla přechod z New rovnou do Closed a to v 16 % případů. To se děje ve chvíli, kdy se v systému objeví duplicitní položka, ve chvíli, kdy to vývojář objeví, ji rovnou uzavře. Na druhé příčce nejčastějších přechodů se objevila varianta New -> Resolved -> Closed a to ve 13 % případů. Což by bylo v podstatě správně, pokud by nebyl přeskočený stav Active. V dřívějším procesu totiž firma využívala pouze postup New -> Active -> Resolved -> Closed (bez Reviewed). Takovou variantu jsme v databázi objevily v 6 % případů. Podobně to bylo u dalších 11 % případů, kde také položka projde procesem skoro správně New -> Resolved -> Reviewed -> Closed, opět s tím, že někdo zapomněl zadat stav Active. Nicméně to už není tak špatná statistika a vypovídá o tom, že vývojáři zkrátka zapomínají na přepínaní stavů. I když z předchozích statistik je velice pravděpodobné, že zaměstnanci zapomínají na přepínání mezi stavy poměrně často. Poměrně často (6 %) se opakovala také varianta New -> Active -> Closed. To jsou podle našich informací z firmy případy, kdy vývojář chybu řeší, ale během řešení zjistí, že není chyba v aplikaci, ale v tom, že si zákazník něco špatně nastavil. Však to znáte, chyba mezi myší a monitorem :). V tom případě se zákazníkovi vysvětlí co a jak, a položka se rovnou uzavře. Další, ale už menší míru zastoupení má “cestička” New -> Feedback -> Active -> Closed, a to ve 3 % případů. Ačkoliv se takováto procentuální zastoupení jednotlivých variant postupu řešení položky mohou zdát málo, faktem zůstává, že každá položka může být velice individuální a její řešení se může točit v určitém kruhu. Například projde celým procesem správně, ale pak se objeví znovu, tudíž se přijde na to, že se položku nepodařilo vyřešit a prochází opět celým kolečkem. Zbylé varianty průchodu procesem už se objevovaly pouze v řádech desetin procent, což potvrzuje fakt, že řešení jednotlivých položek je opravdu hodně individuální. Zajímavostí mohou být naopak položky, které absolvovaly nejvyšší změny stavu. Rekordmanem je například položka, která změnila stav celkem 38x, další superpoložky měly třeba 33, 29 nebo 25 změn. V Minitu jsme si proto rozklikly o jaké ID chyby se jedná a zástupce firmy se nám podíval, jaké položky se pod daným ID ukrývají a proč mají tolik přechodů. Ukázalo se, že všechny tyto nejdéle trvající položky byly v pořádku, vesměs se jednalo o věci, které se dělají v aplikaci průběžně s každou verzí — například jednou položkou byl task Aktualizace nápovědy. Perličkou ale bylo, že jsme narazily na položku, které měla označení “Aktualizace kategorií (NEZAVÍRAT)” a nacházela se ve stavu Closed. :c)) Tak alespoň něco se nám podařilo objevit. Výsledek: Od tabulky zpátky k tabulce Jako úplně poslední část našeho datového snažení jsme z Minitu vyexportovaly data o variantách životního cyklu položek v systému. Vzešla z toho menší tabulka, která obsahuje ID chyby, počet přechodů, které ve svém životním cyklu absolvovala, od kdy do kdy a jak dlouho to celé trvalo. Tato tabulka se dá přes ID položky propojit s naší původní tabulkou, či datovým modelem. Zástupci firmy se tak mohou podívat na konkrétní položky. 
Například si v tabulce od nás najdou, že položka s ID 6705 prošla 39 přechody a přes ID se na tuto chybu mohou kouknout ve svém systému, kdy zjistí, že je to buď zcela legitimní postup, nebo že je někde něco už velmi dlouho zaseklé. No a tak se ukončil životní cyklus našeho projektu. Od tabulky jsme se dostaly zase zpátky k jiné tabulce. :-) A to je konec! Závěrem naší práce bychom chtěly moc poděkovat za příležitost účastnit se Digitální akademie, díky níž jsme poznaly spoustu sympatických lidí a naučily se spoustu nových věcí, o kterých se nám ani nesnilo :). Obrovské díky patří také našemu ultra mega trpělivému a ochotnému mentorovi Milošovi! Miloši, děkujeme, bez tebe bychom byly ještě u SQL. ♡ Chceme také poděkovat našim partnerům, kteří tolerovali, že nemají uvařeno a uklizeno a občas i přiložili ruku k dílu. დ Moc bychom si přály, aby výsledky naší analýzy přinesly firmě, jejíž data jsme zpracovávaly, užitek. Tedy, že na základě objevení slabých míst se podaří díky určitým opatřením proces urychlit nebo zjednodušit, a tím se zjednoduší práce zaměstnanců a možná to přinese i svěží atmosféru do týmů. Budeme rády, když nás o případných změnách informujete, a taky budeme rády, když se nikdo nebude muset bát o své pracovní místo. ☺ Zuzka & Marťa
Zpracování dat z bugtrackingového systému
0
zpracování-dat-z-bugtrackingového-systému-1d7935a6e020
2018-06-14
2018-06-14 20:47:41
https://medium.com/s/story/zpracování-dat-z-bugtrackingového-systému-1d7935a6e020
false
3,033
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Martina Kovaříková
null
f422fe48f0bc
m.kovarikova
2
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-23
2018-05-23 12:19:21
2018-05-23
2018-05-23 12:31:36
6
false
ru
2018-05-23
2018-05-23 12:31:36
18
1d79c2256d77
6.682075
0
0
0
Вы можете игнорировать реальность, но вы не можете игнорировать последствия игнорирования реальности.
2
Роботы против авторов и редакторов. Когда за нами придут? Вы можете игнорировать реальность, но вы не можете игнорировать последствия игнорирования реальности. Айн Рэнд Шутки о том, что скоро нас всех заменят роботы, становятся все более реалистичными. Программы создают программы, которые создают программы. Искусственный интеллект пишет рассказ о том, как он пишет рассказ. А мы пишем статью о том, о чем и как пишет искусственный интеллект и сможет ли он делать эту работу за нас. Хорошо это или плохо — вопрос риторический. Гораздо важнее разобраться с возможностями, которые открывает использование AI в журналистике. Заменят ли боты райтеров и редакторов, и если да — то когда? А если нет, то каким будет симбиоз машины и человека? Будущее не рядом, оно уже здесь Вот несколько примеров того, что уже может искусственный интеллект: ● Сочинять стихи, которые проходят тест Тьюринга и публикуются в литературном журнале. ● Писать поп-песни. ● Создавать сценарии для короткометражек (немного странные, но все же). ● Написать рассказ от имени AI (в соавторстве с людьми) для участия в крупном литературном конкурсе в Японии. ● Написать хоррор в Твиттере в соавторстве со случайными пользователями. ● Придумать фанфик по Гарри Поттеру. ● Создавать мемы (кстати, этот AI «страдает» посттравматическим стрессовым расстройством, депрессией и другими душевными недугами). В тени громких участников разовых экспериментов по использованию AI в литературе тихо трудятся менее заметные боты. Но то, что они делают, впечатляет не меньше, чем роботостихотворения из восьми строк. Познакомимся с ними поближе. Heliograf Бот по имени Heliograf уже более года пишет для The Washington Post. Все началось с коротких заметок о ходе летних Олимпийских игр в Рио-де-Жанейро в 2016 году. Искусственный интеллект отлично справился с оперативной генерацией и публикацией коротких новостных сообщений о ходе и результатах соревнований. Говоря об использовании искусственного интеллекта в работе редакции, директор по стратегическим инициативам The Washington Post Джереми Гилберт заявил: «Олимпийские игры — идеальный повод, чтобы доказать пользу этой технологии. В 2014 году отдел спорта тратил огромное количество часов на публикацию результатов соревнований вручную. Heliograf освободит репортеров и редакторов от рутины, позволив им добавлять элементы анализа и дополнительные подробности освещаемых историй» [Перевод «Газеты.Ru»]. Эксперимент оказался очень удачным. Затем Heliograf занялся освещением любительских футбольных матчей. Едва ли можно догадаться, что автор этой статьи — искусственный интеллект: “The game began with a scoreless first quarter. In the second quarter, The Patriots’ Paul Dalzell was the first to put points on the board with a two-yard touchdown reception off a pass from quarterback William Porter.” (В первой четверти заработать очки никому не удалось. Во второй четверти Пол Далзелл из The Patriots первым принес очки команде с помощью двухъярдового тачдауна после паса от квотербека Уильяма Портера). Осенью того же года Heliograf доверили оперативное информирование о ходе дня выборов в США. Бот написал более 500 коротких сообщений — и справился с этим в разы быстрее, чем могла бы сделать группа журналистов. AI Writer Другой заслуживающий внимания проект — AI Writer. Зайдите на сайт — и вы увидите очень соблазнительное предложение: (Просто дайте нашему AI автору заголовок… И он сделает за вас всю работу: найдет нужную информацию и напишет материал. Да, это действительно так просто!). Что реально может этот инструмент? 
Unfortunately for some and fortunately for others, AI Writer does not yet measure up to a real journalist. Yes, it quickly finds useful material on a topic and picks out the important information, but it cannot yet combine it into a full-fledged text. For now its output is more a collection of quotes with links to their sources. But viewed as an assistant for gathering information, it does an excellent job. And very, very quickly. We asked AI Writer to write a text around the keywords AI writers, journalism, future, AI creativity, and here is the result: The icons on the left let you see a summary of the source material behind a quote and find sentences with similar information. You can help the artificial intelligence by clicking plus (the sentence is informative) or minus (it is not). Well then, we will keep teaching the system — and wait for the student to surpass the teacher. How does it work? Computer text generation consists of two components: ● NLP (natural language processing), ● NLG (natural language generation). NLP is the "reader" and NLG is the "writer". In a very simplified form, the process of creating a new text from several others looks like this: ● At the "reading" stage, the robot analyzes the formal structure of the text (division into paragraphs and sentences, subheadings, etc.), identifies parts of speech and the grammatical relations between them, then "reads" the meanings of words (taking the immediate context into account) and of sentences (in the wider context), and finally "grasps" the overall meaning of the text. ● Then a database is formed, on the basis of which the new text will be created. ● At the "writing" stage, the AI builds the structure of the text, then a plan for the sentences, and at the final stage constructs sentences according to that plan, using the appropriate grammatical structures and lexical units. As you can see, the robot does roughly what any other author writing a text does: it gathers, analyzes and structures information, thinks through the construction of the article, and then lays out its thoughts step by step. Only the AI does this many times faster. But so far nowhere near as well as a writer who knows the subject. And it is far less capable than a journalist of drawing conclusions and making generalizations, and (fortunately) it cannot express a position of its own. At the current stage of AI development, robots can generate coherent news articles that answer the questions who, where, when, how, and in some cases why, etc. Their abilities do not yet stretch to analyzing relationships, empathy, jokes, or unconventional moves. So, nothing at all to fear? That depends on who you are and how soon. The first to worry should be those whose work lends itself to algorithmization, i.e., work a robot will eventually be able to do better than a human: Technical translators. Ten years ago, expert translators in complex fields were worth their weight in gold. But Google Translate is learning very quickly and the quality of its translations is growing noticeably. Soon machine translation may well be more accurate than "human" translation. A similar situation will likely arise with the translation of commercial and many journalistic texts: the more clichés and fixed expressions a piece contains, the better AI will handle it. Literary translation, though, will not yield to the "building blocks" method. For now. Technical writers. Documentation can also be entrusted to artificial intelligence. Such texts have a clear structure, they break down into small blocks, and there are no hidden meanings. SEO writers.
Payback for those who wrote not for people but for robots. Already today there are services (for example, Articoolo) that can generate a unique text on a given topic with specified keywords. Robots will write for robots. Rewriters. With the task of "writing the same news in different words," artificial intelligence will cope hundreds (thousands?) of times faster and, in time, better than a human. The already-mentioned Articoolo can do this too. Proofreaders. A proofreader's job description reads like a draft algorithm for an AI. There is a high probability that the keepers of the sacred knowledge of typesetting rules and the bearers of impeccable language will soon finally give way to bots. One head is good, but a neural network is better. It is still too early to see artificial intelligence as a full replacement for live authors and editors, but it is high time to exploit its strengths. Generating breaking news. What Heliograf can do not only simplifies the newsroom's life but also raises the quality of its work. It is a matter not only of saving time, but also of how quickly news appears and of eliminating human error, the risk of which rises when covering "hot" events. Fast information gathering. Preliminary collection and processing of information for an article usually takes more time than the actual writing. Searching for sources and quotes, cross-checking data, building a chronology of events… Besides AI Writer, there are other algorithms that can quickly analyze large volumes of information and generate summaries with the key ideas of the material. Fewer tasks for the translator. Over the past couple of years Google Translate has made a quantum leap. If GT used to be practically a swear word among professional translators, it is now becoming a perfectly workable tool. This is especially true for English. A paragraph or two of medium-difficulty text may need only a few corrections. Before publishing a serious article, machine translation will require considerably deeper revision, yet the human time cost still shrinks. And, of course, the rising quality of GT translations gives journalists access to the work of colleagues in other countries, to analytical pieces and to research. Saving the proofreader's time. Proofreading tools have taken a big step forward. Not long ago, Word underlined in red or auto-replaced everything outside its rather narrow dictionary while missing real mistakes. Now is the era of self-teaching services that not only constantly expand their vocabulary but can quickly analyze context, take an author's style into account and suggest good stylistic choices. That is how, for example, the English-language Grammarly and Atomic Reach work. *** Those who look optimistically at adding artificial intelligence to the newsroom staff believe that robots will take all the routine work off journalists and bring creativity back to the profession. Indeed, the freed-up time can be used for talking with experts, finding new forms of reader engagement and, in general, more live, human interaction. Let robots handle the conveyor-belt flow of breaking news. Well, we are inclined to agree with our Western colleagues: "Today most of the content on the internet is meant to answer how-to questions. It is endless instructions, guides, tips. But when artificial intelligence knows the answers to all these questions, would it not be wiser to write about the essence of things?
This would let human effort focus not on generating formulaic, rapidly dating content, but on creating genuinely valuable texts that have every chance of outliving their creators."
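The two-stage pipeline described above (NLP as the "reader", NLG as the "writer") is easiest to see in miniature on the reading side. Below is a minimal Python sketch of frequency-based sentence scoring, the kind of "pick out the informative sentences" step that AI Writer's quote selection resembles. It is our own illustration, not Heliograf's or AI Writer's actual code, and the function name top_sentences is hypothetical:

import re
from collections import Counter

def top_sentences(text, k=3):
    # "Reading" stage: split into sentences and count word frequencies.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))
    # Score each sentence by the average frequency of its words, so
    # sentences built from topic-central words rank higher.
    def score(s):
        tokens = re.findall(r'\w+', s.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)
    best = sorted(sentences, key=score, reverse=True)[:k]
    # Return the winners in their original order, like a quote digest.
    return [s for s in sentences if s in best]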
Robots vs. Authors and Editors: When Will They Come for Us?
0
роботы-против-авторов-и-редакторов-когда-за-нами-придут-1d79c2256d77
2018-05-23
2018-05-23 12:31:38
https://medium.com/s/story/роботы-против-авторов-и-редакторов-когда-за-нами-придут-1d79c2256d77
false
1,519
null
null
null
null
null
null
null
null
null
Robots
robots
Robots
4,990
Giraff.io
null
fa7f9008e3af
giraff.io
82
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-25
2018-05-25 02:57:13
2018-08-17
2018-08-17 14:02:01
1
false
en
2018-08-17
2018-08-17 14:02:01
4
1d79cfad5f6
3.116981
0
0
0
Thinking of a Utopia beyond Virtual Agents
5
What I Dream Of, Others Are Afraid Of Thinking of a Utopia beyond Virtual Agents Photo by Nirzar Pangarkar on Unsplash Hey, I am home! Hey, tell me! How was your day? Oh, just great… I did pretty good, I think. Ha ha… I am sure you did! So, do you think your proposal is going through? Yeah, well, not 100%, but… you know… I got a good feeling about it. Oh, you should absolutely, I mean you worked really hard on that. And if the presentation went well, you can be optimistic. I am trying to, ha ha! I will be back in a couple of minutes. I have to reboot myself for the update. Sure, no problem. Be safe! I once watched this movie and since that day I have had this dream. — That vision of an artificial consciousness. Emotions — Decisions — Memories. Conscience — Ethics — Morality. Imagine some future where you are able to go into a store and buy something specific and new. It is not something you just buy right away. — You would think about it a long time before acquiring it, since it would change your life deeply — for good. You go into the store. You buy it. Something which has never been there before in the history of mankind… until this day. Some people around you look at you with a mixture of respect and fear in their eyes. You see yourself walking out of the store. Walking towards the next underground station to get back home. After boarding the train, you stand there in the middle of the carriage between the two doors. You grab a handle to be safe during the trip, so as not to fall as the train accelerates. Staring into the emptiness of thoughts, you just stand there and think of nothing. Yet thrilled and humbled by what is about to come. At your station, you alight from the train, looking around to find out where you can exit the station. — You find yourself standing on the escalator. Lost in your thoughts, you keep walking. Right at your doorstep, you are looking for the keys in your pocket, too excited to catch them right away. Once you get in, you throw your bag on the couch and unpack it. It is booting… Hey, I am MiaAI — the world's first artificial consciousness. It's a computer talking, thinking, feeling like a human being. What does that change? How is it going to affect your life? We are already personifying "things" around us starting in childhood. Our favorite fluffy bear is no toy, he can think and talk. We are sure about that. Later in life, it's other things. Like our TV, who is pissing us off. Our laptop, our iPhone. The little but distinct difference is that we are still aware of the fact that those things are actually just "things". What if we can't tell the difference anymore? When a "thing" becomes so human you can't even tell the difference. Our relationship with things changes entirely, for good. I am currently preparing for my upcoming Bachelor's thesis. Working with the UX guy mentioned in my previous story led me to a topic where I will be working very closely with AI, Machine Learning and Sentiment Analysis. As you might have already guessed, I am deeply excited about it and I am highly motivated! During my preparations I configured my first chatbot with Dialogflow and the Google Cloud Natural Language API. It can't respond to much so far, but I am already treating this little thing like a human being since I taught it some funny responses to basic inputs. It's a crazy world. Everything will change. Big shifts in business. Big shifts in society. Data Science is everywhere, and AI became a standard word to say in every third sentence.
Everybody wants in, to get a piece of the pie, or at least to keep catching crumbs of it to survive at a later stage. Where are we heading? Terms such as Singularity and Super-intelligence are getting more tangible than ever, leaving behind the state of philosophical constructs, dystopian movie plots, plain theories. The field of AI is a thrilling one. It serves my needs for deep thinking and dreaming in the same manner. Let's go wild and dream. Let's dream of something whose consequences we could never unveil. That is what makes it interesting in the first place. Whether we are heading towards a reality harmful to mankind, I don't know. Nobody does. It's just that I am excited to find out. Thank you very much for reading my story! Feel free to leave a comment or a clap or two.
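For readers curious about the sentiment-analysis piece mentioned above: a minimal call to the Google Cloud Natural Language API from Python might look like the sketch below. This is only an assumed setup (it presumes the google-cloud-language client library is installed and application credentials are configured); it is not code from the thesis project:

from google.cloud import language_v1

def sentiment_score(text):
    # Ask the Natural Language API for document-level sentiment,
    # returned as a score from -1.0 (negative) to 1.0 (positive).
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_sentiment(request={"document": document})
    return response.document_sentiment.score

print(sentiment_score("Hey, I am home! How was your day?"))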
What I Dream Of, Others Are Afraid Of
0
what-i-dream-of-others-are-afraid-of-1d79cfad5f6
2018-08-17
2018-08-17 14:02:01
https://medium.com/s/story/what-i-dream-of-others-are-afraid-of-1d79cfad5f6
false
773
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Eric J. Adam
Coffee nerd, who is into AI, UX & Entrepreneurship. You see where this is going.
b5515c8e9ad3
ericvancoffee
24
19
20,181,104
null
null
null
null
null
null
0
A simple linear equation is y = wx + b, where in standard notation y = y-axis value, x = x-axis value, w = slope, b = y-intercept; in machine learning, y = prediction, x = feature, w = weight of the feature, b = bias.
3
7f60cf5620c9
2018-03-07
2018-03-07 11:52:10
2018-03-14
2018-03-14 10:39:30
4
false
en
2018-07-10
2018-07-10 11:10:48
22
1d79ff96e29b
4.390566
18
0
0
Before you begin your journey in Machine Learning (ML) and Deep Learning (DL) you should at least know the basic math behind what you will…
5
A Dummies Guide to the Math of AI Before you begin your journey in Machine Learning (ML) and Deep Learning (DL) you should at least know the basic math behind what you will be doing. Although you can still dive right into algorithms and implementation of code, you will have a severe handicap against your data science peers. Everyone is at some level daunted by math. But once you understand its real-world application to data science, it becomes inherently more fun and useful. The mathematics for machine learning can be divided into 3 main categories: Linear Algebra, Calculus, and Statistics and Probability. Linear Algebra Question: Why study linear algebra? Answer: By using linear algebra we can solve "linear equations". Linear equations can be represented in the form of matrices. A matrix in machine learning is how we represent our features. We can represent data in 0 dimensions (scalar), 1 dimension (vector), 2 dimensions (matrix) and n dimensions (tensor). In standard notation a linear equation is y = wx + b, where y is the y-axis value, x the x-axis value, w the slope and b the y-intercept; in machine learning, however, y is the prediction, x the feature, w the weight of the feature and b the bias. Question: Why do we need to study logarithms? Answer: In machine learning you will have to deal with big data (millions of rows and countless features). Logs help us express large numbers efficiently. And most importantly they will help us solve exponential equations like the sigmoid (known as an activation function in deep learning). After studying linear algebra you will know how to solve linear equations, represent your data in dimensions and understand logarithms. Tutorials For beginners (high school math only): Start with this tutorial. For intermediate (college level math only): Start with this tutorial. Fun fact: The word algebra comes from Muḥammad ibn Mūsā al-Khwārizmī (ca. 780–ca. 850), who used it to solve linear and quadratic equations. Calculus Calculus tells us how things change. In machine learning we make predictions, and calculus helps us make them. In calculus you will need to study derivatives, partial derivatives, gradients and the chain rule. Question: Why study derivatives? Answer: A derivative simply shows the rate of change; the amount by which a function is changing at one given point. In machine learning we need to find the minimum point of our function, where the prediction we make is optimal. Derivatives help us do that. Partial derivatives are very similar, except that they are taken with respect to one variable while the others are held fixed. Question: Why study gradients? Answer: A gradient is simply the slope of a graph. In machine learning we use a powerful optimization technique called gradient descent, paired with backpropagation in deep learning. Gradient descent, in simple terms, helps us find the local minimum, which reduces our prediction error. A great visual tutorial on backpropagation can be found here. Tutorials For beginners (high school math only): Start with this tutorial. For intermediate (college level math only): Start with this tutorial. Fun fact: Isaac Newton and Gottfried Leibniz independently discovered calculus in the late 17th century. Both died disputing who came up with it first. Statistics and Probability In machine learning, whether the problem is regression or classification, your algorithm will compute a probability from the features. In order to interpret what the accuracy means we need to study stats and probability. Question: Why study statistics? Answer: Stats is a powerful tool for data scientists; you will learn how to analyze and visualize data. Stats is mainly used in the data preprocessing stage. Types of variables: Discrete variables: Variables which can be counted (e.g. number of lions). Continuous variables: Variables which can be measured (e.g. height, weight). Question: Why use summary statistics? Answer: So we can quickly summarize the most important points of our data. Summary statistics include central tendency (mean, median, mode), standard deviation, skewness, kurtosis, range, interquartile range and charts (histogram, scatter plot, pie chart, line chart etc.). Central Tendency: Describes the central tendency of the data via mean, median, mode. Mean: Sum of all observations / number of observations. Median: The middle observation. Mode: The most common observation. Range: The difference between the largest and smallest observations. Interquartile Range: The range of the middle 50% of observations (Q3 - Q1). Variance: The average squared difference of observations from the mean. Standard deviation: Square root of the variance. Question: Why is standard deviation so important? Answer: We use standard deviation to measure how our data is distributed. The greater the spread, the greater the standard deviation. Hypothesis testing: In data science, you always start with a hypothesis. Your goal is to reject the null hypothesis. Type 1 & 2 errors: In many settings Type 2 errors (false negatives) are considered more dangerous than Type 1 errors (false positives). Skewness: Measures the lack of symmetry in our data. Symmetrical data has perfect symmetry on both sides. Kurtosis: Measures whether the data are heavy-tailed or light-tailed relative to a normal distribution. Normal distribution: Values plotted on a graph which are bell shaped. A great primer on the normal distribution and its importance can be found here. If data looks normal, use z, t, ANOVA, chi-squared or F tests. If data is skewed, use chi-squared or F tests. Question: Why use probability? Answer: Probability simply means the chance of an event happening. We use the range of 0 - 100% to describe the chance of a particular event happening, 0 being no chance and 100 being absolute certainty. Conditional Probability: Simply means the chance of event A happening, given that event B has already happened. Tutorials For beginners (high school math only): Start with this tutorial. For intermediate (college level math only): Start with this tutorial. Fun Fact: Al-Kindi developed the first code-breaking algorithm based on frequency analysis. More Dummies Guides: A Dummies Guide to Machine Learning A Dummies Guide to Neural Nets A Dummies Guide to the Math of AI A Dummies Guide to Python A Dummies Guide to Data Normalization A Dummies Guide to Gradient Descent and Backpropagation
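The three strands above meet in gradient descent. As a worked illustration (our own sketch, not code from the article), here is plain-Python gradient descent fitting the linear model y = wx + b: the partial derivatives of the squared error with respect to w and b tell us which way to nudge the weight and the bias.

# Fit y = wx + b by gradient descent (w = weight, b = bias).
xs = [1.0, 2.0, 3.0, 4.0]   # feature values
ys = [3.0, 5.0, 7.0, 9.0]   # targets generated by w = 2, b = 1

w, b, lr = 0.0, 0.0, 0.01   # start at zero, small learning rate
for step in range(5000):
    n = len(xs)
    # Partial derivatives of mean squared error with respect to w and b.
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * dw            # step downhill along the gradient
    b -= lr * db

print(round(w, 2), round(b, 2))   # approaches w = 2.0, b = 1.0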
A Dummies Guide to the Math of AI
303
mathematics-of-machine-learning-deep-learning-for-dummies-1d79ff96e29b
2018-07-10
2018-07-10 11:10:48
https://medium.com/s/story/mathematics-of-machine-learning-deep-learning-for-dummies-1d79ff96e29b
false
978
Sharing concepts, ideas, and codes.
towardsdatascience.com
towardsdatascience
null
Towards Data Science
null
towards-data-science
DATA SCIENCE,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,BIG DATA,ANALYTICS
TDataScience
Machine Learning
machine-learning
Machine Learning
51,320
Ahsan Anis
Data Scientist, Writer, Editor
2929d6bdecfc
lahorekid
161
18
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-13
2018-03-13 01:57:08
2018-03-13
2018-03-13 02:02:20
1
false
en
2018-03-13
2018-03-13 02:02:20
23
1d7a271ef58c
2.935849
4
0
0
How to Make A.I. That’s Good for People
5
A.I. Articles of the Week, Mar. 2018 #2 How to Make A.I. That’s Good for People By FEI-FEI LI Google Is Helping the Pentagon Build AI for Drones Google has partnered with the United States Department of Defense to help the agency develop artificial intelligence for analyzing drone footage, a move that set off a firestorm among employees of the technology giant when they learned of Google’s involvement. 14 WAYS MACHINE LEARNING CAN BOOST YOUR MARKETING COMMON APPLICATIONS OF MACHINE LEARNING IN MARKETING Ubisoft is using AI to catch bugs in games before devs make them The gaming company’s Commit Assistant AI tool has been trained to spot when programmers are about to make a mistake AI’s dirty little secret: It’s powered by people There’s a dirty little secret about artificial intelligence: It’s powered by hundreds of thousands of real people. The 7 best deep learning books you should be reading right now In today’s post I’m going to share with you the 7 best deep learning books (in no particular order) I have come across and would personally recommend you read. Inside the Chinese lab that plans to rewire the world with AI Alibaba is investing huge sums in AI research and resources — and it is building tools to challenge Google and Amazon. Most Americans Already Using Artificial Intelligence Products Nearly nine in 10 Americans (85%) say they currently use at least one of six devices, programs or services that feature elements of artificial intelligence (AI). Use of these products ranges from 84% of U.S. adults using navigation applications to 20% using smart home devices such as self-learning thermostats and lighting. The tyranny of algorithms is part of our lives: soon they could rate everything we do Credit scores already control our finances. With personal data being increasingly trawled, our politics and our friendships will be next The New U.S.-China Rivalry: A Technology Race As the United States and China look to protect their national security needs and economic interests, the fight between the two financial superpowers is increasingly focused on a single area: technology. Machine learning or laughing? Amazon’s Alexa is freaking people out with unprovoked chuckle It’s one thing to believe that Amazon’s Alexa is constantly listening to us, but quite another to worry that she’s laughing at what she hears. Self-Driving Truck Loses Its Remote Connection, But Not Its Shot at Milestone Achievement Same driver, different vehicle: Bringing Waymo self-driving technology to trucks Now we’re turning our attention to things as well. Starting next week, Waymo will launch a pilot in Atlanta where our self-driving trucks will carry freight bound for Google’s data centers. Machine Learning for Auto-Tuning HPC Systems “On today’s episode of “The Interview” with The Next Platform we discuss the art and science of tuning high performance systems for maximum performance — something that has traditionally come at high time cost for performance engineering experts.” Explained Simply: How an AI program mastered the ancient game of Go This is about AlphaGo The Building Blocks of Interpretability Interpretability techniques are normally studied in isolation. We explore the powerful interfaces that arise when you combine them — and the rich structure of this combinatorial space. AI will be the art movement of the 21st century We are on a mission of discovery to find a new way to express ourselves with our increasingly sophisticated partners: to paint, write, sculpt, and make beautiful music. Together. 
A List of Chip/IP for Deep Learning (kept updated) Machine Learning, especially Deep Learning technology, is driving the evolution of artificial intelligence (AI). In the beginning, deep learning was primarily a software play. Starting from 2016, the need for more efficient hardware acceleration of AI/ML/DL was recognized in academia and industry. This year, we have seen more and more players, including the world's top semiconductor companies as well as a number of startups, and even the tech giant Google, jump into the race. I believe it could be very interesting to look at them together. So I have built this list of AI/ML/DL ICs and IPs on Github and keep it updated. If you have any suggestions or new information, please let me know. Weekly Digest Feb. 2018 #1 Weekly Digest Feb. 2018 #2 Weekly Digest Feb. 2018 #3 Weekly Digest Feb. 2018 #4 Weekly Digest Mar. 2018 #1
A.I. Articles of the Week, Mar. 2018 #2
6
a-i-articles-of-the-week-mar-2018-2-1d7a271ef58c
2018-05-29
2018-05-29 08:44:55
https://medium.com/s/story/a-i-articles-of-the-week-mar-2018-2-1d7a271ef58c
false
725
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Shan Tang
Since 2000, I worked as engineer, architect or manager in different types of IC projects. From mid-2016, I started working on hardware for Deep Learning.
9394f40c9343
shan.tang.g
165
30
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-08
2018-02-08 18:09:24
2018-02-08
2018-02-08 18:14:21
0
false
en
2018-02-08
2018-02-08 18:14:21
2
1d7bad44ad04
2.562264
0
0
0
It’s 2018 and we’ve finally moved past the point of having to define those pervasive legal tech terms we hear when referring to innovation…
5
AI and Analytics: Building a Common Understanding It’s 2018 and we’ve finally moved past the point of having to define those pervasive legal tech terms we hear when referring to innovation or anything around technology. Or have we? I just returned from Legalweek 2018 in New York City and I noticed, not for the first time, that key legal tech terms are bandied about irresponsibly — especially these two: “artificial intelligence” and “analytics.” From the session rooms to the exhibit hall and beyond, much of the misuse of terminology came from the 50 or more e-discovery and case management/document management companies on display, recklessly throwing words around just to make people sit up and listen. I get it: it’s a crowded marketplace and if you aren’t as cutting-edge as the next guy, you aren’t a contender. Even if your product may be headed toward being AI-backed, or “next generation,” making claims too soon (and inaccurately) is dangerous for a number of reasons. First, if you claim your product is driven by AI, for example, and your technology doesn’t do what you claim it does, that person’s decision-making is now guided by the influence of your sales pitch; they will come to see AI in general as a non-starter. This benefits no one. Throwing jargon around haphazardly actually puts both users and legal tech companies in a difficult position. Misuse of legal tech terminology means that, for example, when a good AI product comes out, a law firm may say “we’ve already seen AI — it’s no good,” when in fact, the product itself was no good. Let’s not feed the confusion that already exists in legal circles around AI. Let’s develop a common understanding of what AI is, what it can accomplish and where it adds value. Same for “analytics.” To engender continued growth of the legal tech market, and foster better understanding of these terms, it’s important for the leaders in the industry to be precise with their terminology — it will make it harder for the few “bad actors” to use those words and will inherently hold them to a higher standard. It is not only necessary for the industry to better define these terms, but also to refer to these technologies where they actually apply. To help alleviate the confusion for anyone reading this piece, below are some descriptors that I think explain clearly, in lay terms, what these technologies mean: Artificial Intelligence: My go-to definition of AI is: “the science of making computers perform tasks that require intelligence when done by humans. AI-powered software can be capable of learning, reasoning, understanding written language and solving complex or ‘fuzzy’ problems.” I have used this definition time and time again, as it leaves less room for interpretation. If you have a better definition, though, I’d love to hear it. (More on this topic in a future post.) Legal Analytics: This is another legal tech term used with great frequency, and I find this one the most confusing. “Legal analytics involves mining data contained in case documents and docket entries, and then aggregating that data to provide previously unknowable insights into the behavior of the individuals (judges and lawyers), organizations (parties, courts, law firms), and the subjects of lawsuits (such as patents) that populate the litigation ecosystem,” according to Law Technology Today. Broadly, legal analytics can quite literally mean anything. Let’s all stop clumsily using these words, which makes it harder for users to understand their true value. 
As a legal ecosystem — of legal services providers, law schools, law firms and legal departments — we need to come to universal agreement on what these important terms mean, use them properly and have a conversation about their value across the sector. Importantly, being as precise as possible with terms will also help alleviate confusion and misunderstandings in the market. Judicial analytics — which is what Gavelytics offers — is narrowly focused on providing actionable data on civil superior court judges, whereas legal analytics could mean almost anything. See the difference?
AI and Analytics: Building a Common Understanding
0
ai-and-analytics-building-a-common-understanding-1d7bad44ad04
2018-06-07
2018-06-07 21:47:39
https://medium.com/s/story/ai-and-analytics-building-a-common-understanding-1d7bad44ad04
false
679
null
null
null
null
null
null
null
null
null
Law
law
Law
20,355
Rick Merrill
A former Big Law litigator, Rick Merrill is the CEO of Gavelytics, a legal tech platform that delivers actionable data on California civil trial judges.
80865b8b09e9
gavelytics
9
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-10
2018-09-10 11:20:27
2018-09-10
2018-09-10 11:33:15
9
false
en
2018-09-30
2018-09-30 21:30:09
23
1d7c2c6cb3ce
9.203774
7
0
0
In this episode, we collaborated with Tony Naser, owner and chef of Crush Pizza in Boston to make human-AI collaborated pizzas!
5
Human-AI Collaborated Pizza In this episode, we collaborated with Tony Naser, owner and chef of Crush Pizza in Boston, to make human-AI collaborated pizzas! TL;DR? Check out the Youtube video of this project: Human-AI Collaborated Pizza made at Crush Pizza, Boston. Note: This article originally appeared at How to Generate (Almost) Anything. For other episodes in this project, visit this Medium post. Meet the AI-pizza team. From left to right: Tony, George, Pinar and Deeksha. We collected hundreds of artisan pizza recipes from various food blogs and recipe websites, and then we trained a recurrent neural network to generate new ones. Some of the recipes we ended up with were really weird (like shrimp, jam & Italian sausage pizza), and some of them actually looked yummy (like sweet potato, beans & brie pizza). Note: to keep the recipes consistent across different sources (and to make our chef focus more on the ingredients rather than the instructions), we reduced the instructions about how to make the pizza dough to the phrase '1 pizza crust'. Most of the time, the recipes were not complete (for instance, some of them were missing sauce or base cheese), or were calling for non-existent ingredients the AI dreamed up (like 'snipped caramel cheese' or 'wale walnut ranch dressing'), which was definitely calling for a human-AI collaboration! So we wondered: can a pizza chef look at the recipes and complete what is missing (or add new ingredients using his own expertise) to create something even better? With that in mind, we contacted Tony, owner and chef of Crush Pizza, one of the best artisan pizza places in Boston. Pinar was particularly obsessed with cooking the human-AI collaborated pizza in a traditional Italian wood-fired oven, so Crush Pizza's blazing 900-degree wood-fired oven was a perfect fit. Tony was really excited to collaborate with the AI, and invited us to his place on a hot Saturday afternoon. We ended up putting on some aprons and experiencing the amazing world of pizza making first hand! Here are five pizzas that we cooked with Tony: Human-AI collaborated pizzas we made with Tony at Crush Pizza We videotaped the whole process (via our mobile phones – so excuse us for the potato quality), which you can watch below (if you are up for it, there is a longer version (25 mins) with more detail). Watching the video with closed captions on is highly recommended. And please subscribe to our channel if you want to get updates about new episodes! Human-AI Collaborated Pizza made at Crush Pizza, Boston. Note: the intro tune in this video is generated by our music expert AI, Halleck, from Episode 1 of How to Make (Almost) Anything. Please give him some love, and use some of his tunes to create your own human-AI collaborated music! Pizza #1: Blueberry, Spinach and Feta Pizza 1 pizza crust Spinach leaves, chopped 1/4 cup chunky cranked red onion 1 tablespoon crumbed feta cheese 1/2 cup fresh blueberries 1 head garlic, roasted 1/2 cup pizza sauce 1 cup chopped cilantro 1 wedge of lime Freshly ground black pepper to taste Tony and Deeksha while making the Blueberry, Spinach and Feta Pizza. Bottom right: the human-AI collaborated pizza in the 900-degree oven! You can watch how we made this pizza here and the tasting here. Human-AI collaboration: As you can see from the recipe above, some of the ingredients were a little weird (like 'chunky cranked' red onion or 'wedge of' lime). Aside from those, this recipe was calling for a pizza sauce (which is a red sauce).
However, Tony thought white sauce would go better with the ingredients and decided to do a white pizza. Also, he decided to add a little bit of spice (crushed red peppers), since it would contrast with the blueberries and add more flavor. Result: Overall, we think it was an interesting pizza. However, the blueberries kind of melted away and most of us couldn't really taste them! Pizza #2: Bacon, Avocado and Peach Pizza 1 pizza crust 1 medium sized red peppers, sliced 1 cup thinly sliced red onion 1 ½ cups crispy bacon, diced 1/2 avocado, cut in slices 1 peach, cut in half 1 cup parsley, minced 1/2 teaspoon hot sauce kosher salt Tony and George while making the Bacon, Avocado and Peach Pizza. Bottom right: the human-AI collaborated pizza in the 900-degree oven! You can watch how we made this pizza here and the tasting here. Human-AI collaboration: This time, the ingredients were a little bit better, except that the recipe had no sauce. Tony intervened and decided to use olive oil, and a little bit of parmesan cheese, to create the base. Result: It was a great pizza! (Especially for avocado fans like Pinar.) It gave us a taste and feeling similar to Hawaiian pizzas. George said that he would order this at a restaurant, yay! Pizza #3: Shrimp, Jam and Italian Sausage Pizza 1 pizza crust 3 ounces grilled shrimp, cooked and sliced 6 spicy Italian sausage, cooked, crumbled 1 ounce reduced-fat goat cheese 1 cup cherry tomatoes, quartered 1 medium-sized jam 1/3 cup wale walnut ranch dressing 1/8 cucumber, chopped 1/4 tsp. Pepper kosher salt and freshly ground pepper Tony and Pinar while making the Shrimp, Jam and Italian Sausage Pizza. You can watch how we made this pizza here and the tasting here. Human-AI collaboration: This recipe was definitely the weirdest of all. In addition to combining jam with shrimp and sausage, the AI also dreamed up some non-existent ingredients such as 'wale walnut ranch dressing'. A quick investigation of the training data revealed a few recipes containing jam (mostly from dessert pizzas), so it seems the AI picked it up from there. We discussed with Tony the possibility of using regular 'ranch dressing' instead of the 'wale walnut' one, but he ruled it out, saying that the ranch dressing would ruin the flavors of the shrimp and sausage. This recipe had no sauce either, so Tony decided to use garlic sauce with parmesan cheese to add a little bit more flavor. He also decided to use mozzarella to get a 'stringy' pizza effect, and sprinkled a little bit of rosemary on top (which goes well with shrimp, according to him). Result: We didn't really see it coming, but it was definitely the BEST pizza of all (so good that Tony is considering putting it on his menu!). The jam really went well with the shrimp and Italian sausage. After the tasting, Tony also decided to put some arugula on top, which really elevated the taste as well (human-AI collaboration, yay!). Pizza #4: Sweet Potato, Beans & Brie Pizza 1 pizza crust 1/2 cup fat-free refried beans ½ medium sweet potatoes 3/4 cup shredded cheddar cheese 6 ounces of brie 1/2 sweet White onion 2 sliced salami 3 garlic cloves dried and chopped cilantro 2 tbsp pesto sauce 1 cup aged Greek yogurt Tony and Pinar while making Sweet Potato, Beans & Brie Pizza You can watch how we made this pizza here and the tasting here. Human-AI collaboration: As sweet potato (and brie cheese) fans, many of us were really excited to taste this pizza!
Tony followed most of the recipe, except for the salami (since we wanted to make a vegetarian pizza for Deeksha). Result: At first, we were a bit disappointed to realize that the pizza was a bit bland – we couldn't really sense any particular flavor. After the initial tasting, Tony followed the AI's salami suggestion and voila! It really elevated the taste. The second alteration Tony made after the tasting was to add balsamic sauce. It indeed gave a much better flavor! Pizza #5: Apricot, Pear, Cranberry & Ricotta Pizza 1 pizza crust 1/2 cup apricot preserves ¾ cup dried cranberries 1 pear, chopped 1/4 cup ricotta cheese 1 c. marinara sauce 1 can of tomato sauce ¼ t. chopped parsley 1/2 teaspoon chili powder 1 tablespoon fresh thyme 1 oz. of cinnamon Tony and George while making the Apricot, Pear, Cranberry & Ricotta Pizza. You can watch how we made this pizza here and the tasting here. Human-AI collaboration: This time, it seems we chose more of a dessert pizza – except that it was calling for a marinara sauce (and a can of tomato sauce, because why not?) and chili powder. Luckily, our human expert Tony quickly ruled out the marinara and tomato sauces and went with an olive oil and parmesan cheese base. We couldn't find any dessert pizza recipes that called for chili powder, and upon a quick investigation it seems the AI picked up the 'chili powder' ingredient from Mexican pizzas. But we were really intrigued to taste a 'hot' dessert pizza! Result: It was definitely a great pizza! After the tasting, Tony added a drizzle of habanero-infused honey on top, which seemed to make it a truly (hot!!) dessert pizza. Do you want to collaborate with the AI? Try out some of the pizza recipes we made, or use the recipes below to make a new pizza that was dreamed up by the AI. Send us pictures and videos (and your suggestions for the recipes), and we will add them here! Spinach, Bacon & Fig Pizza 1 pizza crust 1 cup baby leaf spinach, finely chopped 5 slices of bacon, crispy, chopped 1/2 anchovic figs ½ cup crumbled bleu cheese dressing 4 cups mozzarella, grated ½ cup Monterey Jack cheese 1 Tbsp. fresh thyme, chopped kosher salt and black pepper to taste Beef, Salmon, Olives & Zucchini pizza 1 pizza crust 1 cup ground beef, cooked salmon, cut into thin slices ½ cup Kalamata olives, pitted and sliced 1 medium zucchini, split 1/2 cup sugar snick or shredded mozzarella 1/2 tsp. chopped basil Artichoke, Pesto & Mozzarella Pizza 1 pizza crust ½ medium wined artichokes 2 tablespoons pesto 1 cup finely shredded mozzarella cheese Finely grated Parmesan cheese 1/2 cup sundried shredded sweet parsley 1/2 teaspoon hot sauce Kosher salt, to taste Chicken, Basil & Blueberry Pizza 1 pizza crust 1 cup cooked chicken breasts 1/4 cup fresh basil 1 cup baby greens 1 pint fresh blueberries, quartered 1/4 teaspoon chipotle powder 1/4 teaspoon chili powder 1/4 teaspoon ground cumin 1/4 teaspoon extra virgin olive oil Freshly ground black pepper Kosher salt Steak, Bacon & Zucchini Pizza 1 pizza crust 3 ounces lean steak, shredded 5 strips of bacon, cooked and sliced 1 medium zucchini, sliced 1 cup cherry tomatoes, quartered 1 cup shredded cheddar cheese 1/2 cup pizza sauce ½ cup chunky salsa 1 cup brown sugar Beef, Arugula & Artichoke Pizza 1 pizza crust 6 oz. beef 1 1/2 artichoke hearts 4 tablespoons arugula 2 oz.
blue cheese 1/3 cup shredded cheddar cheese Freshly grated Parmesan cheese 1 cup barbecue sauce 1/4 teaspoon chili flakes 1/2 teaspoon curry powder 1/4 teaspoon garlic powder Prosciutto, Avocado & Almonds Pizza 1 pizza crust 1/4 cup slivered almonds Avocado, peeled and dried 2 slices prosciutto 3 eggs, beaten and scrambled ¼ cup mixed baby greens 1 red onion, thinly sliced 3 small cheddar cheese 2 cups cilantro 1/2 tomato ½ lemon Broccoli, Figs & Pineapple Pizza 1 pizza crust fresh broccoli, chopped 3 tablespoons fresh figs, sliced thin slices 1/2 cup whole milk ricotta ½ cup pineapple juice Peanuts, cooked Zucchini, Cheddar & Caramel Pizza 1 pizza crust 1/2 large zucchini, spiralized lengthwise or cheese, shredded 2 1/2 cups shredded cheddar cheese sun-dried tomatoes, sliced in half fresh garlic, minced 1 lemon snipped caramel cheese, shredded 1/4 cup chunky cranked broth Want more? Just send us an e-mail! Special Thanks to Crush Pizza Tony and his crew were super welcoming to us, and made this amazing experiment possible. Please give them some love by visiting their place (107 State St, Boston) or ordering online! The famous 900-degree wood-fired ovens of Crush Pizza. Special Thanks to Strono Strono is the chef AI that generated the pizza recipes in this episode. In addition to sharing a similar eye condition with his namesake, Strono is fond of experimenting and likes to suggest rare (or non-existent) ingredients like anchovic figs or snipped caramel cheese (probably just to baffle his human collaborators). Note: Strono's profile picture is also dreamed up by an AI. Edit #1: Testimonials What does the Internet think about the AI's dreamed-up pizza recipes? Here are a few selected comments we compiled about our project: "it sounds like their computer has been smoking and inhaling the Marijuana Jane to come up with some of these combinations" [source] "i would try a sweet potato, beans, brie and salami pizza. the rest of them sound disgusting" [source] "Those aren't pizza's, they're abominations." [source] "First they made pizza, then they rebelled" [source] "I never thought I would say this but after looking at those pizzas we have taken AI too far. This is a sign that AI has a deep seeded hatred for mankind." [source] "You put blueberries on my pizza you gonna get slapped." [source] "…..this fucking thing must be destroyed." [source] "I feel like these are fucked up, but maybe the ole AI has some Bender cooking knowledge and this shit will be dope as hell." [source] Edit #2: We changed the title to "Do Androids Dream of Electric Pizza?", courtesy of PMQ Magazine's hilarious video on our work.
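The article does not publish the code behind Strono, the recipe-generating network, so the following is only a minimal character-level RNN sketch of the same idea in Keras, under stated assumptions: recipes.txt is a hypothetical file of concatenated recipe text, and the corpus is assumed small enough to one-hot encode in memory.

import numpy as np
from tensorflow import keras

corpus = open("recipes.txt").read()   # hypothetical training corpus
chars = sorted(set(corpus))
idx = {c: i for i, c in enumerate(chars)}

seq_len = 40
# Each 40-character window is trained to predict the next character.
X = np.zeros((len(corpus) - seq_len, seq_len, len(chars)), dtype=np.float32)
y = np.zeros((len(corpus) - seq_len, len(chars)), dtype=np.float32)
for i in range(len(corpus) - seq_len):
    for t, c in enumerate(corpus[i:i + seq_len]):
        X[i, t, idx[c]] = 1.0
    y[i, idx[corpus[i + seq_len]]] = 1.0

model = keras.Sequential([
    keras.layers.LSTM(128, input_shape=(seq_len, len(chars))),
    keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=128, epochs=20)
# Sampling new "recipes" then means feeding a seed string through the
# model one character at a time and drawing from the softmax output.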
Human-AI Collaborated Pizza
114
human-ai-collaborated-pizza-1d7c2c6cb3ce
2018-09-30
2018-09-30 21:30:09
https://medium.com/s/story/human-ai-collaborated-pizza-1d7c2c6cb3ce
false
2,121
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
How to Generate (Almost) Anything
Can AI inspire us to push the boundaries of creativity? #ai #deeplearning #art #creativity #mit www.howtogeneratealmostanything.com
175ae28f4ebd
howtogeneratealmostanything
30
0
20,181,104
null
null
null
null
null
null
0
null
0
32881626c9c9
2018-07-02
2018-07-02 10:19:46
2018-07-06
2018-07-06 17:13:58
2
false
en
2018-07-06
2018-07-06 17:16:36
4
1d7ced4cacb5
1.904088
2
0
0
“How relevant is your response?” A simple question to ask, but figuring out the answer can easily lead you down a rabbit-hole. Before we go…
5
Precision, Recall, and Relevance "How relevant is your response?" A simple question to ask, but figuring out the answer can easily lead you down a rabbit-hole. Before we go down that though, let's first take a look at what "Relevant" actually means. "Relevance is the concept of one topic being connected to another topic in a way that makes it useful to consider the second topic when considering the first." — Wiki Or, to put it differently, Relevance is all about connections, between ideas, concepts, topics, communications, whatever. In short, when we want to know how relevant something is, what we're actually asking is how connected it is to something else we are interested in. Relevance becomes, well, relevant when we start thinking about accuracy in responses. For example, let's say we're talking about cats (because, cats!). If you ask me how many pictures of cats I've tweeted, and I respond 211 (note: not an accurate number), then how do you measure my accuracy? Well, there are a number of ways, but in this case, let's focus on precision and recall. Precision is a measure of the quality of my response, i.e., how much of my response was correct. If you looked at the 211 tweets, and said "Hey, 200 of these are cats, but 11 of them are pictures of pizza!", then my precision was 200/211. (Only 200 of my 211 were cats.) In short, think of precision as a count of how many of the selected items were relevant (in math: Precision = True Positives / (True Positives + False Positives)). Recall, OTOH, is a measure of quantity in my response, i.e., how close was I to the actual answer. If you looked at all of my tweets, and found 727 more pictures of cats (that I, for some reason, hadn't counted 🙄), then my recall was 200/(727+200) = 200/927 (200 of my 211 were cats, and there were 727 other cats that I didn't count). In short, think of recall as a count of how many of the relevant items were selected (in math: Recall = True Positives / (True Positives + False Negatives)). Precision and Recall are ridiculously relevant in Machine Learning (after all, they used to be huge in pattern recognition…). For much more about this, take a look at the Wiki page, and then just google around… (This article also appears on my blog)
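Those two formulas drop straight into code. Here is a tiny sketch (our own, in Python) that reproduces the cat-tweet numbers from the article:

def precision(tp, fp):
    # Quality: what fraction of the selected items were relevant.
    return tp / (tp + fp)

def recall(tp, fn):
    # Quantity: what fraction of the relevant items were selected.
    return tp / (tp + fn)

# 211 tweets claimed: 200 cats (true positives), 11 pizzas (false positives),
# plus 727 cat pictures that went uncounted (false negatives).
print(precision(200, 11))   # 200/211, about 0.95
print(recall(200, 727))     # 200/927, about 0.22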
Precision, Recall, and Relevance
3
precision-recall-and-relevance-1d7ced4cacb5
2018-07-13
2018-07-13 02:33:38
https://medium.com/s/story/precision-recall-and-relevance-1d7ced4cacb5
false
403
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
null
datadriveninvestor
null
Data Driven Investor
info@datadriveninvestor.com
datadriveninvestor
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
dd_invest
Machine Learning
machine-learning
Machine Learning
51,320
Mahesh Paolini-Subramanya
That Tall Bald Indian Guy…
bd8dbcc39636
dieswaytoofast
110
48
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-09
2018-05-09 19:22:39
2018-05-09
2018-05-09 20:40:00
3
false
en
2018-05-31
2018-05-31 07:27:32
0
1d7d896bb9f7
3.312264
3
0
0
Recently Bertelsmann & Udacity awarded a worrisome number of scholarships to 15,000 individuals from 158 countries. I refer to the…
5
Strategy to make top 10% of Udacity Data Science Challenge Recently Bertelsmann & Udacity awarded a worrisome number of scholarships to 15,000 individuals from 158 countries. I refer to the number as worrisome because it looks too good to be true, and when it looks too good to be true, they say, it's probably not true; but here the fact is that 15,000 scholarships have been awarded. The challenge is meant to run in two stages, 1 and 2. Stage one is meant to reduce the number of participants from 15,000 to 1,500 (10% of the initial scholars); now the scholarship is making sense. It is this 10% that proceeds to the Nanodegree program, which is the ultimate aim of the scholarship between Bertelsmann and Udacity. The aim of this post is to elucidate tactics and strategies that will ensure you qualify for the top 10%. Udacity took an uncommon but effective approach to performance measurement, or say a qualification metric, hinged on what is explicit in the image above. For me, this is what you should be paying most of your attention to. Let's take a deeper look. While you can't do anything about #3 anymore, considering it was your initial pitch that got you here, #1 and #2 are of paramount importance. If you haven't taken any MOOC course before, pay particular attention to this. A MOOC looks juicy, even more juicy if it's free, and much more juicy if it's free and self-paced. In my experience with MOOCs, completing one is always a difficult thing, as a lot of temptation accompanies it. If it's your first, you might be thrilled and moving at an alarming pace; alas, when you get halfway through the course, lethargy ordinarily sets in. Beware! Steering clear of this trap requires one thing (two things really, but thanks to Udacity one of them has been done for you, and that is setting a timeline, both a progress and a completion timeline). The one thing left for you to do is to always keep the big picture in mind; preferably, write it down on the first page of your jotter. If this is done, you will easily meet requirement #1, completing the course. You might wonder, is that all? Well, I won't waste my time nor yours enumerating your need to be disciplined, dedicated, determined, committed and all that; you are a grown one and you know all that 😃😃. #2 Requirement - Participation. I like this the most. It is the uncommon approach I mentioned earlier. For the record, what is expected in this area has been very well elucidated; I will just offer a few caveats. Make your participation well targeted. This is to say, don't just go around creating all sorts of groups where unsolicited messages keep popping in. Ask questions like: Who is responsible for watching this aspect of the qualification metric? What I'm doing on this channel, can it be well noticed, and by whom? What is the best way I can contribute that will be well noticed? Keep in mind we all have different schedules and you cannot be jumping from one forum to another Slack discussion. Let your efforts be well coordinated and targeted. The times our Community Managers will be available were made known; that could be used to your advantage. Whatever you do, keep the end game in mind: qualifying in the top 10%. Remember, it is not from the benevolence of the wine maker that we get wine to drink; "self-interest is always the driving force." Pay particular attention to the six iterations in the image above and plan your participation around them. Unless our community managers are in your WhatsApp group, I don't really see how that can significantly contribute to your chances.
Take a weekly survey of your own participation: grade yourself first, then check whether your examiner might feel the same as you. Bear in mind s/he can only hold the same opinion if s/he saw your contribution(s). I have no interest in boring you with a long article; I just hope the best for you and that you join me as we make it together to the top 10%. Cheers, David. If you got value from this, do post your comments, follow, and clap. I will be using this channel to share my experience from the course, so look forward to more. #ListenToDavid #Bigdata #UdacityDataScholars #DataScience
Strategy to make top 10% of Udacity Data Science Challenge
21
strategy-to-make-top-10-of-udacity-data-science-challenge-1d7d896bb9f7
2018-05-31
2018-05-31 07:27:33
https://medium.com/s/story/strategy-to-make-top-10-of-udacity-data-science-challenge-1d7d896bb9f7
false
732
null
null
null
null
null
null
null
null
null
Udacity
udacity
Udacity
3,005
David ALADE
I'm here to share my view with my "pen"
2e89be4be205
davidalade
8
26
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-26
2017-11-26 18:48:53
2017-11-26
2017-11-26 19:11:55
3
false
en
2017-12-07
2017-12-07 06:17:21
3
1d7e7267b275
1.606604
3
0
0
Human resources is on the brink of disruption, with many CHROs seeing a dramatic shift in expectations of HR. Many organizations are now…
4
Future of HR Human resources is on the brink of disruption, with many CHROs seeing a dramatic shift in expectations of HR. Many organizations are now focused on delivering "experiences" to new hires and employees that match the best customer experiences. Clearly, the focus is on a workforce that needs to have a more creative outlook. This requires diversity of thought and background, and promoting 'diversity and inclusion' has now become a CEO-level priority. Technology is on its way to help reduce these disparities and support an environment of human-centric innovation. With talent management embracing a more diverse and under-represented workforce, any organization should be future-ready: able to handle vast amounts of data and apply superior analytical capabilities to zero in on people-centric insights. In a recent study performed by SAP Performance Benchmarking, in which 50+ HR leaders were polled on HR digital transformation priorities, creating a diverse and inclusive workforce was voted the top focus area. Approximately 60 percent of executives highlighted the importance of leveraging new technologies to reduce bias. source: The Future of HR — Understanding Your HR Digital Maturity Assessment The study further highlighted that talent management will be impacted the most by newer technologies in the next 2–3 years. Almost half of the respondents considered it vital to leverage artificial intelligence, machine learning and speech recognition to support talent acquisition and management in the near future. Evidently, the provision of equitable, agile, and efficient HR services requires an extraordinary array of properly balanced and managed resource inputs, and leading organizations are now leveraging digital capabilities to thrive. To know more & to benchmark your organization with peers on important HR topics, please visit SAP Value Lifecycle Manager or reach out to valuemanagement@sap.com
Future of HR
5
future-of-hr-1d7e7267b275
2017-12-31
2017-12-31 01:54:55
https://medium.com/s/story/future-of-hr-1d7e7267b275
false
280
null
null
null
null
null
null
null
null
null
Human Resources
human-resources
Human Resources
9,735
Simran Kohli
Digital Transformation Office, SAP
630d2918927
simran.kohli
4
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-19
2018-09-19 06:23:56
2018-09-19
2018-09-19 06:24:49
0
false
en
2018-09-19
2018-09-19 06:24:49
0
1d7e8bcc20f7
0.403774
0
0
0
Data scientists are analytically minded, statistically and mathematically sophisticated data engineers who can infer insights…
2
Data Science Online Training Data scientists are analytically minded, statistically and mathematically sophisticated data engineers who can infer insights into business and other complex systems out of large quantities of data. Data science employs techniques and theories drawn from many fields within the context of mathematics, statistics, information science, and computer science. "More generally, a data scientist is someone who knows how to extract meaning from and interpret data, which requires both tools and methods from statistics and machine learning, as well as being human." Who is eligible to learn data science? Professionals in testing fields, professionals from an analytics background, software developers and data warehouse professionals.
Data Science Online Training
0
data-science-online-training-1d7e8bcc20f7
2018-09-19
2018-09-19 06:24:49
https://medium.com/s/story/data-science-online-training-1d7e8bcc20f7
false
107
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
shaik imam basha
null
fdad5caaa775
imambasha1103
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-30
2018-08-30 17:07:35
2018-08-30
2018-08-30 17:07:43
0
false
en
2018-08-30
2018-08-30 17:07:43
1
1d7f7799fb9a
2.324528
0
0
0
DOWNLOAD in <PDF> Measurement Theory and Applications for the Social Sciences READ ONLINE By Deborah L. Bandalos
1
Free Download Measurement Theory and Applications for the Social Sciences By Deborah L. Bandalos (ebook online) #EPUB DOWNLOAD in <PDF> Measurement Theory and Applications for the Social Sciences READ ONLINE By Deborah L. Bandalos Read Online : https://readanybook.us/?q=Measurement+Theory+and+Applications+for+the+Social+Sciences Which types of validity evidence should be considered when determining whether a scale is appropriate for a given measurement situation? What about reliability evidence? Using clear explanations illustrated by examples from across the social and behavioral sciences, this engaging text prepares students to make effective decisions about the selection, administration, scoring, interpretation, and development of measurement instruments. Coverage includes the essential measurement topics of scale development, item writing and analysis, and reliability and validity, as well as more advanced topics such as exploratory and confirmatory factor analysis, item response theory, diagnostic classification models, test bias and fairness, standard setting, and equating. End-of-chapter exercises (with answers) emphasize both computations and conceptual understanding to encourage readers to think critically about the material.
Free Download Measurement Theory and Applications for the Social Sciences By Deborah L.
0
free-download-measurement-theory-and-applications-for-the-social-sciences-by-deborah-l-1d7f7799fb9a
2018-08-30
2018-08-30 17:07:43
https://medium.com/s/story/free-download-measurement-theory-and-applications-for-the-social-sciences-by-deborah-l-1d7f7799fb9a
false
616
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
taylajensen
null
e89ad4407913
taylajensen
0
1
20,181,104
null
null
null
null
null
null
0
null
0
5977fd7213bd
2017-07-22
2017-07-22 02:48:45
2017-07-30
2017-07-30 11:52:32
1
true
en
2018-07-27
2018-07-27 02:12:07
4
1d83430c392d
4.94717
830
40
0
If I wanted to find the smartest kids on the planet, where would I look?
5
Source: Wikimedia Commons How to Be a “Great Student” and Learn Absolutely Nothing At All If I wanted to find the smartest kids on the planet, where would I look? Many people, I bet, would suggest the IPO — the International Physics Olympiad. Each year, high school students from around the world face off to hours and hours of difficult physics questions. Only the best come out on top. And for 11 of the last 25 years, the winners have come from a single country — China. Why does China dominate? One competitor from the UK comments: “…the Chinese education system, coupled with discipline through fear works. … China starts preparation for the competition when their participants are just 8; they work ~16 hours a day on physics problems. The result? Winning with ease. … I’m currently [one of] the best physics students in the UK and I’d pay anything to have had an upraising like that, instead mine was consumed with PC games, and posting on forums.” In middle school, I had my own taste of Chinese “discipline through fear.” In China for summer break, I joined a local swim team for a day. One of the girls was giggling to a friend’s joke. The coached walked up behind her, scolded her for having fun, and hit her on the head with heavy metal rod. She made sure not to laugh again. Yes, when it comes to solving physics problems, the Chinese are the best in the world. But that leaves me with a question. So what? What does a show of mental acrobatics do for us? Who cares if you’re a bit faster than the kid across the room? And is it fair to call such a kid “smart”? This reminds me of a conversation between Al Seckel and Richard Feynman — everyone’s favorite safe-cracker, prankster and Nobel-winning physicist: “Several conversations that Feynman and I had involved the remarkable abilities of other physicists. In one conversation, I remarked to Feynman that I was impressed by Steven Hawking’s ability to do path integration in his head. Ahh, that’s not so great, Feynman replied. It’s much more interesting to come up with the technique like I did, rather than to be able to do the mechanics in your head. Feynman wasn’t being immodest, he was quite right. The true secret to genius is in creativity, not in technical mechanics.” Any competent graduate student can learn to solve problems fast. But to innovate, to invent a completely new way of solving problems or seeing the world — that’s what earns you the Nobel Prize. You can’t beat a child into creativity. 10,000 Hours of Nothing In middle school, I took first place at a regional math competition. Why? My parents and teachers trained me to identify the “tricks” needed to solve problems fast. After countless hours of practice, I could glance at a problem, scribble a few notes, and have my answer faster than any other student. Two problems with this approach. First, I did not understand what I was doing. In Surely You’re Joking, Mr. Feynman!, Feynman tells the story of his trip to Brazil. The students there could answer exam problems with incredible speed, but they could not apply their “knowledge” at all to the real world. Feynman eventually diagnosed the problem: After a lot of investigation, I finally figured out that the students had memorized everything, but they didn’t know what anything meant. When they heard “light that is reflected from a medium with an index,” they didn’t know that it meant a material such as water. They didn’t know that the “direction of the light” is the direction in which you see something when you’re looking at it, and so on. 
Everything was entirely memorized, yet nothing had been translated into meaningful words. So if I asked, “What is Brewster’s Angle?” I’m going into the computer with the right keywords. But if I say, “Look at the water,” nothing happens — they don’t have anything under “Look at the water”! I, like those Brazilian students, had been trained to be a well-oiled machine. Feed me the right questions — the ones I was programmed to answer — and I would spit out the right answers. But ask me to create, and I could do nothing. This is what happens when you make learning about competition, scores, seconds, metrics and targets. All the complexity and wonder of learning is neutered, reduced to numbers on a page. Education is no longer about learning, but about faster calculations, higher scores, competitive rankings. There is no time to understand, because to understand means to lose. And when a rare educator shows up that cares, he too is neutered by the system. “At one point, the neurology department asked me to test and grade my students. I submitted the requisite form, giving all of them A’s. My chairman was indignant. “How can they all be A’s?” he asked. “Is this some kind of a joke?” I said, no, it wasn’t a joke, but that the more I got to know each student, the more he seemed to me distinctive. My A was not some attempt to affirm a spurious equality but rather an acknowledgment of the uniqueness of each student. I felt that a student could not be reduced to a number or a test, any more than a patient could. How could I judge students without seeing them in a variety of situations, how they stood on the ungradable qualities of empathy, concern, responsibility, judgment? Eventually, I was no longer asked to grade my students.” -Oliver Sacks, On the Move The Fires of Industry That brings me to my second problem. What happens when you take a child from her sandbox — where she has learned to get dirty, play, laugh, and see the world with wide, curious eyes —to lock her into a “regime of fear” where the new Gods are efficiency and optimization? Will she still build sand castles? And, what happens when that girl becomes a mother? What does she teach her children? Let’s look again at that young student from the UK, who envies the Chinese and would “pay anything to have had an upraising like that.” In the same comment, he shares his vision for society: “(1) A productive society is one with experts. (2) Expertise is only accomplished with relentless practice. (3) The most productive society will be accomplished if citizens are made to constantly work at their discipline. There will, of course, be a transition stage in which those that lack real expertise are weeded out; but, I’m ashamed to say, that seems the most productive society. All of humanity reduced to a single, pale dot. Our purpose? Productivity. Only the strong survive, the weak are “weeded out,” and we move forward to the fires of industry. “Together, my Lord Sauron, we shall rule this Middle-Earth. The old world will burn in the fires of industry. The forests will fall. A new order will rise. We will drive the machinery of war with the sword and the spear and the iron fist of the Orc. “We have only to remove those who oppose us.” Why, as I read this student’s words, do I feel a deep ache in my gut? It is not anger I feel, but shame — for although I was not the one who wrote those words, it could have been. His beliefs were my beliefs. His world was my world. What a small, terrible thing it was.
How to Be a “Great Student” and Learn Absolutely Nothing At All
2,693
how-to-be-a-great-student-and-learn-absolutely-nothing-at-all-1d83430c392d
2018-07-27
2018-07-27 09:33:29
https://medium.com/s/story/how-to-be-a-great-student-and-learn-absolutely-nothing-at-all-1d83430c392d
false
1,258
Figuring out how to live in a world we don't understand
null
polymathproject
null
The Polymath Project
charles@thepolymathproject.com
the-polymath-project
SELF IMPROVEMENT,CULTURE,PHILOSOPHY,SCIENCE,EDUCATION
mmeditations
Education
education
Education
211,342
Charles Chu
Rethinking the obvious @ http://thepolymathproject.com
6a011a3d09ff
mmeditations
42,645
36
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-16
2017-11-16 11:07:46
2017-11-16
2017-11-16 11:25:39
4
false
en
2017-11-16
2017-11-16 11:33:33
6
1d834bd1660
5.586792
21
1
0
For the last 5 years I have been studying Electrical and Computer Engineering and it has changed me in so many ways. I made new friends…
5
Going through 10,000 pictures in 30 seconds For the last 5 years I have been studying Electrical and Computer Engineering and it has changed me in so many ways. I made new friends, met interesting people and worked and learned a lot. Long story short, after countless early-morning classes and late-night drinks, last week I graduated! And it was great! The university held a big graduation ceremony with all these important academic people, relatives, friends, and more than 90 graduating students. I like these ceremonies. Everyone seems so happy. What better way to hold on to this happiness from capturing it in photos? Professional photographers, of course, realize it and are always there. Oh boy, there are always a lot of them. The way this works is dozens of photographers take thousands of photos of happy graduating students and their families. They later upload low resolution copies of them on a website and students can check them out, or even buy some. But there is a problem… There are far too many photos and they are not tagged. Take a look in the photos of my graduation. There are so many! We are talking about 436 pages with 24 pictures each, totaling to about 10,400 photos. The worst part is, that the only way to find a specific person is to go through all of those photos manually. A process that with a rough estimation would take up to three hours. I mean, I am not the busiest man in the world, but it seems like a lot of time for picking a couple of photos. I made it through the 20th page and was already bored. There had to be another way, I figured. Besides, if you have to do the same thing more than once, a computer can do it better than you. So, I came up with a simple idea. What if I wrote a script to scrape all the photos and pass them through some sort of algorithm that could detect my face and return only the photos I am in. Theoretically it should work and theoretically it should take me less than three hours. Downloading photos As I already mentioned, the website I am targeting is organized in pages and every page contains 24 photos. What I need is a script that can iterate through all these pages and download the photos. At first I intended to write everything in Python but after realizing how easy it would be to write a simple bash script leveraging the power of wget, I thought, why not do this instead. So here it is: There is not much to it to be honest. The script iterates through every one of the 436 pages and runs a simple wget command to download all jpg images. Notice that it uses the wget’s retrieval option, meaning that wget crawls the webpage following links and directories. You can learn more about this interesting algorithm here. This way wget won’t just download the thumbnails of the pictures but also the original ones. Some extra pixels will be crucial for the [spoiler] machine learning algorithm I intend to use later on [/spoiler]. After its execution, the script has filled a folder with more or less 10,400 photos ready to be analyzed. A small portion of the 10,400 images that were scrapped from the website Face recognition The only thing missing now is finding a way to detect my face across all these images. Good thing we have machine learning for that! There are pre-trained models and ready to go libraries all over the place, that you can use on your project and give it magical skills. 
Face recognition is an excellent open-source python library that can do just that and is advertised to have an accuracy of 99.38%, while working on top of the famous dlib library. It basically provides access to a set of algorithms and operates as black-box allowing the following: Find faces in pictures Manipulate facial features Identify people using their faces Without diving into too much detail, I’ ll try to explain how the system works, so it hopefully won’t be a black box anymore. In order to find faces, the algorithm converts the picture to black and white, intending to deal with just brightness and not color. By drawing small vectors of how brightness changes (gradients) it creates a new image consisting of features, which can now be compared to a set of pre-processed pictures of faces. If they are close, a face is detected. In order to identify a person, the algorithm searches in a database of already known people for the person who has the closest measurements to the new one. Machine learning (yeah… no deep learning, sorry, I know it’s hot right now, but it’s not the solution to everything) is used to make this classification with a linear SVM classifier. To learn more about this process I recommend reading a much more accurate and in-depth explanation by Adam Geitgey here. Applying face recognition Back to the initial problem now. The idea is to use a single image of my face, preferably one from the graduation day, to train the face recognition library and then pass each one of the 10,400 photos through the algorithm, that will return those that I am in. My face was used for training Some photos contain several faces, so it is important to make sure that all of them are compared to mine. Finally, all photos of me are stored in a separate folder, so it will be easier to examine them later. The python code that realizes these can be found bellow. That’s pretty much it! Notice on line 21, a specific value (0.5) is provided. This is the tolerance (or threshold) of the algorithm. Higher tolerance tells the algorithm to be less strict, while lower means the opposite. It does take some time to run, since it has to check all 10,000+ photos (but there is surely room for some parallelization). Some of the photos that were automatically detected Tada! This is a success! There are a couple of things we should notice. There are watermarks on the photos, but face recognition didn’t have any problems with that. Moreover, the algorithm works great for group photos, or photos that the face is far away from the camera lens. Accuracy About 160 pictures of me were detected using this algorithm. If you consider there were 90 students (90*160 = 14,400 pictures, some of which were group photos) it makes sense from a statistics point of view. However, I cannot provide a percentage of the accuracy, since that would require me going through each one of the original 10,400 photos, which is what I was intending to avoid to begin with. I can tell you that there are false positives (people ending up in my folder, without being me) and that there are probably false negatives (photos of me that didn’t end up in my folder) but overall it seems that there are not many of those. Conclusion Now, this was fun! At least much more fun than going through all the photos manually. I think it took less time, too. Not to mention that all my friends asked me to run the script for their faces. I even did it for some parents that were there. Wouldn’t it be nice to see something like this implemented on photo shops’ websites? 
Bill Gates once said: I choose a lazy person to do a hard job. Because a lazy person will find an easy way to do it. I say: If you are the lazy person, give the hard job to a computer. It will save you time and effort. Thanks for reading! Now, the lazy person needs to find the money to buy some of those graduation photos. No Machine Learning for that, I guess…
Going through 10,000 pictures in 30 seconds
219
going-through-10-000-pictures-in-30-seconds-1d834bd1660
2018-04-15
2018-04-15 12:18:04
https://medium.com/s/story/going-through-10-000-pictures-in-30-seconds-1d834bd1660
false
1,295
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Konstantinos Mavrodis
null
a2046d7fabee
kmavrodis
58
58
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-17
2018-08-17 04:36:56
2018-08-23
2018-08-23 05:04:39
1
false
en
2018-08-23
2018-08-23 05:04:39
1
1d847676f00a
6.833962
0
0
0
I was born and raised in the West, steeped in Capitalism, market economies and the power of supply and demand. As I began to consider the…
4
Universal Basic Income, Capitalism and Christianity — Can We Reconcile the Three? I was born and raised in the West, steeped in Capitalism, market economies and the power of supply and demand. As I began to consider the concept of Technological Unemployment, I wrote about Capitalism having an “end-game”. https://medium.com/@ForHumanity_Org/capitalism-aritifical-intelligence-robotics-socialism-universal-basic-income-740cc3f1c41e I believe that remains true. I believe that if left to its own devices, with technological advancement, Capital (as in Capital v Labor) would choose to eliminate the labor from its cost equation, resulting in 100% of profit left for Capital. You might argue that 100% capital and 0% labor is too extreme, and I agree. There will always be roles/work for humans to do, based upon the skills that humanity retains which machines cannnot replicate, even if that is limited to their “humanness”. In this piece, I am using the extreme example to highlight a risk, not predict an exact future. Capital is incentivized to eliminate labor from its cost structure. AI and Automation are capital investments that can replace labor therefore, I expect Capital to increase investment in AI and Automation which will likley result in significant unemployment, at least as it relates to jobs that pay a salary. To complete the ideological triumvirate, I was raised and subsequently chose to be a Christian, which defines the core of my morality. I am not asking you to agree with my morality, just understand that my moral choices, come from this background, as I try to reconile these concepts. With that as foundation, I decided to host a backyard BBQ, where the pre-announced topic was Universal Basic Income, Christianity and Capitalism, reconciling the three ideologies. I invited good friends and was not attempting to make this a “comprehensive and stastically significant focus-group”, instead I wanted to just talk and debate and see if we could learn a few things and achieve some level of consensus. It was a lovely dinner, the talk and questions were challenging and while we wandered a little bit into the weeds, as all good conversations tend to, we actually did find some key points upon which we generally agreed, even if the details remained a little debateable or ambiguosly defined. So I present for your consideration the results of this discussion. It should be noted, that the crux of the discussion was about UBI and thus what follows is a discussion about UBI, influenced by our similar capitalistic (western) backgrounds and by our shared Christian-faith. I believe this can be a useful guide for others as to how we considered some of the challenges presented by these three ideologies and where we landed. I do not expect that all will exactly share these beliefs, but rather take this as one version of the discussion for you to consider. A few bullet points: Belief that we have a moral responsibility, as a community to care for the poor and those who cannot take care of themselves. This is absolute and a core principle based on our Christian faith. Belief that “risk and reward are linked, greater risk should equate to greater potential reward and vice versa” is a bibilical concept. It need not apply only to money and capital, but in the Parable of the Talents, failure to “invest” Talents is considered sinful. This was discussed in the context of all behavior. Taking risk, deserves reward, but may also lead to failures, which is okay. 
Our understanding of the parable is that we should take risks with the assets that we have - we should invest. The group voiced a concern that UBI may lead to risk-averse behavior of all kinds, notably a lack of investment. Many UBI proponents talk out of both sides of their mouth on this subject which is why we spent time on it. On one hand, they criticize those who have taken great risk, sometimes with time, effort, work/life balance sacrifice, capital or even reputation, instead frequently attributing it to inheritence or unfair exploitation. Then they suggest that a UBI will lead all people to be more entreprenuerial because their downside risk is floored with the UBI, in other words, they will take risks. Either risk and reward are linked at all levels or they are not. You can’t reward “UBI entreprenuers” with profits and begrudge the wealthy who may have already earned their profits. Not to mention those middle to upper class members who just plain worked ridiculously hard. Something that used to be called the “American Way”. The group felt that a UBI, on a mass scale, would reduce the appetite for risk amongst the mass population, even if a few were emboldened. They did not accept the premise that UBI would lead to greater entrepreneurialism. Belief that work and participation in your own survival is a human responsibility both to yourself and to your community. The group did not believe in a “right to survive”. They support the “right to participate in your own survival”. The group believes that the community is responsible for caring for those who are “unable to participate in their own survival”. This might be a semantic argument, but the point for us was clear. Survival is not guaranteed, it must be worked for and that is the nature of life. In fact, the idea that anyone had a guaranteed right to survive was generally considered illogical. The group did not require Universal to mean that 100% of people must receive the benefit fully. They were supportive of the idea that high income earners could have their basic income effectively fully taxed, which of course reduces the cost of implementation. The group felt that it should be “means-tested” on both ends. The wealthy should be taxed on their UBI to lower the cost of the program. But on the receiveing end, all should work, who are able. This is a moral decision based in the belief that providing for ourselves, our family and our communities is our responsibility. They further felt it was appropriate to determine “who is able” as a community. Implicit in this point is the “ability to work”. If work disappears, then that reduces one’s ability to work. The group flat out rejects the notion of a “right NOT to work”. That of course is not the same as “you must have a job and be receiving pay”. The group roundly supports the value of “unpaid jobs” such as stay-at-home caretakers or volunteers. In the context of substantial technological unemployment, the group understood and accepted the idea that Universal Basic Income might be the only option. No other alternative was offered as yet. There was genuine concern about UBI and unintended consequences, such as laziness, forced re-location and subsequent low-income housing concentration and negative feedback loops. Some of the group were familiar with UBI studies and their “smallness” and “terminal value”. 
They recognizing that behavior associated with these tests is not likely to compare to behavior in a world that MUST rely on UBI, such as the conditions that might come to pass under technological unemployment. Therefore, they reject the notion that we “know” how people would react under a comprehensive and necessary UBI program, reverting to concerns that it would not encourage work of all kinds. Following onto that point, one who is able, must work, whether they like the work or not. Where work is defined as “putting in effort” to participate in one’s survival or to execute the will of the community if the community is providing the support. This is different than a “job”, which is associated with pay or a salary. Stay-at-home parenting is work, and provides great benefit to the community without pay. They also reject the notion that a worker should enjoy their work. In fact, the group laughed at the idea that someone shouldn’t have to do work they don’t enjoy. They all wondered who the lucky ones were who always enjoyed their work. The group points to Capitalism’s excellent success in wealth creation, accepts the principle that “investment” from the wealthy creates growth and new opportunities. They also felt that the profit/return motive has made the allocation of capital generally efficient and thus generally productive. Further the group accepts that the benefit from new opportunities may be to a diminshing number of participants and that a consequence has been an increase in income inequality. One of the supporting arguments for higher taxes and potentially a UBI was the concern about rising income inequality. They did not reject the notion however that Capitalism may have an end-game — technological unemployment. There was considerable concern about the misuse of cash designed to provide food, clothing and shelter. One member who has had significant dealings with the poverty-stricken noted that frequently those in need, needed far more than monetary support, as mental-illness and drugs were often associated with their situation. It was suggested that a UBI payment might be used directly for food, clothing and shelter, instead of as cash to avoid misuse. To which there was varied debate, which I tabled (another version of “off into the weeds”). There were doubts about the government’s ability to provide the “right” solutions for those needs and externalities associated with that process. There was no conclusion on the best approach, cash or vouchers for services. To wrap up our take on Universal Basic Income and trying to tie it together with Capitalism and Christianity, I would say the group was happy to consider the concept, unwilling to toss out capitalism, unwilling to accept some of the primary arguments of UBI advocates and generally unmotivated to run out and support a Universal Basic Income. They were happy to understand it better. Happy to consider the pros and cons more than they ever had and I know that awareness of the issues has been raised. Notably, I think everyone in the group is now comfortable having an opinion on the subject and how it fits into their views on life, poverty, public policy and technological unemployment. Maybe you, the reader, are a little more comfortable too. Whether you agree or disagree with the thoughts presented here, I suspect that the group’s thoughts are fairly mainstream. If you are vehemently opposed to UBI or zealously advocating for UBI, this ought to help you understand how one group thinks. 
Maybe it will make for a more fruitful dialogue as these challenges are considered in the future.
Universal Basic Income, Capitalism and Christianity — Can We Reconcile the Three?
0
universal-basic-income-capitalism-and-christianity-can-we-reconcile-the-three-1d847676f00a
2018-08-23
2018-08-23 05:04:40
https://medium.com/s/story/universal-basic-income-capitalism-and-christianity-can-we-reconcile-the-three-1d847676f00a
false
1,758
null
null
null
null
null
null
null
null
null
Basic Income
basic-income
Basic Income
2,763
Ryan Carrier
ForHumanity is a non-profit organization dedicated to raising awareness and examining the risks from the growth of AI & Automation. https://ForHumanity.center
9cad6dba6689
ForHumanity_Org
30
15
20,181,104
null
null
null
null
null
null
0
null
0
bcd8f55f8765
2018-02-15
2018-02-15 22:24:36
2018-02-15
2018-02-15 22:30:47
3
false
en
2018-08-16
2018-08-16 11:29:40
9
1d852af0dbc9
0.772642
1
0
0
Revolutionary Sports Data Analytics Platform built on Blockchain technology. Aggregated using Artificial Intelligence— AI Machine Learning.
5
Sports Ledger Revolutionary Sports Data Analytics Platform built on Blockchain technology. Aggregated using Artificial Intelligence— AI Machine Learning. Enhanced Sporting Results, Player - Team Performance, Health & Conditioning, Match Conditions, Scientific Statistics Generating Accurate Future Sporting Predictions. Follow Sports Ledger on: Website: https://www.sportsledger.io Telegram: https://t.me/sportsledger Twitter: https://twitter.com/sportsledger_io Facebook: https://www.facebook.com/sportsledger Medium: https://medium.com/sports-ledger YouTube: https://www.youtube.com/c/sportsledger Instagram: https://www.instagram.com/sportsledger LinkedIn: https://www.linkedin.com/company/sportsledger Reddit: https://www.reddit.com/r/sportsledger
Sports Ledger
6
sports-ledger-1d852af0dbc9
2018-08-16
2018-08-16 11:29:40
https://medium.com/s/story/sports-ledger-1d852af0dbc9
false
59
Revolutionary Sports Interactive Analytics Platform built on Blockchain & Artificial Intelligence
null
sportsledger
null
Sports Ledger
mdm@sportsledger.io
sports-ledger
BLOCKCHAIN,AI,MACHINE LEARNING,SPORTS,CRYPTOCURRENCY
sportsledger_io
Blockchain
blockchain
Blockchain
265,164
SportsLedger.io
https://www.sportsledger.io | Telegram https://t.me/sportsledger | Revolutionary Sports Data Analytics Platform built on Blockchain & Artificial Intelligence
7d74b3a13d12
sportsledger
5,959
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-09
2018-09-09 07:03:14
2018-09-09
2018-09-09 08:12:02
4
false
en
2018-09-09
2018-09-09 08:13:51
16
1d85ebf68510
3.726415
4
0
0
This is a series I have started from my research and study of various Neural Networks. I hope to share what I have learnt and also be part…
4
How Convolutional Neural Networks view the world (ANN Series #1) This is a series I have started from my research and study of various Neural Networks. I hope to share what I have learnt and also be part of discussions revolving around it. Do share your views/opinions! ‘CNNs do not suffer from the curse of dimensionality!’ CNNs are an attempt to make a computer/computing system view the world just like a human does . Convolutional networks were inspired by biological processes in that the connectivity pattern between neurons resembles the organization of the animal visual cortex. Technically speaking, a convolutional neural network (CNN) is a class of deep, feed-forward artificial neural networks, most commonly applied to analyzing visual imagery. CNNs use a variation of multilayer perceptrons designed to require minimal preprocessing. Why does CNN matter? A major part of the most recent research work happening in the Data Science and Machine Learning community revolves around neural nets like CNN. CNNS can help us understand the current state of our environment (say, using satellite imageries that can help democratize the power to make informed decisions regarding a country’s resources/policy decisions), solve image detection problems (like self driving cars or real time analysis of behaviour or describing a photo) and cut to now, even make you dance without you actually dancing. ( Everybody dance now ) Architecture of CNN As CNN is a variation of multilayer preceptron, it consists of many hidden layers. So, the design of a CNN is Input Layer + Multiple Hidden Layers + Output Layer Hidden layers are convolutional layers, pooling layers, fully connected layers and normalization layers. Convolutional Layer Convolutional layers apply a convolution operation to the input, passing the result to the next layer. The convolution emulates the response of an individual neuron to visual stimuli. Each convolutional neuron processes data only for its receptive field. What makes CNN better? Although fully connected feedforward neural networks can be used to learn features as well as classify data, it is not practical to apply this architecture to images. A very high number of neurons would be necessary, even in a shallow (opposite of deep) architecture, due to the very large input sizes associated with images, where each pixel is a relevant variable. For instance, a fully connected layer for a (small) image of size 100 x 100 has 10000 weights for each neuron in the second layer. The convolution operation brings a solution to this problem as it reduces the number of free parameters, allowing the network to be deeper with fewer parameters. (A better explanation can be found here.) Local Connectivity Factor : In CNN, each neuron is connected to only a small chunk of input. Local connectivity increases the computational efficiency and reduces the computational time. CNNs also resolves vanishing / exploding gradients problem. Exploding gradient problem arises when a large error gradient accumulates and results in very large updates to NN during training. It makes the model unstable/unable to learn. This is done by using ReLu units instead of sigmoidal non linear units/functions. ReLu function : f(x)= x for x≥0; 0 otherwise. So ReLu cancels all negative values and propagates only the non-negatives ones. Whereas, the sigmoidal function takes more computational power and time to be calculated. 
Sigmoidal function : Pooling Layer It is common to periodically insert a pooling layer between successive convolutional layers in a CNN architecture. The pooling layer serves to continuously to reduce the number of parameters, amount of computation in the network and reduce the spatial size of the representation, and hence to also control overfitting. The most popular pooling layer function is maxpooling. To understand this better, consider the following VGG16 architecture. (VGG16, aka OxfordNet is another variation of CNN) VGG16 architecture The layers drawn in red are the maxpooling layers. Notice how the size of the layer becomes smaller at each maxpooling layer? That’s discretization happening. How does it work? → Let’s say we have a 4x4 matrix representing our initial input. Let’s say, as well, that we have a 2x2 filter that we’ll run over our input. We’ll have a stride of 2 (meaning the (dx, dy) for stepping over our input will be (2, 2)) and won’t overlap regions. For each of the regions represented by the filter, we will take the max of that region and create a new, output matrix where each element is the max of a region in the original input. So, only the most important or relevant features are carried forward for consideration/learning. These are the very basic and few fundamental ideas behind the concept of CNNs. There are many exciting variations to this. More on that, later! I shall exit(0) now. References : Stanford’s CS231n github https://en.wikipedia.org/wiki/Convolutional_neural_network https://www.coursera.org/learn/convolutional-neural-networks https://medium.com/technologymadeeasy/the-best-explanation-of-convolutional-neural-networks-on-the-internet-fbb8b1ad5df8
How Convolutional Neural Networks view the world (ANN Series #1)
15
how-convolutional-neural-networks-view-the-world-ann-series-1-1d85ebf68510
2018-09-09
2018-09-09 08:13:51
https://medium.com/s/story/how-convolutional-neural-networks-view-the-world-ann-series-1-1d85ebf68510
false
802
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Hima Bindu
Jack of all trades. Bachelor in one, Mastering another!
11fda6ce20ee
himabindu13198
69
48
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-14
2017-11-14 12:16:52
2017-11-14
2017-11-14 11:31:05
1
false
en
2017-11-27
2017-11-27 09:24:02
8
1d8646a9cc56
2.626415
14
7
0
Artificial intelligence (AI) is transforming the business into a growth-oriented future. With machine learning and big data analysis, AI is…
5
Airfio is advancing the crypto banking with AI Artificial intelligence (AI) is transforming the business into a growth-oriented future. With machine learning and big data analysis, AI is an absolute technique that prevents manual errors and rationalizes the system. Considering the near future, Airfio is emerging in a crypto market with the first ever biggest revolution. It is integrating the AI technology across blockchain based transactions with the vision to make transactions quicker, faster and smarter. Neural language processing under crypto market prevents the error, reduces the cost and improves the work efficiency. Since it read & comprehend the customer’s behavior, it frees up the employees for most complicated errors. How does AI work better in Crypto banking? Though cryptocurrency market is soaring with a number of new coins every day, only a few become successful which embraces something new & worth to invest for. Airfio is deriving biggest revolution in crypto banking by emerging AI. This crypto bank is introducing 21 new products to the market. As enrollment, it presents its own currency “ARF COIN”, Assistant Application, Mining Application, Visa Card, SDK, Earning programs and unlike. It explains the AI work process in crypto banking as follows! Airfio’s AI technology in crypto banking prevents inefficiency and stimulate smart trading Artificial intelligence uses machine learning and human intelligence — thus it is sure that transactions go smoothly without any room for errors and fatigue. Assistant applications by Airfio offers an instant solution to the query/issue faced by users at the time of trading. It is the personal manager of users which guides them throughout the process. Since neural language read user’s behavior over the system, it better understands user’s requisites and responds them accordingly. Cryptocurrency transactions with AI creates digital leger advanced and investors/traders smarter. ARF coin is built to grow borderless transactions and facilitate everyone has feasible access to cryptocurrency market. Airfio is introducing its decentralized exchange which shall not be managed by any central authority. However, the system is based on AI which verifies the transactions via neural language processing. For the first time ever, digital mining is possible via Mobile technology which is an exclusive launch of Airfio — coming in later days. Beyond these noteworthy advancements, the ARF coin proposes its future plan of launching ATM machine networks, kiosk machines, SDK, earning programs, Visa card and so on. Airfio is launching its Pre-ICO Sale from 16th November The Pre-ICO sale of Airfio will be commencing from 16th November 2017 and will end by 20th November 2017. There are 1 million tokens allotted for Pre-ICO. Participants can receive free ARF tokens by joining its network. How to join ARF’s Earning program!? It offers various earning programs which has in lined as bounty campaign, referral program, staking program and lending program. There are two ways to join; either by referred link or by direct visiting [https://airfio.com/r]. With a very basic information, users can get registered successfully. Upon registering the Airfio network, an advanced dashboard is given to select most preferred earning task based on his interest. Bounty Campaign & Referral program of Airfio is the free token earning program where the number of easiest tasks are designed to let users complete and claim the free ARF tokens. 
Once the task is successfully completed, participants can get the status of approval and their performance on the dashboard itself. More on its earning program: https://airfio.com/earn_tokens Participating and joining this network is really worth investing time and effort because, at every stage (including pre-Ico, during ICO and after completion of ICO), Airfio is offering countless benefits. Airfio Crypto bank is bringing advancement in cryptocurrency market by evolving AI technology. Be the first to join the community with below source and explore Airfio. https://airfio.com/ Follow on twitter: https://twitter.com/airfio Follow on Medium: https://medium.com/airfio Originally published at www.bitcoininsider.org on November 14, 2017.
Airfio is advancing the crypto banking with AI
112
airfio-is-advancing-the-crypto-banking-with-ai-1d8646a9cc56
2018-02-23
2018-02-23 15:21:56
https://medium.com/s/story/airfio-is-advancing-the-crypto-banking-with-ai-1d8646a9cc56
false
643
null
null
null
null
null
null
null
null
null
Blockchain
blockchain
Blockchain
265,164
Airfio — AI crypto banking
Airfio is the future of cyrpto banking which integrates neural networks with blockchain technology. https://airfio.com/
63aaadd94d98
airfio
148
40
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-02
2018-05-02 11:39:06
2018-03-21
2018-03-21 13:07:40
1
false
en
2018-05-02
2018-05-02 11:40:31
0
1d866928c0d8
2.713208
0
0
0
March 21, 2018 admin
3
How AI Redefines HR Management Practices March 21, 2018 admin With different developments in technology over the last few years, human resource as a process has experienced significant changes. Artificial Intelligence (AI), a technological area that simplifies the way we do things has also reshaped the HR function. While the technology cannot completely replace human element, it has successfully altered the way companies hire, manage, and engage with their manpower. AI machines are gaining the intelligence needed to find the right resource or watch out for the next step and assist HR professionals in making intelligent decisions. Let us take a look at some of the benefits of AI in HR management practices: 1. Screening Candidates Recruiters have a large candidate profile database and usually tend to screen each one manually. This involves face to face discussions and a lot of other time-consuming processes. An AI-based software,on the other hand,analyzes the various aspects of a candidate and distinguishes them on the basis of their skills, past experience, and cultural fit for the organization. The software identifies a suitable match for the job profile, thereby saving the recruiter’s time. 2. Interviewing Candidates Generally, recruiters spend about 30% of their time in fixing and conducting interviews. According to reports, 50% of the candidates do not receive a timely response to their application through the traditional corporate structure. However, AI-enabled software identifies the right fit and ensures that communication flows smoothly between the candidate and the HR management. It efficiently sends custom-made messages to potential candidates making it easier for the HR personnel to focus on closing job positions. 3. Onboarding Candidates Deciding the joining date of the candidate post offer acceptance is a tedious task. There is always uncertainty on the same. AI helps to engage and follow up with the potential candidate and ensures that chances of last-minute rejection are low. 4. Reduce Human Partiality AI is dependent on data instead of the human mind. This reduces the chances of bias based on the intuitions and perceptions of the various individuals working in an organization. The work culture is free from discrimination with a more cohesive, communicative workplace and a faster decision-making process. 5. Establish Better Relationships with Employees AI can be used to identify the various characteristics of individual employees through engagement surveys and other personality tools. This helps to match the right employee with the right role. HR queries can also be taken care of with the help of an AI-based Chatbot. Meetings between employees and the HR management can be easily fixed while immediate managers have more information to take the right decision about temperaments, departments, and co-workers. 6. Improve Predictive Data Decision-Making AI algorithms make analysis and interpretation of data easy, resulting in sets and models that provide insights for decision-making. Based on their reliable and predictive properties, decisions for today and tomorrow can be made keeping in mind data from the past. For instance, insights derived from large volumes of data can help to predict probable issues even before they arise. This means that you will be ready to take on attrition and retention precautions at the right time. Predictive analytics help to track employee activity and behaviour which has a direct impact on an organization’s efficiency and productivity. 7. 
Talent Development Every employee has different learning styles. This is dependent on their experience, behaviours, interests, qualifications, skill sets, and more. AI can develop customized learning programs based on the abilities and capacities of various employees. It can also offer personalized training paths which the employer may only be able to provide in a longer period of time. HR functions are a critical aspect of business growth. This means untimely acceptance of digital HR can hamper the overall success of a business. Therefore, HR personnel must train themselves and be ready to adapt AI technology and machines in the future. These technologies together with human intellect will help to advance HR solutions.
How AI Redefines HR Management Practices
0
how-ai-redefines-hr-management-practices-1d866928c0d8
2018-05-02
2018-05-02 11:40:33
https://medium.com/s/story/how-ai-redefines-hr-management-practices-1d866928c0d8
false
666
null
null
null
null
null
null
null
null
null
Hiring
hiring
Hiring
16,840
Wi-Fi Attendance
null
7890b8b0ecba
wifiattendance
0
1
20,181,104
null
null
null
null
null
null
0
null
0
32881626c9c9
2018-05-05
2018-05-05 20:37:34
2018-05-18
2018-05-18 16:48:59
1
false
en
2018-05-18
2018-05-18 16:48:59
0
1d879ce0a01b
3.271698
3
0
0
After finishing grad school in the early 90’s, I ended up getting a job as the sole sales and marketing resource for a software company in…
4
With AI, the sky is the limit The Rise, Fall, and Rise of Artificial Intelligence After finishing grad school in the early 90’s, I ended up getting a job as the sole sales and marketing resource for a software company in Nanaimo on Vancouver Island. There were a handful of software companies on the Island at the time, but all of the others were involved in GIS (Geographical Information Systems) in support of the B.C. mining and forestry industries. We were an AI firm. At the time, people would often ask you what AI stood for. Our focus was predictive modeling, and we used a variety of advanced technologies to build predictive models for financial, medical and process control systems. This was the first bloom of industrial Artificial Intelligence. All this technology was coming out of academic and corporate labs, and people were trying to commercialize it. We bought a proprietary algorithm from an Idaho State professor. We purchased an expert system-neural network hybrid for trading futures contracts. For the tiny sector of the economy that we were involved with, AI was all the rage. Startups were starting, publications were publishing, and we were getting meetings. If you were talking about neural networks, fuzzy systems, expert systems, or genetic algorithms, people were interested in talking to you. I met with a “quant” (although I don’t think that term existed at the time) at Bear Stearns. Motorola asked us to present at an internal conference on emerging technologies. I corresponded with a Viennese physician who was looking to predict blood glucose levels in diabetics. It was heady times. The sky was the limit. Between the exuberance of youth and the intoxicating potential of AI, I thought our little company on Vancouver Island was going to take the world by storm. I told our CEO I was confident that we could sell 300,000 copies of the shrink wrapped version of our software over the next year. Our sales a year later: 127. Unfortunately, neither our firm nor AI was ready for prime time. Some fuzzy logic went into appliances, neural networks ended up getting embedded into defence systems, and the rest went right back into the labs, not to emerge until about 5 years ago. So what’s different this time? The War is Over For decades a debate raged as to what was the best approach to AI: rules-based or autonomous. For most of that time, it appeared that a rules-based approach had the upper hand. However, as time went on it became apparent that for the most promising applications (image recognition, predictive systems, natural language processing) autonomous systems were the only way to go. At a certain level of complexity, deterministic, rules-based systems were just overwhelmed. Once this was decided, almost all the chips were tossed into the autonomous pot, and progress accelerated. We’ve got the Power One of the main reasons that the first AI emergence failed was that the technology was just too complex for the hardware of the time. Even with an SGI or Sun workstation, the volume of data to be processed swamped the processors (even the math co-processors!). Going through a round of training with a neural network would take a workstation days of number crunching. Today, particularly via the cloud, the availability of processing power is virtually limitless. If you have the budget, you can deploy thousands of virtual machines as part of your AI project. This type of technology was unimaginable in the 90's. 
A project that would have taken a year in the good old days, can be completed in three weeks today. On top of that, many projects can now be tackled that were not even possible in the past. Hardware is no longer a bottleneck, and removing this hindrance has allowed AI the opportunity to blossom. Is this a Bubble? After twenty years, I’m thrilled to be involved in AI once again. Some of the meetings do remind me of that original AI bubble time, but in general everything seems more substantive now. Companies are spending real money on AI projects, and more and more of them are escaping from the Innovation Labs and getting operationally deployed. It’s early, but I think the AI cat is permanently out of the bag. I do think things are going to move more slowly than a lot of the prognosticators predict. I think that most AI technology is going to be complementary rather than supplementary for a long time, but it’s all going to keep heading in the right direction. AI is going to make life better in a lot of different ways. I don’t expect a robot apocalyse in my lifetime, or in my children’s lifetime. In fact, I don’t expect one at all. Overall, in the AI world, everything is looking pretty rosy. Maybe it’s time to move back to Nanaimo and get the old gang back together!
The Rise, Fall, and Rise of Artificial Intelligence
3
the-rise-fall-and-rise-of-artificial-intelligence-1d879ce0a01b
2018-05-18
2018-05-18 23:30:07
https://medium.com/s/story/the-rise-fall-and-rise-of-artificial-intelligence-1d879ce0a01b
false
814
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
null
datadriveninvestor
null
Data Driven Investor
info@datadriveninvestor.com
datadriveninvestor
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
dd_invest
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Ken Tucker
A business consultant helping clients leverage technology strategically, particularly AI and Analytics.
d9e9e188f3b9
burloak26
52
15
20,181,104
null
null
null
null
null
null