Dataset columns (⌀ marks columns that contain null values):

| Column | Type | Range / values |
|---|---|---|
| audioVersionDurationSec | float64 | 0 – 3.27k ⌀ |
| codeBlock | string | lengths 3 – 77.5k ⌀ |
| codeBlockCount | float64 | 0 – 389 ⌀ |
| collectionId | string | lengths 9 – 12 ⌀ |
| createdDate | string | 741 values |
| createdDatetime | string | lengths 19 – 19 ⌀ |
| firstPublishedDate | string | 610 values |
| firstPublishedDatetime | string | lengths 19 – 19 ⌀ |
| imageCount | float64 | 0 – 263 ⌀ |
| isSubscriptionLocked | bool | 2 classes |
| language | string | 52 values |
| latestPublishedDate | string | 577 values |
| latestPublishedDatetime | string | lengths 19 – 19 ⌀ |
| linksCount | float64 | 0 – 1.18k ⌀ |
| postId | string | lengths 8 – 12 ⌀ |
| readingTime | float64 | 0 – 99.6 ⌀ |
| recommends | float64 | 0 – 42.3k ⌀ |
| responsesCreatedCount | float64 | 0 – 3.08k ⌀ |
| socialRecommendsCount | float64 | 0 – 3 ⌀ |
| subTitle | string | lengths 1 – 141 ⌀ |
| tagsCount | float64 | 1 – 6 ⌀ |
| text | string | lengths 1 – 145k |
| title | string | lengths 1 – 200 ⌀ |
| totalClapCount | float64 | 0 – 292k ⌀ |
| uniqueSlug | string | lengths 12 – 119 ⌀ |
| updatedDate | string | 431 values |
| updatedDatetime | string | lengths 19 – 19 ⌀ |
| url | string | lengths 32 – 829 ⌀ |
| vote | bool | 2 classes |
| wordCount | float64 | 0 – 25k ⌀ |
| publicationdescription | string | lengths 1 – 280 ⌀ |
| publicationdomain | string | lengths 6 – 35 ⌀ |
| publicationfacebookPageName | string | lengths 2 – 46 ⌀ |
| publicationfollowerCount | float64 | – |
| publicationname | string | lengths 4 – 139 ⌀ |
| publicationpublicEmail | string | lengths 8 – 47 ⌀ |
| publicationslug | string | lengths 3 – 50 ⌀ |
| publicationtags | string | lengths 2 – 116 ⌀ |
| publicationtwitterUsername | string | lengths 1 – 15 ⌀ |
| tag_name | string | lengths 1 – 25 ⌀ |
| slug | string | lengths 1 – 25 ⌀ |
| name | string | lengths 1 – 25 ⌀ |
| postCount | float64 | 0 – 332k ⌀ |
| author | string | lengths 1 – 50 ⌀ |
| bio | string | lengths 1 – 185 ⌀ |
| userId | string | lengths 8 – 12 ⌀ |
| userName | string | lengths 2 – 30 ⌀ |
| usersFollowedByCount | float64 | 0 – 334k ⌀ |
| usersFollowedCount | float64 | 0 – 85.9k ⌀ |
| scrappedDate | float64 | 20.2M – 20.2M ⌀ |
| claps | string | 163 values |
| reading_time | float64 | 2 – 31 ⌀ |
| link | string | 230 values |
| authors | string | lengths 2 – 392 ⌀ |
| timestamp | string | lengths 19 – 32 ⌀ |
| tags | string | lengths 6 – 263 ⌀ |
0
| null | 0
|
dab71a5c7e1f
|
2018-06-25
|
2018-06-25 21:59:11
|
2017-10-05
|
2017-10-05 08:00:42
| 2
| false
|
en
|
2018-06-25
|
2018-06-25 22:01:21
| 12
|
1a46aefcfba8
| 4.564465
| 1
| 0
| 0
|
Welcome to the fight of the century.
| 5
|
Will Robots Take Over the World and Destroy All Our Jobs?
Welcome to the fight of the century.
In one corner there are the prophets of doom, who say that no job is safe from automation, and economic chaos is inevitable. And in the other corner, we have the rosy optimists, who believe technology will usher in a new era of meaningful work, more leisure time, and improved quality of life.
And in the center of the ring…you’ll find the rest of us.
The truth is, both the naysayers and the optimists have a point. Automation and artificial intelligence (AI) are already disrupting many industries and causing sweeping economic changes in certain segments. But they are also creating opportunities for people to learn new skills and have more interesting work — perhaps even in job categories that don’t yet exist.
We’ve been here before
If we look back to the industrial revolution, we can see how factories and urbanization displaced the traditional roles of craftsmen, farmers, bakers, and artisans. People naturally felt threatened as technology rendered their skills obsolete and disrupted the social and economic order. Yet once we weathered that transition, the 20th-century economy created new opportunities — with jobs like autoworkers, airline pilots, and engine mechanics — that previous generations could never have imagined. This time is no different, the optimists argue. We’ll have short-term disruption, and then go on to even greater growth and productivity.
Well, that may be true. But try telling that to someone in Detroit or West Virginia. They were on the winning end of the industrial revolution in the last century, but the wheel has turned and now they are on the losing end of global disruption.
The effects of automation and AI are happening now. The U.S. steel industry, for example, lost 400,000 jobs — 75 percent of its workforce — between 1962 and 2005, while maintaining the same output. A new technology called the minimill was behind this job-killing productivity boost. And it’s not just in steel. A 2015 study from Ball State University found that U.S. workers lost 5.6 million jobs from 2000 to 2010 because of increased productivity across the spectrum of manufacturing.
Now, productivity is not a bad thing! But it does have an impact on employment. And while we could spend all day fretting over technology-driven job disruption, I’d prefer to focus on how to respond to it.
Silicon Valley must lead
Let’s start with the proposition that since technology is driving the disruption, technology can help us deal with it. A report from MIT predicts that AI will create as many new jobs as it eliminates. Like the industrial revolution, the report says, AI will create brand new jobs that never existed before. Imagine an “empathy trainer” for AI devices, or an AI ethics compliance auditor. Imagine an “explainer” who bridges the gulf between technology and business or government leaders. These new jobs aren’t just for those with technical skills. In fact, they rely on people with a liberal arts education to bring our humanity into AI applications.
Closer to the here and now, robots in Amazon’s warehouses are taking the heavy, repetitive jobs and creating opportunities for workers to do the more interesting work of monitoring and controlling the robots. Even with more than 100,000 robots in action around the world, Amazon continues to hire human workers at a dizzying rate.
Technology companies must lead in creating opportunities — whether through training and development programs, or by planning technology roadmaps that consider the human factor. A new report from the World Economic Forum (WEF) found that 25 percent of workers in developed countries say their skills don’t match their current jobs — and 35 percent of the skills needed for jobs will change by 2020.
It’s time for Silicon Valley to step up and take responsibility for helping the world deal with the disruption we’ve created. I’m proud that Cisco’s CEO Chuck Robbins is taking a leadership role in addressing this challenge. As chair of the WEF IT steering committee, he is leading an industry-wide global initiative on giving workers the skills they will need to be prepared for the new jobs of this century. He believes we can retrain masses of workers for the new jobs of the future. It’s a matter of vision and will.
The rise of the gig economy
As traditional jobs disappear, many people are moving into self-employment, freelancing, and task-based “gigs.” We need to take a new look at how we support workers in these non-traditional work roles. Shift: The Commission on Work, Workers, and Technology suggests this trend might give us the opportunity to “explore alternative arrangements: networks of small businesses, modern guilds, worker associations, and entrepreneurship training, while at the same time facilitating new ways to administer worker benefits.”
On the other hand, there is a dark side to the gig economy: “gigs” that are really full-time jobs with the benefits stripped away. The whole point of the gig economy is being able to choose the work you want to do, when and where you want to do it. When that choice is absent, it is no surprise that lawsuits are on the rise against the likes of Uber, pointing out the need for regulation to keep up with reality.
Cisco HR director Gianpaolo Barozzi has pointed out an interesting paradox: “…digital technologies are the potential cause for dramatic economic, employment and social collisions, drivers for a de-humanized future; at the same time they could be the force taking ‘work’ back to a much more human dimension.” By supporting the interconnected, yet self-directed work structures of the gig economy, Gianpaolo believes we enable human workers “to be the center and the core of the new world of work, yet keeping all the advantages of the post-industrial society and the economy we live in.”
Disruption of work is inevitable. It’s how we respond that matters.
It’s a choice
Disruption of work is inevitable. It’s how we respond that matters. We can focus on developing technologies that can complement and work alongside human workers. We can work hard at training and educating workers for the skills they will need. We can create new structures that support the needs of workers who are trying to make their own way in the gig economy.
The issue is ripe for innovation. That’s why we’re focusing on the Future of Work at our next Cisco Hyperinnovation Living Labs event. If you’re interested in co-creating industry-shifting solutions that can help shape the future, let’s talk.
Originally published at blogs.cisco.com on October 5, 2017.
|
Will Robots Take Over the World and Destroy All Our Jobs?
| 6
|
will-robots-take-over-the-world-and-destroy-all-our-jobs-1a46aefcfba8
|
2018-06-26
|
2018-06-26 03:36:27
|
https://medium.com/s/story/will-robots-take-over-the-world-and-destroy-all-our-jobs-1a46aefcfba8
| false
| 1,108
|
CHILL is a co-innovation catalyst for Cisco and our customers.
| null | null | null |
Cisco CHILL
|
AreYouIn@cisco.com
|
cisco-chill
|
INNOVATION,INNOVATION LAB,INCLUSION,PROTOTYPING
|
katecokeeffe
|
Technology
|
technology
|
Technology
| 166,125
|
Kate O'Keeffe
| null |
5efa882eb11
|
kateochill
| 2
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-22
|
2018-04-22 01:55:51
|
2018-04-22
|
2018-04-22 01:56:23
| 1
| false
|
en
|
2018-04-22
|
2018-04-22 01:56:23
| 1
|
1a46f09803f5
| 0.69434
| 1
| 0
| 0
|
Not only does it know when you’re home but the Nest Learning Thermostat also knows when you’re nearby. Here’s how it works.
| 5
|
How does the Nest Learning Thermostat work?
Not only does it know when you’re home but the Nest Learning Thermostat also knows when you’re nearby. Here’s how it works.
You crank up the heat to 70 and walk away. Nest then immediately returns to 62 degrees.
Thinking there must be something screwy with the algorithm, you turn it back up to 70. Nest knows that it’s in trouble, so it displays a comforting message like ‘Heat set until 10pm’, waits for you to leave, and then sets the temperature back to 62 degrees.
Giving up on the learning part you use the app to manually program it to keep the heat on. Nest now uses its WiFi connection to phone the gas company and disconnect your service.
Originally published at ithoughthecamewithyou.com.
|
How does the Nest Learning Thermostat work?
| 1
|
how-does-the-nest-learning-thermostat-work-1a46f09803f5
|
2018-06-13
|
2018-06-13 03:06:11
|
https://medium.com/s/story/how-does-the-nest-learning-thermostat-work-1a46f09803f5
| false
| 131
| null | null | null | null | null | null | null | null | null |
Nest
|
nest
|
Nest
| 192
|
Robert Ellison
|
I Thought He Came With You
|
791d92a60936
|
abfo
| 157
| 37
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-04
|
2017-11-04 10:40:22
|
2017-11-18
|
2017-11-18 13:53:52
| 9
| false
|
en
|
2017-11-23
|
2017-11-23 04:09:35
| 4
|
1a492adf066e
| 3.528302
| 3
| 0
| 0
|
EP#1 - Getting a kick start
| 5
|
Journal from 0 to Hero with Data Science and Machine learning (Python) EP1
EP#1 - Getting a kick start
Data scientist has been called the sexiest job of the 21st century, but why? I’m pretty sure that nowadays many people have heard of data science and machine learning. For those who haven’t, let me explain a little about it. This episode talks about how to set up the software used for data science and machine learning, and gives a walkthrough of the setup and basic functions.
Some topics that relate to data science
Data Science is the study of where information comes from, what it represents and how it can be turned into a valuable resource in the creation of business and IT strategies.
Machine Learning
Data science from the past to present
“The world is one big data problem.” ~ Andrew McAfee
After a long time researching data science, I found what is interesting about it and what makes it so popular, and why every business company needs to hire this kind of person for good. Fortunately, I had the chance to enroll in courses on Udemy to find out what “Data Science” is.
My passion
I need to learn programming because my major is computer engineering
I love reading business and marketing articles with statistical data diagrams
I have some experience with Python and R
Why Data science & Machine Learning
Data scientist jobs are currently in high demand in the IT market.
It’s powerful to create tools for predicting the future, and it’s cool.
Data science is not only about programming; you’ll also need skills like data analytics, statistics, business modelling, etc.
What you need
Python
Python is a programming language that lets you work quickly and more effectively. What you’ll need is to learn some basic Python, such as syntax, conditions, functions, OOP, etc.
Python’s syntax is simple, clean, and makes code easy to understand. It’s easier for those who hate the semicolon ‘;’ like me. You can take a course on the internet, for example on Udemy, or study on your own with the Python library documentation. Trust me! Python is fun to learn.
https://www.python.org
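A minimal sketch of those basics (syntax, a condition, a function, and a tiny class); the names and values here are invented for illustration, not from any particular course:

```python
# A taste of Python basics: a function, a condition, and a small class.

def describe(reading_time):
    # Conditions use plain indentation -- no semicolons or braces needed.
    if reading_time < 3:
        return "quick read"
    return "long read"

class Article:
    """A tiny class showing Python's object-oriented side."""
    def __init__(self, title, reading_time):
        self.title = title
        self.reading_time = reading_time

    def summary(self):
        return f"{self.title}: {describe(self.reading_time)}"

post = Article("EP1", 3.5)
print(post.summary())  # EP1: long read
```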
Anaconda
Anaconda is software that includes useful libraries for data science and machine learning.
Anaconda already comes with almost every library for data science, so you won’t need to install them the hard way. It even saves you time.
https://www.anaconda.com
Jupyter Notebook
Once you install Anaconda, it already includes Jupyter and many other tools like JupyterLab, Spyder, RStudio, Orange3, etc.
Jupyter gives you a fast and easy way to interact with your code, and it is open-source software, so you can customize your own program. Jupyter supports not only Python but also R, Scala, and Julia.
An example of using pandas, NumPy, and Matplotlib, which are necessary libraries for doing data science
For further details about Jupyter:
http://jupyter.org
Libraries
With these libraries you can use many functions that cover everything you need for data analytics, data visualization, and machine learning, such as:
For data visualization and analysis
Pandas
NumPy
Matplotlib
seaborn
For Machine Learning
Scikit-learn
Spark
Hadoop
TensorFlow
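As a small taste of how these libraries work together, here is a sketch using pandas and NumPy for analysis and scikit-learn for a first model. The dataset and column names (`study_hours`, `score`) are made up purely for illustration:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Build a small illustrative dataset with NumPy and load it into pandas.
rng = np.random.default_rng(0)
hours = rng.uniform(1, 10, size=50)
scores = 5 * hours + rng.normal(0, 2, size=50)
df = pd.DataFrame({"study_hours": hours, "score": scores})

# Quick analysis: summary statistics and correlations.
print(df.describe())
print(df.corr())

# A first machine learning model with scikit-learn.
model = LinearRegression()
model.fit(df[["study_hours"]], df["score"])
print("Predicted score for 8 hours:",
      model.predict(pd.DataFrame({"study_hours": [8.0]}))[0])
```

Matplotlib or seaborn would then let you plot `df` to see the trend the model fits.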
I will explain all these libraries in detail in another episode.
Conclusion
In my opinion, data science and machine learning are useful and good for anyone who is interested in computer programming or data analytics. It’s fun to learn: you’re going to deal with data in the real world and learn how to use all these tools properly. In my everyday life I’ll try to keep up with the technology that comes across data science, and I hope this will be useful for anyone who wants some information to get a fresh start in data science.
If you liked the article, give me a clap and follow me. I’ll share more about my experiences, along with tutorials about data science and machine learning.
|
Journal from 0 to Hero with Data Science and Machine learning (Python) EP1
| 3
|
journal-from-0-to-hero-with-data-science-and-machine-learning-python-1a492adf066e
|
2017-11-23
|
2017-11-23 04:09:35
|
https://medium.com/s/story/journal-from-0-to-hero-with-data-science-and-machine-learning-python-1a492adf066e
| false
| 617
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Punchok Kerdsiri
|
I’m person who loves technology and sharing my experience about IT and my jobs from the past to present.
|
9dc67e621117
|
punchokkerdsiri
| 46
| 58
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
9f879f9889b4
|
2018-06-27
|
2018-06-27 14:52:07
|
2018-06-28
|
2018-06-28 08:26:27
| 1
| false
|
en
|
2018-06-28
|
2018-06-28 08:26:27
| 1
|
1a4bc31af95a
| 11.437736
| 0
| 0
| 0
|
Artificial intelligence, or AI, is one of the most popular business buzzwords in circulation today. Many people are talking about how AI…
| 1
|
Picture by Andy Kelly
Finding Value in AI: Applied AI and Social Technologies
Artificial intelligence, or AI, is one of the most popular business buzzwords in circulation today. Many people are talking about how AI will transform how we live, work, and play. Journalists, politicians, futurists, and technology companies are also talking about how people all over the world will lose their jobs after they are replaced by thinking machines that do everything better than their lowly human colleagues. Companies of all kinds are asking themselves how AI can help them compete better, be more innovative, and be more efficient. The idea of AI has captured our imagination so completely because it promises the ultimate technological dream: a machine that can do more than a person ever could.
Much of this is just hype. The truth about AI is relatively boring compared to the utopian fantasies that suggest we will all soon be living a life of leisure while our computerized friends do all of the work. Until recently, AI has been little more than an area of advanced computing. But with new developments — the ones that are generating all of this hype — artificial intelligence is poised to become a truly transformative technology.
To unlock AI’s potential, we need to look beyond the technology.
Despite this potential, however, the real value of AI has yet to be realized in business. It already is an amazingly useful technology and is full of potential for the future. But its usefulness is being missed among the fantasies and problematic dialogue. This is because few understand how it works, let alone how to unlock its potential. Business leaders are not AI specialists. For many, AI is still largely just a part of the world of big data, or it is just a magical black box that you must “get” in order to compete. In part, this failure to see what AI technologies can do is due to the fact that there is a fundamental problem surrounding AI that needs to be addressed. Simply put, the hype is clouding our ability to see what we should do with this technology. And this is leading to a crisis of missed opportunities because it is not being applied well.
This story of AI has deep roots and great meaning, and it shapes how we understand what AI is and what it should be doing for us. Many of our most sophisticated AI technologies have been around for over forty years in some form or another. And we have been living with many of these technologies for years. This long history confuses the issue because much of what AI is now are technologies and processes we’ve known and used for years. Also, what we mean when we say “AI” today is only one of many definitions, and we’ve lost sight of the fact that our computer-aided world is full of carefully applied variants of AI technologies.
Visions of artificial beings have been around for five hundred years or more in one shape or another. Medieval craftsmen made mechanical monks, the earliest artificial humanoid machines. Inventors and conmen in the 18th and 19th centuries made automatons that could play chess and even speak. The science fiction of the last 100 years has added greater detail to this vision and told stories of the potential positives and negatives of creating artificial beings with thoughts of their own. All of these attempts and stories have whetted our appetites for AI and AI-enabled robots, and contributed to a feeling of the inevitability of AI’s “future” trajectory. They all tell the story that machines can match human capacities and be creative, reasoning, and social beings.
We must stop looking at AI as something that needs to be fully intelligent, and start looking at it as something smaller that can be applied to improve people’s lives and create new opportunities for creating compelling experiences.
But because these stories are so caught up in the idea of general AI — a thinking intelligence that has similar, or greater, capacities to a human — they are also a burden. They hide the fact that AI is a tool that can provide solutions to many problems. While we are distracted by the idea of an artificial person, AI’s true utility as an applied solution is lost.
Making the most of AI’s potential lies in finding a purpose for it. We need to focus on applied AI, and finding ways to put AI to work now. But applied AI is not as sexy an idea as general AI. Instead of autonomous robots that can do anything, we will have more humble systems managing smaller problems. The future of well-applied AI will include self-driving trucks and trains that move cargo. It will feature computer systems that anticipate problems in traffic flow and make the necessary corrections to keep cities moving. It will also be filled with AI enhanced consumer experiences that allow us to navigate web touchpoints more easily.
To unlock AI’s potential, we need to look beyond the technology. We need to start asking good questions about how we should mobilize our technological forces to accomplish concrete tasks or solve important problems. We must begin to ask ourselves how AI can make a difference in all of our lives. Finally, we should also ask how companies can best apply artificial intelligence to create products and services that will be truly valuable to their customers.
To answer these questions, we must stop looking at AI as something that needs to be fully intelligent, and start looking at it as something smaller that can be applied to improve people’s lives and create new opportunities for creating compelling experiences.
What this means is that providing value right now, or in the next ten years, through the use of AI involves five points to consider:
You have to first cut through the hype and see AI as a set of tools that serve a function.
Next, you must assess who you are solving for and understand their problems.
Then you must design a better outcome for you and your customers.
Once this is done, you need to find the right kind of AI to fit the purpose.
Finally, you have to apply it with a light touch.
1. CUT THROUGH THE HYPE
Cutting through the hype means understanding AI as a tool and avoiding the belief that the future of AI lies in general AI alone. Like other technologies, AI is very good at scaling tasks that humans can do already. It adds speed, a wider global reach, includes more people, and provides a greater level of repeatability. Its primary value is that it can do the work of many humans, faster, without tiring. Because it never has to be turned off, it also provides continuous presence. It is best employed to manage large amounts of information or to find novel or surprising patterns in information-rich contexts that quickly confuse individual humans. But all of these things are somewhat abstract. These capacities only come into focus when you put AI to work on a specific task. And this is what it means to apply AI.
This is the most important thing to understand about applied AI: the context, goal, and desired outcome matter more than the technology.
AI can play chess and Go, but these are goal-oriented applications of pattern management and decision engines. IBM’s premier AI system, Watson, can play Jeopardy!, but it is not actually a single AI system. It is several AI systems connected together to accomplish a task: playing the game. It is a natural language processor, two data retrieval systems, a decision engine, a set of trained models and filters, and a final natural language processor. It reads the clue, deconstructs it, explores unstructured data for possible answers, scores the validity of these answers, synthesizes these results using human and taught filters, and then constructs an answer. But these systems were built to serve a purpose, and it took time and human effort to figure out how to best tackle the problem of providing a good answer within the rules of a game. If Watson had been designed to do something else, it would have different components. It took time and effort to generalize it — to convert it from a Jeopardy!-playing machine into a sophisticated analytics engine.
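The chain described above (read the clue, deconstruct it, retrieve candidate answers, score them, construct a response) can be pictured as a toy pipeline. Everything below, from the function names to the two-entry "knowledge base", is hypothetical and only illustrates the flow, not Watson's actual implementation:

```python
# A toy question-answering pipeline in the spirit of the chain described above.
# All names and data here are hypothetical illustrations, not Watson internals.

KNOWLEDGE = {
    "capital of france": "Paris",
    "author of hamlet": "William Shakespeare",
}

def parse_clue(clue):
    # "Natural language processing" reduced to lowercasing and trimming.
    return clue.lower().strip(" ?.")

def retrieve_candidates(query):
    # Data retrieval: collect every fact whose key shares a word with the query.
    words = set(query.split())
    return [(key, value) for key, value in KNOWLEDGE.items()
            if words & set(key.split())]

def score(query, key):
    # Decision engine: rank candidates by word overlap with the query.
    return len(set(query.split()) & set(key.split()))

def answer(clue):
    query = parse_clue(clue)
    candidates = retrieve_candidates(query)
    if not candidates:
        return "I don't know"
    best_key, best_value = max(candidates, key=lambda kv: score(query, kv[0]))
    # Final language step: phrase the response as a question, Jeopardy! style.
    return f"What is {best_value}?"

print(answer("Capital of France?"))  # What is Paris?
```

Each stage here is trivial, but swapping any one of them for a serious component changes the whole system, which is exactly the point: the pipeline is shaped by the task.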
This is the most important thing to understand about applied AI: the context, goal, and desired outcome matter more than the technology. They guide how AI is implemented, and this defines how it is built. What this means is that applied AI is really more about design and problem solving than it is about raw technology. Yes, expertise in AI is absolutely necessary, but an engineer cannot work without a problem to solve. When we think of applied AI not as a fabulous technology, but as a resource in a design exercise focused on making life better for customers and users, we find the true value of AI.
2. ASSESS WHO YOU ARE SOLVING FOR AND UNDERSTAND THEIR PROBLEMS
The next step in correctly applying AI is to identify a problem that AI can solve or improve. To do this you have to understand the lives of the people who will be using the system or who will benefit from its application.
Today, AI can be found analyzing natural language to serve as an interface between people and relatively dumb systems like Amazon’s retail business. It is also serving to improve searches on the web, or improve the quality of Google’s translation services. In each of these examples, the AI is an enhancement of something that existed before. People were able to order books, socks, or anything else from Amazon, but Alexa provided an application of natural language processing that created a new, effortless interface. The incredible work done by Google’s team in improving their translation services was a modification of an existing product. Newer forms of neural network processing augmented their existing expert-system AI translation and made it faster and more natural.
In both of these cases, the application of AI improved something that people were already using. It made users’ experiences better. In both cases, however, these improvements were made to serve people better, and in more natural ways. Applied AI should be used as a tool to enhance people’s lives. Both of these examples demonstrate a human-centric approach to design. The AI was used to solve problems like lag, or to create new experiences that engaged people more directly and naturally.
Understanding your customers always begins with learning about them directly. What this means is that the process of applying AI well begins with human-centric investigations. An ethnographic exploration of your customer’s lives will reveal pain-points, needs, hopes, and desires that can be addressed through careful design.
3. DESIGN A BETTER OUTCOME
Once you understand your customers better, you can begin to design new experiences, products, and solutions that will meet their needs. This design process should include a reimagining of your relationship with your customer. This will reveal the opportunities where AI can be applied to the greatest effect. But the opportunities are ways to create new products, services, or experiences that are enhanced with some form of AI, not opportunities to use artificial intelligence. Because of this, the design process best suited to this task is one that is based in the iterative design principles of design thinking.
4. FIND THE RIGHT KIND OF AI
Next, look for opportunities to use AI to enhance the function, scope, and purpose of your newly designed solutions. To do this, it is best to begin with an analysis of what you hope to accomplish. Do you want to remove a pain point? Do you want to enhance the abilities of your users so they can do something with the help of the AI that they could not do before? Do you want to include more people in completing a task? Or do you want to create a novel experience that was not possible without the application of AI?
These questions are essential because they outline the relationship between the user and the technology. These relationships define what the AI must do. Understanding them helps you decide what kind of AI will be needed to provide value for your customers.
It can expand or amplify the capacities of a user: The user is able to expand their own abilities beyond what they could accomplish alone. The tool can be seen as an adaptation of their body, knowledge, or information processing abilities. The tool and the user work together to accomplish the task and the relationship expands what is possible. Examples of this include centaur systems that help people problem solve or strategize, such as the forecasting platforms used by the financial industry. They help expand the reach of an analyst by providing them with information they would not be able to gather quickly otherwise.
It can shift a user’s relationship with the skills and/or knowledge needed to accomplish a task: The tool displaces some of the capacities or responsibilities of the user and assumes them itself. The user is either relieved of unwanted responsibilities, or must give up some of their own. One example is the centaur systems that are used in Free Chess, a variant of the game where a computer and a human play chess together against another team. The player and computer share the responsibility of choosing the next move, something that has created a team that is better than human or computer alone.
It can change the relationship between a user, or a group of users, and the skills necessary to complete a task: The tool transfers capacities or responsibilities between other users or other tools. The tool allows for the sharing or exchange of roles or responsibilities. It can often be a platform for the redistribution of user’s roles. This relationship is a familiar one. When people talk to Siri, Alexa, or Google Assistant, they are giving up a lot of the responsibility for what is needed to complete a task. The AI takes care of everything once it understands the request. These NLP (natural language processing) interfaces understand what you want and translate the request into something that the computer systems in the background can process. You could do the Google or Amazon search yourself, instead they do it for you, and make some decisions on your behalf.
It can change the number or type of resources needed to complete a task: The tool combines, collapses, or eliminates roles or responsibilities. The tool allows users to accomplish tasks that might require multiple users, or to eliminate entire sub-tasks, thus eliminating roles. Machine learning has transformed how we process information. Technically, humans can do this kind of processing too; it would just take thousands of people and a very long time to complete the task. The AI eliminates the need for all of these resources and can process it quickly. This is the relationship that is at the core of the threat to workers’ jobs, because AI can eliminate many of them by providing a quicker, lower-resource route to completing a task like data processing and analytics.
It can produce novel solutions to problems or tasks: The user is able to do something they otherwise could not without the tool. The tool and user work in conjunction to achieve either a new task or to do it in a novel way. This is the rarest of the alterations. This is what we are really working towards with AI. We are looking for new solutions to problems. One very simple example of this is the fact that Google has put image recognition into its Google Photos application. Now you can use a keyword to search for the content of particular photos. It is an image processing AI that provides the foundation for this service.
Like Watson, no AI system is going to be a single technology. The secret to successful applications of AI lies in using it to create one of these relationships. Once you know what you want to do for your customers, you can then begin to design the system that will carry it out. This is simply about finding the best tool for the job. In most cases, you are not going to be creating this yourself, so finding the right technology partner will be an essential part of the next step.
5. APPLY WITH A LIGHT TOUCH
Finally, remember that the goal of any new product, service, or experience is to provide the best solution for your customer. Avoid a technology-first approach that puts the technology ahead of customers’ needs; use your new technology to serve the needs you identified earlier. Technology works best when it is invisible and works in the background, and artificial intelligence is no different. People should never know that they are dealing with AI. They should feel that they are being served well by the company they have chosen to provide them with the product, service, or experience.
Like any technology, applied AI is really nothing more than a tool we use to accomplish a task. Its value lies in how well it does this, which ultimately means its value is measured by how well companies use it to serve their customers. Once the novelty of artificial intelligence wears off, this is all that will be left. So, to unlock the value of AI, it is best to look past the technology completely and simply serve your customers well.
_______________________________
For more information about Gemic and how we might be able to help with your business challenges please get in touch:
Johannes Suikkanen
johannes.suikkanen@gemic.com
+1 212 961 6515
Sakari Tamminen
sakari.tamminen@gemic.com
+358 50 361 4650
|
Finding Value in AI: Applied AI and Social Technologies
| 0
|
finding-value-in-ai-applied-ai-and-social-technologies-1a4bc31af95a
|
2018-06-28
|
2018-06-28 19:47:31
|
https://medium.com/s/story/finding-value-in-ai-applied-ai-and-social-technologies-1a4bc31af95a
| false
| 2,978
|
Gemic's take on where we are headed
| null | null | null |
The Morrow
|
eelis.nguyen@gemic.com
|
the-morrow
|
INSIGHTS,FORESIGHT,ANTHROPOLOGY,SEMIOTICS,INNOVATION
|
gemic
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Gemic
|
Markets are made up of real people. Forward-looking companies partner with us to reimagine their role in people's lives.
|
89bdf537b17
|
gemic
| 13
| 1
| 20,181,104
| null | null | null | null | null | null |
0
|
vectorizer = TfidfVectorizer(ngram_range=(1, 500))
| 1
| null |
2017-10-27
|
2017-10-27 22:33:16
|
2017-10-28
|
2017-10-28 02:37:36
| 12
| false
|
en
|
2017-10-28
|
2017-10-28 02:45:06
| 1
|
1a4d9d5ba951
| 3.082075
| 4
| 0
| 0
|
People occasionally posted job-related posts to my college Facebook group
| 2
|
Rocchio and KNN — Job post detection
People occasionally posted job-related posts to my college Facebook group
Example 01
Example 02
Posts like this often get a response from a person directing the poster to a specific group.
Example
yea, Ollie, this is for you
Wouldn’t it be easier if you could automate it?
Seriously, this is literally the first thing that came to my mind after seeing the same person (“yea, this is for you, Ollie”) tirelessly directing people to the specific group for this specific kind of post.
Here’s what I can do: I can write a bot that automatically flags whether a post is a job post or not, and then automatically posts a response if it is.
In this article, I will mainly discuss the algorithms behind my job post detection system, the Rocchio classifier and K-nearest neighbors (KNN), in a TF-IDF vector space model.
Before that, let’s briefly introduce the ideas behind these two algorithms.
Rocchio classifier:
Given a list of vectors in category “A” and a list of vectors in category “B”, we can sum the vectors in each category to form its corresponding prototype vector.
Then, for any query submitted to the system, we can determine whether it belongs to category “A” or “B” by computing its cosine similarity to each prototype vector.
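As a concrete sketch of these two steps (toy 3-dimensional vectors made up for illustration, not the actual post data):

```python
import numpy as np

# Toy document vectors for each category (hypothetical values).
docs_a = np.array([[1.0, 0.2, 0.0], [0.8, 0.4, 0.1]])  # category "A"
docs_b = np.array([[0.1, 0.9, 0.7], [0.0, 1.0, 0.6]])  # category "B"

# Rocchio: one prototype (centroid) vector per category.
proto_a = docs_a.mean(axis=0)
proto_b = docs_b.mean(axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def rocchio_predict(query):
    # Assign the query to the category with the closest prototype.
    return "A" if cosine(query, proto_a) >= cosine(query, proto_b) else "B"

print(rocchio_predict(np.array([0.9, 0.3, 0.0])))  # → A
```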
KNN classifier:
KNN, also known as K-nearest neighbors, determines the category of a query by computing its cosine similarity to each document in the corpus and taking the majority vote among the K closest ones.
In this example, K is 5, and the majority of the vectors returned are in “A” (A, A, A, B, B). Therefore the query belongs to category “A”.
Here’s my approach: I have a corpus of job and non-job posts, five of each. I use the popular sklearn library to transform the posts into TF-IDF vectors, and then compare the accuracy of KNN (K defaults to 3) and the Rocchio classifier on test cases (also five of each).
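A minimal version of this comparison might look like the following sketch. The tiny corpus is made up for illustration, and sklearn’s NearestCentroid stands in for the Rocchio classifier (it classifies by the nearest class centroid, the same prototype idea):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid

# Hypothetical tiny corpus standing in for the Facebook posts.
train_posts = [
    "hiring software engineer apply now",
    "job opening for interns, send resume",
    "we are recruiting a part time developer",
    "internship position available this summer",
    "apply for this marketing job today",
    "anyone up for lunch at the dining hall",
    "selling my old textbooks cheap",
    "lost my keys near the library",
    "movie night in the common room tonight",
    "does anyone have notes from lecture",
]
labels = ["job"] * 5 + ["non-job"] * 5

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train_posts)

# KNN with K=3 and cosine distance; NearestCentroid as a Rocchio stand-in.
knn = KNeighborsClassifier(n_neighbors=3, metric="cosine").fit(X, labels)
rocchio = NearestCentroid().fit(X, labels)

test = vectorizer.transform(["company recruiting interns, apply with resume"])
print(knn.predict(test)[0], rocchio.predict(test)[0])
```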
Here’s the code
And here’s the original result
KNN requires a bigger corpus, since it takes the majority among the K documents returned; the bigger the corpus, the higher the accuracy.
1-nearest neighbor
For now, with a corpus size of 10, it’s best to keep K small too.
Oh! What happens if K is 10 with a corpus of 10?
A good example: I have a category for “uncle” on my mother’s side and another for “uncle” on my father’s side. If I submit a vector that mentions “uncle”, which category should it fall into?
On the other hand, Rocchio may do poorly with polymorphic vectors: if both prototype vectors are really close to each other, which one should the system return?
Also, if I allow the vectorizer to use n-grams, the accuracy increases, because there are specific phrases people use in job-related posts.
That sums up how the job post detection system works.
One final thing: using sklearn for machine-learning work is really worthwhile!
Github repo
|
Rocchio and KNN — Job post detection
| 4
|
rocchio-and-knn-job-post-detection-1a4d9d5ba951
|
2018-05-16
|
2018-05-16 11:49:37
|
https://medium.com/s/story/rocchio-and-knn-job-post-detection-1a4d9d5ba951
| false
| 459
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Kiu Lam
|
Kiu at work; Nicolas outside;
|
13ed0567ad4b
|
kiulam
| 5
| 0
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
a0cad8bd07a5
|
2018-07-18
|
2018-07-18 09:16:36
|
2018-07-18
|
2018-07-18 09:20:07
| 1
| false
|
en
|
2018-07-18
|
2018-07-18 09:20:07
| 2
|
1a4ef26fdb1a
| 1.015094
| 0
| 0
| 0
|
Please welcome Ali Hosseini to the Fetch.AI team. Ali has a PhD and MSc in Artificial Intelligence from King’s College London, with a…
| 5
|
Photo by Christopher Burns on Unsplash
Kings College AI research and software engineer Ali Hosseini joins Fetch.AI
Please welcome Ali Hosseini to the Fetch.AI team. Ali has a PhD and MSc in Artificial Intelligence from King’s College London, with a background in Software Engineering. He’s had an excellent academic track record with publications in international conferences and AI journals. His work has achieved various prizes and awards.
Ali is experienced in a broad range of problem areas within artificial intelligence and multi-agent systems. This includes logical and uncertain reasoning, ontological modelling, and machine learning within the field of AI. He’s also worked on agent-based modelling, agent interaction and communication mechanisms, and game theoretical analysis.
For the past six years, Ali’s research has focused on computational models of argumentation and dialogue, as paradigms that enable and merge communication and reasoning. Specifically, Ali has focused on bringing formal models of argumentation and dialogue closer to humans’ utilisation of dialogue as a means of facilitating reasoning in individual and social settings.
In a professional setting, Ali enjoys working with modern and cutting edge technologies by combining analytical skills with creativity. Outside work, you can find him playing strategy games; the likes of Warcraft 3 and Starcraft.
We’re delighted to welcome another outstanding AI specialist to the team here at Fetch.
|
Kings College AI research and software engineer Ali Hosseini joins Fetch.AI
| 0
|
kings-college-ai-research-and-software-engineer-ali-hosseini-joins-fetch-ai-1a4ef26fdb1a
|
2018-07-18
|
2018-07-18 09:20:08
|
https://medium.com/s/story/kings-college-ai-research-and-software-engineer-ali-hosseini-joins-fetch-ai-1a4ef26fdb1a
| false
| 216
|
AI and digital economics company
| null |
fetchaiplatform
| null |
Fetch.AI
|
info@fetch.ai
|
fetch-ai
|
BLOCKCHAIN TECHNOLOGY,CRYPTO,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,ECONOMICS
|
fetch_ai
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Fetch.AI Team
| null |
1e76c0da4d03
|
foth
| 9
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
7b837cf1fd73
|
2018-09-07
|
2018-09-07 23:47:11
|
2018-09-08
|
2018-09-08 07:59:45
| 2
| false
|
en
|
2018-09-19
|
2018-09-19 10:10:28
| 5
|
1a501abbeb06
| 2.194654
| 12
| 0
| 0
|
The Institution of Engineering and Technology (IET) is a global non-profit headquartered in the UK. It is one of the world’s largest…
| 3
|
A tale of two A’s — From Accra to Astana
The Institution of Engineering and Technology (IET) is a global non-profit headquartered in the UK. It is one of the world’s largest engineering institutions with over 168,000 members in 150 countries. Being multidisciplinary, the organization reflects the increasingly diverse nature of engineering in the 21st century. The IET is working to engineer a better world by inspiring, informing and influencing its members, engineers and technicians, and all those who are touched by, or touch, the work of engineers.
IET — PATW emblem
Present Around The World (PATW) is the IET’s global competition for young professionals and students within engineering and technology to develop and showcase their presentation skills. The presentation lasts ten (10) minutes and it’s judged based on presentation skills (70%) and technical content (30%). On the 25th of May, I participated in the Ghana Local Network’s PATW competition held at the training conference room of the LEKMA hospital in Accra.
The story began when Ivy Barley shared a call for applications on the Developers In Vogue cohort one’s WhatsApp group page. Seeking the next adventure, I took up the challenge and signed up. As part of the application process, the first task was to submit a presentation topic and a summary in fewer than 150 words. With an interest in data science and immense support from Yvette Kondoh (data scientist at Superfluid), I settled on working with the publicly available Pima Indian dataset on Type 2 diabetes. My presentation topic was “Predicting type 2 diabetes using data science.” The core of the presentation took the audience on a journey through the data science process (problem statement, gathering data, exploratory analysis, modeling, and communication of results) in predicting an individual’s risk of developing type 2 diabetes, based on certain diagnostic measurements. At the end of the competition, I was adjudged the winner of the Local Network competition.
I went on to represent Ghana at the E.M.E.A (Europe Middle East and Africa) regional competition in Astana, Kazakhstan — August 11, 2018. The event was held at the prestigious Nazarbayev University — an autonomous research university founded by the president of Kazakhstan in 2010. There were 17 other presentations by young professionals from 17 different countries in the E.M.E.A region (France, U.A.E — Dubai, Pakistan, Mauritius, Greece, Cyprus, Switzerland, Nigeria, Sudan, Kazakhstan, Qatar, Oman, Ireland, Czech Republic, Saudi Arabia, Bahrain and Malta).
E.M.E.A regional finalists and community committee members — Nazarbayev University, Astana
Although I didn’t emerge winner of the regional competition, the entire experience has been worthwhile. There’s been a boost not just in my presentation skills but also in my research and team-building skills. Additionally, I’ve had the pleasure of interacting and sharing ideas with some of the brightest minds in the region. And on the fun side, I can now boast of visiting one of the most modernized cities in Central Asia: Astana!
|
A tale of two A’s — From Accra to Astana
| 167
|
https-medium-com-j-jamilafarouk-a-tale-of-two-as-from-accra-to-astana-1a501abbeb06
|
2018-09-19
|
2018-09-19 10:10:28
|
https://medium.com/s/story/https-medium-com-j-jamilafarouk-a-tale-of-two-as-from-accra-to-astana-1a501abbeb06
| false
| 480
|
The official Journal blog
|
blog.usejournal.com
|
usejournal
| null |
Noteworthy - The Journal Blog
| null |
did-you-know-the-journal-blog
|
STARTUP,PRODUCTIVITY,ENTREPRENEURSHIP,TECH,TECHNOLOGY
|
usejournal
|
Presentation Skills
|
presentation-skills
|
Presentation Skills
| 284
|
Jamila Farouk Jawula
|
Data | Knowledge | Action
|
f7afa97cb22a
|
j.jamilafarouk
| 21
| 15
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
f5af2b715248
|
2018-06-17
|
2018-06-17 14:52:58
|
2018-09-21
|
2018-09-21 06:11:32
| 4
| false
|
en
|
2018-09-21
|
2018-09-21 23:56:01
| 4
|
1a50ecc2ce06
| 4.367925
| 10
| 0
| 0
|
Chatbots are great but aren’t for some businesses
| 5
|
5 Reasons Why Your Business Doesn’t Need Chatbots
credit: giphy.com
I have been in the artificial intelligence and chatbot business for more than a year now. That’s a short time in most industries, but in the AI world it’s a heck of a long time.
I learned a few things along the way and while I truly believe that artificial intelligence applications will become mainstream in the years to come, we are not there yet. Robots won’t take over the world. At least not for now.
There are plenty of wannabe artificial intelligence experts, ClickFunnels mumbo-jumbo adepts, and bot builders who will tell you the contrary. They’ll list reasons your business should get a bot right now. They will swear that if they build a chatbot for your business, customers will be lining up for your services: build it and they will come.
credit: giphy.com
While there has been significant progress in text and speech recognition, today’s AI systems’ capabilities are still pretty basic. They are useful only for a limited number of use cases.
In fact, I recommend to nearly half of my new customers that they not invest in a chatbot. Maybe I’m stupid and I should be taking their money anyway, but that’s not how I build business relationships.
This is why I decided to come up with a list of five reasons why your business doesn’t need a chatbot. It’ll be up to you to decide if a bot is the right tool for you or not.
Your Customers Need a Personal Touch
Automation is great but the best customer service is provided by humans, period. In a recent Korn Ferry’s survey, 43% of people reported that they prefer to communicate with a human first and foremost.
If your customers need to express their needs in a peculiar way, or if your first interaction with a customer is driven by an emotional trigger, a bot may not be the greatest first contact with the services you want to sell them.
In fact a McKinsey study on automation technologies such as machine learning and artificial intelligence discovered that activities involving complex human interactions such as communication with customers had less than 30% chance of success.
Bots can handle straightforward, ping-pong-like dialogues but have a hard time dealing with complex interactions. This means they can handle simple inquiries, but don’t expect them to handle complaints from frustrated customers.
Your Customers are Old
It took my mother a long time to learn how to use Facebook. I still remember one of her first interactions with the platform. She posted the following message in all caps on my wall, thinking she was sending me a private message:
YOU LEFT YOUR UNDIES IN THE DRYER. LOVE YOU.
I realized then that technology and older people don’t always get along well.
The older your users are, the longer they will take to adapt to new technologies. So if your customers are grandmas and grandpas, a bot may not be the best first experience to give them when they contact your customer service to resolve a problem.
Besides, they probably won’t have any idea they’re talking to a bot.
Your Business Might not be a Good Fit
Bots are great for certain use cases, such as Facebook Messenger lead generation or answering users’ frequently asked questions. I’ve developed many successful AI applications for clients who are glad they purchased an AI assistant for their business. They were a good fit for a bot.
As a rule of thumb, any venture that relies on a sales funnel could add a chatbot to its digital marketing tools. A virtual assistant designed to troubleshoot users’ problems is also a suitable use of a chatbot, provided repetitive customer queries can be identified from the company’s current communication channels on social media and its existing chat service.
The fact that this technology works fine for these businesses doesn’t necessarily mean that you should adopt it as well.
You Don’t Have Demand on Social Media
I get it, chatbots are a brand new shiny thing. You might be tempted to get one just because it would look cool on your Facebook page or on your web site. Some social media marketing experts might have already tried to sell you a chatbot with some social media ads campaign to stand out of the digital crowd.
But if your customers don’t use your Facebook page to reach you or fail to use your chat service on your web site, you shouldn’t invest in a chatbot.
Otherwise it will be a sad and lonely bot.
Bots Can Damage Your Brand
If all of the above hasn’t convinced you to carefully evaluate the possibility of integrating a bot into your company’s communication channels, this last argument might be the most important to weigh:
A chatbot could be a potential liability to your business’ branding.
If your users end up having a negative experience with your AI application, they are unlikely to come back: close to 75% of surveyed people said they would never use that chatbot again.
That’s a lot of potential leads lost because you were too excited to let a chatbot take care of onboarding your new clients. That’s really bad.
So is it the right time for you to invest in AI for your business? It’s up to you to decide, but before jumping on the botwagon, make sure it will be worth the investment.
If you liked this article, please give it a few claps. Medium writers’ main source of “likes” is claps, and you can give 1, 2, 3, up to 50! 👏👏👏 Feel free to follow me ❤️ for more AI articles.
This story is published in The Startup, Medium’s largest entrepreneurship publication followed by + 370,771 people.
Subscribe to receive our top stories here.
|
5 Reasons Why Your Business Doesn’t Need Chatbots
| 122
|
5-reasons-why-your-business-dont-need-chatbots-1a50ecc2ce06
|
2018-09-21
|
2018-09-21 23:56:01
|
https://medium.com/s/story/5-reasons-why-your-business-dont-need-chatbots-1a50ecc2ce06
| false
| 972
|
Medium's largest publication for makers. Subscribe to receive our top stories here → https://goo.gl/zHcLJi
| null | null | null |
The Startup
| null |
swlh
|
STARTUP,TECH,ENTREPRENEURSHIP,DESIGN,LIFE
|
thestartup_
|
Bots
|
bots
|
Bots
| 14,158
|
Carl Dombrowski
|
The Startup & Toward Data Science Medium Writer, Ai, ML & NLP coder. CEO of WeBots
|
7788560c42c7
|
carldombrowski
| 316
| 239
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-09
|
2018-04-09 21:13:41
|
2018-04-13
|
2018-04-13 14:29:54
| 1
| false
|
en
|
2018-04-13
|
2018-04-13 14:29:54
| 11
|
1a52439c0e6d
| 1.158491
| 3
| 2
| 0
|
AIΞVE, our artificial intelligence with unequalled abilities!
| 5
|
We are all AIΞVE [Mask Contest ]
AIΞVE, our artificial intelligence with unequalled abilities!
AI is developing at high speed, and we are about to witness something spectacular!
Furthermore, Peculium offers you the possibility to win PCL with a selfie!
By joining the movement “We are all AIΞVE”, you can help make this revolutionary AI, which will find its place in history, known to the whole world!
How to Participate ?
Step 1: Print the mask (in the contest Drive folder below)
Step 2: Take a selfie with the mask on
Step 3: Post it to your social-media-channel(s) including the hashtags #WeAreAllAIEVE #IamAIEVE #Peculium #ArtificialIntelligence on #Blockchain @_Peculium
Step 4: Fill this Google-Form: https://goo.gl/hytezD
What we look for: Selfies in a mall, in public places, with friends, with your family, with your colleagues, with your pet, just be creative!
What we don´t look for: Selfies alone in front of your PC, in the bathroom etc.
For every new selfie you publish, you will get 500 PCL!
For a selfie in front of a lighthouse, you will get 5000 PCL!
A bonus from the “Secret Challenge”:
Guess the link between AIΞVE, the solidus smart contract, and the lighthouse!
The first person to guess the correct answer will win 10 ETH!
Relevant Links
Please find here your Masks: https://goo.gl/hDqfBF
Peculium’s Website: https://peculium.io/
Blog: https://peculium.io/blog/
Follow us on Social Media
- Twitter: https://twitter.com/_Peculium
- Reddit: https://www.reddit.com/r/Peculium/
- Telegram Announcement-Channel: https://t.me/peculiumANN
- Telegram Main-Channel: https://t.me/ico_peculium
- Telegram French-Channel: https://t.me/ICO_Peculium_FR
- Telegram German-Channel: https://t.me/ICO_Peculium_DE
- Youtube: https://www.youtube.com/c/PeculiumPCL
|
We are all AIΞVE [Mask Contest ]
| 126
|
we-are-all-aiξve-mask-contest-1a52439c0e6d
|
2018-04-21
|
2018-04-21 05:05:20
|
https://medium.com/s/story/we-are-all-aiξve-mask-contest-1a52439c0e6d
| false
| 254
| null | null | null | null | null | null | null | null | null |
Blockchain
|
blockchain
|
Blockchain
| 265,164
|
Peculium
|
The first Savings Platform powered by Smart-Contracts & Artificial Intelligence giving you peace of mind while investing
|
827c278cc4df
|
Peculium
| 581
| 20
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-26
|
2018-06-26 09:28:47
|
2018-06-26
|
2018-06-26 10:48:09
| 1
| false
|
en
|
2018-06-26
|
2018-06-26 10:48:09
| 1
|
1a52b41a41b2
| 0.883019
| 1
| 0
| 0
|
For today the development of information technologies progresses with racing car speed. Computers, gadgets and “smart” devices have filled…
| 5
|
How to Use Artificial Intelligence (AI) in Mobile Apps?
Today the development of information technology progresses at racing-car speed. Computers, gadgets, and “smart” devices have filled every sphere of our lives, from entertainment to education, medicine, and economics.
Computers provide security systems, listen to our questions and give advice, manage our lifestyles, and suggest content that should be good for us. All of these features are made possible by AI technology. An advantage of AI programs is the ability to address universal questions, while programs without AI can solve only specific ones.
AI programs have fewer errors and defects, inasmuch as artificial intelligence is more universal than human intelligence. The difference between AI and conventional programming is the presence of an imitation of a certain level of human thinking.
The types and benefits of this technology, examples of AI apps, trends in AI technologies, and tips for those who are only starting to approach the market: you can read about all of these in our article, 8 Tips to Use AI in Mobile App.
|
How to Use Artificial Intelligence (AI) in Mobile Apps?
| 1
|
how-to-use-artificial-intelligence-ai-in-mobile-apps-1a52b41a41b2
|
2018-06-26
|
2018-06-26 10:48:09
|
https://medium.com/s/story/how-to-use-artificial-intelligence-ai-in-mobile-apps-1a52b41a41b2
| false
| 181
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Mind Studios
|
Need a mobile or web solution? We make mobile and web products that turn into brands. https://themindstudios.com
|
8643be93cf73
|
MindStudios
| 134
| 228
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-30
|
2018-08-30 02:21:55
|
2018-08-30
|
2018-08-30 03:21:04
| 0
| false
|
en
|
2018-10-10
|
2018-10-10 12:11:26
| 4
|
1a52f0110deb
| 1.588679
| 1
| 0
| 0
|
I believe that the top 1% in the world is achievable if you do things today that other people are not willing to. Why you become the top 1%…
| 5
|
Introduction To Machine Learning
I believe that being in the top 1% in the world is achievable if you do things today that other people are not willing to. You become the top 1% because you can now do things that they can’t.
Learning AI was my goal for 2019, but then I asked myself: why can’t it be done now? The answer was, yeah, why not? On that note, Gerald Muriuki, my friend and co-founder at Nestmetric, was my go-to guy for tips on how to tackle this mountain of a subject. His suggestion was the Intro to Machine Learning videos by Jeremy Howard.
I listened to 51 minutes of the 1-hour video, and all I took away was bits and pieces: random forest, numpy, pandas, scikit-learn, hyperparameter tuning, classification problems, regression problems, just to name a few. This was enough to get me started.
So, random forest, a bunch of decision trees:-). That is how I understood it.
The fundamental idea behind a random forest is to combine many decision trees into a single model. Individually, predictions made by decision trees (or humans) may not be accurate, but combined together, the predictions will be closer to the mark on average. — William Koehrsen
A random forest can be used for both classification and regression problems. Classification uses discrete variables, e.g. Is this mushroom edible or poisonous? Is this customer likely to churn or not? Is this a genuine insurance claim or fraud? Regression problems, on the other hand, use continuous variables, e.g. What is the temperature likely to be next month? Can you predict house prices in Kenya by 2020?
When it comes time to making a prediction, the random forest takes an average of all the individual decision tree estimates. (This is the case for a regression task, such as our problem where we are predicting a continuous value of temperature. The other class of problems is known as classification, where the targets are a discrete class label such as cloudy or sunny. In that case, the random forest will take a majority vote for the predicted class). With that in mind, we now have down all the conceptual parts of the random forest! — William Koehrsen
The following 2 blogs were instrumental to my understanding of random forest.
Random Forest in Python
The Random Forest Algorithm
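As a minimal sketch of both problem types, using synthetic data (not taken from the video or the blogs above):

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Classification: discrete target (e.g. edible vs. poisonous mushroom).
Xc, yc = make_classification(n_samples=200, n_features=8, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xc, yc)
print(clf.predict(Xc[:3]))  # majority vote across the decision trees

# Regression: continuous target (e.g. next month's temperature).
Xr, yr = make_regression(n_samples=200, n_features=8, random_state=0)
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(Xr, yr)
print(reg.predict(Xr[:3]))  # average of the trees' estimates
```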
I am extremely excited about this venture into a new skill, and I can’t wait to see the outcome.
Cheers!
|
Introduction To Machine Learning
| 3
|
introduction-to-machine-learning-1a52f0110deb
|
2018-10-10
|
2018-10-10 12:11:26
|
https://medium.com/s/story/introduction-to-machine-learning-1a52f0110deb
| false
| 421
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Muriithi_Kabogo
|
Tech Enthusiast and Everything business
|
be24f368b206
|
muriithi_kabogo
| 48
| 52
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
dda65fb0548d
|
2018-03-08
|
2018-03-08 12:23:58
|
2018-03-12
|
2018-03-12 09:09:25
| 3
| false
|
en
|
2018-03-14
|
2018-03-14 12:02:15
| 5
|
1a5434508635
| 5.165094
| 53
| 11
| 0
|
TL;DR: Using past interaction data, MakeMyTrip personalized hotel ranking for each individual user to get more than 30% upside in…
| 2
|
The “My” in MakeMyTrip: How Per-User Personalization Boosted Conversion at MMT
TL;DR: Using past interaction data, MakeMyTrip personalized hotel ranking for each individual user to get more than 30% upside in conversion.
Hotel ranking tries to solve the hotel discovery problem for users. For a given user at a given time, there is a particular hotel need. For example, the user might be looking for a hotel for the next business trip or an upcoming vacation with her family. The objective is to elicit or infer this need from the user as completely as possible, and then rank the hotels in our inventory so that the ones that best satisfy the user’s need are at the top. This is a non-trivial problem. We found that, for many users, the hotels they were interested in were not near the top of the hotel listings they saw. This increases the work the user has to do to find a suitable hotel, and increases the chance of the user dropping off.
A user’s hotel need is elicited/inferred using several signals:
Query (city, adult, children, check-in, check-out, …)
Device (hardware, OS, browser, …)
History (bookings, cancellations, reviews, …)
Recent funnel activity (views, filters/sorts applied, …)
We can measure the satisfaction of the user with the listing shown to them by analysing their click-through and purchase behaviour. We assume that the “best” ranking orders hotels in decreasing order of the probability that each hotel gets booked. In this article, we discuss a ranking approach using past user interaction data that gave us a significant improvement in conversion (>30%).
Hotel-User Affinity
The first step was to measure the affinity of a user to each hotel. The affinity metric should satisfy the following properties. The metric should be:
low for hotels where the user has done negative interaction (e.g. given a negative review)
medium for hotels where the user has not interacted at all.
high for hotels where the user has had positive interaction (e.g. viewed, booked, given positive reviews etc.).
more sensitive to frequent interactions than to rare interactions (e.g. a hotel with more views should have a higher value than one with fewer views).
more sensitive to recent interactions than to older interactions (e.g. a hotel viewed yesterday should have a higher value than a hotel viewed last month).
Constructing an affinity metric that satisfies the above properties is beyond the scope of this article. For ease of exposition, we will consider a simplistic metric for hotel-user affinity that uses past viewing data of users. It is easy to visualize this as a bipartite graph of users and hotels. There is an edge between a user and a hotel if the user viewed the hotel. The weight of the edge is proportional to the number of views of the hotel by the user.
We can compute such a score for every possible hotel-user pair using the weight of the corresponding edge.
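A toy sketch of this simplistic metric (hypothetical users and hotels; the edge weight is just the view count):

```python
from collections import Counter

# Hypothetical view log: one (user, hotel) pair per page view.
views = [("u1", "h1"), ("u1", "h1"), ("u1", "h2"),
         ("u2", "h2"), ("u2", "h3"), ("u3", "h3")]

# Bipartite-graph edge weights: number of views per (user, hotel) edge.
edge_weight = Counter(views)

def hotel_user_affinity(user, hotel):
    # Affinity proportional to the edge weight; 0 if no interaction.
    return edge_weight[(user, hotel)]

print(hotel_user_affinity("u1", "h1"))  # → 2
```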
User-User Affinity
In the next step, we use the hotel-user affinity to measure the affinity between two users, in terms of the similarity in their interaction behavior. This affinity metric should satisfy the following properties:
The metric should be low where there is no intersection between the sets of hotels that the two users interacted with.
The metric should be high when there are many hotels in the intersection.
The metric should be high when the hotel-user affinity of both users is high for the hotels in the intersection.
To combine hotel-user affinity to arrive at user-user affinity, we start with a user u and take all hotels where she has non-zero affinity (following the dark edges), and then take all other users u’ who have non-zero affinity with these hotels (following the blue edges). These paths can then be used to compute the affinity of user u with every other user u’.
In this way, we can compute the affinity for every user pair in the graph. Note that it is possible to arrive at similar affinity scores for two user pairs with very different amounts of evidence. Hence it is important to estimate confidence bounds for the estimated affinities.
Personalized Ranking
If there is sufficient data, hotel-user affinity can be used to rank hotels. However, this data is very sparse since any given user will have interacted with a handful of hotels in a few cities. However, we now know other users who are similar to this user and can use their hotel-user affinities to suggest relevant hotels for this user. We need a method that can incorporate both these signals and compute a new hotel-user score which satisfies the following properties:
A user with a higher affinity to the current user should have a higher influence on the score.
If a high affinity user has high affinity to a hotel, that hotel should have a higher score for the current user.
To compute the score for a hotel-user pair (h, u), we start with the user u and find the other users u’ who have affinity with this user (the blue smileys). We then accumulate the affinities of the users u’ for hotel h (the blue edges), after appropriately calibrating them based on the user-user affinities (the dashed edges).
In this way, we can compute a score for every hotel-user pair. This score is used to generate the personalized ranking for every user.
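The accumulation described above can be sketched as follows (hypothetical data and function name; the production calibration is more involved):

```python
# Hypothetical inputs: hotel-user affinities and user-user affinities.
hotel_aff = {("v", "h"): 0.9, ("w", "h"): 0.2}
user_aff = {("u", "v"): 0.8, ("u", "w"): 0.1}

def personalized_score(hotel_aff, user_aff, h, u):
    """Accumulate neighbours' affinity for h, weighted by their affinity to u.
    Higher-affinity neighbours influence the score more, and hotels they
    like strongly score higher for u."""
    score = 0.0
    for (a, b), w in user_aff.items():
        if a == u and (b, h) in hotel_aff:
            score += w * hotel_aff[(b, h)]
    return score

# 0.8 * 0.9 + 0.1 * 0.2 = 0.74 for the pair ("h", "u").
```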
Business Impact
This personalization model went live on the MMT app late last year on a small fraction of traffic. Based on promising initial results, we gradually increased the traffic over the next couple of months. Through February, we saw a steady upside of more than 30% with respect to the baseline. The model has now been mainstreamed, and all domestic hotel ranking involves some amount of personalization whenever data is available.
Implementation Note
A fundamental problem in most personalization approaches is one of scale. The MakeMyTrip universe has tens of thousands of hotels and millions of users. This makes pre-computation of per-user hotel ranking intractable. Brute force computation at request serving time is impractical due to latency constraints. Three observations helped us to speed up the computation:
Hotel-user affinity is tractable, can be precomputed offline, and can be stored in an in-memory cache for fast look-ups. Since it is computed offline, this allows us to use sophisticated methods to accurately estimate hotel-user affinity using a variety of signals.
User-user affinity computation can be restricted to only those users with at least one common interacted hotel.
In the score computation, we only consider users with non-zero affinity and only hotels having non-zero affinity to these users.
Based on these observations, the optimized implementation was able to deliver personalized hotel ranking well within the latency constraints.
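The second observation amounts to building an inverted index from hotels to users, so that only users sharing at least one hotel are ever considered. A sketch with hypothetical data:

```python
from collections import defaultdict

# Hypothetical interaction lists: user -> set of hotels interacted with.
interactions = {"u1": {"h1", "h2"}, "u2": {"h2"}, "u3": {"h9"}}

def candidate_users(interactions, u):
    """Only users sharing at least one hotel with u need a user-user score."""
    by_hotel = defaultdict(set)          # inverted index: hotel -> users
    for user, hotels in interactions.items():
        for h in hotels:
            by_hotel[h].add(user)
    return {v for h in interactions[u] for v in by_hotel[h]} - {u}

# u1 shares h2 with u2 but nothing with u3, so only u2 is a candidate.
```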
Other Personalization Approaches
The method described above is an example of collaborative filtering using the memory-based approach. We also tried doing model-based collaborative filtering approaches like matrix factorization. The memory-based approach was found to give more relevant results in an internal user study, and was chosen for final deployment. In addition, we are also looking at content-based filtering models based on customer segmentation and hotel attributes for further personalizing the ranking.
|
The “My” in MakeMyTrip: How Per-User Personalization Boosted Conversion at MMT
| 268
|
personalized-hotel-ranking-using-past-interaction-data-1a5434508635
|
2018-06-18
|
2018-06-18 12:36:48
|
https://medium.com/s/story/personalized-hotel-ranking-using-past-interaction-data-1a5434508635
| false
| 1,223
|
MakeMyTrip Engineering & Data Science
| null | null | null |
MakeMyTrip-Engineering
|
piyush.kumar@makemytrip.com
|
makemytrip-engineering
|
MAKEMYTRIP ENGINEERING,TECHNOLOGY,DATA SCIENCE
|
makemytrip_tech
|
Data Science
|
data-science
|
Data Science
| 33,617
|
Goutham Tholpadi
| null |
8de9cedaaf85
|
gtholpadi
| 25
| 3
| 20,181,104
| null | null | null | null | null | null |
0
|
cd "C:\Program Files\MongoDB\Server\3.4\bin"
mongod --dbpath "D:\data"
use pokemon
db.pokemon.insert({
_id: ObjectId(),
title: 'Pokedex',
description: 'Pokedex is the pokemon database',
pkid: '001',
pkname: 'Charmander',
pktype: 'Fire',
pkevol: 16
})
db.pokemon.find()
| 5
|
85a4000a95d7
|
2017-11-15
|
2017-11-15 03:32:48
|
2017-11-15
|
2017-11-15 07:17:37
| 10
| false
|
th
|
2017-11-18
|
2017-11-18 11:57:26
| 9
|
1a5522579b27
| 2.476415
| 10
| 2
| 0
|
Wait, what exactly is MongoDB? Many of you have probably heard of it, but most may never have actually played with it. So today I'll share…
| 5
|
Trying MongoDB for the First Time #SoHungry
That's a mango!!
Wait, what exactly is MongoDB? Many of you have probably heard of it, but most may never have actually played with it. So today I'll share the little experience I've gained from studying it (the photo is completely unrelated, haha).
MongoDB is a database management program in the NoSQL (Not only SQL) style. Check out the website << click >>
In today's world our data is no longer stored only as SQL, i.e. in easy-to-understand tables with clear rows and columns. There is also data in the form of “Video”, “Image”, “Voice”, and much more, and since data volumes are growing exponentially (like f(x) = x^n), using traditional databases is becoming harder and harder.
Source: http://www.techvshuman.com/2016/04/27/future-of-business/
MongoDB has grown by leaps and bounds, according to a survey by a database ranking site.
Source: https://db-engines.com/en/ranking
You can see that the top entries are the already popular databases, RDBMS systems that store data as SQL.
So if you want to try MongoDB, where do you start!!??
>> Download MongoDB <<
>> Download a MongoDB management tool <<
Once both are downloaded and installed, continue as follows.
With a default installation, the program will be located here:
C:\Program Files\MongoDB\Server\3.4\bin
Next, create a folder to store your data. It can be anywhere, as long as you remember it; mine is at D:\data
Then open a Command Prompt and type the following steps.
When done, press Enter once, then follow with the code below.
Press Enter once more and that's it, but don't close the Command Prompt yet; it needs to keep running, and it only takes a moment to start.
Now let's continue with the MongoDB management tool, Robo 3T.
Click that robot icon!!
The default local address port is 27017. If a connection already exists, just hit Connect.
If there is nothing to connect to yet, click Create at the top first, then OK.
Then right-click on Test and choose Run Shell.
This shell is where we will create datasets and run queries.
If we double-click on Student, we get the related documents like this.
MongoDB stores files as documents with key-value pairs. For example, this student data is stored under an ObjectId; to open it or look it up, we have to search by that specific ObjectId to drill further in.
MongoDB data is written as JSON, so to store anything you must write it in that format.
That's a different JSON, bro!!!!
So let's start writing.
Start in the code pane and call the database in this shell window.
The JSON format for creating an initial dataset looks roughly like this.
Then select just the insert code and press Ctrl + Enter to run it.
Check that it worked with the following command.
If it succeeded, the output will look like this.
What about performance, though?
NoSQL takes more time to run than SQL, because it has to search key by key, and the data is not stored as neatly as in SQL.
But of course, in exchange NoSQL is much more flexible.
Learn more about MongoDB here << PDF file >>
MongoDB's query language is unlike the SQL family, where queries read almost like natural language and are easy to understand even if you have never coded. MongoDB has rather more conditions for the same results, so developers have made things easier by moving from running in the shell to running in Jupyter with Python, also known as PyMongo.
Where to start with PyMongo??
We'll pick that up, along with using Jupyter, in the next blog post.
This material comes from classes I was lucky to take with the president of the Thai Programmers Association, P' Pub Apaichon Punopas, Ajarn Jed Worajedt Sitthidumrong, and Ajarn Eak Eakasit Pacharawongsakda.
If you have ideas to add, or if anything here is wrong, please comment so I can improve. I plan to keep writing simple articles about the world of Data Science
(at least the parts I understand, as a newcomer to the field, whenever I have time, hehe).
|
Trying MongoDB for the First Time #SoHungry
| 21
|
mongodb-tutorial-beginner-step-1a5522579b27
|
2018-05-29
|
2018-05-29 01:53:07
|
https://medium.com/s/story/mongodb-tutorial-beginner-step-1a5522579b27
| false
| 325
|
หลัก
| null |
BigDataEng
| null |
BigDataEng
| null |
bigdataeng
|
BIG DATA,DATA SCIENCE,IOT,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE
| null |
Mongodb
|
mongodb
|
Mongodb
| 2,313
|
Boyd BigData RPG
|
Data Scientist, DigitalMarketer, Biotechnologist; BS Biot30 KU69, ME Big Data Engineering03 DPU
|
f0eb1b89a86d
|
boydbigdatarpg
| 176
| 161
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
9bd9e9b90eb4
|
2018-01-26
|
2018-01-26 11:29:07
|
2018-01-31
|
2018-01-31 14:17:43
| 1
| false
|
en
|
2018-02-06
|
2018-02-06 09:36:24
| 5
|
1a55cf20ed49
| 4.713208
| 37
| 2
| 0
|
This is the second post in a series explaining how to achieve real-time smile detection using deeplearn.js.
| 5
|
Building a real-time smile detection app with deeplearn.js and the web shape detection API — Part 2: Image Processing
This is the second post in a series explaining how to achieve real-time smile detection using deeplearn.js.
In this tutorial we are extracting the detected faces, ready to feed them into our neural network
In the previous post we successfully used the Shape Detection API to find faces in real time from a video feed. Now we need to extract the faces from the feed and format them in a way that makes them efficiently processable by a neural network.
This requires a number of steps. We will need to:
Crop the video so we are left with only the detected faces
Resize the cropped face so they are much smaller
Convert the cropped resized image to greyscale
Get the normalised pixel data for the images
Each of these steps will make the process of feeding the image into the neural network much more efficient.
We want to use the smallest image possible so that the number of pixels the network has to process is at a minimum. This is achieved by cropping the feed to just the faces and scaling them down to 50 x 50 pixels.
You don’t really need colour in an image to determine if someone is smiling. By converting the faces to greyscale we also save a bit of processing power, as the 3 red, green, and blue values per pixel are converted into 1.
Finally we will normalise the pixel values that will eventually be passed into the network. This means instead of having an array of values between 0 and 255, we have an array of values between 0 and 1.
If you have enabled Chrome’s experimental features you can see a demo here: https://face-extraction.netlify.com/
Notice the little grey face in the top left.
The finished code for this section can also be found here: https://github.com/zefman/smiley/tree/feature/extract_faces
If you haven’t read the first post in this series please do so now, otherwise the following will make no sense 🤓
Building a real-time smile detection app with deeplearn.js
Cropping the faces and resizing
To crop out and resize the faces we will use an additional smaller canvas. For the time being we will continue to place everything in the App component “src/app/app.component.ts”.
Here you can see the additional canvas we add to the app component. We set it to 50 x 50 pixels, which means our resized image will have 2500 individual pixels. This should be small enough for the neural network to consume.
The rest of the HTML is untouched other than the addition of a few classes.
Moving onto “src/app/app.component.css”:
We will place the smaller canvas above everything in the top left. This canvas doesn’t actually need to be visible, but it will be useful to see the results of our image processing while in development.
Defining variables in the app component
Now on to the JavaScript itself. In “src/app/app.component.ts”, add the following above the constructor:
We need to define another ViewChild to get a reference to the new faceCanvas. We also define two more variables to hold the native canvas element and its rendering context.
Referencing the DOM elements
We then get a reference to the new canvas and its context after the view initialises, just like with the other elements.
Processing the faces
We then create a new function called processFace. This function takes a face object returned by the face detector, uses its bounding area to copy that portion of the larger canvas to our new 50 x 50 px one, and resizes it at the same time. This is achieved by passing the original canvas to the drawImage function of the faceCtx, along with the area to crop from.
We then grab hold of the the new smaller image’s image data and convert it to greyscale. The image data is originally a 1 dimensional array of values between 0 and 255. 1 pixel is made up of 4 entries in the array e.g. [ 45, 200, 202, 255, 76, 98, 201, 255, 253, 222, 98, 0, … ] (The bold section is one pixel). The first value is the red value for that pixel, the second is the green, the third is the blue, and finally the fourth is the alpha value.
To convert these individual pixels to greyscale we create a for loop that increments by 4 each time, allowing us to modify the individual rgba values of each pixel. We then compute a new brightness value to replace the original pixel values, by taking a weighted sum of the r, g, and b parts of the pixel. You might notice the r value is multiplied by .3, the g by .59, and the b by .11. This accounts for the way our eyes are sensitive to different colours, and produces a more natural-looking greyscale result.
Once we have set each pixel’s rgb values to its new brightness value, we place the modified data back onto the canvas with the putImageData function.
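The greyscale arithmetic above can be sketched in plain Python (an illustration of the same weighted sum; the article's actual code manipulates a canvas ImageData array in JavaScript):

```python
def to_greyscale(rgba):
    """Replace each pixel's r, g, b with its luminosity-weighted brightness.
    rgba is a flat list: [r, g, b, a, r, g, b, a, ...]; alpha is untouched."""
    out = list(rgba)
    for i in range(0, len(out), 4):
        r, g, b = out[i], out[i + 1], out[i + 2]
        brightness = int(0.3 * r + 0.59 * g + 0.11 * b)
        out[i] = out[i + 1] = out[i + 2] = brightness
    return out

# A pure red pixel becomes three equal grey values:
to_greyscale([255, 0, 0, 255])  # -> [76, 76, 76, 255]
```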
Finally we need to use the processFace function in our update to continually draw the detected faces to our small canvas.
You should now have a strange black and white face updating in the top left of your browser! This won’t work well with multiple faces but it’s not a problem as we are only checking the processFace function works.
The final thing we need to do is to get the normalised image data for the face. At the moment if we were to get the image data from the faceCtx we would have an array of values from 0 to 255, and each pixel would have the same value set for its rgb variables. This doesn’t provide the neural network with any additional information, so we may as well take only the r value of each pixel. On top of that we need to normalise the values so they fall between 0 and 1 rather than 0 and 255. To do this we will create two new functions to be used later on.
Hopefully this code makes sense after reading the description above. We will be passing getNormalizedGreyScalePixels the imageData from our faceCtx; we take the r value from each pixel, normalise it, and then add it to a new array.
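The same normalisation logic, again sketched in plain Python (the article's getNormalizedGreyScalePixels operates on a canvas ImageData object):

```python
def get_normalized_greyscale_pixels(image_data):
    """Take only the r channel of each greyscale pixel (r, g, and b are equal
    after the greyscale pass) and scale it from 0..255 down to 0..1."""
    return [image_data[i] / 255 for i in range(0, len(image_data), 4)]

# Two pixels, one white and one black:
get_normalized_greyscale_pixels([255, 255, 255, 255, 0, 0, 0, 255])  # -> [1.0, 0.0]
```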
And we’re done - phew 😅
Before starting this project I hadn’t realised how much effort would go into formatting the data correctly, before we even get to the machine learning element.
So at this point we have successfully taken the detected faces and modified them in a way that will make them more easily processable by the neural network, which we will be creating in a forthcoming post.
In the next post we will look into how we can save labeled training data to teach the neural network to recognise smiling faces. This will involve saving the image data into two sets: one smiling, one not smiling.
In the meantime, feel free to get in contact with me on Twitter: @jozefmaxted
|
Building a real-time smile detection app with deeplearn.js
| 327
|
building-a-real-time-smile-detection-app-with-deeplearn-js-1a55cf20ed49
|
2018-04-26
|
2018-04-26 13:05:05
|
https://medium.com/s/story/building-a-real-time-smile-detection-app-with-deeplearn-js-1a55cf20ed49
| false
| 1,196
|
We partner with you to design the future, collaborating to launch, evolve and grow amazing businesses.
| null | null | null |
The Unit
|
hello@theunit.co.uk
|
the-unitgb
|
DIGITAL STRATEGY,DESIGN,TECHNOLOGY
|
TheUnitGB
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Jozef Maxted 👻
|
Senior Developer @TheUnitGB Prolific maker of digital things #js #computerart http://jozefmaxted.co.uk
|
816666186cd6
|
jozefmaxted
| 109
| 62
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
aca00ea2a266
|
2018-01-02
|
2018-01-02 23:16:24
|
2018-01-03
|
2018-01-03 19:07:22
| 4
| false
|
en
|
2018-06-23
|
2018-06-23 22:17:40
| 28
|
1a55d28cf8d
| 6.492453
| 8
| 0
| 0
|
This year has been nothing short of incredible here at Aifred Health. We kicked off 2017 with an application to the $5M IBM Watson AI…
| 5
|
Aifred in Review: A Start-Up’s First Year
This year has been nothing short of incredible here at Aifred Health. We kicked off 2017 with an application to the $5M IBM Watson AI XPRIZE competition, and are finishing the year in high spirits with our Milestone Award as one of the Top 2 teams moving forward in the competition, a myriad of accomplishments from the past year, and many exciting plans for 2018. Before diving into the new year, we want to take a moment to reflect on the events of our first year as a start-up. These past 12 months have included many landmark events which have contributed to a sense of acceleration for the company — and while it’s exciting to have momentum, now is also a great time to look back and appreciate the strides we’ve taken as a company.
Some smiling faces at a company meeting, with even more tiny smiling faces on video call.
An important note to begin on: as a start up, we have a lot of thanks to give to those who have supported us, particularly in the beginning of our journey. Of course, we are grateful to XPRIZE for providing opportunities for us to connect with the AI community and acting as the driving motivation for bringing the Aifred team together. In addition, we have to thank the District 3 Innovation Center at Concordia (D3) for their support and donation of space for our team. We are also thankful for the McGill Office of Student Life and Learning as well as Building 21, who have sponsored us and invited us to be part of a community devoted to innovation and creativity.
In the early months of 2017, we took our first steps. We applied to the XPRIZE competition, joined D3, and submitted our XPRIZE proposal. In April 2017, we were accepted into the XPRIZE competition, and our journey as a start-up really began to gather speed.
In the following months, we had opportunities to attend several conferences where we could both contribute our ideas to the AI community and absorb the wisdom of experienced individuals in the field. At C2 Montreal, several themes recurred throughout the event, such as the importance of creativity in AI, the advantage of start-up agility in competitive environments, and the need for policies to guide AI development. We met world leaders in the development of socially responsible AI at the AI for Good Summit in Geneva, and were part of discussions regarding the role of AI at a global level. Back in Montreal, we had the chance to demo at Startupfest and hear from experts in another domain: start-ups. Several speakers gave powerful presentations, communicating their views on what made their endeavours successful. The importance of design, company environment, and Canada’s multicultural population for start-ups were all topics of discussion, and the overall event fostered a feeling of community and support between young businesses. It gave us a taste of how advice from entrepreneurs outside of the field of health tech can bring significant insight and spark powerful ideas.
Flashback to Startupfest — featuring John Bates (CEO of Executive Speaking Success), life-saving advice, and light-saber duels.
Later on in the year, we received a D3 Centre Grant to hire our first intern, who has brought valuable knowledge and skills to the table at Aifred Health and allowed us to expand our efforts. We also joined the McGill Notman House cohort, to whom we would also like to give thanks for their support.
In addition to external partnerships, conferences and speaking opportunities, our Research, Tech, and Product Development teams all achieved milestones of their own. The Research division has spent months extracting data and synthesizing findings for a literature review of biomarkers for treatment response in depression, the first and largest systematic review on this topic completed. This information is also clearly of relevance for our deep learning algorithm. The Technology team has developed Vulcan, a deep learning framework, along with additional model interpretability tools. This framework is the foundation of the technology which will use personalized patient features as inputs and return a list of possible treatments, ranked according to confidence level. The Product Development team has been iteratively refining the design of the user interface and website according to feedback from various audiences, as well as handling data storage and retrieval. The team at Aifred Health has also started getting access to large datasets, which consist of data holding potential predictive value for response to various antidepressants from clinical trials. Throughout the year, our team has also expanded with the assembly of a clinical advisory board, made up of experts in domains such as Machine Learning, Bioethics, Clinical Trials, and various depression treatments.
The Fall of 2017 was a whirlwind of events. Our co-founder and Director of Scientific Partnership, Sonia Israel, spoke at Synergie Émergente Recherche Industrie (SÉRI) Montreal in October, a conference aiming to facilitate co-operation between research and industry. In Amsterdam, our CEO and Chief Medical Officer, David Benrimoh, attended World Summit AI. At this conference, we had the pleasure of demoing at the IBM XPRIZE table, discussing AI start-ups onstage, and conversing about our favourite topics with conference-goers. In particular, we met the co-founders of Researchably, a tailored research platform which helps clients keep up to date with specific information of interest. A note about Researchably: we are extremely impressed with this team’s unbelievable efficiency and thoroughness. We’ve been able to call on them for a variety of research, and each time we are astonished by their effectiveness. They are phenomenal. Thanks, Researchably.
Also in the Autumn, we were recognized as one of the Top 4 teams in the Caravane Régionale de l’Entrepreneuriat (CRE) Provincial Start-Up Cup, where our co-founder and Director of Research, Kelly Perlman, deftly used her charm and intellect to present our company in a literal wrestling ring stage set-up.
Kelly Perlman: co-founder, Director of Research, and ringmaster.
We have been ecstatic to stand on stages all over the world in our first year as a start-up: Aifred Health also appeared at the IBM Watson AI XPRIZE European Forum in Paris, spoke about disability AI at AI World Forum in Toronto, and gave the keynote speech on ethical concerns in health care AI at Giant Health Event in London, UK. On a more technical side, our co-founder and Chief Technology Officer, Robert Fratila, pitched at DATAtalks, an event organized by the Data Intelligence Society of Concordia University. The audience ranged from Concordia School of Business students to data science managers of various companies, and we had the chance to speak with investors and discuss our company with individuals from diverse backgrounds.
And finally, a year of intense research, work, and development culminated in our success at the Neural Information Processing Systems Conference (NIPS’17), where we were awarded a Milestone Award as one of the Top 2 Teams out of the 59 teams advancing in the XPRIZE Competition.
Here is a photo of our co-founder and CFO Eleonore Fournier-Tombs displaying our Milestone Award on behalf of the company and embodying the elation of the entire team:
It’s hard to describe the excitement within the team that resulted from the news of our Top 2 ranking — it’s a wonderful thing to be recognized for hard work, and we are honoured to have received the award. Immediately following the XPRIZE news, our CBC Spark interview was released, in which Sonia Israel had the chance not only to present Aifred Health, but also to discuss the product, the interpretability measures we take into consideration, and the ethical concerns we are addressing in its development. We appreciate these types of interviews, which allow us to delve deeper into the nuances of not only our product, but AI development in general. At Aifred Health, we highly value not only public access to information about AI products, but public understanding of exactly what these products are intended to do.
Throughout the year, we have mentioned in presentations and publications our in-house framework for “Meticulous Transparency”, which in its essence is focused on demonstrating intentionality of AI applications. In order to achieve this demonstration of intention, we propose that a thorough description, projected risks and benefits, intended scope of use, and sources of data of AI projects should be presented to civil society and regulatory bodies in advance of developing an application. We are encouraged by the like-minded individuals and organizations we have met throughout the year, and the general atmosphere of the AI community in this regard — it’s an exciting time for AI development, but also a critical window for caution and analysis.
Finally, as we kick off the new year, we are thankful to those who have championed us in the media (shout-out to Massive for this splendid piece of work), and to those who have supported us as a company in the start-up community, as a project in the AI community, and as a psychiatry application in the healthcare community. We’ve got lots planned for 2018: more AI conferences, competitions, discussion panels, talks, and collaborations. Our team at Aifred Health continues to work hard with a drive to improve lives, and with motivation for creation. We’re thrilled with the momentum we’ve achieved, and we have no intention of slowing down in 2018.
|
Aifred in Review: A Start-Up’s First Year
| 173
|
aifred-in-review-a-start-ups-first-year-1a55d28cf8d
|
2018-06-23
|
2018-06-23 22:17:40
|
https://medium.com/s/story/aifred-in-review-a-start-ups-first-year-1a55d28cf8d
| false
| 1,535
|
Clinical decision aid using machine learning to improve depression treatment efficacy.
| null |
aifredhealth
| null |
Aifred Health
|
info@aifredhealth.com
|
aifred-health
|
ARTIFICIAL INTELLIGENCE,MENTAL HEALTH,DEPRESSION,STARTUP,DEEP LEARNING
|
aifredhealth
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Aifred Health
|
Clinical Decision Aid System using machine learning to increase treatment efficacy in psychiatry | aifredhealth.com
|
12181684473
|
aifred
| 32
| 16
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-05
|
2018-05-05 07:47:13
|
2018-05-05
|
2018-05-05 08:02:40
| 0
| false
|
en
|
2018-05-05
|
2018-05-05 08:02:40
| 1
|
1a573e3aea03
| 2.403774
| 0
| 0
| 0
|
The global homomorphic encryption market is a very dynamic market and is expected to witness stable growth over the forecast period. The…
| 4
|
Global Homomorphic Encryption Market Report 2018
The global homomorphic encryption market is very dynamic and is expected to witness stable growth over the forecast period. Growth is driven by rising demand for secure data transmission, growing investment in cloud-based industries, and expanding e-governance initiatives. High adoption of homomorphic encryption in the banking and finance sector is also expected to boost growth over the forecast period. However, system complexity and a lack of upgrades are factors hindering the market. The global homomorphic encryption market is expected to reach USD 59.84 million by the end of 2017 and grow to USD 104.24 million by 2022, a compound annual growth rate of 11.74%. Most current schemes are partial homomorphic encryption schemes; fully homomorphic encryption is expected to be adopted gradually from 2020, speeding up market growth.
The homomorphic encryption market research report analyzes global adoption trends, future growth potential, key drivers, competitive outlook, restraints, opportunities, key challenges, the market ecosystem, and the value chain. The report presents detailed analysis, market sizing, and forecasts for emerging segments, and is segmented by product type, application, and vertical. The study profiles key players in the market and the strategies they have adopted to sustain their position; recent developments and market barriers are covered to help emerging players design their strategies effectively.
Frequency and time period:
- Base years: 2012–2017
- Five-year annual forecast: 2018–2022
Measures: revenue
Segmentation by product type:
- Partial homomorphic encryption
- Somewhat homomorphic encryption
- Full homomorphic encryption
Segmentation by application:
- Industrial
- Government
- Finance
- Healthcare
Region and country coverage:
- North America
- Europe
- Asia Pacific
Key issues addressed:
- Competitive landscape and strategic recommendations
- Market forecast and growth areas for homomorphic encryption
- Changing market trends and emerging opportunities
- Market size and growth rate in 2022
- Historical shipments and revenue
- Analysis of key applications
- Main players' market share
Customization: customization of the report is available at no extra charge; research data or trends can be added as per the buyer's specific needs.
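As a sanity check, the stated compound annual growth rate does connect the two headline figures:

```python
# The report's own numbers: USD 59.84 million in 2017, 11.74% CAGR, 5 years.
base, cagr, years = 59.84, 0.1174, 5
projected = base * (1 + cagr) ** years  # implied 2022 estimate
# projected comes out close to the report's USD 104.24 million figure.
```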
TOC:
1.1 Homomorphic encryption market revenue by the end of 2017
1.2 Global homomorphic encryption market revenue forecast for 2022
1.3 Market share by company in 2017
1.4 Market share by type in 2017
1.5 Market share by application in 2017
2 Report Scope
2.1 Market definition
2.2 Product segmentation
2.2.1 Segmentation by product type
2.2.2 Segmentation by type
2.2.3 Segmentation by applications
2.3 Main participants and products
3 Market Introduction
3.1 Market overview
3.2 Market dynamics
3.2.1 Drivers
3.2.2 Challenges
3.3 Industry trends
3.3.1 Trend #1
3.3.2 Trend #2
3.4 Industry chain analysis
4 Global Homomorphic Encryption Market Segmentation Analysis by Type
4.1 Partial homomorphic encryption
4.1.1 Introduction
4.1.2 Partial homomorphic encryption segment market data, 2012–2017
4.1.3 Partial homomorphic encryption segment market data by region, 2012–2017
4.2 Somewhat homomorphic encryption
4.2.1 Introduction
4.2.2 Somewhat homomorphic encryption segment market data, 2012–2017
4.2.3 Somewhat homomorphic encryption segment market data by region, 2012–2017
4.3 Full homomorphic encryption
4.3.1 Introduction
4.3.2 Full homomorphic encryption segment market data, 2012–2017
4.3.3 Full homomorphic encryption segment market data by region, 2012–2017
5 Market Segmentation by Application
5.1 Industrial
5.1.1 Introduction
5.1.2 Industrial segment market data, 2012–2017
5.1.3 Industrial segment market data by region, 2012–2017
5.2 Finance
5.2.1 Introduction
5.2.2 Finance segment market data, 2012–2017
5.2.3 Finance segment market data by region, 2012–2017
Report Details at — https://www.marketresearchoutlet.com/report/homomorphic-encryption-market
[Global Homomorphic Encryption Market Report 2018, by Satya Kumar, published 2018-05-05, tagged Machine Learning: https://medium.com/s/story/global-homomorphic-encryption-market-report-2018-1a573e3aea03]
package demo;
import ib.controller.ApiController;
import java.util.ArrayList;
public class ConnectionHandlerImplementation implements ApiController.IConnectionHandler {
@Override
public void connected() {
//Do something when connected
System.out.println("Connected");
}
@Override
public void disconnected() {
//Do something when disconnected
}
@Override
public void accountList(ArrayList<String> list) {
//Do something with the account list
}
@Override
public void error(Exception e) {
//Do something on error
}
@Override
public void message(int id, int errorCode, String errorMsg) {
//Do something with server messages
}
@Override
public void show(String string) {
//Do something with parameter
}
}
package demo;
import ib.controller.ApiConnection;
public class LoggerImplementation implements ApiConnection.ILogger {
@Override
public void log(String valueOf) {
//Do something
}
}
package demo;
import ib.controller.ApiController;
public class Demo {
//We need instances of our logger implementation
static LoggerImplementation inLogger = new LoggerImplementation();
static LoggerImplementation outLogger = new LoggerImplementation();
//We need an instance of our connection handler implementation
static ConnectionHandlerImplementation connectionHandler = new ConnectionHandlerImplementation();
//We need an instance of the ApiController
static ApiController apiController = new ApiController(connectionHandler, inLogger, outLogger);
public static void main(String[] args){
apiController.connect("localhost", 7497, 0);
}
}
package demo;
import ib.controller.ApiController;
import ib.controller.NewTickType;
import ib.controller.Types;
import java.util.ArrayList;
public class TopMktDataHandlerImplementation implements ApiController.ITopMktDataHandler {
ArrayList<Double> prices = new ArrayList<>();
@Override
public void tickPrice(NewTickType tickType, double price, int canAutoExecute) {
//Do something with the price response
if(tickType.equals(NewTickType.LAST)) {
prices.add(0, price);
System.out.println("Current Price: "+price);
}
}
@Override
public void tickSize(NewTickType tickType, int size) {
//Do something with the volume response
}
@Override
public void tickString(NewTickType tickType, String value) {
//Do something with a specific tickType
}
@Override
public void tickSnapshotEnd() {
//Do something on the end of the snapshot
}
@Override
public void marketDataType(Types.MktDataType marketDataType) {
//Do something with the type of market data
}
}
static NewContract initializeContract(){
NewContract nq = new NewContract();
nq.localSymbol("NQU8");
nq.secType(Types.SecType.FUT);
nq.exchange("GLOBEX");
nq.symbol("NQ");
nq.currency("USD");
nq.multiplier("20");
return nq;
}
apiController.reqTopMktData(initializeContract(), "", false, mktDataHandler);
package demo;
import ib.controller.ApiController;
import ib.controller.NewContract;
import ib.controller.Types;
public class Demo {
//We need instances of our logger implementation
static LoggerImplementation inLogger = new LoggerImplementation();
static LoggerImplementation outLogger = new LoggerImplementation();
//We need an instance of our connection handler implementation
static ConnectionHandlerImplementation connectionHandler = new ConnectionHandlerImplementation();
//We need an instance of the ApiController
static ApiController apiController = new ApiController(connectionHandler, inLogger, outLogger);
//We need an instance of the market data handler implementation
static TopMktDataHandlerImplementation mktDataHandler = new TopMktDataHandlerImplementation();
//We also need to initialize our contract
static NewContract initializeContract(){
NewContract nq = new NewContract();
nq.localSymbol("NQU8");
nq.secType(Types.SecType.FUT);
nq.exchange("GLOBEX");
nq.symbol("NQ");
nq.currency("USD");
nq.multiplier("20");
return nq;
}
public static void main(String[] args){
apiController.connect("localhost", 7497, 0);
apiController.reqTopMktData(initializeContract(), "221", false, mktDataHandler);
}
}
package demo;
import java.util.ArrayList;
public class EntrySignalA {
private ArrayList<Double> prices;
public EntrySignalA(ArrayList<Double> prices){
this.prices = prices;
if(prices.size()>1) {
if (prices.get(0) - prices.get(1) > 3) {
//Buy
}
if (prices.get(0) - prices.get(1) < -3) {
//Sell
}
}
}
}
package demo;
import ib.controller.*;
import java.util.ArrayList;
import java.util.List;
public class OrderHandler implements ApiController.ILiveOrderHandler, ApiController.IOrderHandler {
//This is taken straight from the API Documentation, with some minor modifications
private static List<NewOrder> BracketOrder(int parentOrderId, Types.Action action, int quantity, double limitPrice, double takeProfitLimitPrice, double stopLossPrice) {
//This will be our main or "parent" order
NewOrder parent = new NewOrder();
parent.orderId(parentOrderId);
parent.action(action);
parent.orderType(OrderType.LMT);
parent.totalQuantity(quantity);
parent.lmtPrice(limitPrice);
//The parent and children orders will need this attribute set to false to prevent accidental executions.
//The LAST CHILD will have it set to true.
parent.transmit(false);
NewOrder takeProfit = new NewOrder();
takeProfit.orderId(parent.orderId() + 1);
takeProfit.action(action.equals(Types.Action.BUY) ? Types.Action.SELL : Types.Action.BUY);
takeProfit.orderType(OrderType.LMT);
takeProfit.totalQuantity(quantity);
takeProfit.lmtPrice(takeProfitLimitPrice);
takeProfit.parentId(parentOrderId);
takeProfit.transmit(false);
NewOrder stopLoss = new NewOrder();
stopLoss.orderId(parent.orderId() + 2);
stopLoss.action(action.equals(Types.Action.BUY) ? Types.Action.SELL : Types.Action.BUY);
stopLoss.orderType(OrderType.STP);
//Stop trigger price
stopLoss.auxPrice(stopLossPrice);
stopLoss.totalQuantity(quantity);
stopLoss.parentId(parentOrderId);
//In this case, the low side order will be the last child being sent. Therefore, it needs to set this attribute to true
//to activate all its predecessors
stopLoss.transmit(true);
List<NewOrder> bracketOrder = new ArrayList<>();
bracketOrder.add(parent);
bracketOrder.add(takeProfit);
bracketOrder.add(stopLoss);
return bracketOrder;
}
//Contract initializer for simplicity
static NewContract initializeContract(){
NewContract nq = new NewContract();
nq.localSymbol("NQU8");
nq.secType(Types.SecType.FUT);
nq.exchange("GLOBEX");
nq.symbol("NQ");
nq.currency("USD");
nq.multiplier("20");
return nq;
}
//Implementation of the method to create bracket orders
public void placeBracketOrder(int parentOrderId, Types.Action action, int quantity, double limitPrice, double takeProfitLimitPrice, double stopLossPrice){
List<NewOrder> bracketOrder = BracketOrder(parentOrderId,action,quantity,limitPrice,takeProfitLimitPrice,stopLossPrice);
for(NewOrder o : bracketOrder) {
Demo.apiController.placeOrModifyOrder(initializeContract(), o,this);
}
}
@Override
public void orderState(NewOrderState orderState) {
}
@Override
public void orderStatus(OrderStatus status, int filled, int remaining, double avgFillPrice, long permId, int parentId, double lastFillPrice, int clientId, String whyHeld) {
}
@Override
public void handle(int errorCode, String errorMsg) {
}
@Override
public void openOrder(NewContract contract, NewOrder order, NewOrderState orderState) {
}
@Override
public void openOrderEnd() {
}
@Override
public void orderStatus(int orderId, OrderStatus status, int filled, int remaining, double avgFillPrice, long permId, int parentId, double lastFillPrice, int clientId, String whyHeld) {
}
@Override
public void handle(int orderId, int errorCode, String errorMsg) {
}
}
package demo;
import ib.controller.Types;
import java.util.ArrayList;
public class EntrySignalA {
private ArrayList<Double> prices;
private OrderHandler orderHandler;
public EntrySignalA(ArrayList<Double> prices){
this.prices = prices;
this.orderHandler = new OrderHandler();
if(prices.size()>1) {
if (prices.get(0) - prices.get(1) > 3) {
orderHandler.placeBracketOrder(1000, Types.Action.BUY, 1, 1, 1, .5);
}
if(prices.get(0) - prices.get(1) < -3) {
orderHandler.placeBracketOrder(2000, Types.Action.SELL, 1, 1, 1, .5);
}
}
}
}
package demo;
import ib.controller.ApiController;
import ib.controller.NewTickType;
import ib.controller.Types;
import java.util.ArrayList;
public class TopMktDataHandlerImplementation implements ApiController.ITopMktDataHandler {
ArrayList<Double> prices = new ArrayList<>();
@Override
public void tickPrice(NewTickType tickType, double price, int canAutoExecute) {
//Do something with the price response
if(tickType.equals(NewTickType.LAST)) {
prices.add(0, price);
System.out.println("Current Price: "+price);
new EntrySignalA(prices); //Check for signal
}
}
@Override
public void tickSize(NewTickType tickType, int size) {
//Do something with the volume response
}
@Override
public void tickString(NewTickType tickType, String value) {
//Do something with a specific tickType
}
@Override
public void tickSnapshotEnd() {
//Do something on the end of the snapshot
}
@Override
public void marketDataType(Types.MktDataType marketDataType) {
//Do something with the type of market data
}
}
[First published 2018-08-03; last updated 2018-08-15.]
Algorithmic Trading System Development
Quantitative Development
Often a Quantitative Researcher will develop trading models in Python or R. These models are then passed off to Quantitative Developers, who implement them in trading systems with Java or C++. Usually, a Quantitative Trader will then execute trades with the help of these systems. I have had the opportunity to work with the Interactive Brokers Java API for years as a researcher, developer, and trader. In this article we will build an algorithmic trading system for model-based automatic trade execution. There are conceptually infinite design patterns to follow when developing trading systems, but the purpose of this article is to offer simple solutions to the most common development stages. I will break this article up into three sections:
Connecting to Interactive Broker’s Trader Work Station (TWS)
Creating a Live Market Data Stream
Implementing Models for Automatic Trade Execution
To install Interactive Brokers TWS visit: Interactive Brokers TWS Download
To install the API you will need to visit: Interactive Brokers API Download
If you wish to view the documentation of the API it can be found here: Interactive Brokers API Documentation
Important Notes:
The project that I walk you through can be found on GitHub here, and at the bottom of this article
If you do not have an account with Interactive Brokers, you can use the demo account [Username: edemo Password: demouser] to follow along and build a trading system for free
I assume you have intermediate Java programming experience
I assume you have some knowledge of working with APIs, and are capable of the installation process and setup within your IDE
When trading on a live account, live market data streams for certain securities will require a Level I Market Data Subscription, more information can be found here: Market Data Subscriptions
Connection
In this section we will look at connecting to TWS through Java, and several idiosyncrasies of the process.
Configuring TWS
The first step is to configure your TWS. Start by logging in either with the demo account above, or a personal account, and do the following:
Select “Classic TWS” → Select “Configure”
Select “API” → Enable ActiveX and Client Sockets
Disable Read Only API → Apply → Ok
We have now allowed for an API connection from 127.0.0.1:7497
We have also allowed Java to execute trades by disabling Read Only API
If your connection is not successful, you may have to forward a port. For more information on ports, see: Interactive Brokers Host and Port Documentation
Connection Handling
By now I assume you have installed the API from Interactive Brokers, and have set up a work space containing it in your preferred IDE. To establish a connection between Java and TWS, we need a way to handle connection events between the server and client. To handle connection events we will create an implementation of the IConnectionHandler interface:
Next we need to create an implementation of the ILogger interface:
Now that we know how we are going to handle connections, and how we are going to log information (with our implementations of the interfaces), we can establish a connection to TWS. I will connect to TWS in my project’s main class Demo:
The primary controller is the ApiController, through which we send our requests and receive our responses from TWS.
If the connection is successful you should see this in your terminal:
Now that we have successfully connected to TWS, we will look to create a live market data stream to handle real time data.
Live Market Data Streams
In this section we will look at the process of setting up a market data stream.
Market Data Handler
Similar to our connection handler, we have to create an implementation of the ITopMktDataHandler interface so we can handle the data the server responds with. We will receive data from the server through methods in the instance of the implementation we pass as a parameter. The server will respond with data as it changes, so changes in features such as Price, Volume, Bid_Size, and Ask_Size will be sent to their respective methods in the ITopMktDataHandler implementation. To implement any sort of trading model in the next section, we will need a way to capture this data. For now let's store the last price as it changes at the top of an ArrayList:
In short, we are implementing the ITopMktDataHandler interface in a new class called TopMktDataHandlerImplementation to handle the data the server responds with. The method tickPrice will receive data from the server as it changes. When the server responds with a change in the last price, we add it to the ArrayList prices.
Visualizing the Price ArrayList’s Behavior
There is a reason we are changing the way new items are added to the ArrayList. Here is a graphical representation of how this works:
As a new price arrives, we insert it at the top of the ArrayList; this allows us to get the most recent price with the statement prices.get(0). This will be more relevant in the modeling and trade execution section.
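To make this concrete, here is a minimal, self-contained sketch of the head-insertion pattern our handler uses (outside the API, with made-up prices):

```java
import java.util.ArrayList;
import java.util.List;

public class PriceListDemo {
    // Insert each new tick at index 0, so the newest price is always first.
    static List<Double> record(double[] ticks) {
        ArrayList<Double> prices = new ArrayList<>();
        for (double tick : ticks) {
            prices.add(0, tick);
        }
        return prices;
    }

    public static void main(String[] args) {
        // Ticks arrive oldest to newest...
        List<Double> prices = record(new double[]{7100.25, 7100.50, 7101.00});
        // ...but prices.get(0) is always the most recent one.
        System.out.println("Most recent price: " + prices.get(0));
    }
}
```

Note that add(0, price) shifts every element, so it is O(n) per insertion; an ArrayDeque would avoid the shifting for long sessions, but the ArrayList keeps the example simple.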
Creating a Contract Object
The most important part of this process is establishing a NewContract object to pass as a parameter in our request. To do this we will need to initialize a NewContract object with parameters about the security we wish to stream. I will be using a futures contract as an example:
Requesting the Live Market Data Stream
Now that we have a way to handle the data stream, and have defined a security to stream, let’s take a look at and breakdown the request method:
The first parameter is the contract, so I will simply pass the method we initialized our contract in.
The second parameter can be left as an empty string.
The third parameter determines whether or not we are requesting a snapshot, and we obviously want to establish this request as a stream.
The last parameter is our ITopMktDataHandler implementation. This allows us to do things with the data the server responds with.
Here is how the main class Demo looks updated for establishing a data stream:
Upon running this we should get the following output in the terminal:
We have successfully created a market data stream for our specified contract.
Models for Automatic Trade Execution
This is probably the most glamorous section of algorithmic trading system development. This section is all about implementing models, and automatic order execution.
Model Development
This is the most important part of our trading system. If our model is not profitable, we will not make any money. Though we will not talk about the model development process much in this article, I wanted to touch on it briefly. (I will dedicate an entire article to model development.) For more detailed discussion, check out posts by Auquan on building simple strategies.
As I mentioned briefly in the introduction, model development is usually a different role falling under the responsibilities of a Quantitative Researcher. The researcher develops a model in Python or R, the developer implements it in Java or C++, and the trader is responsible for trade execution. Most researchers use some form of machine learning to assist in developing their models. This means having knowledge of data science libraries in Python such as Pandas, Scikit-Learn, and Numpy, and packages in R such as quantmod. If you wish to learn more about machine learning applications in model development, this article from Auquan gives great insight into the process. Though there are several ways to develop trading models, it should be obvious that it is by no means an exact science. Some models are designed with genetic capabilities, to continuously adapt to market conditions, and others are set with loss limits, so when they stop performing they get scrapped. For Quantitative Research, advanced financial, mathematical, and statistical knowledge is a must-have. These skills are on par with the creativity and perseverance that drive the modeling process.
The general model design process can be seen as the following:
Read a PhD’s Paper or Other Relevant Information→ Sketch Out Model Ideas → Backtest Models (With some critical metrics to evaluate performance) → Scrap Non-Performing Models (Usually about 70%-90% of all model ideas) → Continue to Develop and Scrutinize Remaining Models → Look at Production Potential → Repeat Process.
You can read more on this process here.
Arbitrary Model for Implementation
For the purpose of this article, I will be using a simple strategy, based on a price change threshold, for a Futures Contract. Here is a visual representation of the entry strategy:
The next step is developing an exit strategy. For simplicity's sake, my exit strategy is simply a limit and a stop placed 1 point up and .5 points down, respectively. (To keep costs equivalent after fees.)
The purpose of this model is to show a simple solution for model based automatic order execution. I do NOT advise implementing this strategy on a live account.
Model Implementation
We will be getting the information from the price ArrayList in the TopMktDataHandlerImplementation. After getting the prices, we will want to develop a class to handle the (Model based) entry signal, and then a class to handle sending orders. I am going to use the class EntrySignalA as my main signal:
Now that we have a way to get the price to our signal, to determine if we should buy or sell, we need to create a class to handle orders. We will create a class to both implement custom orders and handle active orders.
Now we can use the placeBracketOrder method in our signal class as such:
Important Note: In this example I use 1000 and 2000 as the parentOrderId. A trade will not be sent if it does not have a unique Order Id. However, if you reset the Order Id API Sequence in TWS, all existing Order Ids will be reset, and may be reused. There are many ways to generate unique order Ids, for example converting the current date/time to an integer.
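As a sketch of that last idea, here is one possible scheme for deriving an order id from the clock. This helper is not part of the IB API; the class name and approach are my own:

```java
public class OrderIds {
    // Map a millisecond timestamp to a positive int order id.
    // Ids requested at least one second apart are distinct until the
    // seconds counter wraps around Integer.MAX_VALUE.
    static int orderIdFor(long epochMillis) {
        return (int) ((epochMillis / 1000L) % Integer.MAX_VALUE);
    }

    public static void main(String[] args) {
        System.out.println("Next order id: " + orderIdFor(System.currentTimeMillis()));
    }
}
```

In a real system you would also want to seed from the next valid id TWS reports at connection time, so restarts never reuse an id.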
Putting it All Together
If we think about this logically, there is a very simple way to implement this in our system. Just instantiate the signal class at the bottom of the method the server calls for every change in price:
We now have a simple system that automatically executes trades based on our model. This is what it will look like when our model executes a Buy order:
In the demo environment liquidity can be non-existent
Quantitative Developer — System Development
As a Quantitative Developer, often your job is to create trading systems based on models that have been rigorously backtested. In this example, I walked you through the development of an algorithmic trading system, and gave simple solutions to critical development stages. There are infinite ways to improve this system (and obviously the model), such as:
Adding Security Flags in the Trade Execution System
Refusing to Execute Orders at Non-Optimal Prices
Analysis Libraries for Current Model Performance
Etc…
All of the code in this article can be found on GitHub: Quant_Dev_System
[Algorithmic Trading System Development, by Roman Paolucci (Data Scientist), published in Auquan's Medium publication, tagged Trading: https://medium.com/s/story/algorithmic-trading-system-development-1a5a200af260]
[First published 2017-12-17.]
Temporal Context Network for Activity Localization in Videos
This paper proposes a Temporal Context Network (TCN) for activity detection. The main contribution is showing that activity context improves activity detection accuracy. Similar to Faster R-CNN, the pipeline is divided into three steps: proposal generation, object classification, and bounding box refinement. Before explaining these steps, let's first define a video activity as a video segment (b, e), where b and e denote the beginning and end of the segment. Every activity contains one or more actions or events; an event contains multiple actions.
To generate proposals, an untrimmed video is divided into M segments with 50% overlap, each containing L frames. For each segment, K = 20 proposals (b, e) are generated, as shown in the figure
After proposal generation, a feature representation for ranking proposals is required. The untrimmed video's frames are sampled at rate m = T * 2 / fps, where T is the number of frames, fps is the number of frames per second, and 2 is a hyperparameter. This yields a video feature vector F = {f_1, f_2, …, f_m}.
Using the video feature vector F, a proposal feature vector is constructed by sampling n frame features within the proposal segment (b, e). A proposal is represented by a feature vector Z_{i,k} = {z_1, z_2, …, z_n}, where i is the proposal index and k is the temporal scale.
To detect an activity, a pair of proposal features Z_{i,k}, Z_{i,k+1}, from two consecutive scales are fed to a Temporal CNN (TCN) as shown below
After applying the TCN, the proposal feature vectors are concatenated and fed to a fully connected layer to compute the activity detection loss. In parallel, a similar classification pipeline classifies the activity; unlike the detection pipeline, it uses only one proposal. This figure summarizes the whole neural network
Comments:
The network considers temporal proposals but no spatial proposals. This probably hinders its ability to detect actions in the background.
The authors say the classification problem is more difficult: the network's classification accuracy is lower than its detection accuracy. The network processes untrimmed video, which is probably the reason for such results.
It is not clear why a context proposal is not used for classification as it is for detection; after all, the main idea of this paper is that context matters.
[Temporal Context Network for Activity Localization in Videos, by Ahmed Taha, tagged Machine Learning: https://medium.com/s/story/temporal-context-network-for-activity-localization-in-videos-1a5a9ff4c3d8]
[First published 2017-11-20.]
Just the beginning: 6 applications for machine learning in radiology beyond image interpretation
Discussions about machine learning’s impact on radiology might begin with image interpretation, but that’s only the tip of the iceberg. When it comes to realizing the technology’s full potential, it’s like Bachman Turner Overdrive sang many years ago: You ain’t seen nothing yet.
The authors of a new analysis published in the Journal of the American College of Radiology wrote at length about the many applications of machine learning.
“Machine learning has the potential to solve many challenges that currently exist in radiology beyond image interpretation,” wrote lead author Paras Lakhani, MD, department of radiology at Thomas Jefferson University Hospital in Philadelphia, and colleagues. “One of the reasons there is great excitement in radiology today is the access to digital Big Data. Many institutions have implemented electronic health care databases over the past two decades, including for images in PACS, radiology reports and ordering information in Radiology Information Systems, and electronic health records that encompass information from other sources, including clinical notes, laboratory data and pathology records. Moreover, radiology images themselves are rich in metadata stored in the DICOM format, which may be leveraged as well. As such, there are great opportunities to uncover complex associations within the data using machine learning that would otherwise be difficult for a human to do.”
These are some of the many examples Lakhani et al. provided of how machine learning can be used in radiology beyond image interpretation:
Developing safety protocols is an important part of any radiologist’s job, the authors noted, and machine learning can help speed up the entire process.
“This can be a time-consuming but important task,” the authors wrote. “However, recent studies demonstrate that machine learning algorithms utilizing information extracted from the provided study indications can be accurate in determining protocols of studies in both brain and body MRIs.”
Decreasing radiation dose is a huge topic in medical imaging. Lakhani et al.
Posted on 7wData.be.
[Just the beginning: 6 applications for machine learning in radiology beyond image interpretation, by Yves Mulkers, tagged Big Data: https://medium.com/s/story/just-the-beginning-6-applications-for-machine-learning-in-radiology-beyond-image-interpretation-1a5ab40a568d]
[First published 2018-08-20.]
From «X-as-a-Service» to «Everything-as-a-Service»
The cloud market might experience a notable change in competition. Multicloud could be the coming USP and a way of innovative differentiation.
In an earlier article, I emphasized the importance of an infrastructure that is able to cope with new modern initiatives and steady growth as business development progresses. Cloud computing is undoubtedly the promise of having a modern and state-of-the-art IT infrastructure without the need for substantial capital investments and personnel increases. Unsurprisingly, cloud market revenue is expected to double in the next three years. Surprisingly, though, the market is dominated by only a few giants. More than 65% of the cloud computing market is occupied by a few leading giant providers such as Amazon AWS, Microsoft Azure and Google Cloud. The remaining 35% market share is in the hands of thousands of cloud providers scattered around the world.
Nowadays we increasingly notice a change in cloud competition. Microsoft Azure has surpassed Amazon AWS in terms of annual revenues. The cloud industry, with its huge potential, seems to be undergoing a change in competition, which can be explained by a few notable emerging trends:
The few giants that make up the cloud industry form the top tier of cloud providers, while a number of smaller and niche players occupy 35% of the market. Interestingly, Amazon AWS, Microsoft Azure and Google Cloud are all American cloud providers.
Releasing mainstream features to duplicate competitors' offerings has become harder than before. Every provider strives to stand out! Nonetheless, compute and storage are more or less commodities provided by most of the cloud providers, while innovation is mainly happening in areas such as big data, AI or machine learning.
Despite the large number of smaller cloud providers, the giants still set the prices. According to critics, one of the giants adjusts its pricing scheme once or twice a year.
Considering these three trends, the cloud market does not appear to be changing any time soon. An innovation-driven sector such as the cloud risks falling into commoditization. An increasingly strong price competition shows how difficult it has become to compete on features. The question, therefore, is which emerging trends express exactly what today's customers are looking for.
Amazon AWS users share one common experience: entering the world of AWS is easy, but the exit is exceedingly hard to find. The cloud exit strategy is a thorn in the side of cloud providers, which look desperately for ways to lock in their customers before they join a competitor. Attractive pricing models or charging on a per-unit basis are good examples of keeping a customer as long as possible. Another option includes making data transfer to other cloud platforms expensive and unattractive. Nevertheless, the best way of making a customer stay is to respond to their needs and expectations. Differentiation is therefore what most cloud providers should be looking for. By observing the cloud industry, we immediately realize that with technology in this industry, especially when it comes to software, it is particularly easy for your innovation to quickly become a commodity for your competitors. The smartphone industry shows the same characteristics.
Image: CultureWatch
It has become extremely difficult to stand out and excel past competitors through unique innovations that are hard to copy. When it comes to the cloud market, the only option one has is products or services that respond to the greatest number of customers' needs and expectations. Now, cloud providers should get away from that typical "X-as-a-service". Customers should enjoy flexibility and decide freely how to integrate the cloud into their daily business. "Make your own cloud" by n'cloud.swiss could indeed be a trendsetter in this cloud market that falls into commodity and lacks pure innovation to stand out from the competition. This Swiss multicloud provider developed an innovative cloud platform to compete with the likes of Amazon AWS and Microsoft Azure. Others might follow very soon, but for now n'cloud.swiss is definitely the cloud provider to consider!
|
From «X-as-a-Service» to «Everything-as-a-Service»
| 25
|
from-x-as-a-service-to-everything-as-a-service-1a5ae04082e6
|
2018-08-24
|
2018-08-24 08:34:13
|
https://medium.com/s/story/from-x-as-a-service-to-everything-as-a-service-1a5ae04082e6
| false
| 689
|
The digital world of n'cloud is exactly how the world turns today. Changing, moving and in full motion. Explore the opportunity of getting the latest news and publications on interesting topics such as innovation, technology, cloud computing, blockchain, ICOs, AI and many more.
| null |
netkomitservices
| null |
n'world publications
|
yma@ncloud.swiss
|
nworld-publications
|
CLOUD COMPUTING,BLOCKCHAIN,AI,ICO,INNOVATION
|
ncloudswiss
|
Cloud Computing
|
cloud-computing
|
Cloud Computing
| 22,811
|
Yahya Mohamed Mao
|
Head of Marketing Services at n'cloud.swiss AG | Writing about Cloud Computing, Innovation Management, Blockchain and AI | Aspiring to write for Forbes
|
93d9bd922ecb
|
yahyaibnmohamed
| 1,203
| 617
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-13
|
2018-09-13 05:53:09
|
2018-09-13
|
2018-09-13 06:14:55
| 0
| false
|
en
|
2018-09-13
|
2018-09-13 06:14:55
| 0
|
1a5b066bd1af
| 0.596226
| 0
| 0
| 0
|
There are several splitters in sklearn.model_selection to split data into train and validation data, here I will introduce two kinds of…
| 1
|
Splitters in sklearn
There are several splitters in sklearn.model_selection to split data into train and validation data, here I will introduce two kinds of them: KFold and ShuffleSplit.
KFold
Splits the data into k folds of equal size; each iteration uses one fold as validation data and the rest as training data. To access the indices, iterate with for train, val in kf.split(X):.
If shuffle=True, data will be shuffled before the split.
There are variants like StratifiedKFold, RepeatedKFold, RepeatedStratifiedKFold.
StratifiedKFold ensures that the distributions of classes in each split are the same as the distribution in original data.
RepeatedKFold will just repeat KFold n times.
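A minimal sketch of KFold and StratifiedKFold on a toy array (the data and fold counts here are illustrative, not from the post):

```python
import numpy as np
from sklearn.model_selection import KFold, StratifiedKFold

X = np.arange(20).reshape(10, 2)       # 10 samples, 2 features
y = np.array([0] * 5 + [1] * 5)        # two balanced classes

# KFold yields index arrays, not data: iterate over kf.split(X)
kf = KFold(n_splits=5, shuffle=True, random_state=0)
val_sizes = [len(val) for _, val in kf.split(X)]   # each fold holds out 10/5 = 2 samples

# StratifiedKFold also takes y, so it can keep class proportions per fold
skf = StratifiedKFold(n_splits=5)
fold_classes = [sorted(y[val]) for _, val in skf.split(X, y)]
```

With 5 samples per class and 5 splits, every stratified fold holds exactly one sample of each class.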
ShuffleSplit
Shuffles the data before splitting it into train and validation data. Specify n_splits to get multiple splits.
There is a StratifiedShuffleSplit.
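A quick ShuffleSplit sketch (toy data, illustrative parameters). Note that, unlike KFold, the n_splits draws are independent, so validation sets may overlap across splits:

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit

X = np.arange(20).reshape(10, 2)

# 3 independent random splits, 30% of samples held out each time
ss = ShuffleSplit(n_splits=3, test_size=0.3, random_state=0)
split_sizes = [(len(train), len(val)) for train, val in ss.split(X)]
```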
train_test_split
A function in sklearn.model_selection to quickly get a split.
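For example (toy data; the stratify argument keeps the class ratio identical in both halves of the split):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.array([0] * 5 + [1] * 5)

# one-line holdout split: 80% train, 20% validation, class-balanced
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
```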
resample and shuffle
Both in sklearn.utils.
If replace=True, resample will sample with replacement; n_samples specifies the number of samples. So you can use resample to bootstrap.
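A small sketch of both utilities (toy data):

```python
import numpy as np
from sklearn.utils import resample, shuffle

data = np.arange(10)

# replace=True draws with replacement: exactly a bootstrap sample
boot = resample(data, replace=True, n_samples=10, random_state=0)

# shuffle returns a permuted copy: same elements, new order
mixed = shuffle(data, random_state=0)
```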
|
Splitters in sklearn
| 0
|
splitters-in-sklearn-1a5b066bd1af
|
2018-09-13
|
2018-09-13 06:14:55
|
https://medium.com/s/story/splitters-in-sklearn-1a5b066bd1af
| false
| 158
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Shao Wang
|
Machine Learning, Artificial Intelligence
|
364d1cf7becd
|
wsdonny
| 0
| 14
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
7806e848e54a
|
2018-08-20
|
2018-08-20 14:48:42
|
2018-08-20
|
2018-08-20 14:53:23
| 2
| false
|
en
|
2018-09-11
|
2018-09-11 14:22:46
| 4
|
1a5b33b3b1e2
| 6.005975
| 5
| 0
| 0
|
Reminiz was asked by brands to test tags’ accuracy when bidding on YouTube video keywords. Our results showed that using celebrities’ names…
| 5
|
What lies behind YouTube Search infinite results
Reminiz was asked by brands to test tags’ accuracy when bidding on YouTube video keywords. Our results showed that using celebrities’ names to target videos, brands miss their target 4 out of 10 times
It happens way more than you’d wish: the answer does not match the request. As YouTube gets better and better at pushing related videos depending on what you just watched and liked, it may not be as advanced at providing a wide range of videos related to a requested celebrity. Of course, it takes more than the first result page to realize it. But YouTube search suffers from much the same syndrome as Google when it comes to providing accurate results in the long run. Millions of results, far fewer interesting ones.
Being able to provide hundreds of thousands of videos on a single search is one thing. Making sure that the results match the actual request is another, especially when you wish to bid on these videos and advertise on content. So what lies behind YouTube Search’s infinite results? And more importantly, can we trust those results when assessing thousands of them?
A FICKLE AND UNSTABLE INVENTORY
Over time at Reminiz, we became experts in facial recognition and video understanding. Intuitively, the first videos we started to work on a few years ago were movies, television series, all kinds of “clean” content that would limit uncertainty related to a celebrity’s presence. In addition, long videos would allow gathering a significant number of identical tracks for the same celebrity. When real-time processing became possible, literally recognizing all faces and brands popping out on the screen with no latency, new forms of opportunities emerged, such as monitoring live television.
YouTube videos represent a new paradigm shift: they mix professional and personal content, very old videos and recent ones, featured celebrities and random people, long and short formats. But the main complexity lies somewhere else: this inventory has no end. A nightmare if you “try to look for someone” in millions of videos. Every day, more than 600K hours are uploaded to YouTube: how can you be sure you have any control over the inventory’s boundaries? The least you can do is target your search to make sure it provides the most accurate videos for your set criteria. Whether you are a user or an advertiser, you expect the videos that are pushed to be, as promised, search-relevant. But are they?
A MATTER OF RELEVANCE
The tricky part with relevancy is that it remains, somehow, very subjective. One might think a celebrity-relevant video is a video in which the celebrity appears. But let’s take an example to see how blurry the line is. Say you are looking for relevant videos related to Donald Trump. You might be interested, of course, in all videos where Trump appears. All of them, really? What about a three-year-old video of him, compared to a very recent video talking about him without him appearing? And what about an Alec Baldwin impression of him vs. an actual video of him playing golf? At some point, relevancy stops being strict and is mostly accurate for whoever set the rules.
We do not intend to solve this relevance matter yet at Reminiz. However, we tend to believe that this lack of clarity when presenting search results favors YouTube in presenting not so relevant propositions. After all, one could always argue that relevance is also based on tags. But tags are based on what people tag. And people tag anything their way. Again, relevancy becomes a real bias when presenting a search result. A simple test showed us how much we had the right intuition on this.
Thanks to the Reminiz neural recognition network, we checked the first 1,000 videos displayed when searching for “Tsonga[1]” and sorting by relevancy. We were expecting flaws in our results, but they came sooner than expected. Tsonga appears in only 77.2% of the first 500 videos. Past the first 500 videos, more than 40% of the videos no longer feature Tsonga at all.
When observing these results, we dug into some of these videos to understand why the presence rate had dropped so drastically. It turned out to be impossible to find a single pattern explaining how and why such videos were pushed. Among videos not featuring Tsonga, some were merely tennis-related, others sport-related, others carried the tag “Tsonga”, and others had no link whatsoever to Tsonga himself. On the other hand, some of these “Tsonga-unrelated” videos were linked to previous searches we had made. At this point, we were led to think that, for YouTube, relevancy was more a matter of “who searches” than “what they search for”.
USER OVER CONTENT
In order to get deeper insights, we checked a new video batch with another celebrity, Kylian Mbappé[2], who had just won the football World Cup. We changed the method and searched for results over a longer period of time: we decided to take the 500 most relevant videos published every 10 days over the last year on Kylian Mbappé. We ended up processing around 10K videos.
Of these 10,000 videos, 37% did not feature Kylian Mbappé at all. From a user’s perspective, it might not be such a big deal. After all, who really gives a rat’s ass about a 6-month-old video appearing on page 8 of the search? And does that really change the trust a user can have in YouTube’s search engine? It matters much more for advertisers, however. A campaign bidding on Kylian Mbappé alone, intending to target all videos where the young football star appears, would miss its target almost 4 out of 10 times. Talk about efficiency.
On all the tests we ran afterwards, even though results varied depending on the celebrity, one thing remained stable: search accuracy would always drop sooner than expected and end up pushing videos with no apparent link to the celebrity anymore. This says a lot about YouTube’s (and the internet’s in general) conceptual approach: the user rules over content. In other words, the content of the video is less important than the user who is watching it.
Taking advantage of Google’s ecosystem, YouTube pushes adequate videos depending on any kind of interaction you might have on its platform or any other Google platform. Previous videos watched, likes, profile information and so on. Nothing new about that. But it might explain why content itself and the ability to “understand” it automatically has been overshadowed so far. Not only have advertisers done the same, relying on always more and more data on users, but they have accepted that this would be the only information they would rely on when advertising on YouTube.
THE END OF THE COOKIE ERA
To put it plainly, advertisers have accepted to bid on criteria set by YouTube, assuming that they would be the best ones to hit the target. Even worse, they have accepted to bid on content they would not even know, both in letting YouTube decide the video inventory for them and using cookies to organize retargeting campaigns. That means in some cases, that advertisers bid on “useless views”: views related to a video that has nothing to do with their advertising strategies.
Again, user data over content.
Although we have nothing against user data, we see three main reasons why this can be a problem. First, because you do not have control over the actual content. And content matters. If you bid on Kylian Mbappé, you want to make sure all your ads run when Mbappé is onscreen. If you know for a fact that your ad runs on Mbappé only 40% of the time and on unrelated videos 60% of the time, we expect you to be reasonably unsatisfied.
Second, because content is context. As viewers become more and more ad-averse, the impact of display ads, TrueView and other inventive formats is questioned. Advertisers would then see a real advantage in contextualizing their presence. Sports ads if I’m watching sports content. Movie trailers if I’m watching movie scenes. Endorsed commercials if I’m watching celebrity-featured content.
Last but not least, because GDPR is creating a new order of things. Advertisers won’t be able to rely so much on user data anymore and will need to find new ways to assess what users are doing and how to reach them. Video understanding might be a very interesting lead on this path. As YouTube grows bigger and bigger day after day, making possible to automatically scan content in detail to refine an advertising campaign might become the smartest next move to keep control of your content inventory.
[1] A French tennis player
[2] A French football player
|
What lies behind YouTube Search infinite results
| 23
|
what-lies-behind-youtube-search-infinite-results-1a5b33b3b1e2
|
2018-09-11
|
2018-09-11 14:22:46
|
https://medium.com/s/story/what-lies-behind-youtube-search-infinite-results-1a5b33b3b1e2
| false
| 1,490
|
Reminiz is a world pioneer video understanding technology offering real-time facial and logo recognition. Augmented Content for a never-seen viewer experience.
| null | null | null |
Reminiz Insights
| null |
reminiz-insights
|
ARTIFICIAL INTELLIGENCE,FACIAL RECOGNITION,ADVERTISING TECHNOLOGY,MEDIA,VIDEO MARKETING
|
reminizapp
|
Advertising
|
advertising
|
Advertising
| 42,520
|
Paul Chaumont
|
Product Manager at Reminiz
|
ec3e3a9f2193
|
paul.chaumont
| 7
| 13
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-16
|
2018-09-16 04:43:25
|
2018-09-16
|
2018-09-16 22:22:53
| 1
| false
|
en
|
2018-09-20
|
2018-09-20 05:19:44
| 0
|
1a5d42934fd1
| 2.992453
| 0
| 0
| 0
|
It is always a good idea to attend conferences to know what is new and meet interesting people who work in the amazing Bay area. Here is a…
| 3
|
My @scale conference experience
Credits — Uber engineering — Michelangelo ML infrastructure
It is always a good idea to attend conferences to know what is new and meet interesting people who work in the amazing Bay area. Here is a summary of events that I attended at the @scale conference this year -
Women’s Leadership Breakfast
At this event, the most important thought that stuck with me was to think ‘long-term’. Even if you take a break in your career for some reason, you can always start something new when you come back. This might enable you to provide more value to the company and your team, as you look at things from a fresh perspective. A break might enable you to think outside the box instead of being stuck in the rut of delivering code as fast and efficiently as possible. There was a mention of kepler.gl, which uses a GPU to analyze huge geospatial datasets in the browser. This will definitely make visualization easier and more accessible.
Keynotes
The keynote ‘Golden age for Computer architecture’ mentioned Moore’s law ending, but that is not the end of the game. It has led to a new beginning with domain-specific architectures (DSAs), which are purpose-built processors that accelerate a few application-specific tasks. GPUs and TPUs, which accelerate neural networks, will be the new standard we look up to. Floating point was a big deal in the 1980s when x86 was introduced; doing really fast integer-based matrix multiplications is the basis of the new architectures.
FB also discussed Glow (a graph-lowering compiler), which takes a neural network graph from PyTorch and optimizes it to run on an ML accelerator. ONNX, an open-source format that allows graphs to be converted between frameworks such as PyTorch and CNTK, is also interesting. PyTorch 1.0 was introduced as a combination of the best features of two frameworks: Caffe2, which is optimized for production (scalable and fast), and PyTorch, which is used in research.
Nvidia discussed Project Maglev for autonomous driving, where K8s is used for programmatic workflows on GPU clusters. Another interesting mention was the TensorRT inference server: developers can focus on creating models instead of optimizing performance for deployed models. RT Cores were introduced at SIGGRAPH this year as part of Turing, making real-time ray tracing possible. The company introduced shiny new GPUs, the Quadro RTX 8000/6000/5000, and the Quadro RTX Server for datacenters. At Gamescom this year they also announced the GeForce RTX 2080 Ti/2080/2070.
There was an interesting demo by FB of Mask R-CNN tracking human movements in real time. Microsoft demoed Brainwave, an FPGA-powered DNN serving platform that works today with TF, with PyTorch support coming soon. There were also interesting demos of Nvidia HGX-2 and sentiment detection.
Friends Don’t Let Friends Deploy Black-Box Models: The Importance of Intelligibility in Machine Learning
The more complex ML models are, the harder they are to understand. Boosted trees, random forests and neural nets are harder to make sense of than linear regression, logistic regression, naive Bayes and decision-list models. This talk was very well delivered and covered how generalized additive models (GAMs) attempt to solve this problem.
Applied Machine Learning at Facebook: An Infrastructure Perspective
A quick overview of how ML is used at FB: SVMs for Facer, GBDTs for Sigma, multi-layer perceptrons (MLPs) for News Feed, ads, search and Sigma, CNNs for Facer and Lumos, RNNs for language translation, speech recognition and content understanding. Also covered were how often models are trained, how much compute each type of workload requires, and how Volta GPUs are used. ONNX, a shared model and operator representation, came up again.
Machine Learning Testing at Scale
This talk mentioned how testing is important after model deployment and the opportunities for testing at every step like preprocessing etc.
Computer Vision at Scale as Cloud Services
Rapid rise of DL: AlexNet, GoogLeNet and ResNet. Microsoft covered how they use computer vision and how privacy was a concern for face and gender detection. One challenge: they found mostly “happy” faces on the net and couldn’t find many sad, shocked or contemptuous face images, so they had to retrain their model after generating enough images of those types.
Scaled Machine Learning Platform at Uber
Described ‘Michelangelo’, an internal machine learning platform that supports big data at Uber.
That wraps it up!
|
My @scale conference experience
| 0
|
my-scale-conference-experience-1a5d42934fd1
|
2018-09-20
|
2018-09-20 05:19:44
|
https://medium.com/s/story/my-scale-conference-experience-1a5d42934fd1
| false
| 740
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Prajakta H
|
Engineer interested in making the world a better place. Passionate about testing. Works @Nvidia. The opinions in the blog are my own.
|
12c13b9180c1
|
prajaktahegde
| 1
| 7
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-03-04
|
2018-03-04 00:22:22
|
2018-03-04
|
2018-03-04 00:33:44
| 0
| false
|
en
|
2018-06-25
|
2018-06-25 23:17:28
| 2
|
1a5eb9f0940d
| 0.573585
| 0
| 0
| 0
|
This post is a part of Jeff’s 12-month, accelerated learning project called “Month to Master.” For February, he is downloading the ability…
| 4
|
M2M Day 62–One thousand Swipes
This post is a part of Jeff’s 12-month, accelerated learning project called “Month to Master.” For February, he is downloading the ability to build an AI.
Now that I have the image extraction program built, I’ll just need to extract and hand-label about 1000 images manually. Here’s a video of what the program looks like:
As you can see, I’m pressing 1 or 2 if I like the photo. Of course, for privacy reasons, I’m not going to show the photo of the person. The next few days will just consist of me swiping at least 1000 times. Let’s go.
Read the next post.
Jeff Li is saving the world by matrix-downloading skills into his brain. He is… “The SuperLearner.”
If you love me and this project, follow this Medium account. If you hate me, you should still follow this Medium account. One option here……
|
M2M Day 62–One thousand Swipes
| 0
|
m2m-day-62-one-thousand-swipes-1a5eb9f0940d
|
2018-06-25
|
2018-06-25 23:17:28
|
https://medium.com/s/story/m2m-day-62-one-thousand-swipes-1a5eb9f0940d
| false
| 152
| null | null | null | null | null | null | null | null | null |
Learning
|
learning
|
Learning
| 37,342
|
Jeffrey Li
|
Accelerated Learning Fanatic | Data Scientist | Educator
|
3899f1e86899
|
dj.jeffmli
| 275
| 143
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
24b00b47988d
|
2018-06-14
|
2018-06-14 09:26:44
|
2018-06-14
|
2018-06-14 10:39:33
| 3
| false
|
zh-Hant
|
2018-06-14
|
2018-06-14 10:40:15
| 2
|
1a60ea1eb2c7
| 0.829245
| 0
| 0
| 0
|
This time, let’s briefly compare some common classification models
| 3
|
A Comparison of Machine Learning Classification Models
This time, let’s briefly compare some common classification models.
With so many classification models available, what differences do they actually show in use?
For a machine learning model, the most important concern is usually accuracy, followed by the computational resources it consumes.
These two questions come up constantly when models are put to real use, so here I take the common classification models and run a simple comparison.
The models compared here are the following:
K-nearest neighbors
Logistic Regression
Decision Tree
Support Vector Machine
Neural Network
To keep the problem simple, the data used here are the Iris, Wine and Digits datasets from the sklearn package; reference: http://scikit-learn.org/stable/datasets/index.html
The advantage of using ready-made sample datasets is that they are fairly complete and of good quality, so the comparison needs little data cleaning or feature engineering.
All three are classification datasets: Iris is the simplest entry-level set, Wine is slightly more complex than Iris, and Digits is a handwritten-digit image dataset. These three are used for a simple test.
My comparison method is as follows:
take the 5 classification models, train each on the 3 datasets using the sklearn default parameters, record each model’s training time and the corresponding F1 score, and finally average each model’s training time and F1 score for the comparison.
The full comparison code is here: https://github.com/AdamYuCheng/Machine-Learning/blob/master/Compare%20with%20different%20Classification%20Model.ipynb
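The linked notebook is the reference; a rough, illustrative sketch of the same loop might look like this (mostly default parameters, with max_iter raised only to quiet convergence warnings, and a plain holdout split standing in for the exact timing setup):

```python
import time
import numpy as np
from sklearn.datasets import load_iris, load_wine, load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

models = {
    "KNN": KNeighborsClassifier(),
    "Logistic": LogisticRegression(max_iter=1000),
    "Tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "NN": MLPClassifier(max_iter=500, random_state=0),
}

results = {name: [] for name in models}
for load in (load_iris, load_wine, load_digits):
    X, y = load(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for name, model in models.items():
        start = time.time()
        model.fit(X_tr, y_tr)             # record training time per model
        train_time = time.time() - start
        f1 = f1_score(y_te, model.predict(X_te), average="weighted")
        results[name].append((train_time, f1))

# average training time and F1 per model across the three datasets
averages = {name: (np.mean([t for t, _ in r]), np.mean([f for _, f in r]))
            for name, r in results.items()}
```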
OK, let’s start comparing.
First, a test on the simplest dataset, Iris; after training, the results look like this:
Training results on the Iris dataset
Logistic Regression took the longest to train, followed by the neural network model.
The F1 scores are all very close; the neural network with default parameters is slightly below the other models.
Next, a test with the Wine dataset:
Training results on the Wine dataset
As the figure shows, once the complexity (number of features) of the data increases, the SVM clearly needs more computation time, while Logistic Regression and the decision tree achieve better results.
Finally, handwritten digit recognition:
Training results on the Digits dataset
The figure shows that once the dimensionality grows substantially, KNN in turn needs far more computation time, while the neural network model consistently takes a long time to train.
As for performance, the decision tree model seems unsuited to image data; it does noticeably worse here than the other models.
Finally, ranking the five models by average training time: NN > Logistic > KNN > SVM > Tree.
And by average F1 score: Logistic > SVM > Tree > KNN > NN.
At first glance, with no tuning at all, the NN seems to perform the worst.
But in recent years, with deep learning methods, a neural network model can deliver explosive results on classification after a certain amount of tuning; the trade-off is that it takes more time to tune the relevant parameters.
When you have just received a dataset, you might first test an SVM or decision tree model to see how it performs.
Logistic Regression, although its training time is a little longer, can still achieve very good results when the number of classes is small.
KNN looks merely average here, but it remains a very handy tool for data cleaning (imputing missing values) or recommender systems.
When searching for the best model, you can first sample a large dataset down to a small one, feed it to every model, and compare time and performance; this establishes a solid baseline against which to decide which model deserves further effort, nicely balancing the trade-off between performance and training time.
That’s it for this comparison of classification models.
I wish everyone speedy training of a suitable model!
|
A Comparison of Machine Learning Classification Models
| 0
|
machine-learning-分類模型的比較-1a60ea1eb2c7
|
2018-06-14
|
2018-06-14 10:40:16
|
https://medium.com/s/story/machine-learning-分類模型的比較-1a60ea1eb2c7
| false
| 74
|
Recording what I think, see, hear and am moved by
| null |
yucheng.adam
| null |
無邊拼圖框
|
aabbzztw@gmail.com
|
無邊拼圖框
|
DATA SCIENCE,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING
| null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Chang Yu-Cheng
|
Dedicated to exploring the value in information; applying data science techniques to keep finding what can be grasped in a future full of uncertainty
|
26bd1d9cd66b
|
AdamCheng
| 1
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-05
|
2018-02-05 11:16:56
|
2018-02-05
|
2018-02-05 11:18:46
| 2
| false
|
en
|
2018-02-05
|
2018-02-05 11:18:46
| 3
|
1a62ced19fb7
| 6.911635
| 0
| 0
| 0
|
There’s been an explosion of interest in ethics, responsible AI and bias in recent years. When built, cultivated and deployed with the…
| 1
|
As Our AI Systems Become More Capable, Should Ethics be an Integral Component to your Business Strategy?
There’s been an explosion of interest in ethics, responsible AI and bias in recent years. When built, cultivated and deployed with the right human oversight, AI has the potential to do significantly more good for the world than harm. However, the key here is the right human oversight, and as AI is becoming more and more accessible, it’s important for all aspects of each product to be designed with the potential ethical repercussions in mind.
Before we get stuck into discussing what is ethically ‘right’ or ‘wrong’, part of the confusion comes from a misunderstanding of what ethics is. At the AI Assistant Summit in San Francisco we were joined by leading minds in the field on the panel discussion ‘As Our AI Systems Become More Capable, Should Ethics be an Integral Component to your Business Strategy?’.
Joining us on the panel was:
Jane Nemcova, VP & NO, Global Services for Machine Intelligence at Lionbridge
Lionbridge provides international organisations with the language, cultural, and technological expertise they need to transform how they communicate globally.
Abhishek Gupta, Prestige Scholar at McGill University, and AI Ethics Researcher at District 3
He is also organiser of the Montreal AI Ethics meetup where members from the community do a deep dive into technical and non-technical aspects of the ethical development of AI.
Cathy Pearl, VP of UX at Sense.ly
Sensely’s virtual nurse avatar, Molly, helps people engage with their health. Cathy is the author of the O’Reilly book “Designing Voice User Interfaces”.
Jake Metcalf, PhD Consultant at Ethical Resolve (Panel Moderator)
Ethical Resolve provides clients with a complete range of ethics services, including establishing ethics committees, market-driven research and engineering and design ethics training programs.
Read a transcript of the panel discussion below to learn more:
What are some of the ethical issues in AI that are most pertinent for AI assistants? What would you encourage your colleagues in AI assistants to pay attention to?
Cathy Pearl: For us, as a healthcare company, we think a lot about patient data privacy. In a broader sense, we want to make sure that we’re ethical in the way we encourage our patients to be compliant. For example, as a company it benefits us if our users do their daily check-in on their health, but it also benefits the patient to see whether they may need more help or adjustments in their medication. However, we don’t want to completely gamify this process. While it’s good for patients to want to check in, it loses its reliability if they’re doing it for the wrong reasons.
Abhishek Gupta: We need to think about what norms are we imposing on people when we put these AIs into action. For example, in the Western world we have an expectation of how a conversation takes place, but is that right for the rest of the world? Are they comfortable with the way we say things? Say you’re building something for the developing world, we need people from that community to work on it too so you don’t impose something that puts people in a position where they get a bad UX because there hasn’t been a fair process behind the creation of the product.
Jane Nemcova: We see challenges with different ways of analysing, but privacy and the various legal issues around data are something we’re very careful with and take a rigorous approach to. However, there’s a larger question about what the big companies creating these AIs are doing — there’s a need for more people who understand the bigger picture to figure out where to draw the line.
It’s interesting to pay attention to the breadth of users. Why do we need to pay attention to this?
Cathy Pearl: There are definitely compliance issues with people taking medication, and there have been so many tech ‘solutions’ built on the assumption that the compliance issue lies in people forgetting their medication. Yes, some people forget, but often that’s not the problem. Maybe people can’t afford the prescription, or they don’t believe their doctor. If you only look at your own issues you’ll miss some, so you need to look at a whole and varied data set.
Abhishek Gupta: Diversity in the collection of datasets is really important. Earlier, we were discussing the example of voice recognition and the issue of accents. If you don’t have a North American accent, the recognition accuracy of the system is poor, because it is trained largely on North American accents. If you train the system on a wide set of accents, it’s more likely to perform accurately for a wider audience.
Jane Nemcova: Our area is specifically in getting scalable data, so we cover India, Africa, Asia and Europe, and yes, the US market is the largest, but the diversity even within the US is enormous. You have discrimination, but it’s changing, and the companies creating these systems realise that to support all the applications and get the best UX, diversity of data is critical.
We hear a lot of discussion about what AI will be like in 10 years. But what do you think AI ethics will look like in 10 years? How should we be structuring technology research and business now in order to deal with future challenges?
Abhishek Gupta: Let’s think about cyber security. Go back 25/30 years. The role of cyber security was to have dedicated teams to act as a final check point before release — if flaws are found it goes back to the beginning of the production cycle: this leads to ‘secure by design’. Everyone thinks about security from the beginning, and in the future we’ll have ‘ethical by design’. It won’t just be a few people in the company thinking about the ethical implications, but every day everyone will be thinking about the consequences.
Jane Nemcova: Even the fact that we’re having this discussion means that ethics is an important issue for us all to consider no matter what our role in AI is, but it struck me a couple of years ago that people who were interested in the ethics weren’t necessarily educated in it. We run into problems when we try to apply one person’s judgment. In 10 years from now we’ll be grappling with new applications that continue to evolve, but the companies and government need to consider this.
Cathy Pearl: At Sensely, our clinical team thinks about patient safety all the time, but I hope in the future AI teams will think about customer safety overall. Currently in AI a 5% fail risk is okay, but say, for example, you were building a suicide prevention app: that fail rate would not be acceptable. So we need ethics teams to ensure products are safe.
Do we need a chief ethics officer or do we have something different for startups and big companies, or do companies that work on the backend need something different for user facing technologies?
Jane Nemcova: What we need is the education of everyone. It’s definitely in society’s interest to think about how it’s affecting everyone. We all need to know how we fit into the world, and our understanding of everything is critical. We need to develop the right habits around that so we can behave well with these systems. In an AI company, every role needs to keep in mind the ethical implications of what they’re doing, even if they’re not experts in the field.
Abhishek Gupta: Definitely. Education is important. The simple solution is making such a course mandatory in university programmes such as Computer Science and rolling it out to other disciplines that will be involved. Internal training courses at companies are also important and could have a huge impact. For example, at Microsoft, before you write a single line of code you have to go through training, so having something similar that compels you to study the topic is practical and important.
Cathy Pearl: We do user testing before we put the product out to the public. We need to build in user testing with diverse audiences in the environment the product would actually be used in. It would be great to have a disaster prevention team to think of the worst-case scenario. Something like a chief scepticism officer (laughs).
How is bias going to play out in AI assistants? How can we get ahead of that?
Abhishek Gupta: Something that comes to mind is a document around responsible data practices which sums up a lot of the concerns around bias in data sets amongst other data practices. During design and conception of AI assistants you need to think about who is your target audience, thinking about a red/blue team situation with all the possible cases that can happen and then go and collect all the data for that. It’s hard to do but fundamental.
Jane Nemcova: Having unbiased data is a hot topic. What worries me is that when we look at the greater good of what a product is doing and who it’s serving, a fine line starts to get crossed between taking an empirical approach and a ‘hey, I don’t like how users are behaving’ attitude that makes you change things along the way. Unless the greater good is universally agreed on, it could go from good to bad quite quickly; it’s all about how people approach it.
Cathy Pearl: We were working with a clinic on improving the lives of patients with congestive heart failure, and we invited patients in to talk to professionals. They learned so many things about the stressors in their patients’ lives that were impacting their health outside of their diagnosis and had previously been unknown. The more stories you understand behind patients’ behaviour, the more information you have.
Keen to hear more from the panel? Sign up to receive access to watch the full discussion, as well as receiving presentations from the AI Assistant Summit last week in San Francisco.
We’re also working on a new White Paper centred on ethics & AI, so if you’re interested in contributing do email Yaz at yhow@re-work.co
|
As Our AI Systems Become More Capable, Should Ethics be an Integral Component to your Business…
| 0
|
as-our-ai-systems-become-more-capable-should-ethics-be-an-integral-component-to-your-business-1a62ced19fb7
|
2018-02-05
|
2018-02-05 11:18:47
|
https://medium.com/s/story/as-our-ai-systems-become-more-capable-should-ethics-be-an-integral-component-to-your-business-1a62ced19fb7
| false
| 1,730
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
RE•WORK
|
Applying emerging technology & science to solve challenges in business and society. Deep Learning, Machine Intelligence & more! https://www.re-work.co/
|
3ae910353b87
|
teamrework
| 3,032
| 1,075
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-26
|
2017-09-26 17:30:30
|
2017-11-27
|
2017-11-27 14:26:24
| 3
| false
|
en
|
2017-11-27
|
2017-11-27 14:53:54
| 5
|
1a62e486e5e1
| 4.85566
| 5
| 1
| 0
|
The Ig® Nobel prizes are awarded each year in early September. They are awarded to research that “makes you laugh, then makes you think”…
| 5
|
Rocking your Brand Personality
The Ig® Nobel prizes are awarded each year in early September. They are awarded to research that “makes you laugh, then makes you think”. Many times, the research gets a small amount of air time during this time of year, then dies down quickly. This year, I decided to scroll through the list of prior winners to reacquaint myself with them.
The Economics winner for 2016 went to a Marketing Research Group from New Zealand, “The Brand Personality of Rocks: A Critical Evaluation of a Brand Personality Scale” (Avis et al, 2013¹). I’ll give you a moment to chuckle…
Now time to think!
This is brilliant research, and not just coming from a recovering geologist (true story). Why, you may ask?
“Rocks…do not have any obvious commonalities with brands, or have antecedents to BP (Brand Personality) formation.” (Avis et al, 2013¹)
To understand the implications of this, we need to go further back to what BP formation is and why it’s important to the marketing landscape.
In Aaker’s cornerstone paper on Brand Personality, “Dimensions of Brand Personality” (Aaker, 1997²), she outlines the 5 dimensions that build brand personality:
Sincerity (Down-to-Earth, Honest, Wholesome, Cheerful)
2. Excitement (Daring, Spirited, Imaginative, Up-to-date)
3. Competence (Reliable, Intelligent, Successful)
4. Sophistication (Upper-class, Charming)
5. Ruggedness (Outdoorsy, Tough)
Brand personality increases consumer preference and usage (Sirgy, 1982³), evokes emotions in consumers (Biel, 1993⁴) and increases levels of trust and loyalty (Fournier, 1994⁵).
One would think most people don’t have personifications of rocks (all 5 dimensions are stable), save the recovering geologist (guilty as charged). Yet when presented with a rock, people developed “…surprisingly detailed personifications” (Avis et al, 2013¹).
“ Pictures of three rocks were put in front of 225 New Zealand students who were then asked which personality traits applied to each. The rocks’ ‘personalities’ were described in great detail, including as ‘a big New York type businessman, rich, smooth, maybe a little shady’, ‘a gypsy or a traveler, a hippie’ and ‘liberal, attractive and female’.”⁶
“…the findings raise questions about its conceptualization and emphasizes the importance of critical examination of the methods used to measure marketing concepts” (Avis et al, 2013¹).
If Aaker’s concepts of Brand Personality are potentially outdated, can one define their own unique personal brand?
Just one of many pictures of me as scale for pictures of rocks. This is a glacial erratic boulder outside Chamonix, France (2007).
Well, I’m not a rock, but I have studied and researched them for a significant portion of my adult life. Another quirk about me is that many people are unaware of what corporate geologists do and how their skills can be leveraged in other disciplines (a topic I am passionate about educating the masses on).
As in the Avis et al. study, others may have already “branded” me as a rough and tumble field worker who goes out and finds the “black gold”, raping and pillaging the land along the way. *sigh*
This can lead to a can of worms regarding building a Personal Brand of one’s self…
The Good:
There are no pre-conceived metrics for measuring how I am branded in the marketplace (as Aaker² would suggest). Since few people know of the quantitative nature of a corporate geoscientist’s work, it provides an opportunity to be a blank slate in the eyes of the industry.
The Bad:
If the Good is true, people will form a characterization of me, good, bad or indifferent, just by virtue of what is in front of them. For hiring managers, that is typically a resume and cover letter; for professional networking, my LinkedIn profile or other social data found online (hint: most of it contains race results). This only gives a simple 2D view of who I am and what I bring to the table.
Which brings us to…
The Ugly:
By and large, most people will not reach out directly to gain a deeper understanding beneath the thick patina of preconceived notions.
Though Avis et al.¹ point towards the Ugly being true in their research at the brand scale, I believe not all is lost.
One thing remains constant: the traits at the core of who we are, which are addressed by the dimensions of Brand Personality. What we need to do is break through the stereotypes so our core values and personality shine.
Therefore:
Aaker’s 5 dimensions of Brand Personality² still provide a useful framework for establishing your professional image.
Rocking my Brand Personality in a 2D World:
Over the past several months, I’ve been learning a lot and thus writing a lot about my observations, trying to find my voice and defining my personal brand.
Sincerity —
(Down-to-Earth, Honest, Wholesome, Cheerful)
I haven’t shied away from the tough stuff, from admitting my ongoing issues with impostor syndrome to exploring the challenges faced throughout my career pivot to marketing analytics/data science. I honestly believe that we bring our best self when we bring our whole self to the table. I’m optimistic about the future, and optimistic that I will continue to seek out and explore…
Excitement —
(Daring, Spirited, Imaginative, Up-to-date)
My goal is to inspire and excite others with reflections upon my journey. I have received notes from others in similar positions around the globe. I never thought some of my musings would gain such traction and evoke imagination in my readers. Even those who are not on the same path have found hope in updating their skills as commodities shift from oil to data.
Competence —
(Reliable, Intelligent, Successful)
My reliability makes me successful over time. I’ve never been the most brilliant mind nor the one to grasp totally new concepts immediately. However, I keep at it with a tenacity that outshines the most brilliant gem in the drawer. I’ve engineered solutions to problems in unique ways drawn from my diverse experiences.
Sophistication —
(Upper-class, Charming)
Well, we can’t have it all? Maybe that is my charm?
Ruggedness —
(Outdoorsy, Tough)
The easiest one to justify in my case. I run 100-milers for fun in the woods…for fun…just look at my race results!
I ran nearly 15 miles with this beauty on my knee. Okay, probably now crossing over the line of my forward-facing brand.
I did say I was honest and sincere, right?
To read my most popular work on Medium, start with why geoscientists make great data scientists.
References —
¹http://journals.sagepub.com/doi/abs/10.1177/1470593113512323
²http://www.haas.berkeley.edu/groups/finance/Papers/Dimensions%20of%20BP%20JMR%201997.pdf
³Sirgy, Joseph (1982). “Self-Concept in Consumer Behavior: A Critical Review”, Journal of Consumer Research, 9 (December) 287–300
⁴Biel, Alexander (1993). “Converting Image into Equity”, in Brand Equity and Advertising, David A Aaker and Alexander Biel, eds. Hillsdale, NJ. Lawrence Erlbaum Associates.
⁵Fournier, Susan (1994). “A Consumer-Brand Relationship Framework for Strategy Brand Management”, unpublished doctoral dissertation, University of Florida
⁶http://www.massey.ac.nz/massey/about-massey/news/article.cfm?mnarticle_uuid=C3FB8BCD-B161-DA96-E97B-3D37C7F55AD5
Note: 3–5 not independently read, synthesized by Aaker (1997)
|
Rocking your Brand Personality
| 12
|
rocking-your-brand-personality-1a62e486e5e1
|
2018-06-08
|
2018-06-08 14:22:15
|
https://medium.com/s/story/rocking-your-brand-personality-1a62e486e5e1
| false
| 1,141
| null | null | null | null | null | null | null | null | null |
Marketing
|
marketing
|
Marketing
| 170,910
|
Stef Bernosky
|
Data Scientist @ Expedia. Lover of trails, travel, Data and Earth Science. Enjoys long strolls through the mountains.
|
4368e05c9dbc
|
stefbernosky
| 92
| 60
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-02
|
2018-07-02 12:39:02
|
2018-07-02
|
2018-07-02 13:01:39
| 1
| false
|
en
|
2018-07-02
|
2018-07-02 13:01:39
| 0
|
1a6346e570c1
| 1.003774
| 0
| 0
| 0
|
It has been a while since I’ve been able to update this — my free time due to my internship, research, online class, and traveling has been…
| 5
|
51Majority Projection Update: 7/1/18
It has been a while since I’ve been able to update this — my free time due to my internship, research, online class, and traveling has been somewhat limited during the past month. I’m glad I was able to find time last night to get another projection working. Hopefully I will be able to keep this up during the summer.
Current 51Majority Projection
Changes from our previous model
Quite a bit has changed over the last month! Heitkamp is once again projected to lose her Senate seat to her Republican opponent. In addition, the Nevada and West Virginia races are now projected to be won by Republican candidates. All in all, the Republicans will take four Senate seats from the Democrats.
On the flip side, the Democrats are now projected to keep their seats in Ohio and Pennsylvania, both very important races. In addition, the model is also predicting that former Governor Phil Bredesen of Tennessee will defeat Representative Marsha Blackburn in an election that will hurt Trump.
The Democrats gain a seat from the previous projection, putting the Senate in a deadlock at fifty seats each.
Senate seats gained by Democrats
Arizona, Mississippi (special election), Tennessee, Texas
Senate seats gained by Republicans
Indiana, Missouri, Nevada, North Dakota
|
51Majority Projection Update: 7/1/18
| 0
|
51majority-projection-update-7-1-18-1a6346e570c1
|
2018-07-02
|
2018-07-02 13:01:39
|
https://medium.com/s/story/51majority-projection-update-7-1-18-1a6346e570c1
| false
| 213
| null | null | null | null | null | null | null | null | null |
Politics
|
politics
|
Politics
| 260,013
|
Ajay Jain
|
Statistics & Computer Science and Political Science student at the University of Illinois. Interested in political analytics and data science.
|
83ff4c27c062
|
theajayjain
| 47
| 62
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-28
|
2018-02-28 07:01:42
|
2018-02-28
|
2018-02-28 06:49:31
| 2
| false
|
en
|
2018-02-28
|
2018-02-28 07:04:37
| 8
|
1a63fcead263
| 2.655031
| 0
| 0
| 0
|
Australian government agencies dominate our round-up this week — ASIC and the Office of the Gene Technology Regulator both get shout-outs…
| 5
|
Rare Birds: The AI Dispatch #7
Australian government agencies dominate our round-up this week — ASIC and the Office of the Gene Technology Regulator both get shout-outs for embracing the possibilities of new technology.
On the other side of the proverbial ring are the 26 experts who penned a 100-page report on why AI could become a serious global threat if placed in the wrong hands.
So let’s hear what they all have to say, shall we? Dive right in…
Xinja gets credit licence
Several big milestones for Xinja: the digital ‘neobank’ announced on Wednesday that it received its Australian credit licence from the Australian Securities and Investments Commission (ASIC). This means they’ll soon be able to offer home loans, sans paperwork and long waiting lines at a physical bank. Xinja will also be launching its app and prepaid card within the month. “We are fortunate to be entering a highly regulated financial services sector and look forward to contributing to this regulatory landscape in a positive way,” said David Nichols, Chief Risk Officer at Xinja.
ASIC wants to launch NLP trials
Speaking of ASIC, the government body is looking to launch pilots of natural language processing (NLP) technologies and approaches within its system. ASIC wants to see how automating these time-intensive tasks can lead to better decision-making. Its goal is to overcome problem areas such as monitoring social media and advertising mediums, or reviewing how fund product disclosure systems and financial advice are delivered to customers.
Gene editing might soon be freed from gov’t regulation in Australia
After a 12-month review, Australia’s Office of the Gene Technology Regulator has proposed that gene editing technology (such as CRISPR) be freed from government restrictions. “If these technologies lead to outcomes no different to the processes people have been using for thousands of years, then there is no need to regulate them because of their safe history of use,” said Dr Raj Bhula.
New report warns against a grim future with AI
A massive new report, titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, gathers the insights of 26 experts from 14 different institutions and organisations, including Oxford, Cambridge and even the Elon Musk-founded OpenAI. The gist of the document is that AI could soon be a disruptive threat to society, as rival states, criminals and terrorists use it to launch targeted attacks on systems. To address these threats, the report presented five high-level recommendations.
But let’s end on a positive note with this think piece
How innovative is your bank’s culture? “A culture of change is needed if banks are to make the most of the new opportunities presented by open banking, introduce new technologies, and work in an agile way,” writes Matthew Phillips. The article provides some good reminders on how to build the right environment for success within your organisation.
And since we’re on the topic of open banking…
It’s the biggest buzzword right now, but there are still plenty of questions about open banking. As such, there’s a need for a proper dialogue between decision-makers in the financial services sector.
Well, guess what? We’ve gone ahead and set up the perfect event for that dialogue to take place. A breakfast huddle, featuring leaders in banking and technology, where attendees can enjoy free-flowing coffee and ideas.
Regular tickets to the event are $199, but we have an early-bird promo for $125/seat. Better sign up before spots run out! Visit the event page at bit.ly/rb-open-banking for more info.
Originally published at rarebirds.io on February 28, 2018.
|
Rare Birds: The AI Dispatch #7
| 0
|
rare-birds-the-ai-dispatch-7-1a63fcead263
|
2018-02-28
|
2018-02-28 07:04:39
|
https://medium.com/s/story/rare-birds-the-ai-dispatch-7-1a63fcead263
| false
| 602
| null | null | null | null | null | null | null | null | null |
Banking
|
banking
|
Banking
| 14,612
|
Rare Birds
|
We build software with brilliant, globally connected teams to make the world a better place. http://rarebirds.io
|
93402fbb6135
|
rarebirdslabs
| 1
| 1
| 20,181,104
| null | null | null | null | null | null |
0
|
# Load libraries
import pandas
from sklearn import model_selection
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier
# Load dataset
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = pandas.read_csv(url, names=names)
# Split-out training set for A
array = dataset.values
X = array[:,0:2]
Y = array[:,4]
validation_size_A = 0.8
seed = 7
X_train_A, X_remaining, Y_train_A, Y_remaining = model_selection.train_test_split(X, Y, test_size=validation_size_A, random_state=seed)
# Split-out validation sets for A
validation_size_B = 0.2
X_validation_A1, X_validation_A2, Y_validation_A1, Y_validation_A2 = model_selection.train_test_split(X_remaining, Y_remaining, test_size=validation_size_B, random_state=seed)
# Make predictions on first validation set for A
knn_A = KNeighborsClassifier()
knn_A.fit(X_train_A, Y_train_A)
predictions_A1 = knn_A.predict(X_validation_A1)
print(accuracy_score(Y_validation_A1, predictions_A1))
print(confusion_matrix(Y_validation_A1, predictions_A1))
print(classification_report(Y_validation_A1, predictions_A1))
# Make predictions on second validation set for A
predictions_A2 = knn_A.predict(X_validation_A2)
print(accuracy_score(Y_validation_A2, predictions_A2))
print(confusion_matrix(Y_validation_A2, predictions_A2))
print(classification_report(Y_validation_A2, predictions_A2))
# Create training and validation set for B
X_train_B = X_validation_A1
Y_train_B = Y_validation_A1 == predictions_A1
X_validation_B = X_validation_A2
Y_validation_B = Y_validation_A2 == predictions_A2
# Make predictions on validation set for B
knn_B = KNeighborsClassifier()
knn_B.fit(X_train_B, Y_train_B)
predictions_B = knn_B.predict(X_validation_B)
print(accuracy_score(Y_validation_B, predictions_B))
print(confusion_matrix(Y_validation_B, predictions_B))
print(classification_report(Y_validation_B, predictions_B))
| 8
| null |
2018-03-04
|
2018-03-04 09:49:54
|
2018-03-04
|
2018-03-04 10:16:56
| 5
| false
|
en
|
2018-03-04
|
2018-03-04 10:24:24
| 2
|
1a64268c1b45
| 4.837107
| 1
| 0
| 0
|
It is nearly impossible for humans to understand decisions made by machine learning algorithms. Of course, you can explain the maths behind…
| 5
|
Machines learning about machines
It is nearly impossible for humans to understand decisions made by machine learning algorithms. Of course, you can explain the maths behind these algorithms but, especially if huge amounts of data are involved, those decisions are hard to grasp.
In the context of the Internet of Things, where devices are in a sense autonomous individuals with certain tasks and their own spending and income, I have been wondering whether it is important to have a deeper understanding of machine learning decisions. In an ecosystem where many machines interact directly with each other, I can hardly imagine that every machine’s decision is the best one for the entire ecosystem and for the machine itself.
So do we need some sort of coach or psychologist for machines? But who would do this job if we humans can’t follow the machine’s decisions already now? Would it be another machine? Could this machine kind of serve as a super-ego of the actual machine? This kind of thinking motivated me to do the following little experiment.
Let’s say person A has to answer a question asked by person Q while person B is observing person A and guesses if person A will give a correct answer.
Person B sees the question but doesn’t necessarily need to know the answer himself in order to perform his task. If person B were good at his job, it would be smart for person A to ask person B for his opinion before actually answering Q.
The purpose of this article is to transfer this situation into the machine learning world. In this sense, machine A, machine B and machine Q replace person A, person B and person Q, respectively. Possible use cases where a ‘second opinion’ machine could be of value are:
Machine B recognizes if the question deals with morally critical situations (e.g. all these dilemma situations a driverless car can slip into).
Unseen input combined with machine B’s guesses could serve as additional training data for machine A.
Machine B could detect misunderstandings between machines A and Q.
More generally, machine B learns about machine A so that in a further step machine A could learn from machine B.
I couldn’t think of any simpler way than to use the Iris flower data set to illustrate this situation. The code I will be using is based on Jason Brownlee’s very straightforward tutorial. I have to add that the goal of this article is not to present decent and useful results (actually they won’t be useful at all😉), it is just about showing the idea in a simple way and maybe start a discussion on how it could be used and implemented in a more advanced way. I’m just playing around a little bit without knowing how much sense this all makes.
Basically, the steps are the following:
Train machine A to predict the correct flower based on some input.
Validate machine A and train machine B.
Validate machine B.
First of all, I load the needed libraries and the dataset:
Now the tricky part begins. We have to decide how we split the dataset into training and validation data. To illustrate the idea I would like machine A to do its job rather badly. Therefore I even ignore two columns of the dataset (I omit ‘petal-length’ and ‘petal-width’) and take only 20% of the dataset as training data. Two decisions that obviously don’t make sense at all if the goal is to train machine A in the best possible way (any other goal may be useless anyway but let’s not care about this right now😊). I hope the following figure makes it clear how the dataset (150 flowers in total) is split up:
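In place of that figure, a quick numerical sketch (reusing the same test_size values and seed as the article’s code, on dummy data standing in for the 150 flowers) shows how large each piece of the split ends up:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# 150 samples in total, as in the Iris data set (values are placeholders)
X = np.arange(150).reshape(-1, 1)
y = np.zeros(150)

# keep only 20% as training data for machine A; 80% is held out
X_train_A, X_rest, y_train_A, y_rest = train_test_split(
    X, y, test_size=0.8, random_state=7)

# split the held-out 80% again: 80% -> validation set A1, 20% -> A2
X_val_A1, X_val_A2, y_val_A1, y_val_A2 = train_test_split(
    X_rest, y_rest, test_size=0.2, random_state=7)

print(len(X_train_A), len(X_val_A1), len(X_val_A2))  # 30 96 24
```

So machine A trains on only 30 flowers, is validated on 96 (set A1, which also becomes machine B’s training set) and on 24 (set A2, machine B’s validation set).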
Now we can train machine A (using orange flowers) and validate it using validation set A1 (green flowers). I use k-nearest neighbor algorithm since it delivered the most demonstrative results:
Subsequently, in order to create the training set for machine B we again validate machine A but now using the smaller validation set A2 (blue flowers). Check the results below.
Finally we can create the training set for machine B. The input will be the same as in validation set A1 (green flowers). However the output will not contain flowers but a boolean telling if machine A’s answer was correct or not. We also create machine B’s validation set in a similar way.
Last but not least we train machine B (using green flowers) and validate it (using blue flowers):
I massively weakened machine A in order to train machine B which sounds quite stupid. However, I hope the idea is clear now. Interesting points to me are:
Can we apply an adapted version of cross-validation so that we don’t weaken machine A that much?
Let’s say machine A’s answer is Iris-setosa (97% precision according to first validation) but machine B predicts that this answer is wrong (60% precision). How could this situation be interpreted?
Similarly, let’s say machine A’s answer is Iris-virginica (68% precision) and machine B backs this answer by predicting it is correct (84% precision). Is this really additional value or would it be better to have a more precise machine A and forget about machine B at all?
|
Machines learning about machines
| 4
|
machines-learning-about-machines-1a64268c1b45
|
2018-03-08
|
2018-03-08 07:37:10
|
https://medium.com/s/story/machines-learning-about-machines-1a64268c1b45
| false
| 1,061
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Benjamin Freisberg
|
Mathematician, developer, teacher and co-founder of Byrds & Bytes
|
193816503f1f
|
hello_85458
| 9
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-30
|
2018-04-30 12:01:12
|
2018-04-30
|
2018-04-30 12:04:33
| 4
| false
|
en
|
2018-04-30
|
2018-04-30 12:04:33
| 8
|
1a6450c6ea6d
| 3.088679
| 1
| 0
| 0
|
Robotic Process Automation (RPA) technology, although a relatively new technology, has managed to gain increasing attention in the…
| 5
|
Top-5 Benefits of Robotics Process Automation (RPA) Adoption for Your Company
Robotic Process Automation (RPA) technology, although a relatively new technology, has managed to gain increasing attention in the corporate world over the past couple of years. Business owners and CTOs alike are beginning to take notice.
RPA technology allows a software robot to mimic human behavior. For example, it can navigate enterprise software like ERP systems, FSM software, or service management tools through the application’s user interface, just like a human would, except that a robot is able to work much faster and more efficiently without ever slowing down. So, what are the limits of AI, machine learning and RPA, and how do we leverage these smart machines?
Recent industry research on accounting and finance professionals found that RPA software has huge potential to eliminate the most time-consuming and repetitive manual processes that make up an accountant’s day-to-day work. Robotic process automation can improve efficiencies to deliver more accurate intelligence data and provide real-time access to finance data with reporting and analytics capabilities.
As the amount of financial data keeps on increasing since the Big Data boom, this technology can aid finance professionals to start adding real value from a strategic viewpoint and start contributing more towards the bottom-line of their company.
Benefits of RPA Software
An RPA approach to streamlining internal processes, where people and technology play their parts in synchrony, enables better insight into trends and opportunities for the business. Robotic process automation (RPA) works best with rule-based, regular tasks that require manual inputs. As the software robot uses other applications’ UIs, very few modifications, if any, are required to implement the robot.
Here, we list top-5 benefits of implementing RPA software for your company.
1) Reduced costs: By automating tasks, cost savings of nearly 30% can be achieved on productivity output. Software robots also cost less than a full-time employee.
2) Better customer experience: Deploying RPA frees up your high-value resources for them to be put back on the front line defining your customer success.
3) Lower operational risk: By eliminating human errors such as tiredness or lack of knowledge, RPA reduces the rate of errors thereby providing a lower level of operational risk.
4) Improved internal processes: In order to leverage AI and RPA, companies are forced to define clear governance procedures. This in turn, allows for faster internal reporting, on-boarding and other internal activities.
5) Does not replace existing IT systems: One of the biggest advantages of using a virtual workforce, or an RPA bot is that it does not require you to replace your existing systems. Instead, RPA can leverage your existing systems, the same way a human employee can.
Artificial Intelligence and automation let modern jobs become more fluid and free employees from high-volume, mundane administrative work. This can free up the workforce of an organization to continue to drive innovation in key performance areas such as customer service and product development, and ultimately contribute to the bottom line of the business.
ProV International Inc. is a global IT services delivery organization committed to providing high-end technologies to make the day-to-day of running a business easier and more cost-efficient. Our RPA services include RPA strategy, RPA proof of value, RPA business case development, RPA production rollout, and RPA managed services.
ProV helps companies improve services by connecting their existing systems to robot-aware technologies, that increase speed and efficiency and pave the way for your digital transformation.
To learn more about how ProV can help streamline your processes with RPA, or for pricing details, drop a comment below or contact us today.
Originally published at: https://www.provintl.com/blog/top-5-benefits-of-robotics-process-automation-rpa-software
|
Top-5 Benefits of Robotics Process Automation (RPA) Adoption for Your Company
| 7
|
top-5-benefits-of-robotics-process-automation-rpa-adoption-for-your-company-1a6450c6ea6d
|
2018-04-30
|
2018-04-30 12:37:44
|
https://medium.com/s/story/top-5-benefits-of-robotics-process-automation-rpa-adoption-for-your-company-1a6450c6ea6d
| false
| 633
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Simanta Sarkar
|
Branding and Communications Manager at ProV
|
d8cd9cbfeb15
|
simanta.sarkar
| 1
| 15
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
3950f63b7006
|
2017-11-07
|
2017-11-07 06:48:54
|
2017-11-11
|
2017-11-11 20:03:15
| 23
| false
|
en
|
2017-12-27
|
2017-12-27 07:35:30
| 3
|
1a64a3c004f1
| 10.892453
| 4
| 1
| 1
|
LaTeX version: https://www.sharelatex.com/project/52295b43e77a8bec1401f6bc
| 3
|
From Restricted Boltzmann Machine to Deep Neural Network — Missing Links Explained
LaTeX version: https://www.sharelatex.com/project/52295b43e77a8bec1401f6bc
1. Introduction
Deep neural network (DNN) has been a hot topic in the speech processing community in recent years [1]. People are eager to learn about the various theories behind DNN. However, those who are not familiar with background knowledge such as Probabilistic Graphical Models [2], Markov chain Monte Carlo (MCMC) methods [3] and variational methods [4][5] may find the related publications literally too deep to follow. Here I document my footprints in figuring out some fundamental theories of DNN. This note centres on the paper written by Hinton et al. in 2006 [6], which I think is the pivotal reference for understanding DNN. I strongly recommend readers at least scan through most of the references co-authored by Hinton before reading this note. Below I will begin with the training algorithm of the restricted Boltzmann machine (RBM), i.e. contrastive divergence. Then I will investigate why initializing a DNN with RBMs is justifiable.
2. Background Overview
2.1. Terminology
The constantly changing notation is probably the first obstacle for readers following the various references on RBM and DNN. Below I try to unify and clarify the connections among these notations.
Model parameters
In general, θ stands for the weights and bias terms of any Boltzmann Machine with arbitrary structure. When we focus on the Restricted Boltzmann Machine, only the weights W among hidden and visible layers are taken into account:
Empirical data distribution
As stated in [7], the empirical data distribution “is a uniform mixture of delta distributions, one for each training point’’:
P^0(v) = (1/N) Σₙ δ(v − v^(n))
, where N is the number of training instances and v^(n) is the n-th training point. The subscript/superscript “0’’ implies this is the initial distribution of the visible random variables before applying Gibbs Sampling.
Model distribution
By definition, the probability distribution function of an RBM is given by
P^∞(v) = (1/Z) Σ_h exp(−E(v, h))
, where the partition function is
Z = Σ_{v,h} exp(−E(v, h))
Again, the superscript ∞ implies this is the ultimate distribution of the visible random variables after applying Gibbs Sampling until equilibrium.
2.2. Gibbs sampling on RBM
[Edited: 01/22/2017] Below I will give a head-first explanation of Gibbs sampling on RBM, which is crucial for understanding
Why the estimation of the P_model can be done by Gibbs Sampling in contrastive divergence learning;
Why we can unroll an RBM into a deep network, and reverse the direction of propagation.
The process of Gibbs sampling, or the family of Markov chain Monte Carlo methods in general, can be imagined as a particle wandering in the space consisting of all possible outcomes of a set of random variables. How this particle “wanders” is governed by the underlying probability distribution over these random variables. We can thus estimate this underlying probability distribution by observing the history of outcomes this particle has visited.
It is not coincidence that this family of methods is called Markov chain Monte Carlo — this particle moving from one possible “outcome” to the next is just like transitioning on a Markov chain from one “state” to the next.
Under the world view of probabilistic graphical models, an RBM with V visible nodes and H hidden nodes is just a probability distribution over V+H random variables with bipartite dependencies, i.e. dependencies only between pairs of visible and hidden variables. Thanks to this special structure, it is legal to perform Gibbs sampling on all hidden variables at once given the visible variables, and vice versa.
Assume first that we are given an RBM with linear perceptrons and weight matrix W. Let us set the initial value of the visible random variables v to v⁰, drawn from the training data set D. As stated in [8], each step of Gibbs sampling is as follows:
1. Multiply the sample of the visible random vector v with the weight matrix W to obtain the sample of the hidden random vector h:
2. Multiply the sample of the hidden random vector h with the weight matrix W’ to obtain the “new” sample of the visible random vector,
where W’ is the transpose of W.
The full process of Gibbs sampling is thus nothing more than multiplying an initial vector v⁰ by W and W’, again and again.
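The two alternating steps above can be sketched in NumPy. This is a minimal illustration, not code from the references: I assume binary units, omit the bias terms, and use a toy, randomly initialized weight matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, rng):
    """One full step of block Gibbs sampling on a binary RBM (biases omitted)."""
    # Sample all hidden units at once: P(h_j = 1 | v) = sigmoid((vW)_j)
    h = (rng.random(W.shape[1]) < sigmoid(v @ W)).astype(float)
    # Sample all visible units at once: P(v_i = 1 | h) = sigmoid((hW')_i)
    v_new = (rng.random(W.shape[0]) < sigmoid(h @ W.T)).astype(float)
    return v_new, h

# Toy RBM: 6 visible and 3 hidden units, small random weights
W = rng.normal(scale=0.1, size=(6, 3))
v = rng.integers(0, 2, size=6).astype(float)  # v^0, "drawn from the data"
for _ in range(100):  # keep multiplying by W and W', as described above
    v, h = gibbs_step(v, W, rng)
print(v, h)
```

Observing many such samples of v after the chain has mixed gives an empirical estimate of the model distribution over the visible units.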
Now let us change our perspective and view the nodes v as the states of a Markov chain. The above Gibbs sampling is then exactly the state-transition process of a time-homogeneous Markov chain, with the state initialized as v⁰ and the state transition matrix T defined as
The state transition process is given by
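The transition equations here are images in the original; combining the two Gibbs steps above, they presumably read:

```latex
\mathbf{h}^{t} = \mathbf{v}^{t} W, \qquad
\mathbf{v}^{t+1} = \mathbf{h}^{t} W' = \mathbf{v}^{t} (W W') = \mathbf{v}^{t}\, T,
\qquad T \equiv W W'
```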
where t is the index of the iteration. Note that if we look from the perspective of the hidden nodes h, the Gibbs sampling is also a state-transition process, but with T=W’W instead. This perspective is further illustrated in Fig. 1.
Note that the above interpretation still holds for non-linear activation functions — it simply generalizes the transitions of the Markov chain from linear to non-linear ones.
Figure 1. A further clarified version of one of the most important figures in [6]. The solid lines and arrows show the process of Gibbs sampling. The dotted lines and arrows show how one can conceptually view the Gibbs sampling process as two Markov chains, one over the visible nodes and one over the hidden nodes, with transition matrices WW’ and W’W respectively.
3. RBM Training
3.1. ML training using gradient descent
In some of Hinton’s most recent publications [6][10], the gradient of the log probability for ML training is simply written as
, where <…>_p stands for taking the expectation under the distribution p. Note how I expanded the subscript compared to [10] for completeness. It makes it seem as though v were just one sample drawn from the training data set D, which is true for stochastic gradient descent or contrastive divergence learning.
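Written out (the original shows this as an image), the gradient in question is the standard RBM log-likelihood gradient:

```latex
\frac{\partial \log P(\mathbf{v})}{\partial w_{ij}}
= \langle v_i h_j \rangle_{P^{0}_{\text{data}}} - \langle v_i h_j \rangle_{P^{\infty}_{\text{model}}}
```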
However, the original derivation (i.e. the more formal formulation) was in fact done for ordinary gradient descent, in which the gradient is the average of the gradients of all the instances in D [7][8][9]:
With P_model(v) given in Section 2.1, we can calculate the partial derivative:
Note that the 2nd term contains no variable v; therefore the average (1/N)Σ{v∈D} can be omitted.
By comparing the expectation and summation terms in the above 2 equations, now it is clear that
Also,
which is identical to the definition of the probability function of the RBM.
Assuming binary hidden variables, P_model(h|v) can be computed simply by passing all the data through the RBM, thanks to the structure of the RBM:
, where σ(x) is the logistic sigmoid function 1/(1+e^-x) [10]; the summation is then evaluated by checking all possible configurations of h.
On the other hand, calculating P_model(v, h) requires exhaustively going through all possible configurations of {v, h}, which is intractable, and outright impossible to enumerate when the visible nodes are continuous random variables.
3.2. Contrastive divergence learning
In contrastive divergence learning, proposed by Hinton in 2002 [8], the above criterion for gradient descent is simplified in two respects:
First, observe that
This is why Hinton came up with the explanation that “maximizing the log likelihood of the data (averaged over the data distribution) is equivalent to minimizing the Kullback-Leibler (KL) divergence between the data distribution, P⁰, and the equilibrium distribution over the visible variables, P^∞_θ, that is produced by prolonged Gibbs sampling from the generative model” [8]:
Based on this interpretation, we can generalize the concept and modify the criterion to minimize only the KL divergence between P⁰ and P^t, the distribution after only t steps of Gibbs sampling.
Second, as I mentioned previously, contrastive divergence learning is in fact a “stochastic” learning method. Instead of calculating <v_i,h_j>P_model(v,h), contrastive divergence learning only takes the single instance of v_i h_j after t steps of Gibbs sampling. Likewise, <v_i,h_j>P_data(v,h) is replaced by the instance of v_i h_j at the 0-th iteration of Gibbs sampling:
Theoretically, if we run sufficiently many iterations of Gibbs sampling, the instance of v_i h_j is effectively drawn from the equilibrium distribution of the Markov chain, i.e. P_model. In practice, however, we usually run Gibbs sampling for only 1 step in contrastive divergence learning. More discussion can be found in [8][9].
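The CD-1 update just described can be sketched as follows. This is a toy illustration under my own assumptions (binary units, no biases, random synthetic data), not an implementation from [8]:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, lr=0.1):
    """One contrastive-divergence (t = 1) weight update for a binary RBM."""
    p_h0 = sigmoid(v0 @ W)                              # positive phase: P(h | v^0)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sample h^0
    v1 = (rng.random(v0.shape) < sigmoid(h0 @ W.T)).astype(float)  # reconstruct v^1
    p_h1 = sigmoid(v1 @ W)                              # P(h | v^1)
    # Delta W proportional to <v_i h_j>_0 - <v_i h_j>_1, averaged over the batch
    return W + lr * (v0.T @ p_h0 - v1.T @ p_h1) / v0.shape[0]

data = rng.integers(0, 2, size=(8, 6)).astype(float)  # 8 toy training vectors
W = rng.normal(scale=0.1, size=(6, 3))
for _ in range(50):
    W = cd1_update(data, W)
print(W.shape)  # (6, 3)
```

Note how the single Gibbs step replaces the intractable expectation under P_model with samples from P¹.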
4. From RBM to DNN
4.1. Deep Networks at a Glance
The terminology of the various deep networks is often mixed up and confusing. We illustrate the definitions given by Hinton et al. [11][12] in Fig. 2. First, the structure of the deep neural network (DNN) is in fact exactly the same as what has been called a “neural network” or “multi-layer perceptron” (MLP) for decades. Second, the deep belief network (DBN) is a generative model with undirected edges between the two top layers, and directed edges pointing toward the visible layer between the remaining layers. Last but not least, the deep Boltzmann machine (DBM) is literally a multi-layer RBM.
Figure 2. Deep neural network (DNN), deep belief network (DBN) and deep Boltzmann machine (DBM).
Note that in Fig. 2 the number of nodes is the same in all layers. This configuration is just for the sake of the conceptual discussion below. Also, for discussion purposes, below I will use the term DBN for any generative neural network, not necessarily one with undirected edges between the two top layers.
4.2. Unrolling an RBM, reversibility and complementary prior
A constantly mentioned concept in DNN initialization with RBMs is that one can “unroll” an RBM into a DNN with an infinite number of stacked layers [6][13]. With the insight from Section 2.2 that performing Gibbs sampling on an RBM resembles the state-transition process of a Markov chain, this unrolling concept is not hard to understand: all we have to do is rearrange Fig. 1 so that the transitions are illustrated from bottom to top, and the resulting structure is a DNN with infinitely many layers, as illustrated in the left half of Fig. 3. What is more, since the resulting neural network can still be seen as a Markov chain (or two Markov chains, if you view the visible and hidden nodes separately), and the Markov chain resulting from Gibbs sampling satisfies detailed balance, a.k.a. reversibility [3][13], we can reverse the direction of weight propagation from bottom-up to top-down. We have thus conceptually transformed the DNN with infinitely many layers into a DBN, also with infinitely many layers, as illustrated in the right half of Fig. 3.
Figure. 3. Unrolling an RBM, based on the Gibbs sampling procedure illustrated in Fig.1, into a DNN with infinite layers; and reversing a DNN into a DBN.
The reversibility between DBN and DNN was explained in [6] with the so-called “complementary prior”; later, in [13], Teh et al. explained this term further. But it seems simpler to view the reversal of edge directions as factorizing one probability distribution with differently structured Bayesian networks:
4.3. Layer-by-Layer initialization of DNN with RBM
Now we are ready to see why the weights of a DNN can be initialized with an RBM. In [6][13], the proposed DBN training algorithm was justified with the concept of variational bounds. But, again, it seems simpler to view the whole process as follows. Let us begin with the DBN in Fig. 3. Since this DBN is transformed from an RBM, which we assume has already been trained on some data, the likelihood of the data, log(P(v)), should already have been maximized “to the best we can for now”. If we take the first hidden layer h0 into consideration, the likelihood of the data can be written as
Now, if we wish to increase the likelihood further, we can either polish W or improve P_model(h0). But in a DBN transformed from an already-trained RBM, the weights between every pair of layers are given by the same weight matrix, which has already been optimized by gradient descent, contrastive divergence, or their stochastic versions. Therefore, as in [6], we turn to optimizing P_model(h0) as follows:
We fix the weight matrix between the first and second layers, and “untie” it from all the remaining weight matrices, which means all the weights above the 2nd layer are now free parameters.
The sub-DBN above the 2nd layer of the original DBN is again equivalent to an RBM whose visible layer is h0. Hence we are able to apply the RBM training algorithm to maximize the likelihood of h0, i.e. P_model(h0) in Eq. 19.
By iteratively applying the above steps up the DBN, layer by layer, the likelihood of the original data set, P_model(v), can theoretically be increased further. Finally, as stated in [12], we simply reverse the DBN derived above, add a softmax layer on top, and run the traditional back-propagation training algorithm for neural networks. The resulting DNN is what we want.
5. Conclusion
In this document, I have tried to clarify some of the ambiguities that people may run into while reading the important references on RBMs and DNNs. Due to the commentary nature of this document, it is strongly suggested that you at least read some of Hinton’s papers and have a basic idea of RBMs and DNNs before reading it. I believe this document can save a lot of time and effort for those who are dying to figure out all the equations in Hinton’s papers and understand the basic theory of deep learning. (Or you might not care at all, because you just want to run some existing libraries and get the job done.)
References
[1] Hinton, Geoffrey, et al. “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups.” Signal Processing Magazine, IEEE 29.6 (2012): 82–97.
[2] Koller, Daphne, and Nir Friedman. Probabilistic graphical models: principles and techniques. The MIT Press, 2009.
[3] “MCMC explained” [online], http://www.youtube.com/playlist?list=PL0E34F5354AA5952E
[4] Neal, Radford M., and Geoffrey E. Hinton. “A view of the EM algorithm that justifies incremental, sparse, and other variants.” Learning in graphical models. Springer Netherlands, 1998. 355–368.
[5] Beal, Matthew James. Variational algorithms for approximate Bayesian inference. Diss. University of London, 2003.
[6] Hinton, Geoffrey E., Simon Osindero, and Yee-Whye Teh. “A fast learning algorithm for deep belief nets.” Neural computation 18.7 (2006): 1527–1554.
[7] Sutskever, Ilya, and Tijmen Tieleman. “On the convergence properties of contrastive divergence.” International Conference on Artificial Intelligence and Statistics. 2010.
[8] Hinton, Geoffrey E. “Training products of experts by minimizing contrastive divergence.” Neural computation 14.8 (2002): 1771–1800.
[9] Carreira-Perpinan, Miguel A., and Geoffrey E. Hinton. “On contrastive divergence learning.” Artificial Intelligence and Statistics. Vol. 2005. 2005.
[10] Hinton, Geoffrey. “A practical guide to training restricted Boltzmann machines.” Momentum 9.1 (2010).
[11] Salakhutdinov, Ruslan, and Geoffrey E. Hinton. “Deep Boltzmann machines.” International Conference on Artificial Intelligence and Statistics. 2009.
[12] Hinton, Geoffrey, et al. “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups.” Signal Processing Magazine, IEEE 29.6 (2012): 82–97.
[13] Yee Whye Teh, Geoffrey E. Hinton, Simon Osindero, “Setting the Stage: Complementary Priors and Variational Bounds”, in Deep Learning Workshop of NIPS, December 6, 2007.
Originally published at ybdarrenwang.blogspot.com.
From Restricted Boltzmann Machine to Deep Neural Network — Missing Links Explained
2018-05-02
https://medium.com/s/story/from-restricted-boltzmann-machine-to-deep-neural-network-missing-links-explained-1a64a3c004f1
Lex Parsimoniae
Darren Yow-Bang Wang
Cooperation —not superintelligence— is where the real AI revolution begins
AI that bests humans at increasingly complex games like Go and Dota 2 may seem surprising, but it is actually pretty dull compared to what cooperative AI will do. Don’t get me wrong, the latest improvements in robotics, deep learning, and processing architectures are crucial achievements, but — just like with us humans — the biggest revolution comes when individual efforts are combined.
“If I have seen further it is by standing on the shoulders of Giants.” — Isaac Newton, 1675
While superintelligence is most popular in science fiction and nightmares, the level at which AI becomes truly transformative is not the individual level, but rather the system level. While many experts have already reported on how beating human Dota 2 players one-on-one is not as significant as a matchup between teams of five, few have explained why cooperative AI is where the real revolution begins.
To understand this, let's look at fully autonomous cars. Thinking from our own limited abilities, engineers have primarily focused on doing what we do, only better. For example, instead of having just one view at a time, autonomous cars have a three-sixty degree view at all times. When it comes to avoiding collisions using spatial information, humans only have shaky stereoscopic estimates, while AI has radar with precise depth information. Given advantages like these, it is no real surprise that autonomous cars will one day best humans at safety.
The real shock, however, will be when we let AI work together at the system level. For example, when picking a driving route, individual AIs already have an advantage over most humans at the system level because they have a complete list of roads combined with traffic data. While this is helpful for avoiding the worst traffic jams along a route, cooperative AI is about preventing and resolving traffic jams once and for all.
The popular navigation app Waze provides a glimpse into that future in how it encourages humans to cooperate together to improve traffic data, but this ultimately does little to improve the flow of traffic. If anything, Waze might actually be making traffic worse by directing humans into clogging all the alternative routes.
With cooperative AI, the ability to share intelligence is vastly different. For example, instead of only relying on its local sensors, autonomous cars can work together to produce a shared real-time perspective of the entire world around them. This ability will help solve one of the biggest current failings of individual AI cars, avoiding completely stopped obstacles such as a firetruck.
By sharing a precise virtual model of the real world, AI will be able to combine their data into a complete understanding of the road system they are traveling. Instead of puzzling about a stationary obstacle that suddenly appears in its sensors, every autonomous car will know the moment a firetruck or any other obstacle arrives and when it is cleared.
This form of shared perception not only solves the problem behind current collision failures, it also provides cooperative AI with the means for actively managing traffic and safety. For instance, slowing vehicles precisely around a hazard without triggering a traffic jam, and allowing rescue and wreckage crews to arrive on scene quickly.
Eliminating traffic and creating safer roads is just one of many possible domains. When applied to logistics, manufacturing, agriculture, health care, and countless other areas, cooperation — not superintelligence — is where the real AI revolution begins. Instead of limiting AI only to tasks that individual humans can do, we need to provide AI with a means to collaborate on problems that we never could. Fortunately, this does not require any new far-off technology or frightening superintelligence. All we need is a new way of thinking about AI and systems.
Cooperation —not superintelligence— is where the real AI revolution begins
2018-06-12
https://medium.com/s/story/cooperation-not-superintelligence-is-where-the-real-ai-revolution-begins-1a64dc3cf561
Bruce Skarin
Don’t let Math Phobia Drive you from Analytics
The U.S. Bureau of Labor Statistics just issued a report confirming what most of us already know: jobs in computers and mathematics, a category that includes data analytics, not only had the second-highest median salary in 2016, but are also projected to have the second-highest percentage growth by 2026, behind only healthcare support. Articles in Business Day and The New York Times affirm the burgeoning opportunities in this field.
And yet, many young people I speak to dismiss these prospects, claiming that “they’re not really math people.” I can empathize. I’m currently teaching young scientists in the Philippines a course in data analytics. I recently retired from the U.S. Environmental Protection Agency, where I spent most of my time developing environmental models and teaching young staffers the basics of data analytics. I’ve been invited by the U.S. National Academies of Sciences to work on projects involving computational models. Through all this, I’ve never told anyone about my fear of math.
When I was about 12 years old, my math teacher pulled me out of the classroom for no apparent reason other than to humiliate me. He sat me down on the bench outside the classroom and told me that he had just corrected my exam and that my results were abysmal. He asked, “Didn’t anyone ever teach you how to multiply, to divide, or any other basic math?” It was not until I left the Philippines and finished college in the U.S. that I discovered I was actually pretty good with numbers.
This always puzzled me until I read about Stanislas Dehaene’s work. Dehaene monitors brain activity to understand how humans process mathematical information. His work suggests that all humans, including infants, and even some other primates, possess an innate number sense that evolved to process information about time and location. A mathematical activity like addition appears to be hard-wired; we seem to add by thinking about objects strung along some subliminal number line. On the other hand, multiplication is culturally developed, part of the grammar and language of school math that we need to master by rote.
When I teach my data analytics class, perhaps because of my own math deficiencies, I try to communicate analytics visually and intuitively, tapping into our inherent number sense. I rely heavily on R, the public-domain statistical software, and my hope is that once my students master R’s syntax, they will be free to slice, dice, and manipulate their data to discover the patterns embedded within. I provide some basic syntax, but then encourage students to use their innate number sense to guide their exploration of the data, modifying the R code as they see fit.
I’m not sure whether my approach works, but I have been rewarded by one of my students with the words every teacher longs to hear: “I’ve taken Bayesian statistics several times before, but this is the first time I understand it.”
Don’t let Math Phobia Drive you from Analytics
2018-03-13
https://medium.com/s/story/dont-let-math-phobia-drive-you-from-analytics-1a64de775565
Pasky Pascual
Applications of Artificial Intelligence: AI for Recruiting
Applications of Artificial Intelligence: AI for Recruiting - RecruitGyan
One of the most trending topics in HR these days is AI recruitment. The term explains an emerging technology that has… (recruitgyan.com)
RecruitGyan (@GyanRecruit) | Twitter
The latest Tweets from RecruitGyan (@GyanRecruit). RecruitGyan strives to be an objective industry lighthouse to both… (twitter.com)
Applications of Artificial Intelligence: AI for Recruiting
2018-09-20
https://medium.com/s/story/applications-of-artificial-intelligence-ai-for-recruiting-1a6515d58dc8
Recruit Gyan
Startup Watch: Adding machine learning to business with SparkCognition
In the first two entries of Startup Watch, we looked at BenevolentAI, a London-based company that uses artificial intelligence to accelerate drug discovery, and x.ai, a New York City startup that offers NLP-powered AI assistants to help with scheduling meetings and other administrative tasks. This week, however, we focus on a company whose success proves that AI and machine learning need not be relegated to traditional tech meccas like London, New York, and San Francisco.
SparkCognition is a global AI company, headquartered in Austin, Texas, which provides machine learning-based software solutions to companies in industries ranging from energy and finance to aerospace and defense. It’s been making waves recently, having been named the fastest growing company in Central Texas by the Austin Business Journal in late 2017.
The company’s CEO and founder, Amir Husain, is a Pakistani-born inventor and entrepreneur. Husain has been behind a number of successful tech ventures, and holds over 20 US patents in computing and AI. He also wrote the 2017 book, The Sentient Machine, which examines the economic, ethical, and existential implications of artificial intelligence.
SparkCognition is part of Austin’s thriving tech industry, fueled by Texas’ position as the second largest economy in the United States. Unbeknownst to many, the state also has a rich history in computer science and AI. The University of Texas at Austin boasts one of the highest-ranked computer science departments in the world, providing a steady stream of tech talent to major tech companies in the city such as Google, IBM, and Dell, as well as SparkCognition.
READ MORE.
Startup Watch: Adding machine learning to business with SparkCognition
2018-09-19
https://medium.com/s/story/startup-watch-adding-machine-learning-to-business-with-sparkcognition-1a666c6e6dc6
#ODSC - The Data Science Community
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.optimize as opt # more on this later
data = pd.read_csv('ex2data1.txt', header = None)
X = data.iloc[:,:-1]
y = data.iloc[:,2]
data.head()
mask = y == 1
adm = plt.scatter(X[mask][0].values, X[mask][1].values)
not_adm = plt.scatter(X[~mask][0].values, X[~mask][1].values)
plt.xlabel('Exam 1 score')
plt.ylabel('Exam 2 score')
plt.legend((adm, not_adm), ('Admitted', 'Not admitted'))
plt.show()
def sigmoid(x):
    return 1/(1 + np.exp(-x))

def costFunction(theta, X, y):
    # m (the number of training examples) is defined below, before this is called
    J = (-1/m) * np.sum(np.multiply(y, np.log(sigmoid(X @ theta)))
                        + np.multiply((1 - y), np.log(1 - sigmoid(X @ theta))))
    return J

def gradient(theta, X, y):
    return (1/m) * (X.T @ (sigmoid(X @ theta) - y))
(m, n) = X.shape
X = np.hstack((np.ones((m,1)), X))
y = y[:, np.newaxis]
theta = np.zeros((n+1,1)) # intializing theta with all zeros
J = costFunction(theta, X, y)
print(J)
temp = opt.fmin_tnc(func = costFunction,
x0 = theta.flatten(),fprime = gradient,
args = (X, y.flatten()))
#the output of above function is a tuple whose first element #contains the optimized values of theta
theta_optimized = temp[0]
print(theta_optimized)
J = costFunction(theta_optimized[:,np.newaxis], X, y)
print(J)
plot_x = [np.min(X[:, 1]) - 2, np.max(X[:, 1]) + 2]  # span of exam 1 scores (x-axis)
plot_y = -1/theta_optimized[2] * (theta_optimized[0]
                                  + np.dot(theta_optimized[1], plot_x))
mask = y.flatten() == 1
adm = plt.scatter(X[mask][:,1], X[mask][:,2])
not_adm = plt.scatter(X[~mask][:,1], X[~mask][:,2])
decision_boun = plt.plot(plot_x, plot_y)
plt.xlabel('Exam 1 score')
plt.ylabel('Exam 2 score')
plt.legend((adm, not_adm), ('Admitted', 'Not admitted'))
plt.show()
def accuracy(X, y, theta, cutoff):
    pred = sigmoid(np.dot(X, theta)) >= cutoff
    acc = np.mean(pred == y)
    print(acc * 100)
accuracy(X, y.flatten(), theta_optimized, 0.5)
2018-09-04
Python Implementation of Andrew Ng’s Machine Learning Course (Part 2.1)
In my previous post we discussed the Pythonic implementation of linear regression with single and multiple independent variables, as part of the week 1 and week 2 programming assignments. Now we will move on to the week 3 content, i.e., logistic regression.
Since this is going to be a pretty lengthy post, I am going to divide it into two parts. Watch out for Part 2.2, which looks into how to combat the overfitting problem.
If you are new here I would encourage you to read my previous post
Python Implementation of Andrew Ng’s Machine Learning Course (Part 1)
Pre-requisites
It’s highly recommended that first you watch the week 3 video lectures.
Should have basic familiarity with the Python ecosystem.
Here we will look into one of the most widely used ML algorithms in the industry.
Logistic Regression
In this part of the exercise, you will build a logistic regression model to predict whether a student gets admitted into a university.
Problem context
Suppose that you are the administrator of a university department and you want to determine each applicant’s chance of admission based on their results on two exams. You have historical data from previous applicants that you can use as a training set for logistic regression. For each training example, you have the applicant’s scores on two exams and the admissions decision.
Your task is to build a classification model that estimates an applicant’s probability of admission based on the scores from those two exams.
First let’s load the necessary libraries.
Next, we read the data (the necessary data is available under week-3 content)
So we have two independent features and one dependent variable. Here 0 means the candidate did not get admission, and 1 means the candidate did.
Visualizing the data
Before starting to implement any learning algorithm, it is always good to visualize the data if possible.
Implementation
Before you start with the actual cost function, recall that the logistic regression hypothesis makes use of sigmoid function. Let’s define our sigmoid function.
Sigmoid Function
Note that here we are writing vectorized code. So it really doesn’t matter whether x is a scalar, a vector, a matrix or a tensor ;-). Of course, writing and understanding vectorized code takes some mind-bending (which anyone will become good at after some practice). However, it gets rid of for-loops and also makes for efficient and generalized code.
Cost Function
Let’s implement the cost function for the Logistic Regression.
Note that we have used the sigmoid function in the costFunction above.
There are multiple ways to code the cost function. What’s more important are the underlying mathematical ideas and our ability to translate them into code.
Gradient Function
Note that while this gradient looks identical to the linear regression gradient, the formula is actually different because linear and logistic regression have different definitions of hypothesis functions.
Let’s call these functions using the initial parameters.
This should give us a value of 0.693 for J.
Learning parameters using fmin_tnc
In the previous assignment, we found the optimal parameters of a linear regression model by implementing the gradient descent algorithm. We wrote a cost function and calculated its gradient, then took a gradient descent step accordingly. This time, instead of taking the gradient descent steps, we will use a built-in function fmin_tnc from scipy library.
fmin_tnc is an optimization solver that finds the minimum of an unconstrained function. For logistic regression, you want to optimize the cost function with the parameters theta.
Constraints in optimization often refer to constraints on the parameters. For example, constraints that bound the possible values theta can take (e.g., theta ≤ 1). Logistic regression does not have such constraints since theta is allowed to take any real value.
Concretely, you are going to use fmin_tnc to find the best or optimal parameters theta for the logistic regression cost function, given a fixed dataset (of X and y values). You will pass to fmin_tnc the following inputs:
The initial values of the parameters we are trying to optimize.
A function that, when given the training set and a particular theta, computes the logistic regression cost and gradient with respect to theta for the dataset (X, y).
Note on the flatten() function: unfortunately, scipy’s fmin_tnc doesn’t work well with column or row vectors; it expects the parameters as a flat array. The flatten() function collapses a column or row vector into that array format.
The above code should give [-25.16131862, 0.20623159, 0.20147149].
If you have completed the costFunction correctly, fmin_tnc will converge on the right optimization parameters and return the final values of theta. Notice that by using fmin_tnc, you did not have to write any loops yourself, or set a learning rate like you did for gradient descent. This is all done by fmin_tnc:-) You only needed to provide a function for calculating the cost and the gradient.
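Putting it together, here is a hedged sketch of the fmin_tnc call. The tiny dataset below is a made-up stand-in for the course's data file, so the resulting theta and cost will not match the [-25.16, 0.206, 0.201] and 0.203 values quoted in this post:

```python
import numpy as np
from scipy.optimize import fmin_tnc

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def costFunction(theta, X, y):
    m = len(y)
    h = sigmoid(X @ theta)
    return -(1.0 / m) * (y @ np.log(h) + (1 - y) @ np.log(1 - h))

def gradient(theta, X, y):
    m = len(y)
    return (1.0 / m) * (X.T @ (sigmoid(X @ theta) - y))

# Made-up stand-in data; intercept column already added.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
y = np.array([0.0, 1.0, 0.0, 1.0])
initial_theta = np.zeros(X.shape[1])

# fmin_tnc returns (solution, number_of_function_evaluations, return_code);
# messages=0 silences the solver's progress output.
theta_opt, nfeval, rc = fmin_tnc(func=costFunction, x0=initial_theta.flatten(),
                                 fprime=gradient, args=(X, y), messages=0)
print(costFunction(theta_opt, X, y))  # lower than the initial cost of log(2)
```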
Let’s use these optimized theta values to calculate the cost.
You should see a value of 0.203. Compare this with the cost of 0.693 obtained using the initial theta.
Plotting Decision Boundary (Optional)
This final theta value will then be used to plot the decision boundary on the training data, resulting in a figure similar to the one below.
It looks like our model does a pretty good job of distinguishing the students who got admission from those who didn’t. Now let’s quantify our model’s accuracy, for which we will write a function aptly named accuracy.
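One way such an accuracy function might look, sketched on made-up data rather than the course dataset (it thresholds the hypothesis at 0.5 and compares against the labels):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def accuracy(theta, X, y, threshold=0.5):
    # Predict class 1 whenever the hypothesis is at least the threshold.
    predictions = sigmoid(X @ theta) >= threshold
    return 100.0 * np.mean(predictions == (y == 1))

# Made-up example: this theta separates the toy points perfectly.
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = np.array([0.0, 5.0])
print(accuracy(theta, X, y))  # 100.0
```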
This should give us an accuracy score of 89%. Hmm… not bad.
You have now learnt how to perform Logistic Regression. Well done!
That’s it for this post. Give me a clap (or several claps) if you liked my work.
You can find the next post in this series here.
|
Python Implementation of Andrew Ng’s Machine Learning Course (Part 2.1)
| 444
|
python-implementation-of-andrew-ngs-machine-learning-course-part-2-1-1a666f049ad6
|
2018-09-15
|
2018-09-15 11:50:51
|
https://medium.com/s/story/python-implementation-of-andrew-ngs-machine-learning-course-part-2-1-1a666f049ad6
| false
| 1,205
|
Analytics Vidhya is a community of Analytics and Data Science professionals. We are building the next-gen data science ecosystem https://www.analyticsvidhya.com
| null |
analyticsvidhya
| null |
Analytics Vidhya
|
medium@analyticsvidhya.com
|
analytics-vidhya
|
MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,DEEP LEARNING,DATA SCIENCE,PYTHON
|
analyticsvidhya
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Srikar
| null |
afc1c8c8f5bc
|
srikarplus
| 603
| 8
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-10
|
2017-11-10 07:02:55
|
2017-11-10
|
2017-11-10 07:04:22
| 1
| false
|
en
|
2017-11-10
|
2017-11-10 07:04:49
| 1
|
1a66a2dfbb6d
| 2.098113
| 0
| 0
| 0
|
Today we will discuss about the past, present and the future that holds true for AI. The Falling Walls annual conference that is held in…
| 3
|
AI to surpass humankind and biology
AI and the near future; The Falling Walls Conference 2017
Today we will discuss the past, the present and the future of AI. The Falling Walls annual conference, held in Berlin on the 8th and 9th of November every year, will make 2017 special by discussing the possibility of some more walls falling in the near future. According to Jurgen Schmidhuber, AI will make the world inclusive to the extreme and bring down every wall that acts as a protective layer or a barrier to advancement. Recent reports suggest that by 2016 there had already been a phenomenal rise in the use of computational interfaces such as Siri, Google’s data centers, Google’s speech recognition and Apple’s QuickType. All of the top five public companies (Apple, Google, Microsoft, Facebook and Amazon) are putting immense effort into exploring deep-learning neural networks. The LSTM (Long Short-Term Memory) network can learn a great deal from experience because it is a recurrent neural network (RNN). It can quickly be put to work: it can be made to learn how to compose music, run chatbots, summarize documents, control robots, recognize videos and handwriting, analyze images and perform various forms of predictive analysis. LSTM is now considered a foundation of deep learning. In 2015, LSTM created spoken answers for Amazon’s Alexa and became part of more than 2 billion Android phones. In 2017, Facebook is reported to be using LSTM in its 4.5 billion daily translations.
Progress of AI over the years
LSTM had not reached an impressive level in the 1990s, when artificial intelligence, according to Jurgen Schmidhuber, was popularly known as “artificial curiosity”. The buzzword AI only started taking off from 2011 onwards.
AI has continuously evolved, and many changes have driven it far beyond human imagination and the human senses. Some even hope to become immortal, imagining that AI will one day allow the brain to be scanned and uploaded, turning humans into able robots. The reach and scope of AI is grander than that of the industrial revolution.
Relevance of The Falling Walls Conference
With AI standing at a crucial transition point, the Falling Walls Conference is a source of sheer hope and excitement for the science and innovation communities around the world. Supported by famous foundations, top-class universities, NGOs and corporations, the 2017 conference could set the beginning of a new era for AI, one that truly surpasses humankind and biology as a whole.
At the finale of The Falling Walls Conference (9th November, 2017), around 20 start-ups will present their enterprise efforts and innovations. This is a prestigious science conference whose attendees include a varied set of global decision makers from media, politics, science and cultural organizations. Be a part of the conference attended by some of the brightest minds on the planet: http://www.falling-walls.com
|
AI to surpass humankind and biology
| 0
|
ai-to-surpass-humankind-and-biology-1a66a2dfbb6d
|
2018-03-08
|
2018-03-08 10:52:17
|
https://medium.com/s/story/ai-to-surpass-humankind-and-biology-1a66a2dfbb6d
| false
| 503
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Rplanx Technology Private Limited
|
We provide top notch development support in Blockchain, AI and IoT solutions. Get extensive Technology services and project delivery route maps.
|
ad5f65157fc3
|
RPlanX_Tech
| 2
| 5
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
7404f5b5b342
|
2018-07-07
|
2018-07-07 03:59:01
|
2018-07-07
|
2018-07-07 04:08:02
| 1
| false
|
pt
|
2018-07-07
|
2018-07-07 04:08:02
| 2
|
1a6a0f48435e
| 0.562264
| 3
| 0
| 0
|
Today was my first day in the challenge launched by Siraj: 100 Days of Machine Learning Code. Whoever takes on this challenge must dedicate…
| 2
|
100 Days of ML Code — Day 1
Today was my first day in the challenge launched by Siraj: 100 Days of Machine Learning Code. Whoever takes on this challenge must dedicate at least 1 hour per day to studying or coding something related to ML. By the end of the challenge, participants are expected to have developed a project or taken part in a relevant one.
Today I started with the introductory problem on Kaggle (a competition platform for ML).
I will log everything in a dedicated repository on GitHub (check it here).
I hope to make it to the end!
Give it a clap to cheer me on =)
|
100 Days of ML Code — Day 1
| 26
|
100-days-of-ml-code-day-1-1a6a0f48435e
|
2018-07-07
|
2018-07-07 04:08:02
|
https://medium.com/s/story/100-days-of-ml-code-day-1-1a6a0f48435e
| false
| 96
|
Machine Learning | Music | Technology
| null |
john.theo.souza
| null |
johntheology
|
john.theo.souza@gmail.com
|
johntheology
|
MUSIC,TECHNOLOGY,MACHINE LEARNING,COMPUTER SCIENCE
|
john_theo
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
John Theo
|
Computer Engineer, musician, husband, father. Trying to get life’s details right with plenty of art.
|
47b7d050199
|
johntheo
| 110
| 131
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-09
|
2018-08-09 10:14:16
|
2018-08-09
|
2018-08-09 10:17:31
| 1
| false
|
en
|
2018-08-09
|
2018-08-09 10:17:31
| 12
|
1a6ac6fc8f0
| 1.25283
| 1
| 1
| 0
|
AI and Stochastic optimization part 1
| 5
|
How ServAdvisor Works — Artificial Intelligence
AI and Stochastic optimization part 1
In fact, many phenomena observed in the physical universe are actually best modeled with nonlinear transformations. We use this in ServAdvisor process modeling for transformations between system inputs and the target output in machine learning and AI solutions.
For AI model training and the optimization of the stochastic parameters of models, we develop a special adaptive Genetic Algorithm (GA) that involves the idea of randomness when performing a search. However, it must be clearly understood that GAs are not simply random search algorithms: they utilize knowledge from previous generations of strings in order to construct a new generation that will approach the optimal solution.
Summarizing, the following essential features of GAs can be listed:
- Genetic algorithms manipulate structures which represent the parameters, not the actual values of the parameters themselves
- Genetic algorithms use a population of points to perform a search, not just a single point in the parameter space
- Genetic algorithms use only the current measure of “goodness” to guide themselves to the optimal solution
- Genetic algorithms are probabilistic in nature, not deterministic
- Genetic algorithms are inherently parallel, dealing with a large number of points (strings) simultaneously
In essence, GAs transfer the biological mechanisms of reproduction, crossover, and mutation into algorithms.
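To make these mechanisms concrete, here is a minimal, generic sketch of a genetic algorithm on the classic “OneMax” toy problem (maximize the number of 1-bits in a string); it is illustrative only, not ServAdvisor’s actual training code:

```python
import random

def fitness(bits):
    # The "goodness" measure: number of 1-bits in the string.
    return sum(bits)

def evolve(pop_size=20, n_bits=16, generations=40, mutation_rate=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament selection over the current population
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)              # single-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if rng.random() < mutation_rate else b
                     for b in child]                    # bit-flip mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # close to the optimum of 16
```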
Moreover, efficient Approximate Stochastic Maximum Likelihood Estimates (AMLEs) are used, as they are known to have asymptotically optimal properties. Furthermore, the elements of the Cramer-Rao bound (CRB) matrix are considered a lower bound on the error covariance matrix of the ML estimates. This establishes a relation between the model parameters and the efficiency of the ML estimates.
#cryptonews #cryptocurrency #blockchain #ICO #Crypto #TokenSale #earlybird #bitcoin #cryptokitties #altcoin #ServAdvisor #SRV
|
How ServAdvisor Works — Artificial Intelligence
| 50
|
how-servadvisor-works-artificial-intelligence-1a6ac6fc8f0
|
2018-08-09
|
2018-08-09 10:17:31
|
https://medium.com/s/story/how-servadvisor-works-artificial-intelligence-1a6ac6fc8f0
| false
| 279
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
ServAdvisor
| null |
64017f48c363
|
ServAdvisor
| 32
| 1
| 20,181,104
| null | null | null | null | null | null |
0
|
// Case 1 (native slicer): count how many slicer items are currently selected.
Number of colors selected on slicer = IF(ISFILTERED(flagInfo[details]), COUNTROWS(ALLSELECTED(flagInfo[details])), 0)
// 1 if the country's flag contains every selected color (or nothing is selected), else 0.
PBISlicer Check =
IF([Number of colors selected on slicer] = 0, 1,
IF(DISTINCTCOUNT(flagInfo[details]) = [Number of colors selected on slicer], 1, 0))
// Count the countries satisfying the "AND" requirement.
PBISlicer Country Count =
CALCULATE(COUNTROWS(countryInfo),
FILTER(countryInfo, [PBISlicer Check] = 1))
// Case 2 (hierarchySlicer): the same pattern, keyed on flagInfo[variable].
Number of variables selected on slicer = IF(ISFILTERED(flagInfo[variable]), COUNTROWS(ALLSELECTED(flagInfo[variable])), 0)
HierarchySlicer Check =
IF([Number of variables selected on slicer] = 0, 1,
IF(DISTINCTCOUNT(flagInfo[variable]) = [Number of variables selected on slicer], 1, 0))
HierarchySlicer Country Count =
CALCULATE(COUNTROWS(countryInfo),
FILTER(countryInfo, [HierarchySlicer Check] = 1))
// Total population of the countries that pass the check.
HierarchySlicer population =
CALCULATE(SUM(countryInfo[population]),
FILTER(countryInfo, [HierarchySlicer Check] = 1))
| 7
|
7500d0eb69cc
|
2018-05-14
|
2018-05-14 20:15:34
|
2018-05-16
|
2018-05-16 16:56:11
| 8
| false
|
en
|
2018-08-03
|
2018-08-03 18:29:20
| 7
|
1a6b20aee5f5
| 5.171069
| 8
| 1
| 0
|
Goal: Create the necessary calculated fields to change “OR” to “AND” logic for Power BI slicers (native slicer and the hierarchySlicer).
| 5
|
Changing “OR” to “AND” Logic for Power BI Slicers
Goal: Create the necessary calculated fields to change “OR” to “AND” logic for Power BI slicers (native slicer and the hierarchySlicer).
I have been using Power BI for half a year now. As with previous BI tools I have used, a couple of hacks helped transform seemingly impossible business requirements into super happy reactions:
“Seems pretty magical that you got it to work, good job!”
“OMG, I’m actually a little giddy.”
A couple of months ago, I was asked to change the logic within a custom visual, the hierarchySlicer, from showing data based on “OR” logic to showing data based on “AND” logic. After some research and several attempts, I am happy that the necessary hack has been found!
Completed dashboard which utilizes slicers with AND instead of OR logic for a hierarchySlicer.
Download the dashboard here.
So how do Power BI’s slicers work?
Borrowing the words from Rob Collie: The more you select, the more you “get”. As you select more items in your slicer, Power BI shows the union of your data or the distinct list of items satisfied by selecting the slicer items. With each additional slicer item selected, the total pieces of data sliced increases.
Selecting Flag Colors == ‘gold’ (Country Count = 91) and Flag Colors == ‘orange’ (Country Count = 26) causes the Country Count to increase (Country Count = 102).
Why would you want to change the logic?
Whatever the business requirement might be, at the end of the day you want to determine which members in a group have common features. You want to perform an intersection of your data. So let’s say that I am interested in knowing which countries’ flags are similar to one another based on color and details. We will expose this with two different Power BI slicers:
Native slicer
hierarchySlicer
But first let’s discuss the data
I want to be transparent about how the data is organized, for replicating this approach. I restructured UCI’s Flags Data Set into two tables: one holding the country information (countryInfo) and the other holding the flag information (flagInfo). countryInfo’s primary key (unique, does not repeat) is the name of the country, and important metrics like area, population, language, etc. live in this table. The variable and detail flag data form a hierarchical structure, which lives in flagInfo. countryInfo and flagInfo have a one-to-many relationship on the name of the country. When repeating this for your own data set, set up a similar structure to ensure correct results.
Tables countryInfo and flagInfo. Note: This data set is from 1986 and URLs for the flags were matched as best as possible, not all the countries in the data set exist today.
Case 1: Changing to “AND” Logic for Power BI’s Native Slicer
Question: Which countries’ flags have gold AND orange colors?
We will create three calculated measures which will do the following:
Counts the number of items selected in the slicer
Description: If an item on the slicer is selected, count the number of items selected on the slicer.
2. Checks if the colors selected in the slicer are in a country’s flag
Description: If no items were selected in the slicer, show all the countries. If items in the slicer were selected and the number of items selected on the slicer are equal to the number of selected colors for a country, show the countries.
3. Counts the number of countries which satisfies the ‘AND’ requirement
Description: Filter the country table only where PBISlicer Check is true. For the filtered table, count the rows of the table.
When creating the actual dashboard, you will want to make sure that the slicer’s “AND” logic affects every one of the dashboard visuals.
A. For card visualizations, drag in the recalculated Country Count. B. For table visualizations, create your table as you usually would, but set PBISlicer Check is 1 as a Visual Level Filter to ensure that the list of countries is filtered based on the slicer selection.
Case 2: Changing to “AND” Logic for Power BI’s hierarchySlicer
Questions: Which countries’ flags have icon OR quarters details AND 0 stripes? For those countries, what is the total population? What is the distribution of religions for those countries?
The main thing to notice for this example is that we have two tiered logic within the hierarchy. At the top level, AND logic applies (ie. Want countries’ flags where there are details AND stripes), while at the bottom level, OR logic applies (ie. Want countries’ flags where details are icon OR details are quarters). We will create four calculated measures which will do the following:
Counts the number of items selected in the slicer
Description: If an item on the slicer is selected, count the number of items selected on the slicer.
2. Checks if the variables selected in the slicer are in a country’s flag
Description: If no items were selected in the slicer, show all the countries. If items in the slicer were selected and the number of items selected on the slicer are equal to the number of selected variables for a country, show the countries.
3. Counts the number of countries which satisfies the ‘AND’ requirement
Description: Filter the country table only where HierarchySlicer Check is true. For the filtered table, count the rows of the table.
4. Calculates the population of countries which satisfy the ‘AND’ requirement
Description: Filter the country table only where HierarchySlicer Check is true. For the filtered table, calculate the sum of the population.
When creating the actual dashboard, you will want to make sure that the slicer’s “AND” logic affects every one of the dashboard visuals. Refer to the above case to create the previous card and table visualizations.
A. For this card visualizations drag the recalculated population. B. For all visualization except for table visualizations, you will need to drag the recalculated metric as the value for the visualization. Using Check is 1 as a Visual level filter will not work.
Recommendations
I would not recommend having multiple ‘AND’ logic slicers on the same dashboard: the ‘Check’ DAX calculations can become very complicated very fast, and in my experience using two ‘AND’ logic hierarchySlicers in the same dashboard caused very poor performance.
Perform a lot of quality assurance on your data results!
Concluding Remarks
There are many other ways that you can implement the ‘AND’ logic DAX equations (ie. using variables, breaking the calculations up more, etc.). This is the approach that worked for me and scaled well in production. The data, data restructuring code, and PBIX file in the post is available here. If you have any questions or thoughts on the tutorial, feel free to reach out in the comments below or through Twitter. Also, if you would like to learn more about Seismic Software and how we use Microsoft’s Power BI, visit us.
|
Changing “OR” to “AND” Logic for Power BI Slicers
| 157
|
changing-or-to-and-logic-for-power-bi-slicers-1a6b20aee5f5
|
2018-08-03
|
2018-08-03 18:29:20
|
https://medium.com/s/story/changing-or-to-and-logic-for-power-bi-slicers-1a6b20aee5f5
| false
| 1,070
|
Seismic Data Science
| null | null | null |
Seismic Data Science
| null |
seismic-data-science
|
DATA SCIENCE,SALES ENABLEMENT,BUSINESS INTELLIGENCE,ANALYTICS,MACHINE LEARNING
| null |
Data Visualization
|
data-visualization
|
Data Visualization
| 11,755
|
Orysya Stus
|
Data Scientist at Seismic Software. I enjoy creating stories with data.
|
4a2bbcea3edb
|
ostus
| 27
| 12
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-23
|
2017-11-23 16:24:14
|
2017-11-29
|
2017-11-29 01:21:38
| 0
| false
|
en
|
2017-11-29
|
2017-11-29 01:21:38
| 0
|
1a6c83a8f902
| 0.758491
| 0
| 0
| 0
|
Data Science, Machine Learning, AI have all taken their turn riding the hype cycle in recent years. Despite all the buzz about each one of…
| 2
|
What is DataOps?
Data Science, Machine Learning, AI have all taken their turn riding the hype cycle in recent years. Despite all the buzz about each one of those terms the lines between them are actually pretty blurry. One of the biggest similarities between them is the need for quality data. That is where DataOps comes in.
As you can tell from the name, DataOps is a compound word: Data and Operations. The data part is pretty self-explanatory. In case you haven’t heard, we’re generating increasingly large volumes of data each day, and that data is being used to transform almost every aspect of society. Organizations that aren’t using data to inform and drive their decisions are at a strategic disadvantage. While data is a key part of the equation, the operations part is where the value comes to organizations and individuals.
Operations are what turn resources into value. Data has no value in and of itself; the value comes from transforming it and using it to discover new insights. When you focus on operations, you get better-quality data and you allow your analysts to do their jobs better: they can focus on analysis, not data pulling and QA.
|
What is DataOps?
| 0
|
what-is-dataops-1a6c83a8f902
|
2017-11-29
|
2017-11-29 01:21:38
|
https://medium.com/s/story/what-is-dataops-1a6c83a8f902
| false
| 201
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Mike Sarnoski
| null |
a5bda27bce1b
|
mikesarnoski
| 1
| 9
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-10-02
|
2017-10-02 00:30:06
|
2017-10-02
|
2017-10-02 00:31:05
| 0
| false
|
en
|
2018-01-23
|
2018-01-23 00:35:19
| 8
|
1a6e54afe2b2
| 2.581132
| 3
| 0
| 0
|
In the 70’s when ‘desktop, file and folder’ metaphors were created by Xerox PARC, computers were mostly used by clerical workers [1]. Job…
| 4
|
Capabilities of People and Computers
In the 70s, when the ‘desktop, file and folder’ metaphors were created at Xerox PARC, computers were mostly used by clerical workers [1]. Job profiles such as clerks, telephone switchboard operators and office assistants were in abundance. For those sorts of jobs, computers that worked as a memory prosthesis [2] sufficed, as storing information and performing basic logical functions was all that was required and expected of them. From the mid-90s, i.e. after the invention and widespread use of the internet, job profiles shifted toward the “communication worker” category [1]. New technologies were developed in this era to suit its communication-related needs, such as e-mail and virtual chat rooms, but even they used the file-folder system for managing data. The situation today has changed a lot compared with both of those scenarios. What is labeled the ‘information age’ indeed has a majority of jobs in which managing information is an instrumental tool for achieving professional goals, and a major chunk of young professionals fall under the “knowledge worker” [1] category.
Today, computers are largely used as aids to the human mind, at work and at home. I make this claim based on the fact that at the workplace we use software to help us solve logical problems, we use VR gaming for entertainment, and we use social networking platforms to connect with loved ones. If technology is going to work as a subordinate to something as complex as the human mind, its information management technique needs to be more creative and abstract, instead of just compartmentalized storage of data. Speaking of stored data, digital memories are not useless, but they should be visually, spatially and contextually organized in order to understand the information they contain [2].
Current research is certainly moving toward solutions that address the issues discussed above. Humans attend to evidence and form intuitive theories about it [3]. ‘Content recommendation systems’ (CRS) are a fairly recent technology that tries to replicate this part of human behavior. What such algorithms do is create interest-based clusters and cluster-based content recommendations in real time, for individual users, based on their unique and diverse interests [4]. One of the most used CRSs today is Facebook’s newsfeed algorithm. It acts as a complex feedback loop that connects people with information they would most likely want to connect to, by making some items easier to access than others [3]. Twitter’s ‘trending’ function works in a similar way.
Along with social media, CRS technology is widely used in E-commerce. Amazon’s product recommendation system is trying to provide a personalized experience to all of its customers. Their algorithms use traditional methods such as ‘cluster models’ and ‘search based methods’ along with a comparatively unorthodox method called ‘item-to–item collaborative filtering’. Unlike traditional methods, this one produces recommendations in real time, scales to massive data sets and generates high quality recommendations [5].
CRSs have some shortcomings, ‘popularity bias’ being one of them. The most ‘liked’ posts generally appear in the newsfeed or in trending, which makes them even more popular. On e-commerce sites, the most reviewed or searched products appear in recommendations, which multiplies their sales. It is unfair, but not far from reality: in society, an already popular person tends to make even more friends through their existing circle, as humans gravitate toward wide acceptance and familiarity. Along with intuition and complexity, recommendation systems have started picking up human flaws as well.
REFERENCES:
1. Kidd, A. 1994. The marks are on the knowledge worker. CHI 1994, 186–191
2. Ch. 28 (Steve Whittaker): Making Sense of Sense Making, HCI Remixed
3. Rader and Gray. 2015. Understanding User Beliefs about Algorithmic Curation in the Facebook News Feed. CHI 2015
4. Interest-based real-time content recommendation in online social communities, by Dongsheng Li, Qin Lv, Xing Xie, Li Shang, Huanhuan Xia, Tun Lu, Ning Gu
5. Amazon.com recommendations: item-to-item collaborative filtering, IEEE Internet Computing (Volume 7, Issue 1), by G. Linden, B. Smith, J. York
|
Capabilities of People and Computers
| 9
|
capabilities-of-people-and-computers-1a6e54afe2b2
|
2018-05-30
|
2018-05-30 21:56:43
|
https://medium.com/s/story/capabilities-of-people-and-computers-1a6e54afe2b2
| false
| 684
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Amruta Mali
| null |
e78090d21c08
|
amrutamali
| 11
| 23
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
948d4f9c991
|
2018-02-09
|
2018-02-09 13:51:56
|
2018-02-09
|
2018-02-09 14:19:14
| 4
| false
|
en
|
2018-02-09
|
2018-02-09 14:19:14
| 1
|
1a6e99cc0918
| 5.99434
| 0
| 0
| 0
|
Are you working on a project where no data is available to you? Maybe you are expected to work with a small data set or a data set that has…
| 4
|
When data goes missing
Are you working on a project where no data is available to you? Maybe you are expected to work with a small data set or a data set that has some missing values. Here are some ideas that might help you in the process.
Why is missing data a problem?
First things first, let’s understand the problem. Why is missing data an issue?
Let’s tackle the “No data” scenario first. Even though the problem of having no data might be obvious to data scientists, many people are not aware of the issue. Here is a high-level explanation of the problem. Imagine yourself sitting in an empty room. Someone tells you that in 10 minutes you will be asked to recognize Chinese characters. Let’s say that you have no previous knowledge of Chinese, nor is there a way you can learn anything. In 10 minutes, an interviewer comes and starts showing you some characters. You have two options — say nothing or guess the meaning. How well did you perform? Think of machine learning models as small, very simplified artificial brains. Whether we are talking about labeled or unlabeled data, training machine learning models requires data. Without it, the best models can do is guess.
Similar issues, biased estimates chief among them, arise when working with small datasets or datasets with missing entries. Imagine yourself in the same room, but this time there is a Chinese visual dictionary with 50 characters in it. You read it, memorize some characters, and possibly detect some patterns that might help you recognize more characters in the future. The interviewer comes and asks you to recognize 10 characters. Luckily, all 10 were in the book, and your answers are 100% accurate. Can we now deduce that you have an excellent knowledge of Chinese? No? Why not? What happens if you get another 10 characters you haven’t seen before? Based on those 10 characters, the interviewer could reach any conclusion, from “highly proficient in written Chinese” to “no knowledge of Chinese whatsoever”. You can imagine how much damage a poorly estimated machine learning model could do in production.
In conclusion, working with a limited amount of data will very likely result in poor models, biased estimates, wrong assumptions and incorrect error estimation. To solve these issues, we must minimise the amount of missing data, make the right assumptions when working with small datasets, and choose the right algorithm and analytics approach.
How much data do I need?
The more the merrier! But how much is enough? The amount of data required for a machine learning model to work depends mostly on the problem and the algorithm that is going to be used.
Here are some ideas on how you can decide how much data you need. Keep in mind that your model’s job is to capture correlations between input features and/or between input and output features. You should provide enough data to cover at least the most representative scenarios. The more complex correlations are, the more data you need. Here’s an example. Imagine you have to create a connect the dots game where the result should be a sine wave. How many dots would you put? Could you describe it with one dot? Two? Three? Five? One hundred? When should you stop? Similarly, any model’s performance depends on how many good data samples you provide. Hopefully, it makes sense now that nonlinear algorithms will often require more data than linear ones.
Another approach you should consider is analysing your model’s performance when trained with different amounts of data. Maybe you will realise that you are providing more data than is needed, or maybe you will realise that the model performs better every time you add more data, and therefore that you should try to collect more. If none of these work for you, try looking for similar problems that have already been solved. In papers you will more often than not find information about the data size used to solve an issue. Take it as a guideline.
If none of these apply to you, start somewhere and take an educated guess! How big is your problem? For complex deep learning problems such as image recognition, you will probably need hundreds of thousands to millions of data samples (images). For simpler problems, try a few hundred, a few thousand, or tens of thousands of samples and see how your model performs. Try simpler models. The trial and error method is your best friend.
Handling missing data
Missing data is a common issue that occurs in almost every research project. From biased estimates to invalid conclusions, the problem of missing data must be identified, understood and resolved. Here are some ideas on how you can handle your missing data.
Collecting more data
A rather obvious choice is collecting more data. But how do you do it? If your problem is domain specific and you have some unlabeled data, consider hiring someone to label your data set; it will save you time. Depending on the problem you are trying to solve, consider conducting a survey, writing web crawlers (do check beforehand whether crawling a given website is legal), working with multiple data sets that share similar features, or trying some of the augmentation methods listed below. You could also collect more data from a similar domain. For example, if you are predicting the weather for a certain country, include information from other countries as well; if you are working on sentiment analysis of comments on a certain website, collect comments or text from other websites too. Another option is fine-tuning existing models on your data set, if that is applicable to your problem.
On the other hand, if you are struggling with missing values in your dataset, consider imputation methods instead of removing tuples or living with errors and poor algorithm performance. Some popular methods include mean, regression, stochastic and multiple imputation. Give them a try.
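The simplest of these, mean imputation, fits in a few lines of plain Python. This is an illustrative sketch (with made-up temperature readings), not tied to any particular library:

```python
from statistics import mean

def impute_mean(values):
    """Replace missing entries (None) with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    if not observed:
        raise ValueError("cannot impute a feature with no observed values")
    fill = mean(observed)
    return [fill if v is None else v for v in values]

# e.g. a temperature column with two missing readings
print(impute_mean([21.0, None, 23.0, 22.0, None]))  # missing values become 22.0
```

Regression, stochastic and multiple imputation follow the same idea but estimate the fill value from the other features rather than a single column average.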
Data augmentation methods
Why not make the best of what you have by using data augmentation methods? These methods increase the amount of available data by, let’s say, tinkering with some parameters (in a meaningful manner). Why should you consider this option? It is usually the cheapest one in terms of human effort, computational resources and time.
There are many ways to augment data or artificially generate more of it. For example, if you are working with images, consider rotating, flipping or cropping them. This way, one image can turn into several, already labeled if you are working with a labeled dataset. If you are working with tuples of features, think about which parameters can be modified or artificially created. Maybe averaging or mixing some features over similar tuples could produce new ones. Also, if you are working with multiclass classification, give the “One vs. Rest” strategy a try: all data points that do not belong to the observed class can serve as negative samples for your binary classifier.
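As a rough illustration of the image case (assuming images are stored as numpy arrays), flips and 90-degree rotations alone multiply one labeled image into several label-preserving variants:

```python
import numpy as np

def augment(image):
    """Generate label-preserving variants of one image: the original,
    horizontal and vertical flips, and three 90-degree rotations."""
    variants = [image, np.fliplr(image), np.flipud(image)]
    variants += [np.rot90(image, k) for k in (1, 2, 3)]
    return variants

img = np.arange(16).reshape(4, 4)   # stand-in for a labeled image
print(len(augment(img)))            # one sample becomes six
```

Note that for non-square images, 90-degree rotations change the shape, so in practice you would crop or pad back to the model’s input size.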
When it comes to artificially generating more data, it may be cheaper to first build a model that learns from existing samples and generates more data for you, using generative adversarial networks for example.
Working with small amounts of data
If none of these work for you, all that is left is to use algorithms that give decent performance even with a small amount of data. If you haven’t done it before, analyse your data! See whether all features are really necessary, and/or consider regularization and model averaging. Pay special attention to noise and outliers: they can have a much greater negative impact on your results when your data set is small. Work with simpler models and rule out complex algorithms that involve nonlinearity or feature interactions. Lastly, introduce confidence intervals. For example, when classifying your data, consider probabilistic classification. It can be quite helpful when analyzing your model’s performance.
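One simple way to introduce confidence intervals is a percentile bootstrap over per-sample correctness. This sketch uses made-up prediction outcomes (30 test samples, 24 of them correct) purely for illustration:

```python
import random

def bootstrap_accuracy_ci(correct, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for accuracy, given a
    list of per-sample 0/1 correctness indicators."""
    rng = random.Random(seed)
    n = len(correct)
    accs = sorted(sum(rng.choices(correct, k=n)) / n for _ in range(n_boot))
    lo = accs[int((alpha / 2) * n_boot)]
    hi = accs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# 30 hypothetical test predictions, 24 of them correct (80% accuracy)
outcomes = [1] * 24 + [0] * 6
low, high = bootstrap_accuracy_ci(outcomes)
print(f"accuracy 0.80, 95% CI ({low:.2f}, {high:.2f})")
```

On a small data set the interval is wide, which is exactly the point: a single accuracy number hides how uncertain the estimate really is.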
Summary
Here we presented our view on the importance of having enough data, along with some approaches you can try to make the best of what you already have. Even though there are many approaches, most of them come down to using simpler, preferably linear models, quantifying uncertainty and applying regularization. We hope you now have an idea of how to handle and expand your data set.
Originally published at www.smartcat.io.
|
When data goes missing
| 0
|
when-data-goes-missing-1a6e99cc0918
|
2018-05-11
|
2018-05-11 06:20:51
|
https://medium.com/s/story/when-data-goes-missing-1a6e99cc0918
| false
| 1,403
|
Stories about solutions we develop using combination of Data Science, Data Engineering and DevOps Expertise and problem we face in our day to day work with clients in SmartCat.
| null |
SmartCat.io
| null |
SmartCat.io
|
info@smartcat.io
|
smartcat-io
|
DATA SCIENCE,DATA ENGINEERING,DEVOPS,MACHINE LEARNING,CASSANDRA
|
SmartCat_io
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Nina Marjanović
| null |
3b46b35d813c
|
nina.marjanovic
| 4
| 8
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-18
|
2018-09-18 17:28:11
|
2018-09-20
|
2018-09-20 17:02:49
| 0
| false
|
en
|
2018-09-21
|
2018-09-21 13:35:00
| 0
|
1a6f140e50dd
| 2.871698
| 14
| 0
| 0
|
Summer of 2018 — as I reach the end of my sophomore year I decide to do an internship revolving majorly around software development in…
| 5
|
Getting started with data science — the path I chose as a sophomore
Summer of 2018 — as I reach the end of my sophomore year I decide to do an internship revolving mostly around software development in Python. Data science as a field was not particularly new to me; however, it was definitely something I considered “out of my league” or “not my cup of tea”. One of the biggest hurdles in getting started with data science is the mental block thrown up by our peers and seniors about how daunting the field and the math associated with it can be.
A major part of getting started with any field is the route you choose. Choose a sub-optimal route and you get burnt out or scared off by the utter complexity of the concepts being thrown at you. Stop learning at the wrong stage and you only know concepts at the surface level, not their actual implementation. What follows is how I tackled this issue through trial and error; it is my personal take — your mileage may vary.
Looking at the initial hype, I immediately decided to jump on the hype train and start off with the famous Machine learning course taught by Andrew Ng. Four weeks in, I get stressed by just looking at the math and see myself struggling to grasp the concepts. (As you might have guessed, math isn’t something I am extremely comfortable with, hence the bias towards opting for a more application-oriented course. If you like math, you might like the Machine learning course by Andrew Ng.)
Mistake 1 — Do not start with something which involves a lot of math, there’s a high chance you’ll give up or fail to understand the underlying concept.
Two months later, during my summer internship, I started following the Machine Learning A-Z course taught by Kirill Eremenko and realized that the concepts on their own aren’t too tough to understand and I can do it. I later proceeded to finish all the A-Z courses and felt pretty confident about my skills in the field, considering that I completed 4 courses in a span of 5 weeks.
Mistake 2 — Do not overshoot your abilities.
With all this knowledge from the 4 courses I took, I went ahead and tried to solve a real-world problem — detecting malicious URLs. After some digging on the web, I found that random forests are a good approach to this problem and successfully implemented a basic model in a matter of four to five days. Here’s where I hit my first roadblock — Parameter Tuning.
I find myself tuning hyperparameters for the random forest algorithm like a monkey turning knobs randomly. It involved a lot of trial and error and when things did work, I had no idea about how they worked.
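In hindsight, a systematic search beats random knob-turning. Here is a hedged sketch using scikit-learn’s GridSearchCV on synthetic stand-in data (the actual URL features and the grid values are hypothetical, not what I used):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for a malicious-URL feature matrix (hypothetical).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Search the knobs systematically instead of turning them at random:
# every parameter combination is cross-validated and scored.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    cv=3,
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

Even so, grid search only automates the trial and error; understanding why a deeper tree helps is where the math comes in.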
Now here’s when I decided it was time to learn the math behind all this to be truly effective. I went back to Andrew Ng’s machine learning course and found myself actually understanding much of the math I had failed to grasp previously. I attribute a lot of this to the fact that, on the second attempt, I knew where the math was actually being applied. This gave me enough motivation to take up the deeplearning.ai specialization and complete it too.
Once you are done with this, you can move on to more advanced courses in a particular domain of your choice. A great example of this would be a course like CS231n if you are interested in Computer Vision and wish to learn more about CNNs.
Mistake 3 — Just do not keep on doing courses!
I find myself guilty of committing this mistake. If you keep on going through theory without practical implementation, a lot of what you learned won’t stick around for long. Start off by making good side projects — an ideal approach can be incorporating data science into your college mini projects which most universities require for every course you take in a semester. This way you end up earning brownie points as your project would stand out from the rest of your peers and also help you fill in your resume while applying for internships later on!
TL;DR: Don’t start off with math heavy courses. The A-Z courses offered by Kirill Eremenko are great to start off, after which you should move on to understand the math.
Again, everything mentioned here is my personal opinion and your mileage may vary!
|
Getting started with data science — the path I chose as a sophomore
| 71
|
getting-started-with-data-science-the-path-i-chose-as-a-sophomore-1a6f140e50dd
|
2018-09-21
|
2018-09-21 13:35:00
|
https://medium.com/s/story/getting-started-with-data-science-the-path-i-chose-as-a-sophomore-1a6f140e50dd
| false
| 761
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Viral Tagdiwala
| null |
5be75b4a664f
|
tagdiwalaviral
| 18
| 12
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-01-23
|
2018-01-23 20:00:06
|
2018-01-23
|
2018-01-23 20:16:26
| 8
| false
|
en
|
2018-01-25
|
2018-01-25 21:49:48
| 0
|
1a6f1c1b7d09
| 7.578616
| 1
| 0
| 0
|
1. Abstract
| 3
|
San Francisco crime classification: Descriptive, Predictive, and Prescriptive analysis
San Francisco Area under Radar
1. Abstract
Predicting crime and the crime rate is one of the essential factors in improving the efficiency of a police department and reducing threats to the public. We are working with the San Francisco Crime Classification data from Kaggle, which was collected from the SF Police Department reporting system. We analyzed data from 2003 to 2015, with more than 800,000 observations in the training data set and around 880,000 observations in the testing dataset. Using modeling approaches such as Support Vector Machines (SVM), Random Forest and XGBoost, we created models that can classify the category of a crime given its location and time.
2. Introduction
San Francisco first boomed in 1849 during the California Gold Rush. The city then expanded both in land area and in population. As a result, crime and civil problems also proliferated. However, the San Francisco of today is different from what it was at its beginning: it is now better known for Silicon Valley and the tech giants than for its criminal history.
With the increase in crime rate it is very difficult to predict crime and prevent it from happening, but with the help of data mining and other tools crime prediction can be done. This does not mean that crime will be completely controlled, but to some extent the SFPD can be given helpful information to prevent crime. The San Francisco crime classification data, with 800,000 observations, has the following features:
· Dates — timestamp of the crime incident
· Category — category of the crime incident
· Descript — detailed description of the crime incident
· Day of week — the day of the week
· PdDistrict — name of the Police Department district
· Resolution — how the crime incident was resolved
· Address — Approximate street address of the crime incident
· X — longitude
· Y — latitude
The X and Y essentially gives the location parameter.
The Dates field combines date, time and day of week; it is split apart for use in the following sections.
3. Structure of DATA:
Data consists of 800k observation of 9 variables.
There are a total of 39 categories of crime. The data is widely distributed in terms of frequency of occurrence: the top 4 categories make up almost 53% of the data, while the remaining categories account for the other 47%. The most frequent category is larceny with 174,900 occurrences, and the least frequent is TREA (trespassing and loitering in an industrial area) with just 6. We added the following new variables to the data set for better understanding, using R’s strptime function.
Year, month, day, hours, weekday/ weekend and day of month.
Further the address type is also split to understand if the crime incident was an intersection of two roads.
4. Exploratory Analysis:
San Francisco is one of the busiest cities in the world, and it is incredibly difficult to know what is going on. Given the strength of police departments, we cannot simply deploy police equally around the city: some parts have a high crime rate while others do not.
To visualize the crime incidents, we superimposed their longitudes and latitudes on a map of San Francisco.
4.1 Distribution of crime incidents:
Area wise crime distribution
Distribution of crime events with timestamp
The graph shows that the crime rate is significantly higher from 15:00 to 20:00 on all days and decreases as the night goes on, though on weekends (Friday and Saturday nights) it dips only briefly before rising again.
Prevailing Crimes
Prevailing Crimes with respect to Years
4.3 Influential parameters:
Since there are 16 variables influencing the category of crime committed, we need to narrow down our search and check whether all the parameters are influential. We narrow this down using the variable importance function from the caret package in R.
The influential variables are Longitude, Latitude, Police department district, year of crime, week number in the entire year, day of week and month. The graph indicates that the X and Y combined have an effect of over 30% influence on the crime category.
5. Goal
The goal of the project is to classify a crime given the time and location of the incident.
In the process, we exploded the date column, which had a lot of information clubbed together. We included the following parameters, relevant to classification, in our model: PD district, hours, years, X, Y.
Ultimately, we will calculate the log loss of the classification. Log loss penalizes the model for each wrong classification and, in aggregate, indicates the entropy of the entire model: the more misclassified categories, the higher the entropy. A high log loss tells us there is excessive noise in the system leading to wrong classifications.
The leaderboard in Kaggle for this competition had a log loss value of 1.95936. We would try to get as close to that value as possible.
Log Loss with respect to Accuracy
LogLossBinary (1, c(0.5))
0.69315
LogLossBinary (1, c(0.9))
0.10536
LogLossBinary (1, c(0.1))
2.3026
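The three LogLossBinary values above follow directly from the definition of binary log loss: the penalty is minus the natural log of the probability assigned to the true class. A short Python check (ours, not a translation of the original R code):

```python
from math import log

def log_loss_binary(y_true, p):
    """Binary log loss for one prediction: -ln of the probability
    the model assigned to the true class (clipped to avoid log(0))."""
    p = min(max(p, 1e-15), 1 - 1e-15)
    return -(y_true * log(p) + (1 - y_true) * log(1 - p))

# Reproduces the values quoted above for a true label of 1.
for p in (0.5, 0.9, 0.1):
    print(p, round(log_loss_binary(1, p), 5))
```

Predicting 0.9 for the true class costs little (0.10536), a coin-flip 0.5 costs 0.69315, and a confident wrong 0.1 costs 2.30259, which is why log loss punishes overconfident misclassification so hard.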
5. Models
5.1 Decision Tree:
The decision tree algorithm works by splitting the dataset recursively, which means that the subsets that arise from a split are further split until a predetermined termination criterion is reached. At each step, the split is made based on the independent variable that results in the largest possible reduction in heterogeneity of the dependent variable.
Here we can see, as described above, that Larceny/Theft, which makes up 30 percent of our total dataset, was predicted by the model most often. The three classes (out of 39) predicted by the decision tree comprise 68 percent of the dataset.
To develop this model, we chose splits that minimize entropy in the resulting subsets; allowing higher entropy gave us higher accuracy but also increased the misclassification error. We therefore computed the weighted average entropy over all subsets resulting from a split.
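The entropy computation behind such splits can be sketched as follows (illustrative Python with made-up labels, not the rpart internals):

```python
from math import log2

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def weighted_split_entropy(left, right):
    """Size-weighted average entropy of a split's two children, the
    quantity an entropy-minimizing split criterion tries to reduce."""
    n = len(left) + len(right)
    return (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)

# A perfect split isolates each class (0 bits of uncertainty left);
# a useless split leaves both children as mixed as the parent (1 bit).
print(weighted_split_entropy(["theft"] * 4, ["assault"] * 4))
print(weighted_split_entropy(["theft", "assault"] * 2, ["theft", "assault"] * 2))
```

At each node the tree evaluates candidate splits with a measure like this and keeps the one with the lowest weighted child entropy.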
Recursive partitioning is implemented in the “rpart” package, and we plotted the conditional tree graph with respect to category.
Using the observations in each subset, we applied a statistical test of independence between each feature and the labels. This helped us understand which features best predict the labels on our subset.
5.2 Random Forest:
Random Forest is a very popular ensemble learning method that builds many classifiers on the training data and combines their outputs to make the best predictions on the test data. The Random Forest algorithm is thus a variance-minimizing algorithm that uses randomness when making split decisions to help avoid overfitting the training data.
We used random forest to rank the features based on their importance to predict the labels.
In our study, we found that random forests did not work well with negative values, so we took the absolute value of the longitude (X); it made no difference to the dataset but slightly improved the performance of the model.
The error estimate came to 74.77%, so the model did quite badly. But the evaluation metric used by Kaggle is log loss, a probabilistic prediction score, on which we scored 2.75.
5.4 XGBoost:
Extreme gradient boosting was applied after converting the Police Department district and DayOfWeek columns to numeric types. The model’s parameters were tuned after the first run to improve performance: parameters like eta and gamma were increased, and max delta step was increased because the dataset is highly imbalanced.
The model was tested with 100 iterations and the minimum error was found at the 92nd iteration.
As earlier, we calculated both accuracy and logloss for our reference.
Parameter Tuning in XgBoost:
Here we set the objective to multi:softprob and the evaluation metric to mlogloss.
These two parameters tell the XGBoost algorithm that we want probabilistic classification, evaluated with a multiclass log loss.
The multi:softprob objective also requires that we specify the number of classes with num_class, which is 39 for us.
The other parameters of note are nrounds and prediction. The nrounds parameter tells XGBoost how many times to iterate.
The learning rate was tried for different values between 0 and 1 and we got our best result at 0.2.
Maximum depth was increased to 8; even though we were aware of the risk of overfitting the model, it gave us better accuracy, though performance deteriorated on the log loss evaluation.
We also altered the maximum delta step from 1 to 6; as the dataset is highly imbalanced, this helped us weight the lower-frequency classes and improved the performance of the model.
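Collected together, the tuned settings described in this section would look roughly like the following XGBoost parameter dict (a sketch: the keys follow the XGBoost API, and the values are the ones reported above):

```python
# Tuned settings from this section, as an XGBoost parameter dict.
params = {
    "objective": "multi:softprob",  # probabilistic multiclass output
    "eval_metric": "mlogloss",      # multiclass log loss
    "num_class": 39,                # 39 crime categories
    "eta": 0.2,                     # learning rate that worked best
    "max_depth": 8,                 # deeper trees, at some overfitting risk
    "max_delta_step": 6,            # helps with the class imbalance
}
print(sorted(params))
```

This dict would be passed to xgboost.train along with the training DMatrix and a number of boosting rounds (100 in the run above, with the minimum error at iteration 92).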
iter train_merror_mean train_merror_std test_merror_mean test_merror_std
1: 95 0.6227765 0.001673217 0.7429598 0.003165538
2: 96 0.6217300 0.001696940 0.7430197 0.003131105
3: 97 0.6206335 0.001766637 0.7430798 0.003186837
4: 98 0.6196170 0.001793703 0.7432298 0.002993724
5: 99 0.6182865 0.001795452 0.7431100 0.002873621
6: 100 0.6171802 0.001681101 0.7431998 0.003097023
>table(train$Category == train$pred)
FALSE TRUE
349011 529038
Accuracy
prop.table(table(train$Category == train$pred))
FALSE TRUE
0.386584 0.613416
5.5 Evaluation of the model:
Comparing the various models, we see that the decision tree model is highly biased toward the majority classes: Larceny, Assault and Non-Criminal offenses. The model is overfit to these categories, while other categories are not predicted accurately given their smaller number of occurrences.
The random forest model is better than the decision tree; we have an error rate of 73% with a log loss of 2.75.
XGBoost gave better results than the other algorithms: the accuracy we achieved was 0.613416, much higher than the other models, and the model predicted 529,038 observations correctly.
6. Conclusion:
Across the classifiers, XGBoost had the lowest log loss, so we used this model on the test set. We achieved 61% accuracy on the given set and a log loss of 2.44.
Better results could be obtained by segmenting the data into four groups: white-collar crimes, blue-collar crimes, violent crimes and non-violent crimes. These would be more easily predictable, and we expect the accuracy to increase.
|
San Francisco crime classification: Descriptive, Predictive, and Prescriptive analysis
| 1
|
san-francisco-spatial-data-research-for-crime-classification-1a6f1c1b7d09
|
2018-05-24
|
2018-05-24 14:10:10
|
https://medium.com/s/story/san-francisco-spatial-data-research-for-crime-classification-1a6f1c1b7d09
| false
| 1,708
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Vikas Mishra
| null |
a0d8f6b527fa
|
m.vkumar89
| 3
| 4
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-07
|
2018-02-07 14:07:43
|
2018-02-13
|
2018-02-13 09:01:01
| 1
| false
|
en
|
2018-02-13
|
2018-02-13 09:56:45
| 0
|
1a6fc0aca009
| 1.720755
| 0
| 0
| 0
|
Have you ever received a gift of which you did not quite comprehend the value?
| 5
|
Are we ready for the gift of life?
Have you ever received a gift of which you did not quite comprehend the value?
For my twenty-first birthday my father gave me the family hunting knife which was passed down through the generations and I am the fourth owner of this piece of history.
To his wise words my father added that although I no longer need to hunt as my predecessors did for survival, may this be a symbol that reminds me of my worth — an heir to legends.
Gift giving and receiving has many forms. Stories of generosity fill our history pages and among them you will find tales of great gifts such as the Taj Mahal, Lady Liberty, Pandora’s box and the Trojan horse.
If you are anything like me you too may perceive Artificial Intelligence (AI) more like Pandora’s box which may inevitably end up to be a Taj Mahal. The director of Duke University’s Humans and Autonomy Lab, Mary Cummings, cautioned that “technology is not a panacea” and that “we’re getting ahead of ourselves”. Would an increase in collaboration between academics, government, and companies to understand the phases technologies are in and act accordingly bring about a remedy?
What happens to your worldview when you view AI as a life-giving gift? Google’s CEO Sundar Pichai said in an interview: “AI will be bigger than electricity or fire”. His reason for this bold statement is that it will potentially and fundamentally change how we do everything.
What if it could be the trojan horse that dismantles the constraints that we as people experience in our world regarding limited resources and reduces the inequality we live with daily?
Like Pandora and her box, what we have in our hands is a most precious and dangerous resource. And like with fire, we will need to learn the art of wielding it in a resourceful and constructive way. As leaders we might experience or find ourselves in the middle of a wildfire that is fuelled on by the winds of change that brings about the kind of destruction that may outlast our lives.
Perhaps the leaders we must become are those who sculpt a liberty manifesto, offering us, the biological inhabitants of this planet, an opportunity to leave the broken chains of slavery at our feet as we embrace what the ‘new inhabitants’ bring.
How can we start this process?
|
Are we ready for the gift of life?
| 0
|
are-we-ready-for-the-gift-of-live-1a6fc0aca009
|
2018-05-15
|
2018-05-15 05:29:52
|
https://medium.com/s/story/are-we-ready-for-the-gift-of-live-1a6fc0aca009
| false
| 403
| null | null | null | null | null | null | null | null | null |
Future Of Work
|
future-of-work
|
Future Of Work
| 8,540
|
Matt White
|
Matt White is an Neuro-integrator, Author, Speaker, Brand Specialist, Customer and Employee Engagement Expert, and Executive META Coach.
|
ca9a4638a9ca
|
thrivewithmatt
| 25
| 21
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
e690cc199aa4
|
2017-11-03
|
2017-11-03 12:46:44
|
2017-11-03
|
2017-11-03 13:13:19
| 3
| false
|
en
|
2017-11-17
|
2017-11-17 16:20:55
| 5
|
1a707cf57843
| 2.942453
| 10
| 1
| 0
|
While the Full Impact of AI May Still Be Unclear, China Moves Ahead
| 5
|
China, AI, and the Future of Work
While the Full Impact of AI May Still Be Unclear, China Moves Ahead
AP Photo/Shizuo Kambayashi
By Paula Klein
Some say the current state of artificial intelligence (AI) is one of hype and misconceptions. Others see a true acceleration in technological advancement and acceptance under way.
In either case, AI, and what it means for the future of work, was top-of-mind for speakers and hundreds of attendees at an event hosted by MIT’s Computer Science and AI Lab group (CSAIL) and the MIT Initiative on the Digital Economy (IDE) on Nov. 1.
On the first of a two-day conference held at MIT in Cambridge, Mass., researchers, designers and business leaders debated everything from the viability of driverless cars, to the growing wealth gap resulting from digitization and automation. Will we be able to create new jobs as fast as workers are displaced by AI and automation? Which jobs are safe, and which will be the first to go?
Daniela Rus, CSAIL Director, led off by citing advances in medical diagnosis, energy efficiency, and manufacturing, as areas where AI and machine learning are already demonstrating tangible benefits while keeping humans at their jobs.
Daniela Rus
Working With Machines
She introduced a common theme of the day when she said that people, working in combination with machines, will yield the greatest results. Erik Brynjolfsson, IDE Director, agreed that humans don’t have to be replaced by machines for AI to automate tedious work and improve lives.
One speaker who is aggressively pursuing the rewards of what he calls “the golden Age of AI” is venture capitalist Kai-fu Lee, Chairman and CEO of Sinovation Ventures. Lee said that half of the firm’s $1.3 billion in investments since it was founded in 2009 have been in AI. The company’s latest deal is a $20.6 million stake in Zhuiyi Technology, an AI startup developing chatbots.
Kai-fu Lee
At his lunch presentation at the conference, Lee declared that “AI can do 50% of current job tasks in next 10–15 years,” including many white-collar jobs. For example, technology introduced by Face++, a face-recognition company that is also a Sinovation investment, could ultimately eliminate “tens of thousands” of front-desk and security jobs. Face recognition is a legitimate commercial application that is making huge gains in China and globally.
Radical Transformation
This radical transformation of the workforce will require many new education and training programs, but fundamentally, “We will have to change our industrial age work ethic and replace it with new values,” according to Lee.
He also spoke about the Chinese government’s commitment to pursue AI on all levels. Right now, AI is led by American researchers, but China’s presence is growing very fast and attracting the best and brightest students, Lee said. Chinese researchers are publishing many more research papers today, and funding is being provided.
In addition, mobile use is skyrocketing, and smartphones are ubiquitous in China. Chinese consumers use phone payments 50 times more often than U.S. consumers, and cash transactions are becoming a thing of the past — bypassing credit cards, as well. Every transaction generates more data which then yields better products, services, and algorithms, he said.
All of these changes are also moving China from a saving economy, to a spending one. “Online and offline sales are becoming fully integrated,” Lee said. Moreover, the quality of Chinese tech products has improved greatly and the country is becoming a leader, rather than an imitator, of U.S. products. High on its to-do list are mobile apps, AI programs, and robotics. At this rate, Lee expects China to meet and then exceed the U.S. in robotics leadership in the next two years. Whether this is more AI hype, or true acceleration is yet to be seen.
Video now available on YouTube here.
|
China, AI, and the Future of Work
| 38
|
while-the-full-impact-of-ai-may-still-be-unclear-china-moves-ahead-1a707cf57843
|
2018-03-14
|
2018-03-14 16:17:19
|
https://medium.com/s/story/while-the-full-impact-of-ai-may-still-be-unclear-china-moves-ahead-1a707cf57843
| false
| 634
|
The IDE explores how people and businesses work, interact, and prosper in an era of profound digital transformation. We are leading the discussion on the digital economy.
| null | null | null |
MIT Initiative on the Digital Economy
|
ide_social@mit.edu
|
mit-initiative-on-the-digital-economy
|
MIT,INNOVATION,DIGITAL,INCLUSIVE INNOVATION,AI
|
MIT_IDE
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
MIT IDE
|
Addressing one of the most critical issues of our time: the impact of digital technology on businesses, the economy, and society.
|
dd9c51c40b05
|
mit_ide
| 2,792
| 28
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-07
|
2018-05-07 08:35:31
|
2018-05-07
|
2018-05-07 08:56:40
| 3
| false
|
en
|
2018-05-07
|
2018-05-07 16:27:01
| 4
|
1a70dc8141bc
| 2.282075
| 5
| 0
| 0
|
There’s a dataset called Labelled Faces In The Wild (LFW) with about 13,000 photos of people scraped from the web. As part of this little…
| 5
|
Zuck Faces (part 2)
There’s a dataset called Labelled Faces In The Wild (LFW) with about 13,000 photos of people scraped from the web. As part of this little series, I ran each of them through FaceNet to get the feature vector which represents their facial identity.
Facial recognition systems working from the vectors produced by FaceNet are able to beat humans on industry benchmarks (a fairly recent milestone, by the way).
You can think of these vectors as representing a point in 512-dimensional space. When you want to know whether a given face belongs to the same person as another, you take the distance between the two points. I used this property the other day to produce the picture of Mark’s various faces sorted by how distant they are from his “average” face.
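A minimal sketch of that distance computation, assuming the embeddings are numpy arrays (the vectors here are random stand-ins, not real FaceNet output):

```python
import numpy as np

def face_distance(a, b):
    """Euclidean distance between two embedding vectors; a smaller
    distance means the two faces more likely belong to one person."""
    return float(np.linalg.norm(a - b))

rng = np.random.default_rng(0)
anchor = rng.normal(size=512)                     # stand-in "average face"
same = anchor + rng.normal(scale=0.05, size=512)  # small perturbation of it
different = rng.normal(size=512)                  # unrelated identity

print(face_distance(anchor, same) < face_distance(anchor, different))
```

Sorting faces by this distance to Mark’s average vector is all the ranking below amounts to.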
Today I sorted all the faces in LFW by how distant they are from Mark’s average. Here’s a sample of 160 of them, sorted from top-to-bottom and left-to-right:
Here’s the nearest 160:
And the most distant:
Things that stand out to me:
There’s a black guy and a woman in the top eight
The algorithm is clearly thrown off by glasses, hats, shadows and what I’m thinking of as “extreme facial expressions”
The top eight don’t really look like Zuck doppelgangers to me
One thing to keep in mind is that although LFW has more than 13,000 photos, they cover only a couple of thousand people, mostly celebrities. I’d very much like to try this over a larger and more diverse dataset. There must be people out there who look a whole lot more like Zuck who’ve worked their way into publicly available datasets.
I’m not sure how to deal with hats and glasses. There must be research on it, though. I want to dive into how those industry benchmarks (on which FaceNet does extremely well) are set up. Are they scrubbed of glasses / blurriness / facial occlusions / etc.? Are there benchmarks on which these systems currently don’t beat humans?
I’m pretty sure I’m making a mistake by using the average of Mark’s various faces. I think what I need to do is train a simple binary classifier that takes feature vectors as input, and then rank all faces by the probability spat out by the classifier. I’m going to try that soon. [UPDATE: tried it!]
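A sketch of that classifier idea, with made-up embeddings standing in for FaceNet vectors (32-dimensional here just to keep the example small):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Made-up "embeddings": 20 Mark-like vectors near one point, 80 others.
mark = rng.normal(loc=0.5, scale=1.0, size=(20, 32))
other = rng.normal(loc=-0.5, scale=1.0, size=(80, 32))
X = np.vstack([mark, other])
y = np.array([1] * 20 + [0] * 80)

# Train a simple binary classifier on the feature vectors...
clf = LogisticRegression(max_iter=1000).fit(X, y)

# ...then rank every face by the probability it assigns to "Mark".
probs = clf.predict_proba(X)[:, 1]
ranking = np.argsort(-probs)
print("most Mark-like indices:", ranking[:5])
```

Unlike the distance-to-average trick, the classifier can learn which of the 512 dimensions actually separate Mark from everyone else.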
On another note, how come Mark never wears glasses or a hat? I guess the photos I’ve scraped are extremely biased. They tend to be posed, and taken by (presumably) professionals. I might need to dive into the world of paparazzi to get some less biased images of his face.
(here’s a gist of how I produced these results: https://gist.github.com/atroche/287d803c6610a4500e18f009e7a38b4e)
|
Zuck Faces (part 2)
| 25
|
zuck-faces-part-2-1a70dc8141bc
|
2018-05-07
|
2018-05-07 21:42:07
|
https://medium.com/s/story/zuck-faces-part-2-1a70dc8141bc
| false
| 459
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Alistair Roche
| null |
5a238820f061
|
atroche
| 768
| 209
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
7ff81454a4c3
|
2017-12-27
|
2017-12-27 17:53:15
|
2018-01-01
|
2018-01-01 16:09:40
| 31
| false
|
en
|
2018-01-17
|
2018-01-17 23:26:02
| 2
|
1a72dd0451e1
| 5.937736
| 5
| 0
| 0
|
Editor — Ishmael Njie
| 5
|
Statistical Analysis with Python: Pokémon
Editor — Ishmael Njie
Why analyse Pokémon?
I wanted to start off with a dataset that was relatively small and not too complicated. I found this dataset on Kaggle: “Pokémon with stats”. I have a fair understanding of the columns and the data in the CSV file from that page so I thought, why not? The file consists of 800 rows and 13 columns, detailing the features of each Pokémon spanning 6 Generations.
For this post, it may be worthwhile to have my Kaggle Kernel alongside to follow it in its entirety. Now let’s start…:
Preliminaries: Import Libraries
Read CSV file and save as a variable
As you can see, from taking the first instance in the data frame, there are two indices to identify the Pokémon, one formed when the rows were entered in the data frame, and one from the file itself: ‘#’.
We will set the ‘#’ column as our index. We are also going to rename the column names so that all the spaces in the names are removed.
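That step might look something like the following. This is a sketch on a tiny stand-in frame; the real kernel reads the full Kaggle CSV with pd.read_csv:

```python
import pandas as pd

# Stand-in for the Kaggle file (two rows, a few columns).
df = pd.DataFrame({
    "#": [1, 2],
    "Name": ["Bulbasaur", "Ivysaur"],
    "Type 1": ["Grass", "Grass"],
    "Sp. Atk": [65, 80],
})

df = df.set_index("#")                                 # use the file's own id column
df.columns = [c.replace(" ", "") for c in df.columns]  # strip spaces from column names
print(list(df.columns))  # ['Name', 'Type1', 'Sp.Atk']
```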
Let’s look at the head of the data frame we have.
After looking at the head and tail of the DataFrame, there is unnecessary text in front of some Pokémon names. This can be removed using regular expressions (regex). Digging further into the data, I found similar unnecessary text in front of other names as well; all of this is rectified in the cell below.
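A sketch of the kind of regex fix involved. One artefact in this Kaggle dataset is the base name glued in front of Mega forms, e.g. “VenusaurMega Venusaur”; I’m assuming a lookahead-based cleanup here, which may differ from the kernel’s exact pattern:

```python
import re

def clean_name(name):
    # Drop everything before "Mega " (the space keeps e.g. "Meganium" intact);
    # names without the artefact pass through unchanged.
    return re.sub(r"^.*(?=Mega )", "", name)

print(clean_name("VenusaurMega Venusaur"))  # Mega Venusaur
print(clean_name("Pikachu"))                # Pikachu
```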
Mega, Primal and Legendary Pokémon
From top left: Mega Charizard X, Mega Charizard Y and Charizard
Generation 6 saw the introduction of Mega Pokémon. This evolution is not applicable to all Pokémon. An example of this evolution is the Mega Evolution of Charizard. In this case, Charizard has two Mega forms, where they both have a Total base stat of 634, as opposed to Charizard’s base form which has a Total base stat of 534.
Legendary Pokémon are Pokémon that feature in myths in the Pokémon world; two of these take a Primal form. Primal Reversion is a transformation affecting the Legendary Pokémon Kyogre and Groudon.
From left to right: Primal Groudon and Primal Kyogre
All of the above are very powerful and therefore, their base stats are expected to be of the highest level amongst the dataset. Since not all Pokémon can take these forms, it would be a good idea to omit these types of Pokémon from our analysis.
Omitting Mega and Primal Pokémon; an indication of this is seeing that Mega Venusaur is not present in this dataframe
‘Poke’ holds all Pokémon that are not Legendary; ‘Poke L’ holds all those that are.
Following this, we can look at the proportion of Pokemon in the dataset that are not Legendary and those that are.
The dataset we have consists of Pokémon from 6 Generations. Conventionally, generations work independently of each other, so an option would be to analyse Pokémon with respect to their region.
2. Type Analysis
Single vs Dual. We can look at the proportion of Pokémon that are dual types vs those that are not.
The pie chart shows that the split is fairly even: 50.9% of the Pokémon in this data frame have only a single PokéType. Moving on, we can analyse the distribution of primary and secondary Pokémon types.
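That proportion falls straight out of the missing second type. A sketch on a toy frame, with column names following the renamed ones above:

```python
import pandas as pd

# Toy frame: a Pokémon is single-typed when its second type is missing.
df = pd.DataFrame({
    "Name": ["Bulbasaur", "Charmander", "Squirtle", "Pidgey"],
    "Type1": ["Grass", "Fire", "Water", "Normal"],
    "Type2": ["Poison", None, None, "Flying"],
})

single = df["Type2"].isnull().mean() * 100  # percent with one PokéType
print(f"{single:.1f}% single-typed")        # 50.0% single-typed
```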
Primary Types
Water has the highest frequency as a primary PokéType; Flying has the lowest. We can see that the bar plot has taken the ‘type1_colours’ into consideration to colour the bars appropriately. The ‘type1_colours’ were set in a cell beforehand.
Secondary Types
The ‘None’ type was set in this cell, for Pokémon that did not have a secondary PokéType.
Here, we can see that the ‘None’ field has the highest frequency. We can also see that ‘Flying’ is the most frequent secondary PokéType but the least frequent primary one.
A reminder to follow this post alongside the Kaggle Kernel.
3. Base Stat Analysis
The following cell and graphic will express the correlation between each of the base stats against each other.
From the heat map, we can see that the correlation between the Sp.Def and Total is 0.68, which is the highest in the matrix (excluding the diagonal). We can go one step further and create a scatter plot of the Sp.Def and Total.
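The correlation step itself is one pandas call. A sketch on made-up stat values; the 0.68 figure comes from the real dataset, not from these numbers:

```python
import pandas as pd

# Toy base-stat frame; .corr() gives the pairwise Pearson matrix
# that the heat map visualises.
stats = pd.DataFrame({
    "Total": [318, 405, 525, 634],
    "Sp.Def": [65, 80, 100, 120],
    "Attack": [49, 62, 82, 100],
})

corr = stats.corr()
print(corr.loc["Sp.Def", "Total"])
```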
Overall, across all generations, the correlation metric of 0.68 is echoed, as the scatter plot shows a positive correlation between Sp.Def and Total.
In the game, there are two types of attacks: Attack and Special Attack.
Attacks (Physical Attack) make contact with the Pokémon and damage is calculated based off of the opponent’s Defense.
Here, the distributions of both attributes are similar, and one could suggest that both are positively skewed. We can see that there is a more significant tail to the Defense stat than to the Attack stat, showing that there are more Pokémon with high Defense stats than with high Attack stats. You could also argue that the Defense stat has a higher variance than the Attack stat.
Special Attacks (Sp.Atk) do not make contact with the Pokémon and damage is calculated based off of the opponent’s Special Defense.
Sp.Def and Sp.Atk have a similar distribution. One could argue that both are positively skewed, as a large number of the Pokemon have relatively low base statistics, with a few Pokémon having a large Sp.Def and/or Sp.Atk stat. Visually, you could argue that Special attack, in blue, has a larger variance than that of Special Defense. One can also see that Sp.Def holds the higher stat of the two, approximately at 225.
To reinforce the comments made above, we can print the summary statistics of the fields in our dataframe.
By calling on the summary statistics, we can see that the assumptions about the variance and skewness of both plots were correct. The ‘std’ metric (standard deviation) of Attack is less than that of Defense, meaning that the Defense statistics are more spread out. Similarly, the Sp.Atk ‘std’ is larger than that of Sp.Def. Skewness is indicated by the positions of the median (50%) and the mean: since in all instances (Attack, Defense, Sp.Atk and Sp.Def) the mean is greater than the median, this confirms that the distributions are right-skewed (positively skewed).
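The mean-versus-median skewness check can be demonstrated in a couple of lines with the standard library. This is a toy sample with a long right tail, not the real Attack stats:

```python
import statistics

# For a right-skewed (positively skewed) sample, the mean sits above the median.
attack = [45, 50, 52, 55, 60, 62, 65, 70, 150, 180]

mean = statistics.mean(attack)
median = statistics.median(attack)
print(mean > median)  # True: consistent with positive skew
```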
Here, we will create a user defined function (UDF) for the minimum and maximum of the base statistics. The user can input any frame along with the stats array to find the Pokemon with the highest and lowest stats. Initially, an array needs to be formed as the set of base stats to analyse.
An example of using the function:
Shows the Pokemon with the highest stat in each attribute, along with the generation they belong to.
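A sketch of what such a UDF could look like; the function name and toy frame are mine, not the kernel’s:

```python
import pandas as pd

def min_max(frame, stats):
    """For each stat, report which Pokémon holds the highest and lowest value."""
    out = {}
    for stat in stats:
        out[stat] = {
            "max": frame.loc[frame[stat].idxmax(), "Name"],
            "min": frame.loc[frame[stat].idxmin(), "Name"],
        }
    return out

df = pd.DataFrame({
    "Name": ["Slaking", "Goodra", "Caterpie"],
    "Total": [670, 600, 195],
    "Attack": [160, 100, 30],
})
print(min_max(df, ["Total", "Attack"]))
# {'Total': {'max': 'Slaking', 'min': 'Caterpie'}, 'Attack': {'max': 'Slaking', 'min': 'Caterpie'}}
```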
Visually, we can compare the base stat total of each generation.
From this, we can see instantly that Generation 3 has the Pokémon with the highest total base stat. From printing the max stats using our UDF, we know that this Pokémon is Slaking, with a base stat total of 670. All the other Generations’ strongest Pokémon share a base stat total of 600: Dragonite for Generation 1, Tyranitar for Generation 2, Garchomp for Generation 4, Hydreigon for Generation 5 and Goodra for Generation 6.
That is it for the statistical analysis! There is more on the Kaggle Kernel I have published so check that out here. It follows on with some great user defined functions:
One about finding the Top 10 strongest Pokémon based on Generation and base stat.
Another that allows the user to find the Top 6 Pokémon that can combat a specific PokéType (a concept based on the handheld games).
Also check out the code here at my GitHub.
|
Statistical Analysis with Python: Pokémon
| 59
|
statistical-analysis-with-python-pokémon-1a72dd0451e1
|
2018-06-12
|
2018-06-12 03:50:27
|
https://medium.com/s/story/statistical-analysis-with-python-pokémon-1a72dd0451e1
| false
| 964
|
Our team is made up of 2 MSc Data Science students who want a place for readers to find out more about the field of Data Science and some interesting applications of the field. Stories will range from projects to informative stories. Follow, share and enjoy your read!
| null | null | null |
DataRegressed
|
dataregressed@gmail.com
|
dataregressed
|
DATA SCIENCE,PROGRAMMING,MATHEMATICS,COMPUTER SCIENCE,DATA
| null |
Data Science
|
data-science
|
Data Science
| 33,617
|
DataRegressed Team
|
Editors — Ishmael Njie and Sulayman Saleem. Find out more about us: https://www.linkedin.com/in/ishmael-njie/ https://www.linkedin.com/in/sulayman-saleem-33491
|
e4a723a73618
|
drteam
| 35
| 53
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-10-05
|
2017-10-05 12:15:11
|
2017-10-05
|
2017-10-05 12:24:36
| 3
| false
|
en
|
2017-10-05
|
2017-10-05 12:25:15
| 11
|
1a7450ea9b5e
| 3.783962
| 0
| 0
| 0
|
And the Disruption of Social Media
| 5
|
The Future of Facebook
And the Disruption of Social Media
People in the tech industry are always on the lookout for the “next big thing.” Actually, that goes for most marketers, business owners, and consumers. In the social media sphere, we’ve seen a remarkable amount of improvement and development in the last decade. Social media channels have become more mobile-friendly and content has become more visual, to name a few. But we could argue that we haven’t seen any genuine disruption on social media in the last few years.
We’ve seen the eruption of the next big thing in fintech — blockchain and Bitcoin, and in transportation in the form of automatic, driverless vehicles.
If social media, specifically Facebook, is of such paramount importance to marketers and consumers, and we’re living in an era where digital disruption is so probable, then surely, we can expect a disruption in social media soon?
The Age of Disruption
Disruption has become everyone’s new favourite word, and certainly a buzzword in most industries across the board — from the media to mining. Let’s start out by defining it. Harvard Business School professor and disruption guru, Clayton Christensen, says that a disruption displaces an existing market, industry, or technology and produces something new, more efficient and worthwhile. It is both destructive and creative.
Everyone wants to get their hands on Forbes’ Most Innovative Companies list every year. So what’s the difference, then, between disruption and innovation? Shilen Patel, founder of business accelerator, Independents United, says that if a company is facing large-scale and unpredictable change, but they have a degree of control over that change, then they need to innovate. If the company doesn’t have control, then it needs to revolutionize itself before someone else does.
As Mark Zuckerberg said, “If we don’t create the thing that kills Facebook, someone else will.”
The Social Network
Facebook is undeniably the biggest influencer in the social media world. With more than one billion active users and a long, consistent history, as well as the acquisitions of other global social giants Instagram and WhatsApp, Facebook has earned its reputation for being the dominant player in the social game.
Some recent updates to Facebook include 360-degree photos, the ability to react to posts with five different emojis instead of just “liking” them, and a number of developments in the layout and efficiency of Facebook Business Pages. In fact, keeping up with every new update is close to impossible, but updates occur automatically and consumers gradually get accustomed to them.
But these new features simply enhance what already exists, they don’t disrupt it.
Does this mean we’re overdue for a major disruption? Are we on the verge of the next social media innovation? If so, what will it be, and when can we expect it?
The Future of Facebook is The Rest of Your Life
Artificial Intelligence
Just as it already does in social networking and instant messaging, Facebook aims to dominate the Artificial Intelligence and machine learning fields too.
Facebook has been dabbling in AI for a few years. It released its facial recognition feature in 2010, and machine learning has also been used to curate every user’s news feed.
DeepText, a deep learning-based engine that was released last year, is designed to understand text. With “near-human accuracy” it can understand several thousand posts per second, across over 20 different languages. Deep learning becomes necessary when it comes to text because computers have to learn slang, sarcasm and disambiguation for every language, too.
Facebook is also working on developing its standard facial recognition feature, DeepFace, further. This new tech will look for other identifying features, such as regularly worn clothing, a distinctive posture or haircut, and voices in videos, as clues to identify people. At its launch, Facebook’s DeepFace was 97 percent accurate, compared to the 85 percent accuracy of the system used by the FBI.
Since the company tripled its investments in AI and machine learning research early this year, we can only expect bigger and better things still to come.
Virtual Reality
When Facebook bought Oculus in 2014, the obvious question — besides “Seriously? Two billion dollars?” — was what Facebook expected to do with a virtual reality company. Everyone joked about advertisements popping up in the middle of video games being played, and giant newsfeeds projected on skyscrapers. But Facebook’s new VR app, Spaces, does a whole lot more.
Spaces, currently in Beta, is a virtual hangout where you can chill out with up to four friends, each represented by self-created digital avatars. The goal is to create the ultimate communal experience in an imaginary space, while giving people the sense that they’re actually with friends and family.
For instance, instead of just looking at photos of a family wedding you missed, you’ll be able to put your headset on and experience the wedding from the front pew. This may have been possible without Facebook, but we don’t doubt that the social giant will help to make it more easily accessible to more people, and that it will soon be ingrained in the mainstream.
https://www.youtube.com/watch?v=PVf3m7e7OKU
|
The Future of Facebook
| 0
|
the-future-of-facebook-1a7450ea9b5e
|
2018-01-16
|
2018-01-16 12:17:46
|
https://medium.com/s/story/the-future-of-facebook-1a7450ea9b5e
| false
| 857
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Philippa Dods
| null |
aca4d9be80f8
|
philippadods
| 63
| 145
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
877bf4f35264
|
2018-06-07
|
2018-06-07 10:46:14
|
2018-06-07
|
2018-06-07 11:04:16
| 6
| false
|
en
|
2018-06-07
|
2018-06-07 11:17:12
| 0
|
1a7711f857ad
| 2.904717
| 0
| 0
| 0
|
Marina Bay Sands, Singapore
| 5
|
De/Centralized 2018
Marina Bay Sands, Singapore
5th — 6th April
On April 5th and 6th, De/Centralized 2018 was held at the Marina Bay Sands Convention Center in Singapore. Singapore, as Asia’s fintech center, has positioned itself as a world-class jurisdiction for blockchain projects thanks to its prudent regulatory framework and excellent information and communication infrastructure.
De/Centralized, one of the most important blockchain conferences in Singapore, has attracted much attention. It is organised by a consortium of ZPX, blocks and XSQ, which covers everything from asset management to research, mining and token generation. The consortium aims to build a strong stakeholder community in the industry, focus on the development of the blockchain, and narrow the gap between North America and Asia.
The conference brought together global giants, entrepreneurs, regulators, venture capitalists and investment institutions from the blockchain industry to jointly develop the blockchain ecosystem. PopulStay is honored to have been invited. Other invitees include Airbloc, Fintech, Invictus, Krypto and other companies.
How will blockchain change the world? What are the opportunities for investors and entrepreneurs? How can we use technology to create a better world? De/Centralize aims to uncover answers to those questions.
Dr. Wang Yue, the founder of PopulStay, said that prior to the conference, PopulStay had entered into strategic cooperation agreements with influential groups in Japan such as Febow, Japanese food, Tourcandy, etc. Authentic Japanese dining experiences and stays at the Febow Residence B&B can also be settled using PopulStay’s token.
The starting point of “blockchain + homestay” was to optimize the customer experience and maximize the interests of homeowners. The concept was also highly appreciated by the guests at the roundtable forum. The combination of blockchain technology and the traditional bed and breakfast industry has turned the blockchain from a technology concept into a genuinely grounded application that helps the traditional homestay market.
We have the opportunity to unlock trust barriers between homeowners, tenants, and service providers through blockchain technology. It has the ability to reduce unpleasant frictions, make it easier and more efficient for both parties to communicate through smart contracts, and combine artificial intelligence technologies, Internet of Things, and platforms.
In the roundtable forum, some guests asked: What is the experience of using the B&B platform and the blockchain technology? Dr. Wang Yue responded as follows.
PopulStay for the host: bed and breakfast + management. That is, the community provides access to third-party paid property management services such as cleaning, blockchain smart door locks, interior design, and smart pricing. This achieves the lowest cost, as well as efficient and unattended inventory management. Every process, from inquiry, check-in and check-out to maintenance of a house, is executed by a smart contract.
For tenants: B&B + experience. The PopulStay community connects students with its crowdsourced skills program, which allows guests to enjoy local cultural experiences such as flower arrangement, photography, design, modeling, beauty and makeup, baking, and lectures.
The industry has reached a consensus on the good prospects of applying blockchain to the sharing economy. Let us look forward to what PopulStay will bring to its users.
|
De/Centralized 2018
| 0
|
de-centralized-2018-1a7711f857ad
|
2018-06-07
|
2018-06-07 11:17:13
|
https://medium.com/s/story/de-centralized-2018-1a7711f857ad
| false
| 518
|
The decentralized booking and autonomous property management platform for vacation rental and the home sharing economy. www.populstay.com
| null |
PopulStay
| null |
PopulStay
|
walter@populstay.com
|
populstay
|
BLOCKCHAIN,BLOCKCHAIN STARTUP,ARTIFICIAL INTELLIGENCE,SHARING ECONOMY,TECH STARTUPS
|
PopulStay
|
Blockchain
|
blockchain
|
Blockchain
| 265,164
|
PopulStay
|
The decentralized booking and autonomous property management platform for vacation rental and the home sharing economy.
|
5074d740440b
|
populstay
| 7
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
dc7104214618
|
2018-06-25
|
2018-06-25 17:45:34
|
2018-06-25
|
2018-06-25 18:04:23
| 1
| false
|
en
|
2018-06-25
|
2018-06-25 18:04:23
| 2
|
1a7797c9ba56
| 5.562264
| 5
| 0
| 0
|
Introduction:
| 5
|
Big Data Driven Networking Explained
Introduction:
Before understanding “Big Data Driven Networking”, let’s break down the title. You get two major terms: Big Data and Networking. What is Big Data? It is simply a term that describes large sets of data, structured or unstructured. It’s not the amount of data we’re concerned with, but what we do with it that matters, i.e. applying data science principles to it and drawing useful insights from it.
What is Networking? Networking, in the simplest terms, is the process of connecting two computers together. It’s like establishing a connection between two workstations to exchange information. Big Data and Networking are closely related and work hand in hand. This article is about how Big Data is utilized in networking. To give a brief outline, large amounts of data are first collected from various sources, then sorted/pre-processed and transported to data centers where data analysis takes place. We will also discuss 5G and its role in revolutionizing Big Data and networking.
5G and The Concept Of Communication, Caching and Computing:
To put it simply, 5G is a wireless network that enables faster communication. To understand this, imagine downloading a 1.25 GB movie in 10–20 seconds. With the arrival of 5G, we will experience a massive flow of data. To understand 5G better and its relation to Big Data, we have to understand the three basic phenomena that comprise it: communication, caching and computing.
The 5G wireless network will support multiple diversities in terms of communication. This is where the concept of small cell base stations arrives. Small cell base stations would be densely deployed to improve the quality of communication across 5G networks. Small cell base stations are generally used to accommodate large amounts of traffic.
Small cell base stations will be used to manage the large amounts of data flowing, but there is one problem we need to tackle: backhaul congestion. Backhaul links carry traffic from the base stations at the edge of the network back to the core network before it reaches its destination. With the massive amount of traffic generated by 5G, the backhaul will be impacted as well. That’s where the concept of caching arrives. A cache stores the most requested content as its first priority, reducing redundant transmissions to the remote server. For example, if 500 users are active at the moment and 300 of them are requesting the data “A”, then the cache stores “A” close to the users, so that subsequent requests for “A” are served locally, reducing redundancy.
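The caching idea above can be sketched as a toy popularity-based cache. The class and its eviction policy are illustrative only, not a real 5G mechanism:

```python
from collections import Counter

class EdgeCache:
    """Keeps the most-requested items close to users; repeat requests skip the backhaul."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = Counter()
        self.cached = set()

    def request(self, item):
        """Return 'hit' if served locally, 'miss' if fetched over the backhaul."""
        self.counts[item] += 1
        hit = item in self.cached
        # Re-cache the current top-`capacity` most requested items.
        self.cached = {k for k, _ in self.counts.most_common(self.capacity)}
        return "hit" if hit else "miss"

cache = EdgeCache(capacity=1)
results = [cache.request(x) for x in ["A", "A", "B", "A"]]
print(results)  # ['miss', 'hit', 'miss', 'hit']
```

Only the first fetch of the popular item “A” crosses the backhaul; later requests are served from the edge.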
Now that we’ve understood communication and caching, let’s discuss one more important concept: cloud computing. What is a cloud? No, it’s not the cloud that causes rain. It is a place to store your data remotely, and a backup option for when your main copy of the data might be lost or corrupted. Cloud computing is becoming a new trend in this day and age, and with the arrival of 5G it will completely change the dynamics of sending and retrieving information. Cloud computing will support various network-intensive applications such as augmented reality. So now you have a broad idea of how communication, caching and computing improve 5G.
5G wireless networks: The Bridge
Before we dive into the concept of what the bridge in networking is, let’s understand the concept of data center and data source. The data center is simply the entity that requests data from the data source (which has the requested data) and then the data source replies back to the data center with the requested data. 5G will be used as the medium for the communication between the data center and the data source.
The concept discussed above was just the tip of the iceberg. Let’s go a little deeper and explore how exactly the data center and data source communicate, and what processes and subprocesses are involved. The communication between data source and data center can be broken down into three processes: data acquisition, data preprocessing and data transportation.
Data acquisition is simply acquiring data from the data source. For example, automobiles, mobile phones, smart meters, drones and various other electronic devices constantly generate a truckload of data with the help of data gatherers such as sensors. We call this raw data because it is unsorted, unstructured and unorganized.
Data preprocessing is a method that helps to sort and organize data. Various sub-processes such as data compression and data aggregation are applied for sorting. This process occurs before the transmission of data to data centers. 5G wireless networks support data preprocessing with the help of the RAN (radio access network). After preprocessing, the transmission of data takes place.
Data transmission takes place once the data is preprocessed and ready. Data is first transmitted from the RAN to the core network, and then to the data center for analysis. Different datasets have different requirements when being transmitted from data source to data center. For example, healthcare data and smart metering data are two distinct data sets, so the requirements are different for each of them: healthcare data needs to be more secure and organized than smart metering data, which consists of routine readings.
Networking for Big Data:
To support efficient networking for big data, big data’s features (volume, velocity and variety) should be accommodated by 5G wireless networks. First, big data volume requires great network capacity, and that capacity can be made available by the use and re-use of spectrum, which boosts network capacity and enables 5G to exchange huge amounts of data. The velocity of Big Data refers to how rapidly the data is acquired, pre-processed and delivered; the higher the velocity, the better the communication. Lastly, variety refers to the different types of data sets. As described earlier, different data sets need to satisfy different requirements when deployed. So, you could say that the volume and velocity features mainly concern the efficient use of the network infrastructure, while the variety feature is more related to deployment methods.
In the Big Data processing chain, communication, computing, and storage resources are needed at different points along the journey from the data source to the data center. Here is the important point: the infrastructure of the 5G wireless network remains the same, but the Big Data will differ in terms of its variety. So, communication, computing, and storage resources have to be called upon at different points between the data source and the data center. It is like orchestrating all the available resources in the infrastructure to satisfy the data requirements.
One method for handling all the varieties of Big Data simultaneously is called network slicing. As the name suggests, we slice the network into pieces, and each piece handles a different function while utilizing the same resource pool. Easy, right? Now let’s look at this concept in a slightly more technical way, so you can really grasp it. Suppose two organizations, a power grid operator and a financial institution, request data exchange over the same wireless network. These two organizations’ data differ in terms of variety, so their network utilization will be completely different; how can the network handle both simultaneously? We divide the whole network infrastructure into small pieces and allot each piece a certain task to perform. That way, we can easily manage the two different data requests. Note that each slice of the network draws on the same pool of resources.
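A toy sketch of the slicing idea: one shared resource pool, with each named slice drawing the capacity its traffic class needs. The numbers and slice names are made up for illustration:

```python
class SlicedNetwork:
    """One physical network carved into slices that share a bandwidth pool."""

    def __init__(self, total_bandwidth):
        self.free = total_bandwidth
        self.slices = {}

    def create_slice(self, name, bandwidth):
        if bandwidth > self.free:
            raise ValueError("resource pool exhausted")
        self.free -= bandwidth
        self.slices[name] = bandwidth

net = SlicedNetwork(total_bandwidth=100)
net.create_slice("power-grid", 30)  # low-latency telemetry traffic
net.create_slice("finance", 50)     # secure transactional traffic
print(net.free)  # 20
```

Both slices run over the same infrastructure, but each is provisioned for its own variety of data.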
Conclusion:
In our journey through Big Data, we learnt how Big Data can utilize 5G networks and what basic factors enable this interaction. With Big Data and networking combined, there will be a revolution in the world of information exchange, and we will utilize resources efficiently to get the maximum output.
(This article was authored by Research Nest’s Technical Writer Zeeshan Mushtaq)
Clap and Share if you liked this one, and do follow “The Research Nest” for more insightful content.
|
Big Data Driven Networking Explained
| 75
|
big-data-driven-networking-explained-1a7797c9ba56
|
2018-06-25
|
2018-06-25 18:04:24
|
https://medium.com/s/story/big-data-driven-networking-explained-1a7797c9ba56
| false
| 1,421
|
We are The Research Nest. a multi-diverse team and a tele-research based R&D house backed by young engineers and visionaries.
| null |
theresearchnest
| null |
The Research Nest
|
the.research.nest@gmail.com
|
the-research-nest
|
TECHNOLOGY,SCIENCE,RESEARCH,MEDIA,COMPUTER SCIENCE
| null |
Big Data
|
big-data
|
Big Data
| 24,602
|
Team Research Nest
| null |
f2965b8e376e
|
the.research.nest
| 26
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
f1110abea2bb
|
2018-06-21
|
2018-06-21 09:22:49
|
2018-06-21
|
2018-06-21 09:24:56
| 1
| false
|
en
|
2018-06-21
|
2018-06-21 09:24:56
| 5
|
1a77e319fe11
| 1.596226
| 0
| 0
| 0
|
In the modern, globally-oriented workplace, it’s not unusual for employees to work under the same business umbrella with people from…
| 5
|
Embracing a Digital Workplace Evolution
In the modern, globally-oriented workplace, it’s not unusual for employees to work under the same business umbrella with people from different countries. The abundance of technologies takes collaboration to a whole new level: the level of productive relationships.
As a matter of fact, we are not even talking about the growing tendency towards remote work and the dispersed workforce — well, not anymore. Alex Shootman, CEO of Workfront, believes that this notion will have a different meaning over the coming years. Instead of working remotely, “people will go to work on a dynamic digital platform as a workplace.”
In the future, the question will not be about how to get the work done, managing remote teams. The focus will lie on the desired qualities that presuppose continuous training and reskilling, the ability to maintain a healthy work-life balance, rethinking waterfall careers in favor of talent mobility within a company, and redefining the roles of leadership. The goal will be to create a working environment that fosters collaboration, innovation, creativity, and lifelong development. The digital transformation of the workplace is but one way to reach this goal.
What should you know about the digital workplace?
To have a clear vision of what the digital workplace is and how it can help in reaching the above-mentioned goal, we will address this notion through its main characteristics. Hence, the digital workplace can be described in the following ways.
It is a natural evolution from a physical workplace to an interconnected one.
Until fairly recently, there was a clear separation between work and personal life. But with mobile devices and social media becoming ubiquitous, this line is getting ever more blurred.
The Gallup research states that “employees are pushing employers to forgo traditional structures since new and emerging technologies are transforming the type of work employees perform, as well as where and how work gets done.”
Nowadays, the workplace is far more than a space occupied by an employee during working hours. The workplace is all about the technologies people need to get work done starting from hardware equipment to cloud-based training platforms, HR software, and collaboration tools like Slack and Google Hangouts.
Read more
|
Embracing a Digital Workplace Evolution
| 0
|
embracing-a-digital-workplace-evolution-1a77e319fe11
|
2018-06-21
|
2018-06-21 09:24:57
|
https://medium.com/s/story/embracing-a-digital-workplace-evolution-1a77e319fe11
| false
| 370
|
Preparing organizations for the Future of Work at www.rallyware.com
| null |
rallyware
| null |
Rallyware
|
info@rallyware.com
|
rallyware
|
TALENT DEVELOPMENT,EMPLOYEE ENGAGEMENT,FUTURE OF WORK,DISTRIBUTED WORKFORCES,EMPLOYEE TRAINING
|
RallywareSF
|
Remote Working
|
remote-working
|
Remote Working
| 9,056
|
Rallyware
|
Intelligent workforce training
|
54d2c3dd3a9b
|
RallywareSF
| 7
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-12-10
|
2017-12-10 01:26:15
|
2017-12-10
|
2017-12-10 01:29:24
| 0
| false
|
en
|
2017-12-10
|
2017-12-10 01:29:24
| 0
|
1a79140d7d40
| 0.683019
| 0
| 0
| 0
|
Just like a human, a computer can learn from three sources.
| 1
|
Machine Learning
Just like a human, a computer can learn from three sources.
The first is observing what others did in similar situations.
The second is observing a situation and trying to come up with the best possible logic on the spot to decide or conclude.
The third is learning from previous mistakes and successes.
There are three categories of machine learning algorithms:
[Supervised Learning ] [Unsupervised Learning] [Reinforcement Learning]
Supervised Learning: you go in to bat in a cricket innings and don’t stop until you complete your target score, no matter who bowls at you.
It includes algorithms such as Linear Regression, Logistic Regression, Decision Trees, Random Forests, etc.
Unsupervised Learning: your rival has challenged you. After assessing your strengths and weaknesses, you decide whether to accept the challenge or back out.
It includes algorithms such as k-means, Apriori, etc.
Reinforcement Learning: you’ve accepted the challenge. The game has begun. Every hour, you assess your position in the field.
Are you losing more wickets? Is the opponent dominating? Accordingly, you decide whether to defend or attack until the last ball.
|
Machine Learning
| 0
|
machine-learning-1a79140d7d40
|
2018-06-08
|
2018-06-08 05:32:33
|
https://medium.com/s/story/machine-learning-1a79140d7d40
| false
| 181
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
shanmukha yadavalli
| null |
c32109c92ec5
|
yvssram
| 0
| 4
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-11
|
2018-09-11 09:20:33
|
2018-09-25
|
2018-09-25 11:37:57
| 14
| false
|
en
|
2018-10-08
|
2018-10-08 14:51:49
| 0
|
1a79b5e3081f
| 4.159434
| 3
| 0
| 0
|
Overfitting is a common problem that all Machine Learning Algorithms run into. This occurs when the model is fit too well to the training…
| 3
|
Regularization methods to prevent overfitting in Neural Networks
Overfitting is a common problem that all machine learning algorithms run into. It occurs when the model is fit too well to the training set but does not perform as well on the test set. Models can also run into the issue of underfitting, when we choose so simple a model that it performs poorly on both the training and test sets. Deep learning is especially prone to overfitting, since the model is able to adapt freely to large amounts of complex data. The most common ways to reduce overfitting are the regularization methods of early stopping, L1, L2, and dropout.
Early Stopping: stop training when the validation error reaches a plateau.
L1: shrinks less important coefficients to exactly zero, removing those features from the model entirely. (Lasso)
L2: shrinks coefficients close to zero, reducing the influence of less important features. (Ridge)
Drop Out: every neuron has a probability (defined by the user) of not being used during a round of training, but may be used again in the next round. This forces the neurons that remain in training to increase their learning capacity.
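To make the difference between L1 and L2 concrete, here is an illustrative NumPy sketch (my own example, not the author's model): one proximal update step showing that an L1 penalty pushes small weights exactly to zero, while an L2 penalty only scales all weights toward zero. The weight vector and regularization strength are invented for illustration.

```python
import numpy as np

w = np.array([0.05, -0.02, 0.8, -1.5])  # hypothetical learned weights
lam = 0.1                               # hypothetical regularization strength

# L1 (Lasso): soft-thresholding removes small coefficients entirely.
w_l1 = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

# L2 (Ridge): multiplicative shrinkage reduces every coefficient,
# but never makes one exactly zero.
w_l2 = w / (1.0 + lam)

print(w_l1)  # the two small coefficients become exactly 0
print(w_l2)  # all coefficients shrink, none vanish
```

This is why L1 acts as a feature selector (features drop out of the model) while L2 merely de-emphasizes less important features.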
To demonstrate these methods, I am using the Wine Review data set from Kaggle. My goal is to find the category of a wine from its review text. I started by cleaning the data and converting the wines into 19 different categories, ranging from dry red to Champagne, and took a random sample of 10,000 wine reviews to predict the category of the wine.
Early Stopping
The first regularization method I am going to try is early stopping, which means stopping the model as soon as the validation error reaches a minimum, so the model stops accumulating overfitting over time. In order to find this minimum, the model needs to run a full cycle first. I ran the model for 200 epochs with a batch size of 32.
As you can see, the model is overfitting: training accuracy is 99% while the test results are around 61%. After 75 epochs the training accuracy still continues to grow while the validation accuracy plateaus. Rerunning the model to stop at 75 epochs resulted in a test accuracy of about 60%.
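The early-stopping rule described above can be sketched in a few lines of plain Python (the validation losses here are made up, not from the wine-review model): track the best validation loss seen so far and stop once it fails to improve for `patience` consecutive epochs, keeping the weights from the best epoch.

```python
# Minimal early-stopping sketch over a list of per-epoch validation losses.
def early_stop(val_losses, patience=3):
    best, best_epoch, waited = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, waited = loss, epoch, 0  # new best: reset patience
        else:
            waited += 1                                # no improvement this epoch
            if waited >= patience:
                break                                  # plateau reached: stop
    return best_epoch  # epoch whose weights we would keep

losses = [1.0, 0.8, 0.7, 0.65, 0.66, 0.67, 0.68, 0.69]
print(early_stop(losses))  # → 3 (validation loss plateaus after epoch 3)
```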
L1
Using L1 to remove the less important features caused the model to underperform on both the training and test sets: training and test accuracy are within about a 4% spread, but both results are below 65%. This suggests that some of the features removed from the model do carry importance for predicting the wine category from a review. Still, this has been the best model yet, since the training set is not overfitting and the test set achieves similar results.
L2
Running the model with the coefficients of less important features shrunk close to zero, the training model continued to overfit the data. Test accuracy remained close to 60%, similar to the previous models, so this did not improve our accuracy overall.
Drop Out
With a dropout probability of 30% (each neuron having a 30% chance of being dropped during a training round), our model’s test accuracy increased to 64%. The training model is still overfitting, since its accuracy is at 100%, but we were able to increase test accuracy. One thing to note about dropout: when running the model on the test set, all neurons are used, since the model wants to keep every neuron it trained. Increasing the dropout rate from the 0.3 model shown to 50% increased test accuracy by another 1%.
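The train/test asymmetry noted above (neurons dropped during training, all neurons used at test time) is commonly handled with "inverted" dropout. Here is a generic NumPy sketch of that technique (my own illustration, not the author's network): surviving activations are scaled by 1/keep_prob during training, so at test time the layer can be used unchanged.

```python
import numpy as np

def dropout(activations, drop_prob=0.3, rng=None):
    """Inverted dropout: zero each activation with probability drop_prob,
    scale survivors by 1/keep_prob so the expected output is unchanged."""
    rng = rng or np.random.default_rng(0)
    keep_prob = 1.0 - drop_prob
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

a = np.ones(10)
out = dropout(a)        # training-time pass: entries are 0.0 or 1/0.7
test_out = a            # test-time pass: no mask, no scaling needed
print(out)
```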
Conclusion
The best of the model results came from L1 regularization, which dropped the features with no significant impact. My model was able to predict the type of wine from the review with 61% test accuracy and a training accuracy of 65%. Although the dropout model achieved an accuracy of 65%, its training accuracy was at 99%, showing that it was severely overfitting to the training set. Overall these are very good results, given that there is only about a 5% probability of randomly guessing the type of wine from a review.
|
Regularization methods to prevent overfitting in Neural Networks
| 20
|
regularization-methods-to-prevent-overfitting-in-neural-networks-1a79b5e3081f
|
2018-10-08
|
2018-10-08 14:51:49
|
https://medium.com/s/story/regularization-methods-to-prevent-overfitting-in-neural-networks-1a79b5e3081f
| false
| 718
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Jennifer Arty
|
Data Scientist and Machine Learning Engineer
|
2dba6326e7b5
|
jennifer.arty
| 7
| 9
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-04
|
2018-08-04 16:08:46
|
2018-08-05
|
2018-08-05 06:37:21
| 5
| false
|
en
|
2018-08-05
|
2018-08-05 06:40:57
| 1
|
1a7a9540d866
| 2.920126
| 0
| 0
| 0
|
Now let’s get one thing straight: Machine Learning and Deep Learning are not the same, and many use them interchangeably. Thing is deep…
| 1
|
WTF is Machine Learning
Now let’s get one thing straight: machine learning and deep learning are not the same, though many use the terms interchangeably. Deep learning is a subfield of machine learning involving many computational layers, whereas machine learning is a vast subject containing algorithms for data classification and prediction, such as regression problems like predicting stock prices over time or from other factors.
Let us consider a few examples of primitive methods of predicting and classifying in order to get a better understanding of what exactly learning means with perspective to algorithms.
This is what a regression on a piece of data would look like. Our job as analysts is to create a line that "fits" all the data points, meaning a line that approximately summarizes the patterns in the data. If we treat this as stock-price data, we can see that as time (x-axis) increases, the price (y-axis) also increases. This is an extremely basic form of learning, where the only algorithm used is the formula for a line.
It’s quite fascinating how often you see this formula, in many forms, throughout this field.
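Fitting such a line can be done in one call with NumPy's least-squares polynomial fit. The "price over time" numbers below are invented for illustration, not real stock data:

```python
import numpy as np

# Made-up data roughly following y = 2x + 1 with a little noise.
x = np.array([0, 1, 2, 3, 4], dtype=float)
y = np.array([1.0, 3.1, 4.9, 7.2, 9.0])

# Least-squares fit of a degree-1 polynomial: y ≈ m*x + b.
m, b = np.polyfit(x, y, deg=1)
print(round(m, 2), round(b, 2))  # slope and intercept close to 2 and 1
```

The fitted slope and intercept are exactly the "formula for a line" the text mentions, recovered from data instead of chosen by hand.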
There are also other algorithms, such as k-nearest neighbours or support vector machines, which are fancy names for classifiers: algorithms that can classify or differentiate data in various dimensions. Let us take a simple example where we try to differentiate breeds of dogs given their height and other features, and then plot our data.
Now let us examine what is going on. The two axes here are features, such as height and weight. The black dots may represent a chihuahua and the red a german shepherd. Unlike regression, the job is to create a line that best divides these two classes of dogs; for multi-dimensional data this is called a hyperplane. As we go on, problems arise such as optimization and loss, which involve finding the best possible line that would separate these classes.
But there is a possibility that our data is non-linear, meaning it can’t be separated by a line, at least not in 2 dimensions. For this we use kernels, which in short reconstruct our data in higher dimensions. Note that beyond the jargon, at their core these algorithms and problems help us in our task of classifying various data.
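The dog-classification idea above can be shown with a toy k-nearest-neighbour classifier in plain Python (the data points are invented for illustration): to label a new dog, look at the k closest training dogs in (height, weight) space and take a majority vote.

```python
def knn_predict(train, query, k=3):
    """train: list of ((height, weight), label); query: (height, weight)."""
    dist = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    # Sort training points by squared distance to the query, keep the k nearest.
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)  # majority vote

dogs = [((20, 2), "chihuahua"), ((22, 3), "chihuahua"), ((18, 2), "chihuahua"),
        ((60, 30), "german shepherd"), ((65, 35), "german shepherd"),
        ((58, 28), "german shepherd")]

print(knn_predict(dogs, (21, 2.5)))  # → chihuahua
print(knn_predict(dogs, (62, 31)))   # → german shepherd
```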
Our goal here is to understand these concepts at an intuitive level, so this post does not cover the technical aspects of machine learning or the bare mathematics behind these algorithms; it would take books to go over each concept. For a deeper understanding of these topics, as well as the code, you may refer to this source.
Visit https://pythonprogramming.net/support-vector-machine-intro-machine-learning-tutorial/
Moving on, let us reflect on what we have learnt so far. We have seen that machine learning comprises solving two problems, prediction and classification (and generation in some cases), through an overview of simple yet fascinating methods. Understand that we have only scratched the surface of what’s there, so delve deeper; this article is meant to ignite thought about what machine learning really is. My later posts will explain neural networks and the inner workings of the state-of-the-art technologies of our time.
|
WTF is Machine Learning
| 0
|
wtf-is-machine-learning-1a7a9540d866
|
2018-08-10
|
2018-08-10 11:07:27
|
https://medium.com/s/story/wtf-is-machine-learning-1a7a9540d866
| false
| 553
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
savage_homosapien
|
lets make it simple i like memes and more importantly i write about things you wouldn't give a shit about and make them interessting
|
fe5579d5d13
|
abhi.zurich
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
1f35b6f451e8
|
2018-08-13
|
2018-08-13 10:23:51
|
2018-08-14
|
2018-08-14 12:20:57
| 0
| false
|
en
|
2018-08-14
|
2018-08-14 12:20:57
| 2
|
1a7ac6a6851e
| 0.279245
| 1
| 0
| 1
| null | 5
|
Webography for 4 dummies to make it in machine learning — Chapter 25, Scene 1
Using Excel CUBE Functions with PowerPivot - PowerPivotPro
Arriving Here from a Search Engine or via Excel Help? This article below by Dick Moffat, as well as the one by Dany…powerpivotpro.com
How to debug a Flask app
How are you meant to debug errors in Flask? Print to the console? Flash messages to the page? Or is there a more…stackoverflow.com
|
Webography for 4 dummies to make it in machine learning — Chapter 25, Scene 1
| 1
|
webography-for-4-dummies-to-make-it-in-machine-learning-chapter-25-scene-1-1a7ac6a6851e
|
2018-08-14
|
2018-08-14 12:20:57
|
https://medium.com/s/story/webography-for-4-dummies-to-make-it-in-machine-learning-chapter-25-scene-1-1a7ac6a6851e
| false
| 74
|
We offer contract management to address your aquisition needs: structuring, negotiating and executing simple agreements for future equity transactions. Because startups willing to impact the world should have access to the best ressources to handle their transactions fast & SAFE.
| null |
ethercourt
| null |
Ethercourt Machine Learning
|
adoucoure@dr.com
|
ethercourt
|
INNOVATION,JUSTICE,PARTNERSHIPS,BLOCKCHAIN,DEEP LEARNING
|
ethercourt
|
Python
|
python
|
Python
| 20,142
|
WELTARE Strategies
|
WELTARE Strategies is a #startup studio raising #seed $ for #sustainability | #intrapreneurship as culture, #integrity as value, @neohack22 as Managing Partner
|
9fad63202573
|
WELTAREStrategies
| 196
| 209
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
b6f8e4cf6622
|
2018-09-05
|
2018-09-05 02:13:15
|
2018-09-05
|
2018-09-05 11:42:55
| 1
| false
|
en
|
2018-09-05
|
2018-09-05 11:58:25
| 0
|
1a7c8c0280f2
| 3.867925
| 1
| 0
| 0
|
It is rightly said that AI is a word that is found more in the air around us than at the desk. It has become more as a part of the…
| 5
|
Artificial Intelligence: Neither a fantasy of Techies nor money plant of Capitalist but rather an unconscious need of common (wo)man of the future generation
Steve Wozniak with PR2 robot
It is rightly said that AI is a word found more in the air around us than at the desk. It has become more a part of conversation than of desk research by common people.
AI’s origin was rooted in the view that a machine can simulate reasoning with the help of binary 0s and 1s. Alan Turing gave this view and is called the father of AI. AI was prematurely born in the 1950s and became the common man’s Frankenstein monster in the last 10 years. It is mainly associated with the loss of jobs and seen as a competitor for them.
But why does the common man need AI at all?
To understand this, we need to delve back into history and understand how individuals found meaning in their lives within the societal structures of the past.
When human beings became conscious of their existence, we felt that periodic positive forces of nature like rain and sunlight were more powerful than us and that we were at their mercy to survive. We also encountered the negative forces of earthquakes, volcanism, and storms. Since human psychology is designed so that we surrender to those more powerful than us, we started pleasing nature by making these forces gods (superficial entities upon whom our existence depends); later the definition of god changed to a superficial entity that created us. And since it is difficult for humans to imagine rain itself as a god, every mythology created human-like figures to represent the gods of natural forces and emotions.
From the societal perspective, this became very convenient: with the help of mythology, every person understood a narrow perspective of their existence on the planet and was mentally satisfied with it. There was little class or caste mobility, so people learned about the work they would pursue from socialization with family and village.
Nowadays, in the age of science, any understanding justified solely on the basis of mythology or religious texts is criticized. Rational explanations are encouraged. Science has shown us that reality is very complex and that an individual can only understand a part of the whole.
People contribute to the complete political, economic, and social system in many ways: jobs in the economy, volunteering in society, and raising their voice in political activity. The jobs people take in their careers are very specialized, and they will have incomplete knowledge about specialized roles they are not in. So if you are a metallurgical engineer, you will have very little knowledge about a dental problem you have, its consequences, and which doctor would be best for you. This creates instability and a lack of decision-making in people’s lives; at present we solve it by accepting a story that Doctor X is in general better than Doctor Y, so (s)he will be better for us too.
Machine learning by companies like Amazon (rating its products) or Practo in India (rating its doctors) resolves our objective dilemma of selecting, say, the best earphones or dentist. But often the problem is a subjective dilemma, and it can only be resolved by someone who has dedicated their life to understanding you and your circumstances: here comes the role of a personalized AI for everyone. Such an AI will help a person understand nihilism when someone explains a version of it to them, showing all the viewpoints so that no stereotype is created, or help in choosing the eyeglasses best for them (presently, say, the Lenskart AI only shows suggestions based on your browsing history rather than a complete understanding of you). Since it understands both the person and the eyeglasses (or the nihilism) better, no one would be a better guide than AI. And no, this is not just inspired by the Terasem Movement.
At present, despite living in a high time of economic stability, low warfare, and comforting technologies, people’s happiness is low. This is especially so among people who better understand the complexities of nature’s creations (weather, mountains, the Sun, the planets) and humanity’s creations (the economic and political systems). Earlier, people believed in God and accepted what they had, and the conditions they were treated with, as acts of God. This made people’s lives mentally stable. In the future, with the scientific temper reaching the bottom of society, people will become more atheist and think more rationally, and so AI can bring a happiness to people’s lives similar to ancient times, without the need to attach to a false story of God from a church or of enlightenment from the Sadguru of the Isha Foundation. The major reason for people’s unhappiness is that they cannot understand reality, since it is very complex, while in ancient times, thanks to myths and religious literature, it seemed to them that they understood it, along with all the systems created for their governance. But actually they hadn’t. It was just like Plato’s Allegory of the Cave. (Read the note at the end of the article.)
So AI, which at present looks like a business profiteering tool of capitalists and a religion of intelligent aliens, will solve a problem that would otherwise have become the biggest emotional instability for humanity as a community.
Note: Steven Pinker, in his book “Enlightenment Now,” tried to address why people think life in the past was better, but I think he underrepresented the role that belief in myths plays in the happiness of an individual’s life. Although I do agree with his view that the media presents only the ill effects of modernity.
|
Artificial Intelligence: Neither a fantasy of Techies nor money plant of Capitalist but rather an…
| 1
|
artificial-intelligence-neither-a-fantasy-of-techies-nor-money-plant-of-capitalist-but-rather-an-1a7c8c0280f2
|
2018-09-05
|
2018-09-05 11:58:25
|
https://medium.com/s/story/artificial-intelligence-neither-a-fantasy-of-techies-nor-money-plant-of-capitalist-but-rather-an-1a7c8c0280f2
| false
| 972
|
We present here all the questions, thoughts that have remained unanalyzed over time by scholars, media persons. This will help us better understand who we are and our place in the vast cosmos. We believe that truth should come to people at all cost.
| null | null | null |
Deepest questions of humanity
| null |
deepest-questions-of-humanity
| null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Dhrub
|
IIT(BHU) Varanasi Batch of 2016, Proficiency in Anthropology,Political Science, Economics, Artificial Intelligence, Natural Science
|
7502ab839c19
|
dhrub0216
| 27
| 178
| 20,181,104
| null | null | null | null | null | null |
0
|
stddev = 1e-1
labels = [0, 1]
b = [1, 2]
labels_ph = tf.placeholder(shape=(2), dtype=tf.int32)
b_ph = tf.placeholder(shape=(2), dtype=tf.float32)
c = tf.Variable(tf.truncated_normal(shape=[2], stddev=stddev))
k = tf.Variable(tf.truncated_normal(shape=[2], stddev=stddev))
d = tf.add(b_ph, c)
e = tf.add(c, k)
a = tf.multiply(d, e)
loss = tf.losses.softmax_cross_entropy(logits=a, onehot_labels=labels_ph)
opt = tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=0.1)

# The op whose gradient Tensorflow cannot resolve:
e = tf.add(c, k)

def add_grad(c, k, d, graph, name=None):
    with tf.name_scope(name, "AddGrad", [c, k, d]) as name:
        return py_func(forward_func,
                       [c, k, d],
                       [np.float32],
                       graph,
                       name=name,
                       grad=backprop_func)

def py_func(func, inp, Tout, graph, stateful=True, name=None, grad=None):
    # Need to generate a unique name to avoid duplicates
    # if you have more than one custom gradient:
    rnd_name = 'PyFuncGrad' + str(np.random.randint(0, 1E+8))
    tf.RegisterGradient(rnd_name)(grad)
    with graph.gradient_override_map({"PyFunc": rnd_name}):
        return tf.py_func(func, inp, Tout, stateful=stateful, name=name)

def forward_func(c, k, d):
    # Forward pass must use plain Python/NumPy operations, not Tensorflow ops.
    e = np.add(c, k)
    return e.astype(np.float32)

def backprop_func(op, grad):
    c = op.inputs[0]
    k = op.inputs[1]
    d = op.inputs[2]
    e = tf.add(c, k)
    # Partial derivatives w.r.t. each input (c, k, d):
    return (e + d) * grad, d * grad, e * grad

grads = opt.compute_gradients(loss, tf.trainable_variables())
grads = list(grads)
train_op = opt.apply_gradients(grads_and_vars=grads)

# Replace
e = tf.add(c, k)
# with (passing the graph so py_func can override the gradient):
e = add_grad(c, k, d, tf.get_default_graph())
| 15
|
32881626c9c9
|
2018-02-03
|
2018-02-03 20:51:50
|
2018-02-05
|
2018-02-05 19:57:45
| 0
| false
|
en
|
2018-10-03
|
2018-10-03 20:17:45
| 5
|
1a7e2bb3f516
| 3.343396
| 10
| 3
| 0
|
Tensorflow is a great tool that works with deep learning. There are a lot of operations that you easily can implement and make good model…
| 5
|
Override Tensorflow Backward-Propagation
Tensorflow is a great tool for deep learning. There are a lot of operations you can easily implement to build a good model that solves the problem at hand. But sometimes a model has its own specific computations.
I faced one of them. When you create a neural network architecture, you may find that some operations do not flow through backward propagation. On a piece of paper you can compute the gradient and derive the formulas involved in backward propagation, but Tensorflow, due to the graph’s complexity, cannot resolve the gradient, and as a consequence you cannot train the neural network.
As I said, Tensorflow is a great tool, and it offers the ability to override a gradient and write your own custom gradient that can flow in backward propagation. But how do you do it and make it really work?
Firstly, I will explain it using a simple network, just to understand the process. Then I will show a real case where I had to override the gradient.
Simple network:
Imagine that this part cannot be calculated in backpropagation — Tensorflow returns None for such gradients:
It means that it’s not so obvious to Tensorflow how to compute the gradient in backpropagation. But you know this is a differentiable model, and on a piece of paper you can derive the formula. So what you need is just to tell Tensorflow how to do it.
tf.RegisterGradient allows you to override the gradient: for this operation, create a method where you set up the custom gradient.
In add_grad you return the function that overrides gradient. The py_func looks like this:
As you see, Tensorflow has its own wrapper for Python functions — tf.py_func. But at the same time it means that you have to implement forward propagation without Tensorflow operations, because if you use Tensorflow operations in the function that you afterwards pass to tf.py_func, it’s going to fail.
I couldn’t find a way to implement forward propagation using Tensorflow operations, but I think there should be one, for better performance. Please share it with me if you know.
2. Forward and Backprop methods:
a) What should be in forward pass (forward_func):
In forward_func you have to implement forward propagation using python operations.
As you see, the d variable is redundant in this method, but you cannot avoid using it, because it will be needed in backprop. It means that forward and backprop are coupled — they share variables. You can notice that in py_func you declare all the variables needed in both methods.
b) What you want to get in backpropagation (backprop_func):
Finally, in backprop_func we can implement the custom gradient. The custom gradient is not passed to any wrapper, so you have to write it with Tensorflow operations.
Notice that we have to return a gradient for each variable passed to the function — these are the partial derivatives. I do not show how to calculate the gradient for each variable; please do it by yourself.
op contains all the passed variables; grad is the gradient flowing in from backpropagation.
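The graph in the snippets computes d = b + c, e = c + k, a = d * e, and the custom backprop returns (e + d) * grad for c, since c feeds both d and e. As my own sanity-check sketch (not part of the original post), the hand-derived partial derivative da/dc = e + d can be verified numerically with scalar NumPy math:

```python
import numpy as np

def forward(b, c, k):
    # Same structure as the article's graph: d = b + c, e = c + k, a = d * e.
    d = b + c
    e = c + k
    return d * e

b, c, k = 1.0, 0.5, 2.0
analytic = (c + k) + (b + c)  # da/dc = e + d, derived by the chain rule

eps = 1e-6  # central finite difference w.r.t. c
numeric = (forward(b, c + eps, k) - forward(b, c - eps, k)) / (2 * eps)
print(analytic, round(numeric, 6))  # both ≈ 4.0
```

This is exactly the paper-and-pencil check the article recommends before trusting a custom gradient.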
3. Then explicitly call compute gradients and apply them when you initialize graph:
And replace
with
The work is done. But to be sure the computations are error-free, you can first calculate on a piece of paper and then run the code. For this example we can call tf.add instead of add_grad and remember the values, then call add_grad and compare the outputs. I got the same result.
Real problem
In the paper “Clothing Retrieval with Visual Attention Model” they describe an attention network that generates a Bernoulli series to be multiplied with another feature map. Unfortunately, Bernoulli sampling is not differentiable, hence backward propagation will not flow.
Following the above points, I implemented forward propagation that generates a Bernoulli series with the given shape, and in the backprop function I implemented the custom gradient — just a multiplication between the intermediate layer (the other feature map) and the incoming gradient.
There is also tf.stop_gradient, which can be helpful in some cases. It stops the gradient from flowing past the given operation; it doesn’t prevent backward propagation altogether.
P.S. I found the solution by surfing Stackoverflow; it was put together by collecting almost all the answers on this topic.
|
Override Tensorflow Backward-Propagation
| 32
|
override-tensorflow-backward-propagation-1a7e2bb3f516
|
2018-10-19
|
2018-10-19 12:44:36
|
https://medium.com/s/story/override-tensorflow-backward-propagation-1a7e2bb3f516
| false
| 886
|
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
| null |
datadriveninvestor
| null |
Data Driven Investor
|
info@datadriveninvestor.com
|
datadriveninvestor
|
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
|
dd_invest
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Sirena
| null |
e38407c41d67
|
SirenaFiriuza
| 2
| 2
| 20,181,104
| null | null | null | null | null | null |
0
|
./runStandaloneSystemML.sh test_data/test.dml -nvargs M=test_data/test_csv.csv
spark-submit SystemML.jar -f test_data/test.dml -nvargs M=test_data/test_csv.csv
hadoop jar SystemML.jar -f test_data/test.dml -nvargs M=test_data/test_csv.csv
pyspark --executor-memory 4G --driver-memory 4G --jars SystemML.jar --driver-class-path SystemML.jar
| 4
| null |
2018-02-15
|
2018-02-15 18:00:55
|
2018-02-18
|
2018-02-18 17:09:41
| 7
| false
|
en
|
2018-02-18
|
2018-02-18 17:13:18
| 13
|
1a7e3067418d
| 4.706604
| 5
| 0
| 0
|
“SystemML provides declarative large-scale machine learning (ML) that aims at flexible specification of ML algorithms and automatic…
| 4
|
Apache SystemML Quick Start Guide
“SystemML provides declarative large-scale machine learning (ML) that aims at flexible specification of ML algorithms and automatic generation of hybrid runtime plans ranging from single-node, in-memory computations, to distributed computations on Apache Hadoop and Apache Spark.”
Introduction to SystemML
The normal flow of developing a machine learning algorithm is this: a data scientist develops the algorithm in R or Python, runs it on a personal computer with a data set, and modifies the algorithm according to the result. This flow is suitable for designing new algorithms on small data sets that can be handled on one node.
Normal Flow of Developing Algorithm
But when the data becomes so large that it cannot fit into a single computer, the data scientist has to move to a larger distributed system. In this case the algorithm the data scientist developed needs to be rewritten for a distributed system, using a language like Scala, and then run on the Apache Spark platform.
Develop Algorithm for Distributed Platform
If any modifications to the algorithm are needed, this cycle is repeated. The process takes a lot of time, perhaps days or weeks, and there is a lot of room for error when converting an algorithm from one language to the other. Further, when an error is found, it is difficult to decide whether it is in the original algorithm or the converted one.
SystemML tries to address those issues by removing the intermediate job of the systems programmer and automating it. SystemML compiles scripts written in the Declarative Machine Learning (DML) language into mixed driver and distributed jobs. DML’s syntax closely follows R, thereby minimizing the learning curve for SystemML.
Testing Using Apache SystemML
SystemML can be run on top of Apache Spark, where it automatically scales your data, determining whether your code should be run on the driver or an Apache Spark cluster.
If you need to move to Apache Hadoop instead of Spark, that can also be done without any code changes when you use Apache SystemML.
Sample Declarative Machine Learning(DML) Code
DML is a high-level language that can be used to write machine learning algorithms. It provides functionality similar to frameworks like Keras or Caffe, which also let you develop machine learning algorithms, but the advantage of DML is that SystemML can run DML algorithms directly on distributed platforms.
This code snippet reads an input matrix from a CSV file and prints a portion of it: it takes the input CSV file name, reads the matrix, and then, given the output dimensions, prints those values in a loop.
For more details about DML language refer this link http://apache.github.io/systemml/dml-language-reference.html
Next Let’s see how to run this DML Code.
Running SystemML
To use SystemML you have to download the required version from this link. Download the zip file and extract it to any directory you prefer.
For the examples in this section I am using the code previously mentioned. Create a directory called “test_data” in the extracted SystemML directory and save the above code as “test.dml”. For testing, create a sample CSV file along with a metadata file; the metadata file is required for DML to read the CSV file.
Sample CSV file and meta data file.
There are several methods to run SystemML.
As a Standalone job
The first method is running SystemML on a single node. This is the same as writing your code in Python or R and running it on your PC: the DML code simply runs locally.
The command is as below:
The runStandaloneSystemML.sh script is available in the downloaded SystemML zip file. To run in standalone mode, just execute it with the required arguments. The -nvargs section specifies variables, with the names used in the DML file; here it passes the input CSV file under the name ‘M’.
Output of DML Script
2. As Spark Batch job
SystemML can be run on an Apache Spark cluster. For this section you need Spark installed and configured, with the SPARK_HOME variable set. For details on Apache Spark, visit this site.
This will launch SystemML on a Spark cluster and run the test.dml file. For more details about submitting applications to Spark, visit https://spark.apache.org/docs/latest/submitting-applications.html.
3. As Hadoop Batch job
As well as on a Spark cluster, SystemML can be executed on Hadoop. To run on Hadoop, install it from this site and configure it.
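Assuming Hadoop is on the PATH and SystemML.jar sits in the current directory (both assumptions on my part), the batch invocation could look like:

```
hadoop jar SystemML.jar -f test_data/test.dml -nvargs M=test_data/M.csv rows=4 cols=4
```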
4. From Python or Scala using the Spark MLContext API
The Spark MLContext API is a way to run SystemML on Apache Spark from Scala or Python. For this section you also need Apache Spark installed and configured.
The Spark MLContext API can be used from Python, the PySpark shell, Scala, and the Spark shell. To use it directly from Python, install systemml with pip and then call its functions. For the PySpark and Spark shells, you can point to the SystemML.jar file on the command line.
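A minimal Python sketch, assuming `pip install systemml` and an existing SparkContext `sc`; this will not run without a configured Spark installation, and the DML string is just a placeholder:

```python
from systemml import MLContext, dml

ml = MLContext(sc)                                 # wrap the existing SparkContext
script = dml("print('Hello, SystemML on Spark')")  # any DML source string works here
ml.execute(script)
```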
PySpark shell starts with SystemML
For more details about accessing MLContext API in python or Spark visit this http://apache.github.io/systemml/spark-mlcontext-programming-guide
5. From Java using the Java Machine Learning Connector (JMLC)
The Java Machine Learning Connector (JMLC) API is a programmatic interface for interacting with SystemML in an embedded fashion from Java.
Create a new Java project and add all the jars in the downloaded SystemML distribution to the classpath of the project.
Here we create a Connection object to connect with SystemML, then prepare a DML script by calling the conn.prepareScript method, and finally execute that script.
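In outline, and assuming the SystemML 1.x package layout (org.apache.sysml), the Java side might look like this sketch; it needs the SystemML jars on the classpath to compile and run:

```java
import org.apache.sysml.api.jmlc.Connection;
import org.apache.sysml.api.jmlc.PreparedScript;

public class JMLCExample {
    public static void main(String[] args) throws Exception {
        Connection conn = new Connection();  // embedded SystemML session
        try {
            String dml = "print('Hello from JMLC');";
            // the second and third arguments register input and output variable names
            PreparedScript script = conn.prepareScript(dml, new String[] {}, new String[] {});
            script.executeScript();
        } finally {
            conn.close();
        }
    }
}
```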
For more details of JMLC visit http://apache.github.io/systemml/jmlc
Developer Guide for SystemML
References
https://medium.com/@apachesystemml/what-is-systemml-why-is-it-relevant-to-you-d40c4ecd4116
http://apache.github.io/systemml/
https://www.youtube.com/watch?v=5Y2k1aPqW6g
https://www.youtube.com/watch?v=n3JJP6UbH6Q
https://www.slideshare.net/ArvindSurve1/apache-systemml-architecture-by-niketan-panesar-65987753
Apache SystemML Quick Start Guide, by Chamath Abeysinghe (undergraduate at the University of Moratuwa, Department of Computer Science and Engineering), last updated 2018-06-03, tagged Apache Spark: https://medium.com/s/story/apache-systemml-quick-start-guide-1a7e3067418d
The following story was written 2018-01-31 and first published 2018-02-12.
Using Deep Learning Processors For Intelligent IoT Devices
We’re now in a rapid innovation cycle for AI, DL, and ML. How will these influence IoT?
The Growing Demand For Deep Learning Processors
In the past few years, the Artificial Intelligence field has entered a high growth phase, driven largely by advancements in Machine Learning methodologies like Deep Learning (DL) and Reinforcement Learning (RL). Combinations of those techniques demonstrate unprecedented performance in solving a wide range of problems, from playing Go at super-human level to diagnosing cancer like a specialist.
In our previous blogs, Intelligent IoT and Fog Computing Trends and The Rise of Ubiquitous Computer Vision In IoT, we talked about some interesting use cases of DL in IoT. The applications will be both broad and deep. They are going to fuel the demand for new breeds of processors in coming decades.
Deep Learning Workflow Overview
DL/RL innovations are happening at an astonishing pace (thousands of papers with new algorithms are presented in numerous AI related conferences every year). Though it is premature to predict the final winning solutions, hardware companies are racing to build processors, tools, and frameworks. They are trying to identify pain points and bottlenecks in DL workflows (Fig. 1), leveraging years of experience of researchers.
Deep Learning Workflow
Fig. 1: Basic Deep Learning Workflow
Platforms For Training DL Models
Let’s start with training platforms. Graphical Processing Units (GPU) based systems are usually the choice for training advanced DL models. Nvidia has long realized the advantages of using GPU for general purpose high performance computing.
A GPU has hundreds of compute cores that support a large number of hardware threads and high-throughput floating point computations. Nvidia developed the Compute Unified Device Architecture (CUDA) programming framework to make GPUs friendly for scientists and machine learning experts to use.
The CUDA toolchain has improved over time, providing researchers a flexible and friendly way to realize highly complex algorithms. A few years ago, Nvidia aptly identified the DL opportunity and persistently developed CUDA support for most DL operations. Standard frameworks like Caffe, Torch, and TensorFlow all support CUDA.
In cloud services like AWS, developers have a choice between using CPUs or GPUs (more specifically, Nvidia GPUs). Platform choice depends on the complexity of the neural networks, budget, and time. GPU-based systems can usually cut training time severalfold over CPUs but are more expensive (Fig. 2).
Fig. 2: AWS EC2 GPU Instances
Alternatives to GPU / CPU
Alternatives are coming. Khronos proposed OpenCL in 2009, an open standard for parallel computing on a wide range of hardware such as CPUs, GPUs, DSPs, and FPGAs. It will enable other processors, like AMD GPUs, to enter the DL training market, giving developers more choices.
However, it is still behind CUDA in DL library support. Hopefully, that situation will improve in the next few years. Intel is also developing processors customized for DL training through its Nervana acquisition.
Competitive Landscape of DL Inference
DL inference is a very competitive market. Applications can be deployed at multiple levels, usually depending on the requirements of the use cases:
Cloud / Enterprise: Image classifications, Cybersecurity, Text Analytics, NLP, etc.
Smart Gateways: Biometrics, Speech Recognition, Smart Agent, etc.
Edge endpoints: Mobile devices, Smart cameras, etc.
Cloud Inference
The cloud inference market will see tremendous growth, with a strong push from internet giants like Google, Facebook, Baidu, and Alibaba. For example, Google Cloud and Microsoft Azure offer very strong image classification, natural language processing, and face recognition APIs that developers can easily integrate into their cloud applications.
Cloud inference platforms will need to support millions of simultaneous users reliably, so the ability to scale throughput is critical. Besides, cutting down energy consumption is another top priority in order to control the operating cost of these services.
In the cloud inference space, in addition to GPUs, data centers are using FPGAs or customized processors to make cloud inference applications more cost effective and power efficient. For example, Microsoft's Project Brainwave uses Intel FPGAs to demonstrate strong performance and flexibility in running DL algorithms like CNNs, LSTMs, etc.
Fig. 3: Intel 14nm Stratix FPGA
FPGAs have advantages. The hardware logics, compute kernels, and memory configurations are customizable for a specific type of neural network, making it more efficient in tackling a pre-trained model. However, one drawback is the difficulty of programming compared to CPU or CUDA. As mentioned in the previous section, OpenCL will be helpful in making FPGA more software developer friendly.
Besides FPGAs, Google is also making a customized processor called the TPU. It is an ASIC that focuses on highly efficient matrix calculations. However, it is only supported within Google's own services.
Here are some of the players in DL cloud inference.
Table 1
Embedded DL Inference For Intelligent Edge Computing
On the edge, DL inference solutions need to address a diverse set of requirements for different use cases and markets.
Autonomous Driving Platforms
Autonomous vehicle platforms are currently the hottest market where state-of-the-art DL and RL methods are being applied to achieve the highest levels of autonomous driving. Nvidia has been leading the market with several classes of DL SoCs, from Tegra to Xavier. For example, the Xavier SoC is built into Nvidia's Drive PX platforms, which can achieve up to 320 TOPS. It is going to target level 5 autonomous driving.
Mobile Processors
Another rapid growth area is mobile application processors. DL enables new features on smartphones that were not possible before. One example is Apple's neural engine integrated into the A11 Bionic chip, which enables high-accuracy face unlocking on the iPhone X.
Chinese chipmaker HiSilicon has also released its Kirin 970 processor which features a Neural Processing Unit (NPU). Some of Huawei’s latest smartphones (Fig. 4) are already designed with the new DL processors. For example, using the NPU, the smartphone camera “knows” what it is looking at and adjusts the camera settings automatically depending on the subject of the scene (e.g. human, plants, landscape, etc).
Fig. 4: Huawei Mate 10 Pro — Subject Aware Camera
The following tables list some of the processors for DL inference applications.
Table 2
New Architectures
It is worth mentioning that there is a new category of processors, called neuromorphic processors, which closely mimic the mechanism of neurons and synapses of human brains. They can realize a type of neural network called Spiking Neural Network (SNN) which learns in both the spatial and temporal domains.
In principle, they are much more power efficient compared to existing DL architectures and have advantages in tackling online machine learning problems.
IBM’s TrueNorth and Intel’s Loihi are based on neuromorphic architecture. Researchers are exploring the capabilities of the chips, showing some potential. It is unclear when the new types of processors will be ready for broad commercial use. A number of startups like Applied Brain Research and Brainchip are also focusing on this area, developing tools and IPs.
Fig. 5: Intel Loihi
It’s an Interesting Time
In just a short few years, AI/DL/RL/ML have become important tools for many industries. The underlying ecosystem, from IPs, processors, system designs to toolchains and software methodologies, has entered a rapid innovation cycle. New processors will enable many new IoT use cases which were not feasible before.
However, IoT and Machine Learning use cases are still evolving. It will take generations of processors for chip designers and developers to come up with the right mix of architectures to address the needs of various markets. We will take a deeper look into compute platforms for various verticals in future articles.
🗓 This article was originally posted on iotforall.com on January 29, 2018.
Using Deep Learning Processors For Intelligent IoT Devices, by Frank Lee, published in IoT For All (iotforall.com), last updated 2018-06-11, tagged Machine Learning: https://medium.com/s/story/using-deep-learning-processors-for-intelligent-iot-devices-1a7ed9d2226d
The following story was written and first published 2018-03-07.
Artificial Intelligence Virus
Sources tell us all viruses are a form of artificial intelligence. They couldn’t be more right, if you think about it. These pathogens are being spread by social media, and through unregulated cryptocurrencies, which take over the computers of highly-sensitive people (HSP) and launch sophisticated DDoS attacks.
While officials deny any risk of infection, we can’t be so hasty as to dismiss these claims without rigorous investigation.
South Korea, for one, has been hit by an outbreak. And allegations are already rolling in that the Italian elections, among others, were hacked by A.I.V.-infected voters being controlled remotely by a hivemind.
THE TRUTH WILL SET YOU FREE.
JOHN 8:23
#aivirus #artificialintelligencevirus #aiv #mkultra #tyler #qanon #john8-23 #socialmedia #socialmediavirus #socialmediaaddiction
Artificial Intelligence Virus, by artintelvirus, written and published 2018-03-07, tagged Artificial Intelligence: https://medium.com/s/story/artificial-intelligence-virus-1a7f64edb5c2
The following story was written 2017-09-11 and first published 2017-09-12.
Curl Up with Some of the Best AI Sci-Fi Books
Long before AI existed in the palm of our hand, it existed in the imagination. Speculative fiction writers from the second half of the 20th century anticipated much of the technology we enjoy today, celebrating their benefits and warning of their possible ills. Ironically, we might see a future where bots are writing books themselves! But for now, authors continue the tradition of anticipating cutting-edge tech developments through fiction.
So, did the classics guess the future correctly? And what are modern writers anticipating based on the world we live in today? Find out by spending some time with some of the best artificial intelligence science fiction, collected for you here.
The Moon is a Harsh Mistress by Robert A. Heinlein
This sci-fi classic is rare in that it portrays AI as friend rather than foe. Forget evil computers like HAL; HOLMES IV, the computer featured here, is a friendly system that wants to befriend humanity. What follows is a Pinocchio-like story in which the newly-sentient AI aims to become more humanlike. Who says a computer with a sense of humor can’t be the shining point of light in a dystopian future?
Heinlein was influenced in his writings by Marvin Minsky, co-founder of MIT’s AI laboratory and an influential AI researcher. The two were friends, and The Moon is a Harsh Mistress became known for its realistic imagining of future society. For these reasons, this might be the best artificial intelligence science fiction example on this list in terms of realism. The book won both a Nebula and Hugo Award.
Do Androids Dream of Electric Sheep? by Philip K. Dick
It’s an old book, but a classic: the inspiration for Blade Runner, which has a sequel releasing this fall. In it, legendary author Philip K. Dick explores the nature of existence in a world where AI and human intelligence are indistinguishable.
Because the average consumer encounters humanlike digital agents every day, Do Androids Dream of Electric Sheep? remains a powerful meditation on humanity, sentience, and empathy. How does this change the way we view the AI agents that already exist today?
Speak by Louisa Hall
One of our top artificial intelligence books came out only two years ago! This 2015 novel follows a whole cast of characters that contribute to the development of an AI across generations. It’s inspired by the historical development of AI (with figures like Alan Turing and a character based on Joseph Weizenbaum), then traces a fictional path between an AI-powered doll (a la Hello Barbie) and a 17th century diarist whose writing inspires its script and personality. The educational value of this novel earns it a place among our top artificial intelligence books.
The historical aspect of the book should help readers understand how we’ve ended up with conversational UI today — and how they can be applied to things like toys that are on store shelves right now. But a powerful theme to the book is how memory can persist across time via AI, as girls in the near-future befriend the diarist through the doll.
Bête by Adam Roberts
What if you could hold an actual conversation with your pet? In this recent work of speculative fiction, animal rights activists call for domesticated animals to become augmented with AI. The technology puts them on par with humans in intelligence, with the goal being that humans will treat them with better empathy.
What results is a strange exploration of ethics (for animals and AI) and questions on the nature of intelligence itself. For example, where does the spirit or soul reside in the animals, and what about their intelligence is artificial? With real-life developments like Elon Musk’s Neuralink — a project aimed at merging the human brain with AI — such questions are compelling for our increasingly cyborg society.
Ancillary Justice by Ann Leckie
Ann Leckie’s debut novel is already regarded as a modern classic among AI sci-fi books, nabbing Hugo, Nebula, and Arthur C. Clarke awards with critical praise. AI is explored here as a force for organization and for building a collective; in this far-future world, soldiers called “ancillaries” are controlled by AI, effectively serving as multiple bodies through which a single artificial intelligence acts.
While the book may not be the best example of realism in an artificial intelligence fiction book, it allegorically might make you question how AI can shape who we are through media distribution — individually or in collective society.
River of Gods by Ian McDonald
Set a century after India’s independence from Britain, River of Gods features an interesting mix of traditional customs and futuristic technology. In 2047, humans and artificial intelligences (called “aeias”) intermix in society, but those passing the Turing test are eliminated. It’s a wonderful example of what makes AI sci-fi books great, delving into larger political topics as well.
With the planet on the brink of collapse due to natural disaster, we witness a culture war brewing between traditional orthodox culture and a society that has embraced aeias for entertainment and defense. A British Science Fiction Award winner, River of Gods is a compelling artificial intelligence fiction book portraying a probable future shaped by AI development.
Curl Up with Some of the Best AI Sci-Fi Books, by Botanalytics, published in BotMag, last updated 2018-05-06, tagged Robots: https://medium.com/s/story/curl-up-with-some-of-the-best-ai-sci-fi-books-1a7f912b8258
const loadModel = async () => {
  console.log("Loading Model...");
  model = await tf.loadModel(INSERT_S3_LINK_HERE);
  console.log("Model loaded!");
  setInterval(startScreen, 500);
}
const logits = tf.tidy(() => {
  const b = tf.scalar(255);
  const img = tf.fromPixels(imgElement).toFloat().div(b);
  const batched = img.reshape([1, 40, 85, 3]);
  return model.predict(batched);
});
The code block above accompanies the following story, which was written 2018-05-01 and first published 2018-05-05.
DeepOverwatch — combining TensorFlow.js, Overwatch, Computer Vision, and Music.
Hi I’m Farza and I’m the creator of a music streaming site for gamers called mood.gg which has gotten pretty huge recently and has millions of hits + over 500,000 users. I’m just going to call it Mood for the remainder of this post :). This post is all about how I trained a convolutional neural network on my own dataset, using the recently released TensorFlow.js, in order to do real-time detection on the character a player uses in the game Overwatch in order to play music specifically for that character. All automatically.
Special thanks to Google Brain’s awesome developers that worked on TensorFlow.js, specifically Nikhil Thorat who gave me tons of hands on help.
If you have any questions, don’t hesitate to drop me a question on Twitter!
Get The Code/ Trained Model
Code behind the training scripts and the trained model(Python/Keras) can be found here. Code to the desktop app with all the TensorFlow.js stuff can be found here.
What is Mood?
Mood allows users to listen to music that relates back to the characters they play in certain games. Users can then listen to this music while they play the character in-game to actually “feel” like that character. The music we choose is based on a character’s specific theme, play style, and personality. For example, below is a character named “Reaper” from Overwatch. He is described as “a wraith-like terrorist who sets out to kill his former comrades to feed his desire for revenge”.
He’s obviously this dark, menacing figure, and that’s why, if you check his playlist, you’ll find it’s full of metal music, edgy rock, and satanic hip-hop. Now, when players want to play Reaper they can also listen to his playlist in the background. If you don’t use Mood, you’ll have to trust me when I say that it feels really fucking cool to play this very ominous character who blows up enemies with these massive guns, all while rocking out to some metal music. I found that even people who hate heavy music had a great time immersing themselves in the personality of the character.
Best part about it all? Mood supports every hero in Overwatch!
The Problem
Mood for Overwatch comes with a downloadable desktop app that allows people to control the music via keyboard commands while they are in the game. This is cool, but what sucks is that when you change characters you’d need to actually minimize the game and manually change the playlist. And in Overwatch people sometimes switch between different characters a lot. This kills the immersion.
Instead, wouldn’t it be cool if the desktop app could automatically detect what character the user was playing by just looking at the screen? HELL YEAH. But is this possible?
Possible Solutions
Below is an example screenshot from the game of someone playing Ana, a sniper. Every single character in Overwatch has a couple of visual factors that distinguish them. Their character portrait (bottom left corner), their weapon which is held around the center of the screen like any other FPS, and the characters specific weapon art (bottom right corner). Detecting the weapon in the center seems hard, so I didn’t wanna do that. But the other two things seem doable.
I know what you’re thinking: “can’t you just take the character portrait and compare it against all the other character portraits in the game to decide which character it is?”. This was actually my first thought, and it’s something that can easily be done using template matching with OpenCV. But I quickly began to hate this idea when I realized that characters can have different portraits depending on what skin the player uses. That means I would need to constantly update my program as new skins were released. Also, Blizzard is known to randomly update character portraits as well. That’s annoying. I like to make stuff that maintains itself, so this wasn’t a good solution.
So then I thought, “oh, I’ll just template match against the weapon art at the bottom right”. This also ended up being extremely problematic because the art is transparent and the background is constantly changing. This means template matching would give terrible results and a ton of false positives.
Plus, these solutions aren’t efficient. Template matching across many different templates is computationally expensive and it might actually cause lag for some people with low-end systems.
Neural Networks
I wanted to solve this problem using a convolutional neural network. You might think this is overkill. You might be right. This problem could be solved many ways but honestly neural nets are the new cool kids on the block and I was just curious to see how well they’d do in terms of accuracy and efficiency. After all, I needed this thing to work on super shitty computers and super good computers. Plus I’m pretty experienced with using convolutional neural nets and have already used them on video games to some success. But, the first thing I needed was training data.
Training Data
Definitely check out my code here if you want to know exactly how I did all this and want to replicate it yourself.
This was actually pretty easy. I just played each character for 5 minutes and pressed random buttons + moved around like crazy. I only played on one map the entire time, so in order to create a model that would generalize well to other maps I needed to diversify the training set. The reason I was moving around a lot was to create data with a lot more variability. Neural nets trained on a lot of the same data rarely generalize well, so I made sure to make each frame count.
I should stress that the training data was ONLY made up of actual gameplay. So, all the other screens where you are dead or in a lobby were not included.
First record the clip.
Then rename the clip to reflect the character played in the clip.
Finally, crop out just the gun from the clip and save the clip as individual images. I simply kept track of the label of each image by saving the images in folders named after the label.
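The last step, keeping track of each image's label via its folder name, can be sketched in plain Python. Note the `character_framenumber.png` naming scheme here is my assumption, not the post's actual file layout:

```python
import os
import shutil

def sort_frames_by_label(frames_dir, out_dir):
    """Copy frame images named like 'reaper_0001.png' into per-character
    folders, so each image's label is encoded by its parent directory."""
    for name in sorted(os.listdir(frames_dir)):
        if not name.endswith(".png"):
            continue  # skip anything that isn't a frame image
        label = name.rsplit("_", 1)[0]  # 'reaper' from 'reaper_0001.png'
        label_dir = os.path.join(out_dir, label)
        os.makedirs(label_dir, exist_ok=True)
        shutil.copy(os.path.join(frames_dir, name),
                    os.path.join(label_dir, name))
```

A training script can then treat each subfolder of `out_dir` as one class.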
TensorFlow.js
I built the desktop app using Electron.js, a framework that lets people create beautiful desktop apps with HTML/CSS/JavaScript and runs on Node.js/Chromium. It is also special in that it gives developers access to lots of OS-level functionality, like the ability to call shell scripts, create files, and take screenshots. Most neural net work is done in Python. This means I’d have to package Python with my desktop app; the desktop app would call my Python scripts, which would then send messages back to the desktop app telling it the detected character. This flow isn’t too bad, but it requires that I package Python with my app. That’s kinda lame.
But I still wondered, could it be done without Python? Could it be done right in the Chromium engine?
I knew that neural networks written in pure Javascript were possible. Andrej Karpathy, one of my favorite computer vision researchers and the current Director of AI at Tesla, created ConvNet.js in 2014. It allows you to train and test neural nets, right in your browser! This was insane to me when I first saw it because I was so used to neural nets requiring expensive GPUs and lots of setup. But, here they were, right in the browser. The library hadn’t been updated in four years but I still managed to get it working and implemented the tutorial MNIST program it provided. Sadly, it caused a ton of lag on my desktop app :(.
I then actually abandoned this pure JS approach for the packaged Python approach that I was trying to avoid, but pretty soon I found DeepLearn.js and hope was redeemed! This was similar to ConvNet.js, but, it was made by an actual team (versus just one person) and provided more features that would allow the neural nets to run faster in the browser such as the use of WebGL and the use of a GPU. Within a week of me using DeepLearn.js, the team announced that DeepLearn.js was now TensorFlow.js. This was actually amazing and the timing was pure luck. TensorFlow.js brought with it some awesome features.
Model
According to the developers, models that run in TensorFlow.js are “1.5–2x slower than TensorFlow with Python”. Despite this, I still went forward because:
Neural networks running purely in the front end are super cool.
The amount of setup on the clients end is minimal, just a script injection on a webpage via a <script> tag.
The app wouldn’t need to be packaged with Python.
My early experiments with MNIST were showing that TensorFlow.js caused no lag on my desktop app. LETS. GO.
TLDR, with TensorFlow.js, you sacrifice speed for usability. My grandmother could easily run TensorFlow.js models and that’s amazing. Setting this thing up is a breeze. Just plug in the script and go. No crazy dependencies on Python, CUDA, etc.
The most useful feature of TensorFlow.js is the ability to train models in Python via Keras, and port them over to TensorFlow.js through a simple script. That means I was able to take advantage of the full power of TensorFlow in Python and could quickly train my model using an AWS GPU cloud instance. Afterwards I could just convert the model to a TensorFlow.js model.
COOL. So thanks to the TensorFlow guys the process to train and run a model was now super easy. Now I just needed a model! I couldn’t think of an existing model for this task, so I decided to create my own after lots of iteration.
Model I came up with that predicts 27 classes since there are 27 heroes in Overwatch.
As always, the hardest part of coming up with a completely new model is finding the perfect balance between a model that underfits and one that overfits. My process is always to start small and build a model that gets okay results. Then I start adding layers and more parameters and run lots of experiments to see how the model reacts. And of course it’s smart to use regularization techniques as needed. For example, my models kept overfitting as I added more layers, so I combatted this by throwing dropout in between nearly every layer. I probably didn’t need to go that crazy with it, but it worked out well!
Training was a breeze with this model. The validation loss and training loss decreased accordingly and showed no signs of underfitting or overfitting. By the end, accuracy on both the training set and the validation set was around 100%. The test set accuracy was just below 100% as well.
yay!
The final model has around 127,000 parameters and is shown above! By the end I had spent around $200 on AWS cloud GPUs across all my different experiments.
Almost Done
The last thing I did was take my trained Keras model, convert it to a TensorFlow.js model, and host it on S3 here. Now, in TensorFlow.js I can just do:
startScreen is a function that takes screenshots of the users screen while they are in a game. I take the screenshot, crop it so it sees just the weapon icon, and pass that to tf.js like this:
And at this point, I know what character the person is playing and can change the music that plays :).
Final Observations
So, this was something I didn’t even think about prior: what happens if the desktop app is running, but the user is not in game? What if they’re just browsing the internet or chilling in a pre-game lobby? I had to be sure that the playlists would only switch if the user was in an actual game, because technically the desktop app could always be running.
I was worried about this because, with template matching (with small templates), you often get a lot of false positives since it’s just looking for the pixels that look closest to the template. If I had used template matching for the desktop app, the user could have been chilling playing Call of Duty and the desktop app would randomly start playing music because something that looked like an Overwatch weapon icon popped up.
Luckily, I didn’t have to worry about this! For example, below I was just browsing the web with the desktop app on and my neural net was giving me a 7% chance I was playing Bastion.
Pretty cool! I don't have to code for edge cases like when the user is in a lobby or playing another game. The neural net takes care of it because it's gained a better understanding of what the weapon icons actually look like :).
And with that, thank you for making it this far and taking a moment to learn about how I built this desktop app :). If you have any questions, don't hesitate to drop me a question on Twitter! Now to celebrate the release of the desktop app!
me celebrating
|
DeepOverwatch — combining TensorFlow.js, Overwatch, Computer Vision, and Music.
| 391
|
deepoverwatch-combining-tensorflow-js-overwatch-computer-vision-and-music-1a84d4598bc0
|
2018-06-20
|
2018-06-20 15:20:14
|
https://medium.com/s/story/deepoverwatch-combining-tensorflow-js-overwatch-computer-vision-and-music-1a84d4598bc0
| false
| 2,166
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Farza
|
Computer science undergrad that does stuff with computer vision, mostly on self-driving cars. Follow me on Twitter @farzatv.
|
6911b0d3582d
|
farzatv
| 224
| 27
| 20,181,104
| null | null | null | null | null | null |
0
|
import math
import random

# normal_cdf and inverse_normal_cdf are assumed to be defined in the
# earlier chapter on probability.

def normal_approximation_to_binomial(n, p):
    # Compute mu and sigma corresponding to Binomial(n, p)
    mu = p * n
    sigma = math.sqrt(p * (1 - p) * n)
    return mu, sigma

# The probability that the variable is below a threshold is normal_cdf
normal_probability_below = normal_cdf

# If it's not below the threshold, it's above it
def normal_probability_above(lo, mu=0, sigma=1):
    return 1 - normal_cdf(lo, mu, sigma)

# If it's less than hi but greater than lo, the value is between them
def normal_probability_between(lo, hi, mu=0, sigma=1):
    return normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)

# If it's not between, it's outside
def normal_probability_outside(lo, hi, mu=0, sigma=1):
    return 1 - normal_probability_between(lo, hi, mu, sigma)

def normal_upper_bound(probability, mu=0, sigma=1):
    # Returns the z for which P(Z <= z) equals the given probability
    return inverse_normal_cdf(probability, mu, sigma)

def normal_lower_bound(probability, mu=0, sigma=1):
    # Returns the z for which P(Z >= z) equals the given probability
    return inverse_normal_cdf(1 - probability, mu, sigma)

def normal_two_sided_bounds(probability, mu=0, sigma=1):
    # Returns the symmetric bounds (centered on the mean) that contain
    # the specified probability
    tail_probability = (1 - probability) / 2
    # The upper bound should have tail_probability above it
    upper_bound = normal_lower_bound(tail_probability, mu, sigma)
    # The lower bound should have tail_probability below it
    lower_bound = normal_upper_bound(tail_probability, mu, sigma)
    return lower_bound, upper_bound

mu_0, sigma_0 = normal_approximation_to_binomial(1000, 0.5)
normal_two_sided_bounds(0.95, mu_0, sigma_0)  # (469, 531)

# Check the 95% bounds assuming p is really 0.55
lo, hi = normal_two_sided_bounds(0.95, mu_0, sigma_0)
# Compute mu and sigma for p = 0.55
mu_1, sigma_1 = normal_approximation_to_binomial(1000, 0.55)
# A type 2 error means failing to reject the null hypothesis, which
# happens when X is still inside the original interval
type_2_probability = normal_probability_between(lo, hi, mu_1, sigma_1)
power = 1 - type_2_probability  # 0.887

hi = normal_upper_bound(0.95, mu_0, sigma_0)  # 526
type_2_probability = normal_probability_below(hi, mu_1, sigma_1)
power = 1 - type_2_probability  # 0.936

def two_sided_p_value(x, mu=0, sigma=1):
    if x >= mu:
        # If x is greater than the mean, the tail is everything greater than x
        return 2 * normal_probability_above(x, mu, sigma)
    else:
        # If x is less than the mean, the tail is everything less than x
        return 2 * normal_probability_below(x, mu, sigma)

two_sided_p_value(529.5, mu_0, sigma_0)  # 0.062

extreme_value_count = 0
for _ in range(100000):
    # Count the number of heads in 1000 coin flips
    num_heads = sum(1 if random.random() < 0.5 else 0
                    for _ in range(1000))
    # and count how often the result is at least as extreme
    if num_heads >= 530 or num_heads <= 470:
        extreme_value_count += 1
print(extreme_value_count / 100000)  # 0.062

upper_p_value = normal_probability_above
lower_p_value = normal_probability_below
upper_p_value(524.5, mu_0, sigma_0)  # 0.061
upper_p_value(526.5, mu_0, sigma_0)  # 0.047

# standard deviation of the estimate: math.sqrt(p * (1 - p) / 1000)
p_hat = 525 / 1000
mu = p_hat
sigma = math.sqrt(p_hat * (1 - p_hat) / 1000)  # 0.0158
normal_two_sided_bounds(0.95, mu, sigma)  # [0.4940, 0.5560]

p_hat = 540 / 1000
mu = p_hat
sigma = math.sqrt(p_hat * (1 - p_hat) / 1000)  # 0.0158
normal_two_sided_bounds(0.95, mu, sigma)  # [0.5091, 0.5709]

def run_experiment():
    # Flip a fair coin 1000 times; True = heads, False = tails
    return [random.random() < 0.5 for _ in range(1000)]

def reject_fairness(experiment):
    # Using the 5% significance level
    num_heads = len([flip for flip in experiment if flip])
    return num_heads < 469 or num_heads > 531

random.seed(0)
experiments = [run_experiment() for _ in range(1000)]
num_rejections = len([experiment
                      for experiment in experiments
                      if reject_fairness(experiment)])
print(num_rejections)  # 46

def estimated_parameters(N, n):
    p = n / N
    sigma = math.sqrt(p * (1 - p) / N)
    return p, sigma

def a_b_test_statistic(N_A, n_A, N_B, n_B):
    p_A, sigma_A = estimated_parameters(N_A, n_A)
    p_B, sigma_B = estimated_parameters(N_B, n_B)
    return (p_B - p_A) / math.sqrt(sigma_A ** 2 + sigma_B ** 2)

z = a_b_test_statistic(1000, 200, 1000, 180)  # -1.14
two_sided_p_value(z)  # 0.254
z = a_b_test_statistic(1000, 200, 1000, 150)  # -2.94
two_sided_p_value(z)  # 0.003

def B(alpha, beta):
    # A normalizing constant, so that the total probability sums to 1
    return math.gamma(alpha) * math.gamma(beta) / math.gamma(alpha + beta)

def beta_pdf(x, alpha, beta):
    if x < 0 or x > 1:  # no weight outside [0, 1]
        return 0
    return x ** (alpha - 1) * (1 - x) ** (beta - 1) / B(alpha, beta)

# the Beta distribution is centered at alpha / (alpha + beta)
| 42
| null |
2017-10-21
|
2017-10-21 04:20:45
|
2017-10-21
|
2017-10-21 12:09:13
| 0
| false
|
ja
|
2017-10-21
|
2017-10-21 12:09:13
| 0
|
1a84fa3276b7
| 17.616
| 3
| 0
| 0
|
統計と確率の理論を何に用いるのでしょうか。データサイエンスの部分には、データとそれを生成するプロセスに関する仮説を立て、検定を行う作業が含まれます。
| 3
|
Hypothesis and Inference
What do we use the theory of statistics and probability for? Part of data science involves forming hypotheses about data and the process that generates it, and then testing those hypotheses.
Statistical Hypothesis Testing
In data science, we often have to show whether a given hypothesis is likely to be true.
A hypothesis here means some assertion that can be restated in terms of statistics about data, such as "this coin is fair," "data scientists prefer Python to R," or "web pages that pop up annoying ads with close buttons too small to see are likely to be closed without the content ever being read." A statistic can be thought of as the observed value of a random variable that follows a known distribution under various assumptions, which lets us say how plausible those assumptions are.
In the classical setup, a null hypothesis H0 represents the default position and is compared against an alternative hypothesis H1. We use statistics to decide whether H0 can be rejected. Here is an example.
Example: Flipping a Coin
We want to check whether a coin is biased. Let p be the probability that the coin lands heads; the null hypothesis that the coin is fair is then p = 0.5, which we test against the alternative hypothesis p ≠ 0.5.
The test flips the coin n times and counts the number of heads X. Each flip is a Bernoulli trial, so X is a Binomial(n, p) random variable, which can be approximated by a normal distribution.
Whenever a random variable follows a normal distribution, we can use normal_cdf to find the probability that its realized value lies within (or outside) a particular interval.
The same can be done in reverse: we can find the interval, symmetric around the mean, that corresponds to a given level of likelihood. For example, to find the region centered on the mean that contains 60% of the probability, we cut off the two tails containing 20% each (leaving the middle 60%).
Let the number of coin flips be n = 1000. If the hypothesis that the coin is fair is true, X is approximately normal with mean 500 and standard deviation 15.8.
We also need to decide how willing we are to commit a type 1 error (a false positive), in which we reject H0 even though it is actually true; this is what is meant by significance. By convention, significance is usually set to 5% or 1%. Here we use 5%.
Consider the test that rejects H0 if X falls outside the bounds given below.
Assuming p really equals 0.5 (that is, H0 is true), there is only a 5% chance that X falls outside this interval, which is exactly the significance we chose. To put it another way, if H0 is true, the test reaches the correct conclusion 95% of the time. We should also consider the probability of avoiding a type 2 error, in which we fail to reject H0 even though it is false; this is measured by the power of the test. Knowing merely that H0 is false tells us little about the distribution of X, so let us check what happens if the coin is slightly biased toward heads, with p actually equal to 0.55.
In that case, the power of the test can be computed as follows.
Now suppose instead that the null hypothesis is that the coin is not biased toward heads, that is, p ≤ 0.5. In that case we use a one-sided test: we reject the null hypothesis when X is much larger than 500, but not when X is smaller than 500. A 5%-significance test then uses normal_probability_below to find the cutoff below which 95% of the probability lies.
This is a more powerful test, since it no longer rejects H0 when X is below 469 (which is very unlikely if H1 is true) and instead rejects H0 when X is between 526 and 531 (which is somewhat likely if H1 is true).
Another measure of a test is its p-value. Instead of choosing a cutoff for a given probability, we assume H0 is true and compute the probability of observing a value at least as extreme as the one actually observed. For the two-sided test of whether the coin is biased, we compute:
If we saw 530 heads, we would compute:
To convince ourselves that this is a sensible estimate, we can run a simulation.
Since this p-value is greater than 5%, we do not reject the null hypothesis, which is the same conclusion as the earlier test.
For the one-sided test, if we saw 525 heads we would not reject the null hypothesis,
but with 527 heads we would.
Confidence Intervals
So far we have been testing hypotheses about the value of the heads probability p, which is an unknown parameter of the distribution. A third approach is to construct a confidence interval around the observed value of the parameter.
For example, we can estimate the probability of heads by looking at the average of the Bernoulli variables corresponding to each flip, 1 for heads and 0 for tails. If we observe 525 heads out of 1000 flips, our estimate of p is 0.525.
How confident can we be about this estimate? If we knew the exact value of p, the central limit theorem tells us that the average of those Bernoulli variables is approximately normal, with mean p and the following standard deviation:
Since p is unknown here, we use our estimate instead:
This is not entirely justified, but it is what people do anyway. Using the normal approximation, we conclude that we are "95% confident" that the true value of p lies in the following interval:
Because 0.5 falls within this confidence interval, we cannot conclude that the coin is biased. What if we had instead seen 540 heads?
In that case, the fair-coin value 0.5 lies outside the confidence interval (the hypothesis that "the coin is fair" fails a test that it would pass 95% of the time if it were true).
p-hacking
A procedure that erroneously rejects the null hypothesis 5% of the time will, by definition, erroneously reject the null hypothesis 5% of the time.
This means that if you set out to find "significant" results, you can. Run enough hypothesis tests against your dataset and one of them will show apparent significance. Remove the right outliers and you can probably get the p-value below 0.05. (The same trick works for correlations.)
Manipulating results within the "inference from p-values framework" in this way is called p-hacking. There is an excellent article criticizing this practice called The Earth Is Round.
If you want to do good data science, you should determine your hypotheses before looking at the data, clean your data without the hypotheses in mind, and keep in mind that p-values are no substitute for common sense. (An alternative approach, Bayesian inference, is covered later.)
Example: Running an A/B Test
Optimizing the user experience, or put more bluntly, getting users to click on advertisements, is one of the important duties of a data scientist.
Suppose we have developed a new energy drink for data scientists and need to decide between advertisement A ("tastes great!") and advertisement B ("less bias!").
If 990 out of 1000 people who see A click the ad while only 10 people who see B do, we can be confident that A is better. But what if the difference is not so clear-cut? That is where statistical inference helps.
Say N_A people see ad A and n_A of them click it. Each ad view can be treated as a Bernoulli trial in which p_A is the probability that someone clicks ad A. Then (assuming N_A is large enough) n_A / N_A is a random variable approximately normal with mean p_A and standard deviation σ_A = (p_A(1 - p_A) / N_A)**0.5.
Similarly, n_B / N_B is approximately normal with mean p_B and standard deviation σ_B = (p_B(1 - p_B) / N_B)**0.5.
If these two normals are independent (which seems reasonable, since the individual Bernoulli trials should be), their difference is also normal, with mean p_B - p_A and standard deviation (σ_A**2 + σ_B**2)**0.5.
This lets us test the null hypothesis that p_A and p_B are equal, using a statistic that follows the standard normal distribution.
For example, if "tastes great!" gets 200 clicks out of 1000 views and "less bias!" gets 180 clicks out of 1000 views, we compute:
The probability of seeing a difference this large if the means were actually equal is:
That is large enough that we cannot reject the null hypothesis and cannot conclude there is a difference. On the other hand, if "less bias!" had gotten only 150 clicks out of 1000 views,
the probability of seeing a click-through difference this large if the two ads were equally effective would be only 0.003.
Bayesian Inference
The procedures above produce statements like "if the null hypothesis were true, there is only a 0.3% chance of observing a difference this extreme."
An alternative approach to inference treats the unknown parameters themselves as random variables. The analyst starts with a prior distribution for the parameters, then uses the observed data and Bayes's theorem to compute a posterior distribution. Instead of making probability judgments about the tests, we make probability judgments about the parameters themselves.
For example, when the unknown parameter is a probability (as in coin flipping), a prior that puts all its weight between 0 and 1, the Beta distribution, is frequently used.
In general, this distribution centers its weight at:
and the larger alpha and beta are, the narrower the distribution.
For example, if alpha and beta are both 1, it is just the uniform distribution (centered at 0.5, very dispersed). If alpha is much larger than beta, most of the weight lies near 1; conversely, if alpha is much smaller than beta, most of the weight lies near 0.
Let us think about the prior for p. If we do not want to take a position on whether the coin is biased, we set alpha and beta both to 1. If we have a strong belief that the coin lands heads 55% of the time, we might choose alpha = 55 and beta = 45.
We then flip the coin repeatedly, observing h heads and t tails. By Bayes's theorem (plus some mathematics too tedious to go through here), the posterior distribution for p is again a Beta distribution, with parameters alpha + h and beta + t. It is no coincidence that the posterior is also a Beta distribution: the number of heads is given by a binomial distribution, and the Beta distribution is the conjugate prior of the binomial, so updating a Beta prior with binomial observations yields a Beta posterior.
So if we flip the coin 10 times and see only 3 heads, then even starting from the uniform prior (in a sense, refusing to take a stand on the coin's fairness), the posterior would be Beta(4, 8), centered around 0.33. Since we initially considered every probability equally likely, our best guess ends up close to the observed probability.
If we had started with Beta(20, 20) (expressing the belief that the coin was roughly fair), the posterior would be Beta(23, 27), centered around 0.46. Our belief has been updated toward a slight bias in favor of tails.
If the prior had been Beta(30, 10) (believing the coin lands heads about 75% of the time), the posterior would be Beta(33, 17), centered around 0.66. We would still believe in a bias toward heads, but a smaller one than we originally assumed.
As we flip the coin more and more, the prior matters less and less, until eventually we would have (nearly) the same posterior distribution no matter which prior we started from.
For example, no matter how biased you initially thought the coin was, it would be hard to maintain that belief after seeing 1000 heads out of 2000 flips. What is interesting is that this lets us make probability statements about hypotheses themselves: "Based on the prior and the observed data, there is only a 5% chance the coin's heads probability lies between 49% and 51%." That is philosophically very different from saying "if the coin were fair, there would be only a 5% chance of observing a value this extreme."
Using Bayesian inference to test hypotheses is somewhat controversial, partly because the mathematics involved can get complicated, and partly because the choice of prior is subjective.
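The conjugate-prior updates described above can be checked in a few lines of Python (the helper names here are mine, not from the original code):

```python
def beta_mean(alpha, beta):
    # The Beta(alpha, beta) distribution is centered at alpha / (alpha + beta)
    return alpha / (alpha + beta)

def update(alpha, beta, heads, tails):
    # Beta is conjugate to the binomial: add heads to alpha, tails to beta
    return alpha + heads, beta + tails

# Uniform prior Beta(1, 1), then observe 3 heads in 10 flips
a, b = update(1, 1, heads=3, tails=7)
print(a, b, round(beta_mean(a, b), 2))   # 4 8 0.33

# Roughly-fair prior Beta(20, 20), same observations
a, b = update(20, 20, heads=3, tails=7)
print(a, b, round(beta_mean(a, b), 2))   # 23 27 0.46
```

The two runs reproduce the Beta(4, 8) and Beta(23, 27) posteriors and their centers quoted in the text.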
|
仮説と推定
| 3
|
仮説と推定-1a84fa3276b7
|
2018-05-22
|
2018-05-22 07:06:20
|
https://medium.com/s/story/仮説と推定-1a84fa3276b7
| false
| 535
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Okazawa Ryusuke
|
I wanna be a DataScientist&Psychologist @Saga University economics.
|
2f57c3ad8306
|
SEKAINOOKAZAWA
| 80
| 65
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
32881626c9c9
|
2018-09-05
|
2018-09-05 16:26:59
|
2018-09-05
|
2018-09-05 16:30:58
| 7
| true
|
en
|
2018-09-07
|
2018-09-07 02:31:15
| 6
|
1a8686a160fd
| 5.465094
| 3
| 0
| 0
|
In this era of Artificial Intelligence, where bots can mimic human minds and outperform humans; a new age process automation tool called…
| 5
|
An Insight to Robotic and Intelligent Process Automation
In this era of Artificial Intelligence, where bots can mimic human minds and outperform humans; a new age process automation tool called RPA, “Robotic Process Automation” has been creating a lot of buzz. It is highly versatile and can be used by every industry to streamline and optimize their business processes. From data entry to claims processing to automatic payments, RPA can do it all.
According to a report by Forrester, the RPA Market will grow from $250 million in 2016 to $2.9 billion in 2021. (Clair, n.d.)
There is a huge potential for RPA and several businesses have started realizing the benefits that RPA could offer and the potential to improve their processes and reduce costs through implementing RPA.
What is RPA?
RPA uses automated systems, governed by business logic and rules, to streamline and optimize processes. These systems are referred to as 'bots', and they efficiently and effectively perform repeatable, rules-based tasks. The average knowledge employee performing a back-office operations process has many repetitive, routine tasks that are tedious and uninteresting. An RPA tool mimics the activity of a human being in carrying out a well-defined, rule-based task within a process. It can do repetitive and routine tasks more quickly, accurately, and diligently than humans, freeing employees to perform other tasks that require human interaction with customers and call for emotional intelligence, reasoning, and judgment. ROI in RPA systems can range from 30 to 200 percent in the first 12 months alone. (Leslie Willcocks, n.d.)
Why RPA?
Business processes that can benefit from RPA typically have repeatable and predictable interactions with IT applications, including those that require switching between multiple applications or screens. Without RPA, businesses would have to redesign their processes through IT-driven transformation or outsource their operations; RPA "bots", by contrast, can perform such routine business processes by emulating the way humans interact with applications through a user interface and by following simple rules to make decisions. An example of a routine business process is retrieving information from one system and entering the same information into another. Other tasks include opening emails and attachments, processing data, and integrating with enterprise tools. RPA is faster and more accurate than any human: research shows that a bot can complete in a mere 60 seconds a task that would take a human 15 minutes.
RPA compared to traditional Process Transformation approaches
Source : Deloitte Analysis
How does RPA Work?
Below is a workflow model of how businesses use RPA to perform tasks. Process developers specify detailed instructions for the robots and publish this information to the robot controller. The controller assigns jobs to the bots and monitors their activities. The bots perform the tasks, interacting with a wide range of business applications. Once the tasks are performed, business users review them for any exceptions or escalations. (Peter Lowes, n.d.)
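The developer/controller/bot handoff in that workflow can be sketched as a toy job queue. This is purely illustrative; the class names and fields are assumptions, not any vendor's API:

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class Task:
    """A published process definition: a name plus rule-based steps."""
    name: str
    steps: list

class Controller:
    """Holds published process definitions and assigns them to bots."""
    def __init__(self):
        self.queue = Queue()

    def publish(self, task):
        self.queue.put(task)

class Bot:
    """Executes each rule-based step; results go to humans for review."""
    def run(self, task):
        return [f"{task.name}: {step} -> done" for step in task.steps]

controller = Controller()
controller.publish(Task("invoice-entry", ["open app", "copy fields", "submit"]))
bot = Bot()
results = bot.run(controller.queue.get())
print(len(results))  # 3
```

In a real deployment the controller would also monitor bot health and route exceptions back to business users, as the workflow above describes.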
RPA Tools
There are 3 types of RPA Tools — Intelligent, Programmable and Self-learning.
Types of RPA
There are 3 types of RPA — Attended, Unattended and Hybrid.
Use of RPA in different Industries
RPA can benefit any industry. It is a great solution for companies that use legacy systems or for businesses where a large portion of the workforce works in the back office in non-tech functions. Below are a few examples of where RPA can be used to streamline processes. (Applied AI blog, n.d.)
Sales — Creating and delivering invoices, updating CRM.
Customer Service — Automate repetitive tasks, solving customer issues, loading profiles, or getting customer data.
Technology — Software installations
Finance — Reconciliation, financial planning or P&L preparation.
HR — Candidate sourcing, employee history verification, hiring and on-boarding, payroll automation, expense management, employee data management.
Operations — Updating inventory records, issuing refunds.
Banking — Loan processing, KYC.
Retail — Product categorization.
Below is a Use Case from Healthcare Industry that displays how RPA can transform Claims Processing Process by reducing the amount of time it takes to process claims by using RPA tools.
Benefits of RPA
Flexibility — RPA is applicable across all industries and organizations. It is easily scalable and can take on any rule based and repetitive task.
Cost effective — By implementing RPA, businesses will be able to reduce the time and money spent performing inefficient operational processes.
Productivity — RPA can lead to significant productivity enhancement. RPA products often come with a drag and drop interface which helps employees as they will not need additional training in coding or other complex fields.
Reliability — Robots can function 24/7/365. It offers speed & accuracy over human labor.
Accuracy — Irrespective of how tedious, repetitive or rule based a process is, Bots will follow the rules ensuring accuracy and reliable results. RPA is especially useful in roles that are prone to human errors.
Employee Morale — RPA can be an avenue to improved employee efficiency. It lets employees focus on value-adding tasks.
Cyber Security — Bots will not fall for common cyber-related attacks such as spear phishing, and social engineering. (Adam Muspratt, n.d.)
Challenges with implementing RPA
Needs solid Business Process Management — RPA can’t think or learn; the processes businesses want to automate with RPA need to be optimized before implementation. Ineffective processes can leave an organization vulnerable to a whole host of problems even, or especially, when they’re automated. Issues can range from cost overrun due to waste, to mistakes that adversely impact services or products.
Organizational Support — Top-down championing of operational excellence is a foundation of effective business process management. Executive buy-in is essential, and executives also need to promote the importance of automation in their process improvement efforts.
Technical pitfalls — Choosing a difficult-to-use RPA tool can slow down development and improvement efforts as deployment of RPA solution could take longer than expected.
What’s next for RPA?
Large IT companies are developing in-house RPA tools and are also partnering with vendors offering automation software solutions. Currently most of the RPA solutions are offering rule-based solutions, but we are slowly advancing to RPA solutions that can offer knowledge and judgement-based capabilities.
RPA at a Glance
References
Adam Muspratt. (n.d.). A guide to robotic process automation (RPA). Retrieved from https://www.processexcellencenetwork.com/rpa-artificial-intelligence/articles/a-guide-to-robotic-process-automation-rpa
Applied AI blog. (n.d.). 45 RPA Use Cases/ Applications: in-Depth Guide. Retrieved from https://blog.appliedai.com/robotic-process-automation-use-cases/#insurance
Clair, C. L. (n.d.). The RPA Market Will Reach $2.9 Billion By 2021. Retrieved from https://www.forrester.com/report/The+RPA+Market+Will+Reach+29+Billion+By+2021/-/E-RES137229
Leslie Willcocks. (n.d.). The value of robotic process automation. Retrieved from https://www.mckinsey.com/industries/financial-services/our-insights/the-value-of-robotic-process-automation
Peter Lowes. (n.d.). A guide to robotic process automation. Retrieved from https://www2.deloitte.com/us/en/pages/operations/articles/a-guide-to-robotic-process-automation-and-intelligent-automation.html
The Lab Consulting. (n.d.). Robotic process automation for health insurance — robotics use case in claims. Retrieved from https://thelabconsulting.com/health-insurance-rpa-use-case/
|
An Insight to Robotic and Intelligent Process Automation
| 45
|
an-insight-to-robotic-and-intelligent-process-automation-1a8686a160fd
|
2018-09-07
|
2018-09-07 02:48:35
|
https://medium.com/s/story/an-insight-to-robotic-and-intelligent-process-automation-1a8686a160fd
| false
| 1,170
|
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
| null |
datadriveninvestor
| null |
Data Driven Investor
|
info@datadriveninvestor.com
|
datadriveninvestor
|
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
|
dd_invest
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Devshree Golecha
|
Deep experience & expertise in Lean Six Sigma, Data Analytics, Process Optimization & Statistics. MBA, ASQ CSSBB, Analytics & Data Science at Harvard University
|
7b96da27c935
|
devshreek
| 6
| 4
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-18
|
2017-09-18 02:28:43
|
2017-09-18
|
2017-09-18 03:24:13
| 7
| false
|
en
|
2017-10-14
|
2017-10-14 06:08:34
| 17
|
1a88fcfab30
| 8.906604
| 2
| 0
| 0
|
A Machine Learning Approach to 1984 Congressional Voting Data
| 5
|
The Origins and Breakdown of the Washington Consensus
A Machine Learning Approach to 1984 Congressional Voting Data
History doesn’t repeat itself, but it does rhyme. — Mark Twain
For many, watching the election results come in on November 8th, 2016 left an indelible impression. The victory of Donald Trump still puzzles many people. Pundits and major news networks described Donald Trump's win as one of the biggest political upsets in US history. Right after the election, they described this singular event as surprising, even stunning. They pointed to states like Wisconsin and Michigan in the so-called Rust Belt, where many traditional unionists and working-class voters who had voted for Barack Obama in the past two elections defected to Trump. I've been following the ins and outs of American politics for a while, and I believe that Trump's victory stemmed not from a swift political turnaround or even a realignment, but rather from a simmering process that developed over the years. I was careful not to let my biases stand in my way; instead, I used a voting record from the past to see whether I could find any insights that would explain this shift in the political landscape.
Several years ago, I came across an article that described the Republican and Democratic parties as coalitions of various political forces and diverse elements of society. In other words, they are not monolithic entities that always vote together on certain issues, but rather groups made of fractious elements whose ultimate message is the result of many backroom compromises and consensus-building. Under this premise, I thought a dataset of congressional voting records from the past could help me discern hidden patterns. Were there factions within a party that didn't necessarily vote with the establishment? In other words, I wanted to know whether there were contrasting voices within a political party that belied the façade of unity.
Thanks to the riches of data on the Internet, I found a dataset from the UCI Machine Learning Repository that fits my goal: the voting records of members of the House of Representatives on 16 key issues. The data also contains party-affiliation labels for all 435 members. The binary "yea" and "nay" votes on 16 issues make it easy to do a clustering exercise with unsupervised learning algorithms like k-means. I wanted to cluster these Congress members by their votes on the issues, compare the labels produced by the k-means algorithm with the original party labels in the dataset, and look for interesting patterns. Moreover, the politics of the 80s were not as polarized as they are today, and the congresspeople of that era were more likely to act as delegates of their voters; their stances on issues would be a barometer of public sentiment at large.
Data cleanup
I read in the dataset as a data frame and provided it with headers. It's in wide-table format, with 17 columns for the party label and the issues. To better inspect the dataset, I unstacked it and examined the top issues supported by Republicans and Democrats based on rough counts of "yeas." The results are interesting: the voting patterns largely matched the typical image of the two parties. For example, Republicans supported a bill that was tough on crime and endorsed American military interventions in South America while Democrats demurred. It was also apparent that partisanship centered on the adoption of the Democrat-penned budget resolution, which few Republicans voted for. However, I found many "?" placeholders for missing values in the dataset. These missing values could most simply be treated as 0s, but doing so would distort certain bills to make them seem less favored. To compensate, I wrote a function that normalizes the votes for each issue.
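That normalization step can be sketched as follows. This is a hedged reconstruction, not the author's actual code: the column names are illustrative, and the UCI file encodes votes as 'y', 'n', and '?'.

```python
import numpy as np
import pandas as pd

# A tiny stand-in for the congressional voting table
raw = pd.DataFrame({
    "party": ["democrat", "republican", "democrat"],
    "crime": ["y", "?", "n"],
    "budget": ["y", "n", "y"],
})

issues = raw.columns.drop("party")
# Map yeas to 1, nays to 0, and '?' to NaN so missing votes are excluded
votes = raw[issues].replace({"y": 1, "n": 0, "?": np.nan}).astype(float)

# Normalize each issue by the number of recorded votes, so issues with
# many '?' entries aren't made to look less favored than they were.
support = votes.sum() / votes.notna().sum()
print(support["crime"])  # 0.5
```

Dividing by the count of non-missing votes (rather than all rows) is what keeps heavily abstained-on bills comparable to fully voted-on ones.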
Voting records on each piece of legislations
Clustering
After preprocessing, I stripped off the party labels and used k-means to cluster the Congress members by their votes into 2 and 3 clusters. The plots looked promising. With 2 clusters, k-means successfully grouped most Democrats together: one cluster contained 218 Democrats and only 6 Republicans, which was not bad at all.
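The clustering step itself can be sketched with scikit-learn. The data below is synthetic (two voting blocs with opposite tendencies), not the UCI records, so only the shape of the workflow is meant to match:

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows are members, columns are 16 issues (1 = yea, 0 = nay)
rng = np.random.default_rng(0)
bloc_a = (rng.random((30, 16)) < 0.8).astype(float)   # mostly-yea bloc
bloc_b = (rng.random((30, 16)) < 0.2).astype(float)   # mostly-nay bloc
X = np.vstack([bloc_a, bloc_b])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_

# Compare k-means labels with the nominal bloc membership; members whose
# cluster disagrees with their nominal bloc are the interesting cases.
nominal = np.array([0] * 30 + [1] * 30)
agreement = max((labels == nominal).mean(), (labels != nominal).mean())
print(round(agreement, 2))
```

The `max` over the two orderings is needed because k-means assigns cluster indices arbitrarily, so label 0 may correspond to either bloc.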
Two cluster 3D scatter plot
With 3 clusters, k-means produced a cluster that captured all the Democrats. I was also happy with the graphs my program generated: when I deselected the middle group, the political chasm between the red and blue dots (representing Republican and Democratic Congress members) stood out in high relief. It was a great way to visualize the political divide at the time.
Three cluster 3D scatter plot
Radar plot based on clustering results
Analysis
However, simply inspecting the graphs wouldn't tell the full story. I then used the original party-affiliation labels to check whether the machine-learning technique characterized Congress members differently from their official parties. To my surprise, I got some startling results that goaded me into thinking more deeply about the political developments of the past thirty years. The analysis of my discovery consists of three parts:
the emergence of a “new” group of Democrats,
the rise of the Third Way,
the ascendency and breakdown of the Washington Consensus.
The Emergence of a "New" Group of Democrats
One thing that struck me as startling was that the 2-cluster instance labeled 49 Democrats as Republicans. This suggested that this group, which made up about a quarter of Democratic Congress members, had voting patterns very similar to and as consistent as the Republicans'. I didn't hastily attribute their voting patterns to Reagan's sway over House Democrats. Instead, I zoomed in on the issues they supported: I singled out their k-means cluster labels and used them to select the top issues they supported from my transformed dataset. The result is attached below.
There are three interesting observations in the votes of this group of Democrats. Domestically, they sided with Republicans and enthusiastically supported religious groups in schools, a bill lobbied for by Christian groups who wanted to guarantee students the right to conduct Bible study programs. Moreover, at odds with the majority of Democrats, they were as harsh on crime and as supportive of law enforcement as Republicans (a coefficient of 0.116 vs. 0.04 for traditional Democrats). I suspect their support of the crime bill could be the determining factor that led the algorithm to cluster them as Republicans. Internationally, this group of Democrats sided with Republicans in supporting economic and military interventions in Latin America.
To me, this emergent group of Democrats, though it made up only a quarter of the party, voted distinctly differently from the mainstream Democrats of the time. Although their presence in the early 80s was still marginal, they signaled a profound reconfiguration of the Democratic Party. The dominant group of Democrats in the 80s, the so-called Social Democrats, regarded Franklin D. Roosevelt as their model; FDR was the yardstick of their liberal ambitions. They continued to pass on FDR's mantra, supporting the expansion of social welfare and even some forms of economic intervention to promote social justice. The voting pattern of this emerging group of centrist Democrats suggested dissension: although they kept the Democratic label, they didn't often go along with their social-democratic peers. To understand their different voting patterns, I would have liked more detailed information on these Congressmen, but I was only able to find a geographic distribution of House members by political affiliation in 1984.
Given the presence of many Democratic House seats in the Midwest and South, I think this group of Democrats was pressured by voters into favoring Reagan's more socially conservative policies. Internationally, they were more willing to use and fund the military, chiming in with Reagan's optimistic message to Americans. Unlike their more dovish Democratic peers, they supported a more robust US presence in Third World countries, in the form of trade liberalization and military intervention, to further U.S. interests.
The Rise of the Third Way
The discovery of this latent splinter group of 49 Democrats was the most interesting thing this dataset revealed. In 20/20 hindsight, it's stunning that the ideas of this small group of centrist Democrats, a group whose votes led the computer to label them Republicans, could prevail. The rise of Bill Clinton in the 90s cemented these centrist Democrats as the main voice of the Democratic Party, replacing their older, more left-leaning peers. Bill Clinton was the only Democrat up to that point to win two consecutive terms, and many of his policies aligned with the top agendas of the centrist Democrats of '84. Clinton's promise of welfare reform in the 1992 presidential campaign, and its subsequent enactment, epitomized a rightward shift in Democratic positions. Many of Clinton's policies seem a natural continuation of the stances of the emergent Democrats of 1984: his Defense of Marriage Act and Religious Freedom Restoration Act were aimed at luring over socially conservative working-class voters, and his 1994 Omnibus Crime Bill, which vastly expanded the law-enforcement apparatus, was more potent than the 1984 crime bill introduced under Reagan.
Trump and the Breakdown of the Washington Consensus
What's the big deal, then? How are these developments in the Democratic Party related to the politics of today, you may ask. I think the rise of the centrist Democrats changed the political landscape in three important ways. The first is the convergence of Republican and Democratic positions on trade and fiscal policies, as if they were two brands of conservatism: the parties came to share similar attitudes toward trade liberalization and potential military intervention in developing countries. These ideas and this bipartisan consensus were best exemplified by the Washington Consensus, a set of 10 policies promoted by Washington-based economic institutions such as the IMF and the World Bank as their standard prescription for developing countries seeking aid. The 10-point Washington Consensus facilitated the outsourcing and promotion of non-American manufacturing jobs in the name of trade liberalization. The convergence of bipartisan support for trade and deregulation created a sense of resentment among the working class, who saw their incomes falling in real terms despite unprecedented global economic growth.
Second, as a reaction to the Democrats' shift to the center, the Republican Party shifted further to the right of the spectrum, undermining its ability to govern. A telling example is the recent healthcare fiascos: many Republicans from the Freedom Caucus thought the health care bill was not conservative enough and opposed their own party's legislation. Fringe yet decisive groups like the Freedom Caucus have less the mindset of a governing political party than that of a radical insurgency that cares only about what it can get. The latest health care drama thus underscores one of the implications of the rightward shift of the Democratic Party.
Third, the convergence of the two parties contributes to a perceived sense of political gridlock and decay, especially among rural whites, a feeling that nothing can get done in Washington. Moreover, with the decimation of the social democrats who used to make up the bulk of the Democratic Party, many rural people and union members find the channels for expressing their interests largely cut off. This breeds a sense that neither party cares about them anymore, and it leaves this segment of the American population highly susceptible to populist agitation and protectionist rhetoric. These people find hope in someone like Drumpf, whose promises to “drain the swamp” and to confront “job-stealing China” speak to their hearts. The white working class supported Drumpf because they saw in him a solution both to their downtrodden economic conditions and to much of the political illness of Washington. The Washington Consensus, promulgated by Washington elites as the West’s guiding economic ideal for more than two decades, was dealt a severe blow by his rise.
Conclusion
From the machine learning results I obtained from the 1984 voting data, I was able to tie the growth and ascendancy of the centrist Democrats to current political developments. As Mark Twain is said to have quipped, “history doesn’t repeat itself, but it does rhyme”; history does move in cycles. With Drumpf’s victory, many millennials are becoming more politically conscious. They are beginning to pay attention to politics and to the part of America that is not usually visible or well represented. With this growing political consciousness, it will be interesting to see where we go from here.
Source: “A Machine Learning Approach to 1984 Congressional Voting Data”, by Zhikai Chen, Medium, 2018-03-21. https://medium.com/s/story/a-machine-learning-approach-to-1984-congressional-voting-data-1a88fcfab30
A Fun, Yet Serious Look at the Challenges we face in Building Neural Machine Translation Models
This is a guest post by Gábor Ugray on NMT model building challenges and issues. Don’t let the playful tone and general sense of frolic in the post fool you. If you look more closely, you will see that it very clearly defines an accurate list of challenges that one might come upon when one ventures into building a Neural MT engine. This list of problems is probably the exact list that the big boys (Microsoft, Facebook, Google, and others) have faced some time ago. I have previously discussed how SYSTRAN and SDL are solving these problems. While this post describes an experimental system very much from a do-it-yourself perspective, production NMT engines might differ only by the way in which they handle these various challenges.
This post also points out a basic issue about NMT — while it is clear that NMT works, often surprisingly well, it is still very unclear what predictive patterns are learned, which makes it hard to control and steer. Most (if not all) of the SMT strategies, like weighting, language models, and terminology override, don’t really work here. Data and algorithmic strategies might drive improvement, but linguistic strategies seem harder to implement.
Silvio Picinini at eBay also recently compared output from an NMT experiment and has highlighted his findings here: https://www.linkedin.com/pulse/ebay-mt-language-specialists-series-comparing-nmt-smt-silvio-picinini
While it took many years before an open source toolkit (Moses) appeared for SMT, we see that NMT already has four open source experimentation options: OpenNMT, Nematus, TensorFlow NMT, and Facebook’s Caffe2. It is possible the research community at large may come up with innovative and efficient solutions to the problems we see described here. Does anybody still seriously believe that LSPs can truly play in this arena building competitive NMT systems by themselves? I doubt it very much and would recommend that LSPs start thinking about which professional MT solution to align with, because NMT indeed can help build strategic leverage in the translation business if true expertise is involved. The problem with DIY (Do It Yourself) is that having multiple toolkits available is not of much use if you don’t know what you are doing.
Discussions on NMT also seem to be often accompanied by people talking about the demise of human translators (by 2029 it seems). I remain deeply skeptical, even though I am sure MT will get pretty damned good on certain kinds of content, and believe that it is wiser to learn how to use MT properly, than dismiss it. I also think the notion of that magical technological convergence that they call Singularity is kind of a stretch. Peter Thiel (aka #buffoonbuddypete) is a big fan of this idea and has a better investment record than I do, so who knows. However, I offer some quotes from Steven Pinker that have the sonorous ring of truth to them:
“There is not the slightest reason to believe in a coming singularity. Sheer processing power [and big data] is not a pixie dust that magically solves all your problems.” Steven Pinker
Elsewhere, Pinker also says:
“… I’m skeptical, though, about science-fiction scenarios played out in the virtual reality of our imaginations. The imagined futures of the past have all been confounded by boring details: exponential costs, unforeseen technical complications, and insuperable moral and political roadblocks. It remains to be seen how far artificial intelligence and robotics will penetrate into the workforce. (Driving a car is technologically far easier than unloading a dishwasher, running an errand, or changing a baby.) Given the tradeoffs and impediments in every other area of technological development, the best guess is: much farther than it has so far, but not nearly so far as to render humans obsolete.”
The emphasis below is all mine.
=====
I don’t know about you, but I’m in a permanent state of frustration with the flood of headlines hyping machines that “understand language” or are developing human-like “intelligence.” I call bullshit! And yet, undeniably, a breakthrough is happening in machine learning right now. It all started with the oddball marriage of powerful graphics cards and neural networks. With that wedding party still in full swing, I talked Terence Lewis[*] into an even more oddball parallel fiesta. We set out to create a Frankenstein translator, but after running his top-notch GPU on full power for four weeks, we ended up with an astonishingly good translator and an astonishingly stupid bilingual chatbot.
And while we’re at it: Terence is obviously up for mischief, but more importantly, he offers a completely serious English<>Dutch machine translation service commercially. There is even a plugin available for memoQ, and the MyDutchPal system solves many of the MT problems that I’m describing later in this post.
And yet the plane is aloft! A fitting metaphor for AI’s state of the art.
Source: the internets. So, check out the live demo below this image, then read on to understand what on earth is going on here.
You can try the NMT engine at this link on the original posting.
It all started in May when I read Adrian Colyer’s[2] summary of the article Understanding deep learning requires re-thinking generalization[3]. The proposition of Chiyuan Zhang & co-authors is so fascinating and relevant that I’ll just quote it verbatim:
What is it that distinguishes neural networks that generalize well from those that don’t? […]
Generalisation is the difference between just memorising portions of the training data and parroting it back, and actually developing some meaningful intuition about the dataset that can be used to make predictions.
The authors describe how they set up a series of original experiments to investigate this. The problem domain they chose is not machine translation, but another classic of deep learning: image recognition. In one experiment, they trained a system to recognize images — except they garbled the data set, randomly shuffling labels and photos. It might have been a panda, but the label said bicycle, and so on, 1.2 million times over. In another experiment, they even replaced the images themselves with random noise. The paper’s conclusion is… ambiguous. Basically, it shows that neural networks will obediently memorize any random input (noise), but as for the networks’ ability to generalize from a real signal, well, we don’t really know. In other words, the pilot has no clue what they are doing, and yet the plane is still flying, somehow. I immediately knew that I wanted to try this exact same thing, but with a purpose-built neural MT system. What better way to show that no, there’s no talk about “intelligence” or “understanding” here! We’re really dealing with a potent pattern-recognition-and-extrapolation machine. Let’s throw a garbled training corpus at it: genuine sentences and genuine translations, but matched up all wrong. If we’re just a little bit lucky, it will recognize and extrapolate some mind-bogglingly hilarious non-patterns, our post about it will go viral, and comedians will hate us.
OK, let’s build a Frankenstein translator by training an NMT engine on a corpus of garbled sentence pairs. But wait…
What language pair should it be? Something that’s considered “easy” in MT circles. We’re not aiming to crack the really hard nuts; we want a well-known nut and paint it funny. The target language should be English, so you, dear reader, can enjoy the output. The source language… no. Sorry. I want to have my own fun too, and I don’t speak French. But I speak Spanish!
Crooks or crooked cucumbers? There is an abundance of open-source training data[4] to choose from, really. The Hansards are out (no French), but the EU is busy releasing a relentless stream of translated directives, rules and regulations, for instance. It’s just not so much fun to read bureaucratese about cucumber shapes. Let’s talk crooks and romance instead! You guessed right: I went for movie subtitles. You won’t believe how many of those are out there, free to grab.
Too much goodness. The problem is, there are almost 50 million Spanish-English segment pairs in the OpenSub2016[5] corpus. NMT is known to have a healthy appetite for data, but 50 million is a bit over the line. Anything for a good joke, but we don’t have months to train this funny engine. I reduced it to about 9.5 million segment pairs by eliminating duplicates and keeping only the ones where the Spanish was 40 characters or longer. That’s still a lot, and this will be important later.
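The reduction step described above can be sketched in a few lines of Python. This is an illustration only: the `(source, target)` tuple format and function name are assumptions, not the actual file handling used in the experiment.

```python
# Sketch of the corpus reduction described above: drop duplicate
# segment pairs and keep only pairs whose Spanish source is at
# least 40 characters long.
def reduce_corpus(pairs, min_src_chars=40):
    """pairs: iterable of (spanish, english) tuples."""
    seen = set()
    kept = []
    for src, tgt in pairs:
        if (src, tgt) in seen:
            continue  # eliminate exact duplicates
        seen.add((src, tgt))
        if len(src) >= min_src_chars:  # keep long-enough Spanish segments
            kept.append((src, tgt))
    return kept
```

The same two filters (dedupe, then a length cutoff) took the corpus from roughly 50 million pairs down to about 9.5 million.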
Straight and garbled. At this stage, we realized we actually needed two engines. The funny translator is the one we’re really after, but we should also get a feel for how a real model, trained from the real (non-garbled) data would perform. So I sent Terence two large files instead of one.
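A minimal way to produce the garbled file is to shuffle only the English side, so every Spanish sentence gets paired with a genuine but unrelated translation. A sketch under that assumption (the actual file preparation may have differed):

```python
import random

def garble(pairs, seed=42):
    """Mismatch sources and translations by shuffling the target side only."""
    sources = [s for s, _ in pairs]
    targets = [t for _, t in pairs]
    random.Random(seed).shuffle(targets)  # seeded, so reproducible
    return list(zip(sources, targets))
```

Note that both files still contain fluent Spanish and fluent English; only the correspondence between the two sides is destroyed.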
The training. I am, of course, extremely knowledgeable about NMT, as far as bar conversations with attractive strangers go. Terence, on the other hand, has spent the past several months building a monster of a PC with an Nvidia GTX 1070 GPU, becoming a Linux magician, and training engines with the OpenNMT framework[6]. You can read about his journey in detail on the eMpTy Pages blog[7]. He launched the training with OpenNMT’s default parameters: standard tokenization, 50k source and target vocabulary, 500-node, 2-layer RNN in both encoder and decoder, 13 epochs. It turned out one epoch took about one day, and we had two models to train. I went on vacation and spent my days in suspense, looking roughly like this:
The “straight” model was trained first, and it would be an understatement to say I was impressed when I saw the translations it produced. If you’re into that sort of thing, the BLEU score is a commendable 32.10, which is significantly higher than, well, any significantly lower value.[8] The striking bit is the apparent fluency and naturalness of the translations. I certainly didn’t expect a result like this from our absolutely naïve, out-of-the-box, unoptimized approach. Let’s take just one example:
La doctora no podía participar en la conferencia, por eso le conté los detalles importantes yo mismo. — -
The doctor couldn’t participate in the conference, so I told her the important details myself.
Did you spot the tiny detail? It’s the feminine pronoun her in the translation. The Spanish equivalent, le, is gender-neutral, so it had to be extrapolated from la doctora — and that’s pretty far away in the sentence! This is the kind of thing where statistical systems would probably just default to masculine. And you can really push the limits. I added stuff to make that distance even longer, and it’s still her in the impossible sentence, La doctora no podía participar en la conferencia que los profesores y los alumnos habían organizado en el gran auditorio de la universidad para el día anterior, además no nos quedaba mucho tiempo, por eso le conté los detalles importantes yo mismo.
But once our enthusiasm is duly curbed, let’s take a closer look at the good, the bad and the ugly. If you purposely start peeling off the surface layers, the true shape of the emperor’s body begins to emerge. Most of these wardrobe malfunctions are well-known problems with neural MT systems, and much current research focuses on solving or working around them.
Unknown words. In their plain vanilla form, neural MT systems have a severe limitation on the vocabulary (particularly target-language vocabulary) that they can handle. 50 thousand words is standard, and we rarely, if ever, see systems with a vocabulary over 100k. Unless you invest extra effort into working around this issue, a vanilla system like ours produces a lot of unks[9], like here:
Tienes que invitar al ornitólogo también. — -
You have to invite the unk too.
This is a problem with fancy words, but it gets even more acute with proper names, and with rare conjugations of not-even-so-fancy words.
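The vocabulary cutoff behind this works roughly as follows: count token frequencies over the training corpus, keep the top 50k, and map everything else to a single unk token. A simplified sketch (real toolkits also reserve special tokens such as sentence markers):

```python
from collections import Counter

def build_vocab(tokenized_corpus, size=50000):
    """Keep the `size` most frequent tokens seen in the corpus."""
    counts = Counter(tok for sent in tokenized_corpus for tok in sent)
    return {tok for tok, _ in counts.most_common(size)}

def replace_oov(sentence, vocab, unk="<unk>"):
    """Map every out-of-vocabulary token to the unk symbol."""
    return [tok if tok in vocab else unk for tok in sentence]
```

Any word that never made the frequency cut, like "ornitólogo" in subtitles, can only ever come out as unk.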
Omitted content. Sometimes, stuff that is there in the source simply goes AWOL in the translation. This is related to the fact that NMT systems attempt to find the most likely translation, and unless you add special provisions, they often settle for a shorter output. This can be fatal if the omitted word happens to be a negation. In the sentence below, the omitted part (in red) is less dramatic, but it’s an omission all the same.
Lynch trabaja como siempre, sin orden ni reglas: desde críticas a la televisión actual a sus habituales reflexiones sobre la violencia contra las mujeres, pasando por paranoias mitológicas sobre el bien y el mal en la historia estadounidense. — -
Lynch works as always, without order or rules: from criticism to television on current television to his usual reflections about violence against the women, going through right and wrong in American history.
Hypnotic recursion. Very soon after Google Translate switched to Neural MT for some of its language combinations, people started noticing odd behaviors, often involving loops of repeated phrases.[10] You see one such case in the example above, highlighted in green: that second television seems to come out of thin air. Which is actually pretty adequate for Lynch, if you think about it.
Learning too much. Remember that we’re not dealing with a system that “translates” or “understands” language in any human way. This is about pattern recognition, and the training corpus often contains patterns that are not linguistic in nature.
Mi hermano estaba conduciendo a cien km/h. — -
My brother was driving at a hundred miles an hour.
Mi hermano estaba conduciendo a 100 km/h. — -
My brother was driving at 60 miles an hour.
Since when is a mile a translation of kilometer? And did the system just learn to convert between the two? To some extent, yes. And that’s definitely not linguistic knowledge. But crucially, you don’t want this kind of arbitrary transformation going on in your nuclear power plant’s operating manual.
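For reference, the true conversion is a fixed ratio: 100 km/h is about 62 mph, so the “60 miles an hour” output above is a memorized rounding from the training data, not arithmetic. A quick sanity check in plain Python, no model involved:

```python
KM_PER_MILE = 1.609344  # exact, by definition of the international mile

def kmh_to_mph(kmh):
    """Convert kilometers per hour to miles per hour."""
    return kmh / KM_PER_MILE

# 100 km/h is ~62 mph, not the 60 the model produced;
# 85 km/h is ~53 mph, not 85.
```

A rule-based or statistical pipeline would simply copy the number through; the neural model instead pattern-matches whatever number-pair co-occurrences it saw in subtitles.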
Numbers. You will have guessed by now: numbers are a problem. There are way too many of them critters to fit into a 50k-vocabulary, and they often behave in odd ways in bilingual texts attested in the wild. Once you stray away from round numbers that probably occur a lot in the training corpus, trouble begins.
Mi hermano estaba conduciendo a 102 km/h. — -
My brother was driving at unk.
Mi hermano estaba conduciendo a 85 km/h. — -
My brother was driving at 85 miles an hour.
Finally, data matters. Our system might be remarkably good, but it’s remarkably good at subtitlese. That’s all it’s ever seen, after all. In Subtitle Land, translations like the one below are fully legit, but they won’t get you far in a speech writing contest for the Queen.
No le voy a contar a la profesora. — -
I’m not gonna tell the teacher.
Now on to the “crazy” model! I made a tremendous mental effort to keep my expectations low, but secretly, at the bottom of my heart, I was hoping for the kind of nonlinear oddity that you get if you start inputting жо into Google Translate[11]:
жо > Jo жожо > Jojo жожожо > Joess жожожожо > Reverently жожожожожожо > Rejoicing жожожожожожожожо > Reassuringly
жожожожожожожожожо > Reaping thee
Compared to this, our crazy system is somewhat underwhelming.
whisky > Thought! sangría > Thought!? Necesito un whisky. > I don’t know what you’re talking about. жо > . honestly guess guess guess guess gues Malkovich > . honestly guess guess guess guess guess
Malkovich Malkovich > You know, I don’t know what you’re talking about.
Let’s just put it this way: I’ve heard funnier jokes before. And those jokes tended to be a lot less repetitive, too. OK, with a bit of luck you do get a few highlights, in the “free self-help advice for nuts” kind of way, but that’s about it.
En este día de Julio, me gustaría escribir algunas reflexiones sobre como me siento, en relación con mi mismo, que es una de las relaciones más difíciles y complejas que una persona debe llevar a adelante, y en relación con los demás… — -
I’m sure you’re aware of the fact that you’re the only one who’s been able to find out what’s going on, and I don’t want you to think that I’m the only one who can help you.
There seem to be two rules to this game:
What you input doesn’t matter a whole lot. The only thing that makes a real difference is how long it is.
The crazy “translations” have nothing to do with the source. They are invariably generic and bland. They could almost be a study in noncommittal replies.
And that last sentence right there is the key, as I realized while I was browsing the OpenNMT forums[12]. It turns out people are using almost the same technology to build chatbots with neural networks. If you think about it, the problem can indeed be defined in the same terms. In translation, you have a corpus of source segments and their translations; you collect a lot of these and train a system to give the right translation for the right source. In a chatbot, your segment pairs are prompts and responses, and you train the system to give the right response to the right prompt.
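Seen purely as data, the two tasks are interchangeable: both reduce to (input sequence, output sequence) pairs fed to the same kind of model. A toy illustration (the example pairs here are invented):

```python
# Translation and chatbot training data share the same shape.
translation_pairs = [
    ("Necesito un whisky.", "I need a whisky."),
    ("No le voy a contar a la profesora.", "I'm not gonna tell the teacher."),
]
chatbot_pairs = [
    ("How old are you?", "Thirty-two."),
    ("Guess who came to yoga class today.", "Poor Mary!"),
]

def to_seq2seq_files(pairs):
    """Split pairs into parallel source/target line lists, as NMT toolkits expect."""
    return [s for s, _ in pairs], [t for _, t in pairs]
```

The training machinery cannot tell which dataset it is looking at; the difference, as the next paragraphs argue, is that translation pairs are strongly correlated while prompt/response pairs are not.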
Except, this chatbot thing doesn’t seem to be working as well as MT. To quote the OpenNMT forum: People call it the “I Don’t Know” problem and it is particularly problematic for chatbot type datasets.
For me, this is a key (and unanticipated) take-away from the experiment. We set out to build a crazy translator, but unwittingly we ended up solving a different problem and created a massively uninspired bilingual chatbot. Beyond any doubt, the more important outcome for me is the power of neural MT. The quality of the “straight” model that we built drastically exceeded my expectations, particularly because we didn’t even aim to create a high-quality system in the first place. We basically achieved this with an out-of-the-box tool, the right kind of hardware, and freely available data. If that is the baseline, then I am thrilled by the potential of NMT with a serious approach. The “crazy” system, in contrast, would be a disappointment, were it not for the surprising insight about chatbots. Let’s pause for a moment and think about these. They are all over the press, after all, with enthusiastic predictions that in a very short time, they will pass the Turing test, the ultimate proof of human intelligence.
Well, it don’t look that way to me. Unlike translated sentences, prompts and responses don’t have a direct correlation. There is something going on in the background that humans understand, but which completely eludes a pattern recognition machine. For a neural network, a random sequence of letters in a foreign language is as predictable a response as a genuine answer given by a real human in the original language. In fact, the system comes to the same conclusion in both scenarios: it plays it safe and produces a sequence of letters that’s a generally probable kind of thing for humans to say.
Let’s take the following imaginary prompts and responses:
How old are you?
No, seriously, I took the red door by mistake.
Guess who came to yoga class today.
Poor Mary!
It would be a splendid exercise in creative writing to come up with a short story for both of them. Any of us could do it in a breeze, and the stories would be pretty amusing. There is an infinite number of realities where these short conversations make perfect sense to a human, and there is an infinite number of realities where they make no sense at all. In neither case can the response be predicted, in any meaningful way, from the prompt or the preceding conversation. Yet that is precisely the space where our so-called artificial “intelligence” currently lives. The point is, it’s ludicrous to talk about any sort of genuine intelligence in a machine translation system or a chatbot based on recurrent neural networks with a long short-term memory.
Comprehension is that elusive thing between the prompts and the responses in the stories above, and none of today’s technologies contains a metaphorical hidden layer for it. On the level our systems comprehend reality, a random segment in a foreign language is as good a response as Poor Mary!
Terence Lewis, MITI, entered the world of translation as a young brother in an Italian religious order, where he was entrusted with the task of translating some of the founder’s speeches into English. His religious studies also called for a knowledge of Latin, Greek, and Hebrew. After some years in South Africa and Brazil, he severed his ties with the Catholic Church and returned to the UK where he worked as a translator, lexicographer[13] and playwright. As an external translator for Unesco, he translated texts ranging from Mongolian cultural legislation to a book by a minor French existentialist. At the age of 50, he taught himself to program and wrote a rule-based Dutch-English machine translation application which has been used to translate documentation for some of the largest engineering projects in Dutch history. For the past 15 years, he has devoted himself to the study and development of translation technology. He recently set up MyDutchPal Ltd to handle the commercial aspects of his software development. He is one of the authors of 101 Things a Translator Needs to Know[14].
[1] The live demo is provided “as is”, without any guarantees of fitness for purpose, and without any promise of either usefulness or entertainment value. The service will be online for as long as I have the resources available to run it (a few weeks probably). Oh yes, I’m logging your queries, and rest assured, I will be reading them all. I am tremendously curious to see what you come up with, and I want to enjoy all the entertaining or edifying examples that you find.
[2] the morning paper. an interesting/influential/important paper from the world of CS every weekday morning, as selected by Adrian Colyer.
blog.acolyer.org/
[3] Understanding deep learning requires rethinking generalization. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals. ICLR 2017 conference submission.
openreview.net/forum?id=Sy8gdB9xx&noteId=Sy8gdB9xx
[4] OPUS, the open parallel corpus. Jörg Tiedemann.
opus.lingfil.uu.se/
[5] OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. Pierre Lison, Jörg Tiedemann.
stp.lingfil.uu.se/~joerg/paper/opensubs2016.pdf
[6] OpenNMT: Open-Source Toolkit for Neural Machine Translation.
arxiv.org/abs/1701.02810
opennmt.net/
[7] My Journey into “Neural Land”. Guest Post by Terence Lewis on the eMpTy Pages blog.
kv-emptypages.blogspot.com/2017/06/my-journey-into-neural-land.html
[8] Never trust anyone who brags about their BLEU scores without giving any context. I’m not giving you any context, but you have the live demo to see the output for yourself. Also, a few words about this score. I calculated it on a validation set that contains 3k random segment pairs removed from the corpus before training. So they are in-domain sentences, but they were not part of the training set. The score was calculated on the detokenized text, which is established MT practice, except in NMT circles, who seem to prefer the tokenized text, for reasons that still escape me. And if you want to max out on the metrics fetish, the validation set’s TER score is 47.28. There. I said it.
[9] Don’t get me wrong, I’m a great fan of unks. They can attend my parties anytime, even without an invitation. If I had a farm I would be raising unks because they are the cutest creatures ever.
[10] Electric sheep. Mark Liberman on Language Log.
languagelog.ldc.upenn.edu/nll/?p=32233
[11] From the same Language Log post quoted previously. Translations were retrieved on August 6, 2017; they are likely to change when Google updates their system.
[12] English Chatbot advice
forum.opennmt.net/t/english-chatbot-advice/32/5
[13] Harrap’s English-Brazilian Portuguese business dictionary. Terence Lewis, Lígia Xavier, Cláudio Solano. [link]
[14] 101 Things a Translator Needs to Know. ISBN 978-91-637-5411-1
www.101things4translators.com
Gábor Ugray is co-founder of Kilgray, creators of the memoQ collaborative translation environment and TMS. He is now Kilgray’s Head of Innovation, and when he’s not busy building MVPs, he blogs at jealousmarkup.xyz and tweets as @twilliability.
Originally published at kv-emptypages.blogspot.com on September 1, 2017.
Source: “A Fun, Yet Serious Look at the Challenges we face in Building Neural Machine Translation Models”, by K Vashee, Medium, 2018-05-09. https://medium.com/s/story/a-fun-yet-serious-look-at-the-challenges-we-face-in-building-neural-machine-translation-models-1a8ad18dcdf2
The following post was published 2018-03-29.
4 Ways Artificial Intelligence Will Affect Social Media Monitoring
Marketers have a range of tools at their disposal for understanding customers and prospects on social media, and these tools are continuously improving to allow better monitoring and analysis. The use of Artificial Intelligence (AI) to automate marketing tasks is one such improvement. AI affects social media monitoring in a number of ways, but the most obvious advantages are improved accuracy and reduced human effort. In this article, we look at four ways in which AI will affect these marketing processes, enabling marketers to understand customers through intelligent insights as never before.
The Role of Artificial Intelligence in Marketing
Artificial Intelligence can be defined as the ability of a computer system to understand the real world on its own, allowing it to perform tasks that would normally require human effort and intellect. In marketing, AI is an extremely powerful tool. In the near future, we can expect marketing tools to employ AI features through which they interpret and analyze information about customers, such as the products they like or how they spend their time online. Though there are numerous applications of AI in marketing, all of them serve a single purpose: to better understand the customer and make smarter marketing decisions.
By understanding the customer, brands can determine relevant marketing messages, find the right influencers, refine their content marketing strategy and gain insightful information about their customers. A thorough understanding of customers guarantees an improved and efficient social media marketing strategy, and this is where AI plays a vital role. AI will also play a crucial role in social media monitoring tools by enabling new features for smart suggestions and intelligent decisions based on the analyzed data (e.g. mentions) collected for your brand or product. The following sections describe four ways AI will soon affect social media monitoring processes, helping marketers make smarter decisions.
#1 Finding and utilizing digital influencers
Sometimes you find and pay influencers to promote a product, but the results turn out to be ineffective. This may be because the influencer’s approach does not resonate with the audience. It is therefore important not only to use influencer marketing but to find the right digital influencers.
Digital influencers are very important to marketers because they expand a brand’s reach to new audiences. However, having influencers does not mean entrusting your brand’s social media marketing to them entirely. Remember that just as an influencer can help your brand grow, they can hurt it as well, and sometimes even strenuous efforts from an influencer turn out to be ineffective.
In the past, finding an influencer and matching their persona with your brand was a rigorous process. Brands relied on word of mouth to discover influencers, but the situation has changed.
As AI in social media monitoring improves, it will become easier for brands to find the right influencers. The process of finding them will become less resource-intensive, less time-consuming, and more accurate. Certain algorithms already analyze influencers’ followers, posts and interactions to determine whether they are suitable for a brand. AI will thus allow marketers to find the right influencers using their social media monitoring tool.
#2 Knowing the right time and channel to post
Sometimes you could have great content available but the time, at which you post the content could be wrong. This can mean that the content does not receive the engagement that it deserves. It is not only important to know what to post but also when and how to post it.
The when is about analyzing current social media trends and understanding exactly when a piece of content could go viral. There is an enormous number of posts on social media at any given time, so it is natural for a post by your brand to get lost in the noise. If you post content at the wrong time, it may receive customer engagement for just a few seconds.
The how is about understanding the social media channels: which channels are available to your brand, and which will receive the most engagement for a given piece of content. For instance, a topic trending on Facebook is not necessarily trending on Twitter, so posting that content on Twitter may not bring the engagement you are looking for.
Choosing the right time and platform for posting content is a process that can be simplified by using AI via a social media monitoring tool. Tools that have AI features incorporated in them can analyze the data about the reach of posts with their time and platform to come up with an effective content posting strategy.
#3 Look for opportunities for real-time interactions
Nowadays, social media is the most approachable way of reaching out to individuals. Whether you are looking to grab the attention of prospects or to answer queries from your customers, social media is the way to go. However, social media, unlike other forms of media, thrives on real-time interactions. This means you cannot wait days before responding to customers. You need to get back to them immediately.
Social media monitoring tools can help you identify opportunities for interacting with customers by analyzing mentions of your brand. AI features in a social monitoring tool allow you to determine when and how to respond to customers. In the upcoming year, we can expect AI to be even more involved in the customer service process. From chatbots to automated response systems, you can expect everything to become more intelligent.
It is worth mentioning that despite massive improvements in AI technologies, they are still not capable of replacing human connection. However, brands can use the AI tools available to initiate conversations with customers and reduce response times.
What are the ways in which Artificial Intelligence can help with real-time interaction?
First, by helping brands identify opportunities to engage with customers. You can easily get overwhelmed by the number of social media posts about (or related to) your brand. Moreover, it can be unclear which posts are actually related to your brand, because not all industry keywords will be relevant to your marketing message. For example, if you are a marketer for Sherlock Technologies, monitoring the keyword Sherlock can lead you to posts about the British television series, Sherlock.
AI allows you to automate the process of skimming through these social media posts, making sure you do not miss out on key opportunities to engage with customers. With intelligent social monitoring, you can leave the sorting of posts to the tool, letting marketers focus instead on responding to customers.
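As a toy sketch of this kind of automated sorting, a simple text classifier can learn to separate brand mentions from unrelated chatter. The “Sherlock Technologies” posts, labels, and wording below are invented for illustration and are not from any real monitoring tool:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hand-labelled examples: posts about the (fictional) Sherlock Technologies
# brand vs. posts about the Sherlock TV series.
posts = [
    "Sherlock app update broke my login",          # brand
    "loving the new Sherlock software release",    # brand
    "Sherlock support fixed my account fast",      # brand
    "Cumberbatch was brilliant in Sherlock",       # tv
    "rewatching Sherlock season two tonight",      # tv
    "best Sherlock episode of the series",         # tv
]
labels = ["brand", "brand", "brand", "tv", "tv", "tv"]

# Bag-of-words + Naive Bayes: a minimal stand-in for the 'intelligent
# sorting' a monitoring tool would perform at scale.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(posts, labels)

new_post = "the Sherlock app update is great"
print(model.predict([new_post])[0])  # classified as a brand mention
```

In practice a production tool would train on far more data and richer features, but the workflow (label a sample, fit a model, let it triage the stream) is the same.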
No matter how many people you assign to sorting social media posts, it is nearly impossible to keep up without the assistance of automation. AI can play a key role in sorting these posts, and we can expect AI features to improve such processes even further.
In 2018 and the upcoming years, AI will be able to make informed calculations, perform analytics, and even make recommendations on its own. With AI, marketers will be able to efficiently identify the most relevant posts that they should interact with.
Another way AI can be used is to find the right way to communicate with individual leads. You can mine your CRM tools for rich insights into how exactly to interact with prospective leads, allowing marketers to improve conversion rates and generate more sales.
#4 Making use of images
There are often times when a customer puts up an image of a brand product, but the brand fails to notice it. This happens because images are more difficult to find and analyze than text-based content, even with dedicated search tools. According to statistics, users upload 350 million images on Facebook and 95 million images and videos on Instagram every day!
This is a massive amount of data for brands to analyze. Sorting and filtering this many images is impossible through traditional mechanisms (such as hashtags and search engines). Therefore, AI is the way to go.
For efficient social media monitoring, brands need to go through and learn from all user-generated content, including images. In 2018, we can expect AI tools to be able to recognize brand-related images and draw meaning from them. Through this, brands can gain in-depth insights into customers’ feedback on their products. AI will enable brands to recognize possible sales and cross-promotional opportunities.
With the information gathered from images, marketers can capture the available opportunities and release targeted content for customers. For instance, if a customer regularly posts images of one of your products, you can send them targeted promotions or even just a message of appreciation showing them that you care. This improves customer loyalty and encourages them to post more about your brand.
Moreover, analyzing the visual content that customers post can also help you develop detailed buyer personas, which provide useful information about the groups of people that buy from you. According to one case study, buyer personas yielded a 171% increase in marketing-generated revenue, underlining the importance of developing well-maintained, thorough buyer personas for your customers.
Successful examples
In the last section, we discussed how marketers can make use of AI for improving their marketing efforts. Here we highlight a few examples of brands that have successfully done exactly this:
Blossom by New York Times:
The New York Times is considered a content king in the United States, with readership and influence extending across the globe. The company posts hundreds of stories on its website daily. At this scale, it is very difficult to handpick the stories that will receive maximum engagement, so the New York Times makes use of Blossom.
Blossom is an intelligent Slack bot that predicts how articles will perform on social media. The bot can also make story suggestions to editors by analyzing the massive amount of story content and data available to it. It uses metrics such as Facebook post engagement and view counts to determine which stories to post next. Editors and writers at the New York Times can ask Blossom directly what to write or share next!
Image credits: Niemanlab
Sephora chatbot
Sephora is a France-based chain of cosmetics stores featuring almost 300 brands, and one of the companies that adopted AI early in its marketing game. Sephora has chatbots set up across various social media platforms, including Facebook Messenger and Kik. The Sephora chatbot is advanced enough to give out beauty advice on its own! It is capable of promoting both content (such as articles on beauty tips) and products, and can ask you questions, quiz you to learn your preferences, and suggest which products to buy!
Image credits: Fashion & Mash
These examples show how AI has positively influenced social media marketing. In this modern age, AI features and applications have become vital tools for online marketers.
To Sum Up
Artificial Intelligence has been on the rise over the past few years, with all industries catching up on the latest technologies. This has been the case with social media monitoring as well. The introduction of ‘intelligent’ features in social media monitoring tools lets marketers focus on the gist of marketing itself rather than the nitty-gritty of data analysis. Rather than collecting data and organizing it into valuable information, marketers can focus on drawing meaning from customer data for improved decision making.
There are numerous advancements being made in this field. Out of these, we have highlighted 4 important ways in which AI will directly influence social media monitoring. These include effectively finding digital influencers, scheduling content more efficiently, identifying opportunities for real-time interactions and discovering the meaning behind visuals shared by customers.
Regardless of how it affects social media monitoring, we can be certain about one thing. AI will add value to marketers’ information and allow them to make smarter, more reliable decisions for their business.
Originally posted on Mentionlytics: https://www.mentionlytics.com/blog/4-ways-artificial-intelligence-will-affect-social-media-monitoring
|
4 Ways Artificial Intelligence Will Affect Social Media Monitoring
| 0
|
4-ways-artificial-intelligence-will-affect-social-media-monitoring-1a8c16041180
|
2018-03-29
|
2018-03-29 07:38:54
|
https://medium.com/s/story/4-ways-artificial-intelligence-will-affect-social-media-monitoring-1a8c16041180
| false
| 2,144
| null | null | null | null | null | null | null | null | null |
Social Media Marketing
|
social-media-marketing
|
Social Media Marketing
| 30,275
|
Mentionlytics
|
The most easy-to-use Web & Social Media Monitoring helps you find what everyone is saying about your brand! Try for Free: http://www.mentionlytics.com/
|
594e33622665
|
mentionlytics
| 1,422
| 1,268
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-14
|
2018-08-14 09:17:11
|
2018-08-14
|
2018-08-14 10:06:45
| 2
| false
|
en
|
2018-08-15
|
2018-08-15 11:03:37
| 1
|
1a8db944a2e6
| 4.390881
| 1
| 0
| 0
|
Football has taken a number of successful steps forward in recent years, with the introduction of goal-line technology and vanishing spray…
| 5
|
Can AI solve the VAR headache?
Football has taken a number of successful steps forward in recent years, with the introduction of goal-line technology and vanishing spray. So why has video assisted refereeing (VAR) been so controversial from the start?
“Ridiculous and Shambolic” Danny Rose
“Ludicrous VAR penalty… Mad” Gary Lineker
“VAR is going to absolutely ruin football” Graeme Le Saux
Tradition vs. Technology: check out this video for both sides of the argument.
These are all quotes from famous football pundits about the highly controversial video review system that has been used in this year’s FIFA World Cup. Viewers were left waiting whilst the referee stood for minutes in the centre circle and video referees miles away were hunched over screens trying to make up their minds under intense time pressure. Players and managers alike were screaming at the referee and waving their hands in the shape of a TV screen.
Manchester United legend, Gary Neville, sums up the problem perfectly:
“There are 40 camera angles and you might say there are only 10 camera angles you need to look at, but you’re asking the VAR official, with two mates alongside him, to make a decision in 10 or 15 seconds. I’m not sure they will be able to select the angles quickly enough to get the decision back to the referee before the game has been restarted.”
Surely there is a better way?
Fan experience is crucial for the ongoing success of football. VAR has caused anger, boredom and disbelief.
Artificial intelligence could be the solution.
So, in comes one of the most exciting and disruptive areas of technology: artificial intelligence (and more specifically computer vision). Computer vision is already being used to identify objects, detect skin conditions and analyse medical imagery… but how could a series of algorithms aid VAR?
Well, just as AI is helping businesses around the world make better decisions, it can also help augment and enhance the decision-making process for VAR referees. The most feasible method of improving the speed and accuracy of VAR would be to prioritise the angles on display to officials, so that they only see the most relevant ones. Fewer angles = faster decisions and/or more time to make accurate decisions.
Like Gary said, only 10 of the angles might be useful, so let’s help the referees by cutting out the irrelevant 30.
Algorithms can be trained to analyse huge quantities of data and make accurate predictions.
How this could work in practice (in layman’s terms)
Machine learning is a subset of artificial intelligence that uses statistical techniques to allow computers to learn. In this case, machine learning can build an algorithm that learns which angles are relevant and which are not. You would do this by having subject matter experts (here, experienced referees) go through historic footage and tag which angles are useful and which are not. The computer would start to learn the key factors that determine whether an angle is relevant and apply them to new footage in real time.
Examples of factors that the computers could identify include whether the whole ball is viewable, how clear the footage is, whether two body parts from different players can be seen touching, etc. The beauty of machine learning is that you don’t have to tell it the factors, it will learn them itself and it will identify factors that we (as humans) might not have even realised.
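A minimal sketch of this idea, using made-up angle features (ball visibility, clarity score) and synthetic training labels rather than real match footage, could rank candidate angles by a learned relevance score:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training data: each camera angle described by two invented
# features -- whether the ball is fully visible (0/1) and a footage
# clarity score in [0, 1]. Label 1 = angle was useful to the referee.
X_train = np.array([
    [1, 0.9], [1, 0.8], [1, 0.7], [1, 0.6],   # tagged useful
    [0, 0.2], [0, 0.3], [0, 0.1], [0, 0.4],   # tagged not useful
])
y_train = np.array([1, 1, 1, 1, 0, 0, 0, 0])

clf = LogisticRegression().fit(X_train, y_train)

# New incident: four candidate angles; show officials only the top two
# by predicted probability of being useful, instead of all of them.
angles = np.array([[1, 0.85], [0, 0.15], [1, 0.65], [0, 0.35]])
scores = clf.predict_proba(angles)[:, 1]
top2 = np.argsort(scores)[::-1][:2]
print(top2)  # indices of the two most promising angles
```

A real system would learn these features from raw video rather than have them hand-engineered, which is exactly where the difficulties discussed below come in.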
So in theory, AI could be used to reduce the number of angles shown to officials by roughly 75%, speed up the game and increase decision accuracy.
Theory vs reality: what’s possible right now?
Artificial intelligence is a field that is improving daily, and the opportunities to streamline processes are unprecedented. However, this is a particularly difficult challenge.
Areas of computer vision such as image recognition are impressive enough in themselves, and only in recent years have we been able to achieve any real accuracy.
However, analysing video clips of football to determine which show the key moment is a much more complex problem that may cause some difficulties. Here at Filament, we have identified three primary issues:
Training computers to understand depth and perspectives (whether objects such as a foot and a ball are touching from a 2D image) is very challenging.
There is a short time window of a couple of seconds around the decision points, which means there is limited data to draw from. In general, the more high quality and relevant training data, the better the accuracy of the future predictions.
Providing the results in a timely manner is essential in this use case, but video can take a long time to analyse because it must be broken down into separate images. It would take a significant amount of computing power to process this in time.
Final thoughts:
VAR as a concept is excellent but the execution has been poor. One major element is the fan experience, both at the stadium and watching on the television at home. There are two crucial components: transparency and engagement.
I experienced a perfect example of this at Wimbledon. A review was shown on the large screens, the fans built up a crescendo of clapping, and loud ‘oooohs’ erupted as the ball was shown to have narrowly clipped the line. Another example is rugby, where the referees can be heard explaining their thought process and an animated screen shows the final result.
Football can learn a lot from these successful implementations. Currently the referee stands in the middle of the pitch with a finger to their ear waiting for a result. Fans in the ground don’t know what is going on and fans at home are shown pictures of three referees staring at screens. No transparency and no fan engagement.
What do you think? Let me know in the comments.
What else can AI do:
Understandably, people get very excited by the possibility of being able to replicate the human brain and imagining situations similar to that of the television show Humans. However, the area that can really make a difference to the world right now is Applied AI.
Applied AI is the use of machine learning and neural networks to solve real business problems. We at Filament are experts in Applied AI and have added significant business value to a whole range of clients from a variety of industries. Here are some examples:
Computer vision can be used to identify anti-social behaviour in crowds and make construction sites safer.
Chatbots can understand messages from customers and respond in an intelligent way.
If you want to find out more, check out our website. Thanks for reading and let me know your thoughts in the comments section!
|
Can AI solve the VAR headache?
| 4
|
can-ai-solve-the-var-headache-1a8db944a2e6
|
2018-08-15
|
2018-08-15 11:03:37
|
https://medium.com/s/story/can-ai-solve-the-var-headache-1a8db944a2e6
| false
| 1,062
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
James Courtney
|
AI Strategist @ Filament. Founder of LUX Rewards and Co-Founder of Chat Taxi. "Top 10 Entrepreneurs to Watch in 2017" SETsquared.
|
fadd1a9f4520
|
jamescourtney_4885
| 44
| 44
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-13
|
2018-07-13 07:36:51
|
2018-07-16
|
2018-07-16 03:01:01
| 7
| false
|
en
|
2018-07-16
|
2018-07-16 03:01:01
| 9
|
1a913888fe80
| 2.857547
| 20
| 0
| 1
|
Welcome to another jam-packed development-filled round-up of all the incredible happenings in the machine learning community last week…
| 5
|
Microsoft’s Free Datasets, TensorFlow’s Free Platform for Amateur Data Scientists, Facebook’s Open Source NLP Dataset, among other Game-Changing ML Developments last week!
Welcome to another jam-packed development-filled round-up of all the incredible happenings in the machine learning community last week! From TensorFlow’s Seedbank to Facebook open sourcing a huge NLP and navigation dataset, we have rounded-up the latest cutting-edge developments for you.
Other highlights from the past week: An autonomous car that teaches itself to drive in 20 minutes, NVIDIA’s awesome technique to fix bad photos in milliseconds, Samsung’s NLP competitions-winning algorithm, and more!
You can get these AVBytes articles delivered straight to your inbox on a daily basis! All you need to do is subscribe here. We’ll do everything else. :)
Click on the headline to read the full article.
Microsoft has Released a Collection of Awesome Free Datasets: Here’s a chance for you to work with real-world industry data, polish your skills and get noticed! Microsoft has released a collection of open free datasets curated over a number of years from their research areas.
Learn and Improve your Machine Learning Skills with TensorFlow’s Free Seedbank Platform: Starting off with applying ML concepts can be challenging. So here comes Seedbank — a free in-browser platform that has pre-loaded Python codes and pretrained models on classification, unsupervised learning, NLP and other ML tasks! And you get free GPU support!
Facebook Open Sources Dataset on NLP and Navigation Every Data Scientist should Download: Facebook’s AI team has released an open source dataset, called “Talk the Walk”, that combines NLP with navigation data. Get your hands on it now and start playing around with it! It also comes with a challenge — can you make the model work?
NVIDIA’s Noise2Noise Technique Fixes Bad Images in Milliseconds: Each one of us has taken blurry or grainy photos at some point. Now, you can fix them in milliseconds with machine learning thanks to NVIDIA’s Noise2Noise technique! It doesn’t even need clean images to learn, it produces stunning quality using only corrupted images!
Samsung’s ConZNet Algorithm just won Two Popular NLP Challenges (Dataset Links Inside): Samsung’s deep reinforcement learning algorithm, ConZNet, just won 2 hugely popular NLP challenges — TriviaQA and MS MARCO. Both these datasets are open source and you can download them NOW (links inside). Recommended for anyone interested in NLP!
An Autonomous Car Learned how to Drive itself in 20 minutes using Reinforcement Learning: This autonomous car learns how to drive itself in just 20 minutes! Using reinforcement learning, the car is “rewarded” each time it learns from its mistakes. It took fewer than 20 trials to achieve 95% accuracy (DeepMind’s Atari algorithm took over a million trials)! Check out details of the algorithm + research paper and video inside.
The above AVBytes were published from 9th to 15th July, 2018.
|
Microsoft’s Free Datasets, TensorFlow’s Free Platform for Amateur Data Scientists, Facebook’s Open…
| 118
|
tensorflow-seedbank-facebook-ai-ml-avbytes-1a913888fe80
|
2018-07-16
|
2018-07-16 03:01:02
|
https://medium.com/s/story/tensorflow-seedbank-facebook-ai-ml-avbytes-1a913888fe80
| false
| 479
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Team AV
|
This is the Official Handle of Analytics Vidhya team. For more articles, check out the Analytics Vidhya website and Medium publication of Analytics Vidhya.
|
c7c686fcd4b
|
analytics
| 1,907
| 124
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
50974aafa33c
|
2018-04-15
|
2018-04-15 02:45:50
|
2017-09-27
|
2017-09-27 03:30:42
| 0
| false
|
en
|
2018-04-15
|
2018-04-15 02:48:31
| 5
|
1a91f95a2767
| 1.060377
| 0
| 0
| 0
| null | 4
|
Artificial Intelligence and Hybrid Cloud Take Center Stage at Microsoft Ignite
Microsoft is placing its bets on artificial intelligence (AI) and hybrid cloud. At Ignite 2017 in Orlando, Redmond emphasized how AI has become the key ingredient of everything it is developing. Azure Stack, its hybrid cloud offering, is available to enterprise customers through select OEM partners.
When Microsoft gets serious about an emerging technology, it starts with the developers, and it is following the same path to make AI accessible. First, it turned AI into a platform that serves as the foundation both for Microsoft’s own applications and for external developers, who can access it through APIs. It is also building a set of tools that make it easy for developers and data scientists to create AI-enabled applications.
Microsoft is committed to embedding AI into almost every new product and service. Microsoft Excel has got new formulae for performing predictive analytics in the cloud. PowerPoint has got a translator that can translate presentations in real time. Word is all set to have a new spell checker and grammar tool that goes beyond the basic correction. But, Office is just one of the products that will have powerful AI features. Dynamics CRM, SQL Server, Bing and many other services will exploit AI capabilities.
SQL Server 2017 is one of the first databases in the industry to get an embedded ML engine. Customers can mix and match existing SQL notations with predictive analytics. The ML engine supports R and Python languages along with modern libraries for training and visualization.
Read the entire article at Forbes
Janakiram MSV is an analyst, advisor, and architect. Follow him on Twitter, Facebook and LinkedIn.
|
Artificial Intelligence and Hybrid Cloud Take Center Stage at Microsoft Ignite
| 0
|
artificial-intelligence-and-hybrid-cloud-take-center-stage-at-microsoft-ignite-1a91f95a2767
|
2018-04-15
|
2018-04-15 02:48:32
|
https://medium.com/s/story/artificial-intelligence-and-hybrid-cloud-take-center-stage-at-microsoft-ignite-1a91f95a2767
| false
| 281
|
Analyst | Advisor | Architect
| null | null | null |
janakirammsv
| null |
janakirammsv
| null | null |
Forbes
|
forbes
|
Forbes
| 987
|
Manu Kapoor
| null |
1e90ffaee714
|
greatmj
| 2
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-27
|
2018-08-27 08:30:43
|
2018-08-27
|
2018-08-27 08:55:40
| 2
| false
|
en
|
2018-08-27
|
2018-08-27 08:55:40
| 0
|
1a9268c68f8a
| 5.081447
| 0
| 0
| 0
|
A tree has many analogies in real life, and turns out that it has influenced a wide area of machine learning, covering both classification…
| 5
|
Unlocked: Decision Trees
A tree has many analogies in real life, and it turns out to have influenced a wide area of machine learning, covering both classification and regression. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. As the name suggests, it uses a tree-like model of decisions. Though commonly used in data mining for deriving a strategy to reach a particular goal, it is also widely used in machine learning, which is the main focus of this article.
How can an algorithm be represented as a tree?
For this, let’s consider a very basic example that uses the titanic data set to predict whether a passenger will survive or not. The model below uses 3 features/attributes/columns from the data set: sex, age and sibsp (the number of siblings or spouses aboard).
A decision tree is drawn upside down with its root at the top. In the image on the left, the bold text in black represents a condition/internal node, based on which the tree splits into branches/ edges. The end of the branch that doesn’t split anymore is the decision/leaf, in this case, whether the passenger died or survived, represented as red and green text respectively.
A real data set will have many more features, and this would just be one branch in a much bigger tree, but you can’t ignore the simplicity of this algorithm. The feature importance is clear and relations can be viewed easily. This methodology is more commonly known as learning a decision tree from data, and the tree above is called a classification tree, as the target is to classify passengers as survived or died. Regression trees are represented in the same manner, except that they predict continuous values like the price of a house. In general, decision tree algorithms are referred to as CART, or Classification and Regression Trees.
So, what is actually going on in the background? Growing a tree involves deciding which features to choose and what conditions to use for splitting, along with knowing when to stop. As a tree generally grows arbitrarily, you will need to trim it down for it to look beautiful. Let’s start with a common technique used for splitting.
Recursive Binary Splitting
In this procedure, all the features are considered and different split points are tried and tested using a cost function. The split with the best (lowest) cost is selected.
Consider the earlier example of the tree learned from the titanic data set. In the first split, at the root, all attributes/features are considered and the training data is divided into groups based on this split. We have 3 features, so we will have 3 candidate splits. Now we calculate how much accuracy each split will cost us, using a cost function. The split that costs the least is chosen, which in our example is the sex of the passenger. This algorithm is recursive in nature, as the groups formed can be subdivided using the same strategy. Because of this procedure, the algorithm is also known as a greedy algorithm: at each step we take the split that lowers the cost the most. This makes the root node the best predictor/classifier.
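The greedy search described above can be sketched in a few lines. The toy data and column layout below are invented for illustration; real CART implementations are far more optimized:

```python
def gini_for_split(groups, classes):
    """Weighted Gini impurity of the groups produced by a split."""
    n = sum(len(g) for g in groups)
    score = 0.0
    for g in groups:
        if not g:
            continue
        impurity = 1.0
        for c in classes:
            p = [row[-1] for row in g].count(c) / len(g)
            impurity -= p * p
        score += impurity * (len(g) / n)
    return score

def best_split(rows):
    """Try every feature and every observed value as a threshold,
    keeping the split with the lowest weighted Gini (the greedy step)."""
    classes = list({row[-1] for row in rows})
    best = {"cost": float("inf")}
    for col in range(len(rows[0]) - 1):
        for row in rows:
            left = [r for r in rows if r[col] < row[col]]
            right = [r for r in rows if r[col] >= row[col]]
            cost = gini_for_split([left, right], classes)
            if cost < best["cost"]:
                best = {"col": col, "value": row[col], "cost": cost}
    return best

# Toy data: [feature0, feature1, class]; feature0 cleanly separates classes.
data = [[1, 7, 0], [2, 6, 0], [3, 8, 0], [8, 7, 1], [9, 5, 1], [10, 6, 1]]
print(best_split(data))  # the greedy search picks feature 0
```

Applied recursively to the left and right groups, this inner loop is the whole tree-growing algorithm.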
Cost of a split
Let’s take a closer look at the cost functions used for classification and regression. In both cases, the cost function tries to find the most homogeneous branches, i.e. branches whose groups have similar responses. This makes sense because we can then be more confident that a test input will follow a certain path.
Regression: sum(y - prediction)²
Let’s say we are predicting the price of houses. The decision tree starts splitting by considering each feature in the training data. The mean of the responses of the training inputs in a particular group is taken as the prediction for that group. The function above is applied to all data points, and the cost is calculated for all candidate splits. Again, the split with the lowest cost is chosen. Another cost function involves the reduction in standard deviation.
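A quick sketch of this cost, using the group mean as the prediction (the house prices below are made up for illustration):

```python
def regression_split_cost(groups):
    """Sum of squared deviations from each group's mean prediction."""
    cost = 0.0
    for y in groups:
        if not y:
            continue
        mean = sum(y) / len(y)            # the group's prediction
        cost += sum((v - mean) ** 2 for v in y)
    return cost

# Candidate split of six house prices (in $1000s) into two groups:
left, right = [100, 110, 120], [300, 310, 320]
print(regression_split_cost([left, right]))  # -> 400.0
```

A split that groups similar prices together (as here) yields a low cost; mixing cheap and expensive houses in one group would inflate it.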
Classification: G = sum(pk * (1 - pk))
A Gini score gives an idea of how good a split is by how mixed the response classes are in the groups created by the split. Here, pk is the proportion of inputs of class k in a particular group. Perfect class purity occurs when a group contains only inputs from the same class, in which case each pk is either 1 or 0 and G = 0, whereas a node with a 50–50 split of classes has the worst purity: for a binary classification, pk = 0.5 and G = 0.5.
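The two limiting cases described above can be checked directly with a small implementation of the formula:

```python
def gini(group_labels):
    """Gini impurity of one group: G = sum(pk * (1 - pk)) over classes."""
    n = len(group_labels)
    score = 0.0
    for c in set(group_labels):
        pk = group_labels.count(c) / n
        score += pk * (1 - pk)
    return score

print(gini([1, 1, 1, 1]))   # perfect purity  -> 0.0
print(gini([1, 1, 0, 0]))   # worst 50-50 mix -> 0.5
```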
When to stop splitting?
You might ask when to stop growing a tree. As a problem usually has a large set of features, it results in a large number of splits, which in turn gives a huge tree. Such trees are complex and can lead to overfitting. One way to decide is to set a minimum number of training inputs per leaf. For example, we can require a minimum of 10 passengers to reach a decision (died or survived) and ignore any leaf that takes fewer than 10 passengers. Another way is to set the maximum depth of the model, where maximum depth refers to the length of the longest path from the root to a leaf.
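In scikit-learn, these two stopping rules map directly to the min_samples_leaf and max_depth parameters (the dataset below is synthetic, just for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Stop growing when a leaf would hold fewer than 10 samples,
# and never go deeper than 4 levels from the root.
tree = DecisionTreeClassifier(min_samples_leaf=10, max_depth=4,
                              random_state=0).fit(X, y)
print(tree.get_depth())  # bounded by max_depth
```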
Pruning
The performance of a tree can be further increased by pruning. This involves removing branches that make use of features with low importance. We thereby reduce the complexity of the tree and increase its predictive power by reducing overfitting.
Pruning can start at either the root or the leaves. The simplest method starts at the leaves and replaces each node with its most popular class; the change is kept if it does not deteriorate accuracy. This is called reduced error pruning. More sophisticated methods exist, such as cost complexity pruning, where a learning parameter (alpha) is used to weigh whether nodes can be removed based on the size of the sub-tree. This is also known as weakest link pruning.
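scikit-learn exposes cost complexity pruning through the ccp_alpha parameter. A sketch on synthetic data (the alpha value here is an arbitrary choice for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=1)

# alpha = 0 grows the full tree; a positive alpha prunes weak links.
full = DecisionTreeClassifier(random_state=1).fit(X, y)
pruned = DecisionTreeClassifier(random_state=1, ccp_alpha=0.02).fit(X, y)

# Pruning trades a little training accuracy for a much simpler tree.
print(full.tree_.node_count, pruned.tree_.node_count)
```

Larger ccp_alpha values prune more aggressively; in practice alpha is chosen by cross-validation.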
Advantages of CART
Simple to understand, interpret, visualize.
Decision trees implicitly perform variable screening or feature selection.
Can handle both numerical and categorical data. Can also handle multi-output problems.
Decision trees require relatively little effort from users for data preparation.
Nonlinear relationships between parameters do not affect tree performance.
Disadvantages of CART
Decision-tree learners can create over-complex trees that do not generalize the data well. This is called overfitting.
Decision trees can be unstable because small variations in the data might result in a completely different tree being generated. This is called variance, which needs to be lowered by methods like bagging and boosting.
Greedy algorithms cannot guarantee to return the globally optimal decision tree. This can be mitigated by training multiple trees, where the features and samples are randomly sampled with replacement.
Decision tree learners create biased trees if some classes dominate. It is therefore recommended to balance the data set prior to fitting with the decision tree.
These are the basics to get you up to speed with decision tree learning. An improvement over decision tree learning is made using the technique of boosting. A popular library for implementing these algorithms is Scikit-Learn. It has a wonderful API that can get your model up and running with just a few lines of code in Python.
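For example, a classification tree can be trained and evaluated on the classic iris dataset in just a few lines:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A shallow tree (max_depth=3) keeps the model interpretable.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))  # held-out accuracy
```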
|
Unlocked: Decision Trees
| 0
|
unlocked-decision-trees-1a9268c68f8a
|
2018-08-29
|
2018-08-29 08:22:50
|
https://medium.com/s/story/unlocked-decision-trees-1a9268c68f8a
| false
| 1,245
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Akash Sharma
|
A techie, an extrovert, an aspiring data scientist, a patent holder, a researcher, a good friend! I'm all of it and I can be whoever you want me to be.
|
85b51a7e3294
|
hereisakash
| 5
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
5a35faf06bed
|
2018-08-15
|
2018-08-15 15:31:40
|
2018-08-15
|
2018-08-15 15:30:16
| 1
| false
|
en
|
2018-08-15
|
2018-08-15 15:39:23
| 2
|
1a92854c7e08
| 1.611321
| 0
| 0
| 0
|
To help ourselves, our clients, and the industry orient themselves in this new voice-powered ecosystem, we launched the PMG Voice Lab. The…
| 5
|
What We Learned from the PMG Voice Lab
To help ourselves, our clients, and the industry orient themselves in this new voice-powered ecosystem, we launched the PMG Voice Lab. The initiative is a long-term investment in research, analysis, and experimentation in the field of voice search and voice-enabled technologies and opportunities.
In our first Voice Lab analysis and white paper, we put the top voice-enabled technologies to the test to see how their AI capabilities compare, and which brands are ahead of the game in optimizing their content for voice search and branded voice queries. At a high level, we uncovered the following insights.
Key Takeaways from the Voice Lab
Alexa is far more shopping-focused than Google, to the point of being a nuisance and unhelpful with informational queries.
Both Google and Alexa still return a higher than desired number of unhelpful results. Google devices are much better at finding information, while Alexa-powered devices are easier to order products from.
Gaining visibility in Amazon’s marketplace, especially the top 3 listings and the “Amazon’s Choice” section, is vital for gaining visibility with Alexa.
On Google, brands only answered questions about their products 16% of the time.
Google cited from a wide variety of sources in answering questions.
Alexa never recommended an app (“skill”), while Google occasionally did.
Google’s voice team has clearly determined that people speak naturally in voice search. Because of this, Google is much better at answering conversational queries in voice search than queries like “headphones blue.” This indicates that Google has a different system for interpreting voice queries than their traditional system.
Alexa is just as good at answering both conversational and non-conversational queries; however, it’s really more a case of being “equally bad” at both.
Google easily returned relevant local listings for queries with local intent. Appearing in the Google Local pack for relevant results is key.
Download your copy of the Why Voice Matters: Exploring the Latest Opportunities in Voice Search white paper today to better understand the nuances of voice technology, how these devices work, and how your brand can optimize site content for this exciting new field of search marketing.
Originally published at www.pmg.com on August 15, 2018.
|
What We Learned from the PMG Voice Lab
| 0
|
what-we-learned-from-the-pmg-voice-lab-1a92854c7e08
|
2018-08-15
|
2018-08-15 15:39:23
|
https://medium.com/s/story/what-we-learned-from-the-pmg-voice-lab-1a92854c7e08
| false
| 374
|
Industry news, thought leadership and musings from the people at PMG
| null |
agencypmg
| null |
PMG Digital Agency
|
abby@pmg.com
|
agencypmg
|
DIGITAL MARKETING,ADVERTISING,MARKETING,DIGITAL,MEDIA
|
agencypmg
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
PMG
|
PMG is a digital agency that uses strategy, creative, media, and insights to deliver against its mantra of Digital Made for Humans™.
|
5d039c8bfd0d
|
agencypmg
| 67
| 123
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-20
|
2018-04-20 09:00:20
|
2018-04-20
|
2018-04-20 09:06:50
| 2
| false
|
en
|
2018-04-20
|
2018-04-20 09:07:24
| 15
|
1a9763230019
| 5.009748
| 0
| 0
| 0
|
By 2020 Comscore predict that 50% of all searches (brand, product or other) will be voice searches. Further, Mindshare believe that [by…
| 5
|
Learning Machine: Engagement and Conversational (Voice and Chat) Technology Special
By 2020 Comscore predict that 50% of all searches (brand, product or other) will be voice searches. Further, Mindshare believe that [by 2020] 63% of citizens would consider messaging an online chat bot to communicate with a business.
Both data points present huge opportunities for executives to re-imagine communication and user experience across their business — both for customers and employees.
So, today’s issue is all about voice and chat technologies.
If you’d like to talk more about engagement, conversational or agent assistive tools in your business, drop me a line at chris.gayner@symphonyhq.com
1. The art and science of Natural Language Processing
Google has announced two new AI experiments. The first, TalktoBooks, offers users a way to use natural language to find text within, and navigate across a library of, books. Kurzweil and Bernstein explain that “The models driving this experience were trained on a billion conversation-like pairs of sentences, learning to identify what a good response might look like”. The second, Semantris, is a word association game in which the user must type the corresponding / related word to the one that appears on the screen. While the game is fun, you are (of course) helping Google train their semantic engine. Read more about the experiments here.
2. The Enabled Enterprise
Symphony Labs and friends will be hosting a live discussion on May 23, exploring how Voice and Chat technologies can help modern businesses engage better with their customers, their employees and their agents. Find out more about the live discussion here.
3. Voice technologies are enabling the visually impaired to join the technology revolution
As computers, phones and wearables become more advanced, people become ever more connected — to one another and our devices. However, there is an implicit limitation to much of this technology, primarily the need for the user to visually navigate various user interfaces. As such, voice technologies enable those visually less-abled to now enjoy all the benefits that today’s technology can offer, including some Cwazy Cupcake fun too — read a short piece on their application here.
4. The desire for a frictionless experience
Customers and employees continue to demand simpler, more natural ways to identify and verify themselves (online / over the phone) without jeopardising their data or personal security. A new study by Pindrop (a Voice security and identification provider) and Harris Poll, has found that nearly half of the 3,000 participants (48%) said ‘they would be likely to use voice recognition as a form of personal verification’. The study not only shares the wondrous benefits that voice technology brings, but also offers some of the limitations and drawbacks of this maturing technology. Take a look at a summary of the study here
5. Skills to pay the bills
Following closely on the above: while employees are asking for more conversational technologies to be introduced into their daily workflow, IT professionals do not feel confident that they (or their colleagues) have the necessary in-house skills to effectively manage a large-scale roll-out of engagement technologies. Hmmmm, who ya gonna call…
6. “Hello, I am Vera. I work with PepsiCo. Are you looking for a job now?”
PepsiCo is now using AI as part of its recruitment operations in Russia, to find and interview candidates and then to fill vacancies. Natalya Sumbaeva, PepsiCo CIS talent acquisition manager, says ‘The robot recruiter is capable of interviewing 1,500 job candidates in nine hours, a task that would take human recruiters nine weeks’: “Hello, I am Vera, I am a robot. I work with PepsiCo. Are you looking for a job now?”. Read the full story here.
7. Read my lips
Liopa Ltd, a Belfast tech company specialising in lip-reading technology, has received $1m in funding to bring its lip-reading platform (aptly named LipRead) to market. The technology aims to improve the accuracy of speech recognition in noisy environments, or to overcome the disturbing challenge of recognising ‘Deep Fake’ videos.
8. What Cities Need to Know About Chatbots and Data Security
As the trend toward smart cities continues to boom, chat bots are becoming more prevalent on local and state websites. The City LA employs ‘Chip’ to answer any procurement questions you might have. Kansas City (Missouri) employs Open Data KC to answer open records requests. And if you see a pothole or a broken street light in North Charleston, South Carolina or Williamsburg, Virginia, text Citibot. It processes traffic light outages and new street sign requests, too. Whilst these are certainly useful and encouraging of community spirit, these ‘bots’ are rarely built under the tight control of IT — more typically these initiatives are born out of the ever whimsical minds of the marketing team — and too often give little regard to the implications of collecting, storing, sharing all this (potentially sensitive) data… read the full article here.
9. The Hybrid Company
Alex Galert, CEO of BRAIN (brn.ai), has written an interesting thesis — following very similar ideas to what we at Symphony hold dear — that while the nature of work is changing (due to automation, AI and cognitive technologies), the rate at which businesses are changing is simply not keeping pace. As frictionless technologies become more prevalent in our day-to-day lives, we increasingly expect the same in our working lives, or we find an employer who can offer such luxuries. However, far too often business leaders seek to apply old solutions to new problems, whether due to education, understanding or sticking with what is known, which too often leads to poor outcomes or false positives. Take a look at this short thesis and then ask yourself ‘what challenges could be solved through the adoption of more natural engagement technologies?’
10. Psychology of decision making
Picking the right tool for the job is always a good thing; however, the difference between building a good conversational tool and a great conversational experience is Conversation by Design — effectively mapping out the ideal flow of interactions to inspire a natural, flowing discussion between bot and human. James Clear has offered some tips (somewhat familiar to any Cialdini fans out there) on psychologically programming your users’ decision-making process.
Webinar: The Engaged Enterprise — 23 May 2018
Voice and Chat technologies offer the potential to significantly enhance the way organisations engage customers and employees. Rapid developments in machine learning, availability of data and decreasing costs of advanced computing means tools such as Computer Vision, Predictive Analytics and Conversational Technology (such as Chat Bots and Voice Recognition) have become readily available to global businesses.
However, getting the most out of these tools requires an appreciation for both the art-of-the-possible and the componentry within. As such, Symphony Labs are hosting a live discussion for business executives seeking to understand the practical application of these tools in their organisation.
Click here to find out more about our live discussion on The Engaged Enterprise … or Contact Chris Gayner at chris.gayner@symphonyhq.com
About Chris Gayner
Chris Gayner is the Director of Symphony Labs, a contributor to The All-Party Parliamentary Group on Artificial Intelligence (APPG AI) and commentator on all things related to the RPA, AI and Cognitive Technologies. He is a big believer in the power of technology + people to overcome critical business and societal challenges.
|
Learning Machine: Engagement and Conversational (Voice and Chat) Technology Special
| 0
|
learning-machine-engagement-and-conversational-voice-and-chat-technology-special-1a9763230019
|
2018-04-20
|
2018-04-20 09:07:25
|
https://medium.com/s/story/learning-machine-engagement-and-conversational-voice-and-chat-technology-special-1a9763230019
| false
| 1,226
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Chris Gayner
|
Director of Symphony Labs, a contributor to The All-Party Parliamentary Group on Artificial Intelligence (APPG AI) and commentator on all things Automation
|
2a9d2e80b978
|
chris.gayner
| 0
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-08
|
2018-04-08 19:54:01
|
2018-04-09
|
2018-04-09 11:01:04
| 4
| false
|
en
|
2018-04-09
|
2018-04-09 11:01:04
| 2
|
1a97cb0666ae
| 3.903774
| 10
| 2
| 0
|
Over the weekend of 7th April 2018, my team and I were invited to the IndabaX Kenya event, organized by the wonderful Women In Machine…
| 5
|
Experiences at Indaba𝕏 Kenya: Data Driven Patient Diagnosis with Dr. Elsa Poster Presentation
Over the weekend of 7th April 2018, my team and I were invited to the IndabaX Kenya event, organized by the wonderful Women In Machine Learning (WIML) group, to present a poster and share on our AI powered Health and Telemedicine project and the experience was truly nothing short of inspiring.
In this post I will talk a little about what went on during the event, what I learnt, and what I was able to share with the great vibrant community of Nairobi!
The Event
Indaba — Strengthening African Machine Learning — and the Indaba events are a great effort by African innovators and thought leaders to make more Africans an important part of the current Artificial Intelligence era.
A Deep Learning Indaba𝕏 is a locally-organised, one-day Indaba that helps spread knowledge and builds capacity in machine learning.
We kicked off with the keynote: The ML Roadmap for Kenya: Where We Are And Where We Are Heading by Nikhil Ravichandar, where he talked about the state of Machine Learning in Kenya and Africa as well as what the next steps should be in the journey towards more pervasive machine learning solutions.
We then went on to the panel session, the practical session, lunch and the poster presentations.
The Panel Session: Artificial Intelligence, Drivers and Forces at Play
This was easily one of the best parts of the day, to see a panel of very achieved industry leaders discuss the whats and the hows of Artificial Intelligence, covering everything from policies and governments to the ethical concerns with AI and data. It was a great learning experience for all who attended.
Four of the six panelists
Led by Professor Bitange Ndemo, this session was filled with humor and thought provoking ideas on humanity and where we are headed. It was also very nice to see how the different panelists had different ideas on how to address challenges and even different ideas on what constitutes a problem. This was one of those “You had to be there to know what I mean” things.
The Practical Session, and What I have learnt:
Led by the knowledgeable Brian Muhia with the support of Amina Islam and Kathleen Siminyu, the practical session focused on the more technical and “practical” aspects of machine learning. We saw the tedious process of data cleaning and preprocessing, the various types of regularization like Dropout, and how easy it is to use PyTorch to set up a deep neural network to learn entity embeddings of categorical variables.
Brian Muhia explaining backpropagation
I learnt a few things about using the fast.ai modules and heard an interesting comment on “DropConnect” as an alternative to Dropout. All in all, it was great to see the community’s interest in learning more about this.
The Poster Presentations
After lunch and a bit of networking we were ready for the next part, poster presentations. For this part, the Women in Machine Learning (WIML) group had invited people from East Africa, Tanzania included (us!), to come and share what they are working on and what their findings are. The event organizers turned the poster session into a friendly competition for the grand prize of 5,000 Kenyan Shillings!! The stakes were high, and the tensions were even higher!
We saw some great presentations on using computer vision to detect driver drowsiness, using decision trees and KNN algorithms to predict a loan seeker’s probability of defaulting without using any bank history, and understanding opinions in text. We could not get enough of this!
I presented on Dr. Elsa, our AI backed health and telemedicine service that is offered for free in Tanzania. We empower doctors to make smarter and more informed decisions and provide patients with a safe and secure way to consult a doctor free of charge.
IndabaX Kenya — Dr. Elsa Poster: Data Driven Patient Diagnosis
We talked about how we are using deep neural networks and gradient boosted trees to provide a differential diagnosis of patients on infectious diseases in Tanzania, as well as our work with the National Cancer Research Institute to develop tools that will help with the early detection of cervical cancer. The crowd was super supportive and the judges very encouraging in their questions and suggestions.
Our presentation ended up winning the competition and the grand prize of 5,000 Kenyan Shillings!! Whaaaaat???? More importantly, one of the judges, Loki from Safaricom, helped us rethink an aspect of our ensemble model, and we are expecting to see improvements in accuracy after the changes.
Special Thanks and Shoutouts!
On behalf of everyone who attended the event, I would like to thank the Busara Center for Behavioral Economics for the wonderful venue and the panelists for sharing their knowledge and experiences in the industry.
I can’t say it enough so I’ll say it again, the Women In Machine Learning group, WIML, did an amazing job at organizing and running this event, I will be honored to be invited to attend the next one! Also shoutout to the lively Muthoni Wanyoike, and the gracious Kathleen for being such wonderful hosts!
|
Experiences at Indaba𝕏 Kenya: Data Driven Patient Diagnosis with Dr. Elsa Poster Presentation
| 158
|
experiences-at-indaba-kenya-data-driven-patient-diagnosis-with-dr-elsa-poster-presentation-1a97cb0666ae
|
2018-04-27
|
2018-04-27 06:46:41
|
https://medium.com/s/story/experiences-at-indaba-kenya-data-driven-patient-diagnosis-with-dr-elsa-poster-presentation-1a97cb0666ae
| false
| 849
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Ally Salim
|
Love to learn / Technology Enthusiast/ Senior JavaScript & Python Developer / AI and Machine Learning Hacker / Technopreneur
|
547f289395da
|
ally_20818
| 22
| 46
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
73c37236e0e3
|
2018-09-04
|
2018-09-04 13:09:04
|
2018-09-04
|
2018-09-04 17:49:20
| 5
| false
|
en
|
2018-09-04
|
2018-09-04 17:49:20
| 5
|
1a985182a22c
| 2.984277
| 4
| 0
| 0
|
A logo is an incredible part of any company’s brand. It represents who we are and it builds up a recognizable image that can be trusted…
| 5
|
Introducing Our New Logo
A logo is an incredible part of any company’s brand. It represents who we are and it builds up a recognizable image that can be trusted. When the time comes, updating a logo is important for ensuring it stays relevant, up-to-date, and in line with our values and offerings. That’s why we’re updating ours, and below we’ll explain the meaning behind our new logo, what it means to us as a company, and what it could represent for you.
Smart
The launch of Daneel commanded a logo with the innate quality of intelligence. Due to artificial intelligence learning being the heart of Daneel, it only made sense to communicate this message with a smart, snappy logo design. Our new logo had to be useful, durable, and innovative as well, and we think this one will stand the test of time. We included some elements into our logo such as our typeface, for example, as a strong and decisive feature; the modern, contemporary feel mirrors the up-and-coming feel of our product. Not only is our typeface bold, but our thought bubble concept underscores this identity.
Conversational
The thought bubble has a few functions: first, it communicates the idea that you can converse with Daneel, which is exactly the point of our product. It also stands for the feature of human feedback enhancing the product, as not only can Daneel talk to you but you can talk to it and help it learn by upvoting or downvoting content, assessing for reliability, and so on. The last feature of our thought bubble is its significance regarding the relationship between our developers and our customers; we are here for you! We are counting on your feedback to make Daneel an ever-changing, ever-learning dynamic product and you are a part of that process as a valued customer. We hope to have a great working relationship with our users, and we will do our best to promote a collaborative learning experience to bring you the best tech around.
Technological
The color blue signifies the technological aspect of our product. From a psychological standpoint, the color blue is often seen as a color of stability and reliability; Daneel will exhibit both of these qualities. Our technology is the best in the crypto space… from its intuitive user interface to the quality and depth of knowledge we put at your fingertips, undoubtedly Daneel will soon be a household name for anyone operating in any capacity in the crypto world. We ultimately chose blue to communicate both of these qualities on a subconscious level, to assure you that what you’re getting can be fully relied upon.
Conclusion
Our new logo is a strong representation of how we as a company feel about Daneel. Its strong, contemporary, and conversational qualities are key aspects of our product. We are confident they communicate what the exciting future holds for you with our product in the palm of your hands; we are excited for you to have the most useful, most intelligent, and most market-savvy product available at your fingertips! We hope you are just as excited as we are to see how Daneel impacts the crypto world. Get used to this assertive image; it’s about to become what you see every time you need help in the crypto world.
Stay tuned:
Twitter: https://twitter.com/daneelproject
Telegram: t.me/DaneelCommunity
Facebook: https://www.facebook.com/daneelproject
LinkedIn: www.linkedin.com/company/11348931/
YouTube: https://www.youtube.com/channel/UCJH6gsFUJlZr_ka3HQjZhKw
|
Introducing Our New Logo
| 94
|
introducing-our-new-logo-1a985182a22c
|
2018-09-04
|
2018-09-04 17:49:35
|
https://medium.com/s/story/introducing-our-new-logo-1a985182a22c
| false
| 570
|
In this publication you will find all the official announcements and communication related to the Daneel Company.
| null |
daneelproject
| null |
Daneel Corporate
|
information@daneel.io
|
daneel-corporate
|
ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,MACHINE LEARNING,BIG DATA,TRADING
|
daneelproject
|
Design
|
design
|
Design
| 186,228
|
Daneel Assistant
|
Your future personal crypto assistant ! https://daneel.io
|
dc883054551c
|
daneel_project
| 463
| 9
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
840b60819e49
|
2018-09-04
|
2018-09-04 10:21:44
|
2018-09-05
|
2018-09-05 16:09:42
| 3
| false
|
en
|
2018-10-03
|
2018-10-03 14:54:23
| 1
|
1a98e2ae86d3
| 2.391509
| 3
| 0
| 0
|
Actionable insights from machine learning in satellite data
| 4
|
Monitoring Industrial CO2 Emissions from Space
Climate change is one of the most serious threats facing our world today as Earth continues to warm and risks for land, ocean, species and people are increasing. Human activities have long been recognized as major contributors to temperature changes, driving dangerous phenomena such as the rise of sea levels, longer droughts, reduced water supply and more.
One of the major driving forces of global warming is the emission and accumulation of greenhouse gases (GHG) within the atmosphere, especially that of carbon dioxide (CO2). The largest human source contributing to these trends is the combustion of fossil fuels, accounting for 87% of all anthropogenic (man-made) CO2 emissions. Thus, regulating industry’s activities and carbon footprint is an essential step to adequately address climate change threats.
Fortunately, increased awareness of climate change and its risks has led a growing number of governments to implement long-term policies for reducing their countries’ energy demand. For instance, the Kyoto Protocol and the Paris Agreement aim to regulate GHG emissions, requiring industrial facilities to regularly report their emissions. As a means to verify that the parties uphold their commitments and to effectively manage carbon risk, consistent, reliable and up-to-date emissions reports are essential.
As current emission reports involve issues of transparency, consistency and accuracy, space data emerges as a potential alternative monitoring tool. In this post we reveal part of Skylab Analytics’ results from the development of a facility-level CO2 emission monitoring method based on satellite data. Using our ETL (Extract, Transform, Load) pipeline for satellite data, observations above and in the proximity of selected point sources were extracted and analysed.
An example of satellite data analysis above Sandy Creek power-plant in Texas, USA. The colors red and blue indicate strong and low CO2 enhancements, respectively.
By applying machine learning algorithms, we succeeded in accurately accounting for emissions from these sites. Our work yielded a model capable of predicting CO2 emissions with a low mean square error of 4%. The results of this work demonstrate the value and great potential of applying machine learning techniques to satellite data for monitoring the emissions of individual power plants.
The diagonal dotted black line is a perfect prediction. The colors depict different dates of overpasses and the error bars represent the standard error of the predictions.
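The modeling step described above can be sketched as a supervised regression. As an illustration only (the features, data and model below are hypothetical stand-ins, not Skylab's actual pipeline), a gradient-boosted regressor can be fit to satellite-derived features like this:

```python
# Illustrative sketch: regress power-plant CO2 emissions on satellite-derived
# features. All data here is synthetic; it stands in for real XCO2 retrievals.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Hypothetical features: mean CO2 enhancement near the plume (ppm),
# wind speed (m/s), and distance of the sounding from the stack (km).
X = rng.uniform([0.5, 1.0, 0.1], [4.0, 12.0, 5.0], size=(n, 3))
emissions = 2.0 * X[:, 0] * X[:, 1] + rng.normal(0, 0.5, n)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, emissions, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("test MSE:", mean_squared_error(y_te, model.predict(X_te)))
```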
Businesses are exposed to climate-affected risks. These risks can derive from strict legislation of carbon emissions, which often requires costly investment in low-carbon technology, or from carbon taxation. Other risks that threaten companies may be physical, including natural catastrophes caused by global warming, such as floods and subsidence.
“In order to effectively manage carbon risk, consistent, reliable and up-to-date emissions reports are essential.”
As we reach a turning point in taking account of carbon risk, an efficient and accurate space-based monitoring method may be of great value in detecting and quantifying emissions. This is crucial in determining the compliance of companies in carbon intensive sectors (metal, chemicals, oil&gas, cement, energy), and can provide important information in promoting decarbonization. As such, this approach can have far reaching implications on measuring and managing carbon risk at pension funds, insurers, banks, commodity traders and supply chain management.
Visit us at skylabanalytics.com
|
Monitoring Industrial CO2 Emissions from Space
| 5
|
monitoring-of-industrial-co2-emissions-from-space-1a98e2ae86d3
|
2018-10-03
|
2018-10-03 14:57:00
|
https://medium.com/s/story/monitoring-of-industrial-co2-emissions-from-space-1a98e2ae86d3
| false
| 488
|
artificial intelligence, data analytics and earth observation to help solving global problems
|
blog.skylabanalytics.com
| null | null |
Skylab Analytics Blog
|
info@skylabanalytics.com
|
skylab-analytics-blog
|
PRECISION AGRICULTURE,ARTIFICIAL INTELLIGENCE,SATELLITE TECHNOLOGY,BIG DATA ANALYTICS,DATA SCIENCE
| null |
Climate Change
|
climate-change
|
Climate Change
| 39,654
|
Dor Blau
|
Data scientist at Skylab Analytics
|
41a142a711a8
|
dorblau23
| 3
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-05
|
2018-08-05 04:45:31
|
2018-08-08
|
2018-08-08 15:00:06
| 14
| true
|
zh-Hant
|
2018-08-09
|
2018-08-09 00:53:51
| 8
|
1a9ba566c9c6
| 1.763208
| 4
| 0
| 0
|
利用現有 AI 雲服務升級交易程式的大腦
| 4
|
No Need to Master Machine Learning to Add AI Features to Your Trading Program? - Part 1
Using existing AI cloud services to upgrade your trading program's brain
When developing a strategy, discretionary traders often see something on the chart that they don't know how to turn into a formula. For example, to recognize a KD divergence on the chart, the strategy needs one filter condition after another, and when an error shows up in backtesting it is hard to trace which piece of logic caused it, so turning a "feel" for the chart into code is genuinely difficult.
In the last year or two everyone has wanted to combine everything with AI, and, as the title suggests, algorithmic trading developers wonder whether it can be combined with their trading programs too. But for a discretionary trader who has already crossed over into programming, adding AI on top means learning the relevant mathematics, which is a huge barrier. The field of machine learning also splits into many different methods, demands a great deal of knowledge to pick a suitable one, and requires constant experimentation along the way, so making this combination work is a considerable challenge.
Is there really no way around this? In the past two years Microsoft has launched a number of cognitive services that solve business problems with AI. One of them is Custom Vision, whose pitch is that you can easily build your own state-of-the-art vision model, tailored to your unique application. The service essentially learns from images you show it and produces a classification, which matches the idea above of turning a chart "feel" into a program.
Custom Vision models can recognize the category of a specific image and can also detect objects: developers can use Custom Vision to train models that identify the exact location of objects.
Quoted from: Build 2018: Microsoft Cognitive Services gets a major update, giving enterprises more tools to add AI to their products (https://www.ithome.com.tw/news/123007)
Build 2018: Microsoft Cognitive Services gets a major update, giving enterprises more tools to add AI to their products
In this article I will show how to use this service. Later articles will walk through integrating it into an MT4 EA.
Before starting, I suggest preparing three things:
The ability to read indicators
Basic algorithmic trading development skills
A Microsoft account
This post is a quick walkthrough of building a vision model; the other options and settings will be explained later.
First, register a Microsoft Account on Microsoft's official website; we will need it to sign in to Custom Vision. Early on we can practice within the free quota and verify feasibility at the same time. Once the model is robust and workable, or more resources are needed, readers can decide for themselves whether to move to a paid tier, where the available quotas differ; see the official site for details.
Go to the Custom Vision page from here
Click Sign in in the middle
After logging in, click where the red box indicates to create a new project
A window will then appear. The Name can be anything you will recognize; choose the other options as marked in the yellow boxes.
1. Click where the yellow box indicates, and you will be asked to select the images to learn from.
In this demo the goal is to recognize divergence, so I went through past charts in MT4, found ones showing divergence, and took screenshots one by one. When you try this yourself, I suggest starting with an indicator pattern you already know how to read, then screenshotting those scenarios as your training and test material.
2. After selecting the images, you will be asked to add tags. This part depends on how your strategy defines things; in this project the model should distinguish positive from negative divergence.
Pay special attention at this step: each tag you define should have about 50 images. If the model is to learn to recognize positive and negative divergence, prepare 50 images of different scenarios for each, and keep the tags balanced 1:1; no tag should have more images than another.
After filling everything in, click upload and wait a moment; it is quite fast.
If there are no problems, you will see this screen. Repeat steps 1 and 2 for each tag until it has 50 images.
As in this screen: Top and Bottom, 50 images each
Finally, click Train as marked by the yellow box and wait a little while.
When you see this screen, training is complete.
Here you can see precision above 80% and recall of 96%.
The CNN model I wrote myself earlier could not reach results like this in such a short time (coming from another field, it was really tough), and it took countless experiments and adjustments just to get precision above 80%, so I now find this service genuinely convenient.
Next, click where the screen shows "1" to open the Quick Test window, then click "2" and select an image that has not been uploaded before. In the yellow box you can clearly see it classified as Bottom.
At this point we have built our model, and the quick test result matched our expectations. The next post will cover integrating the trained model with your own strategy.
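As a preview of that integration, predictions for a published iteration can be requested over the Custom Vision REST API. The sketch below is a hypothetical Python example: the endpoint, project id, iteration name and prediction key are placeholders you would replace with your own project's values.

```python
# Hypothetical sketch of calling the Azure Custom Vision prediction REST API.
import json
import urllib.request

def build_prediction_url(endpoint: str, project_id: str, iteration: str) -> str:
    """Image-classification prediction URL for a published iteration."""
    return (f"{endpoint}/customvision/v3.0/Prediction/"
            f"{project_id}/classify/iterations/{iteration}/image")

def classify_chart(endpoint, project_id, iteration, prediction_key, image_bytes):
    """POST a chart screenshot and return the list of tag predictions."""
    req = urllib.request.Request(
        build_prediction_url(endpoint, project_id, iteration),
        data=image_bytes,
        headers={"Prediction-Key": prediction_key,
                 "Content-Type": "application/octet-stream"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # Each prediction carries a tagName (e.g. "Top" / "Bottom") and a probability.
        return json.loads(resp.read())["predictions"]
```

An EA could then act on whichever tag comes back with the highest probability.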
If you need a more complete and detailed introduction, start by studying the documentation.
Build a classifier with the Custom Vision Service - Azure Cognitive Services
To use the Custom Vision Service, you must first build a classifier. This document covers how to build a classifier through a web browser. docs.microsoft.com
Or learn from the Custom Vision introduction written by Jade Chang.
Hands-on with Custom Vision, using Python for image recognition
Following a June livestream with Eric ShangKuan on easily building smart vision applications with Azure Custom Vision (video at the bottom), I went through the docs and tried it out myself. medium.com
The above is a share of my personal experience, trading logic, and findings from research tools, development, experiments, market positioning and chart reading. It is not intended as a recommendation of any specific stock, product, trade or trading method. Investors bear their own profits and losses; please carefully evaluate your own risk tolerance and do not over-invest.
|
不須精通機器學習,也能幫交易程式加入AI功能 ? - Part 1
| 48
|
不懂機器學習也能幫交易程式加入ai功能-1a9ba566c9c6
|
2018-08-09
|
2018-08-09 00:53:51
|
https://medium.com/s/story/不懂機器學習也能幫交易程式加入ai功能-1a9ba566c9c6
| false
| 83
| null | null | null | null | null | null | null | null | null |
Microsoft
|
microsoft
|
Microsoft
| 19,490
|
Jarvis
|
喜愛研究機器學習、程式交易。さぁ、実験を始めようか。
|
789794af7308
|
InsightTrader
| 23
| 25
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-06
|
2018-09-06 08:22:53
|
2018-09-06
|
2018-09-06 08:32:21
| 3
| false
|
en
|
2018-09-06
|
2018-09-06 08:32:21
| 5
|
1a9c6f7b9d8b
| 2.127358
| 1
| 0
| 0
|
Previously, artificial intelligence was not widely used and seemed almost unattainable to many application developers. But the third-party…
| 5
|
Significance of using of Machine Learning and Artificial Intelligence in Mobile Applications
Previously, artificial intelligence was not widely used and seemed almost unattainable to many application developers. But gradually maturing third-party platforms and APIs have created opportunities for change. Recently, many AI companies have launched development programs that include access to Scala, Python, Java, JavaScript, AI, machine learning, robotic process automation and more, and many business leaders are interested in cases where organizations and companies have enough data to create a foundation for AI applications.
Integrate artificial intelligence into smarter applications:
Our vision is for data to keep moving as it changes context, rather than getting stuck somewhere. Today we have reached a point where we can use artificial intelligence and machine learning techniques and integrate them into a typical application experience, so that users can enjoy smarter applications.
The evolution of artificial intelligence in applications can be compared to the early days of the Internet, which started with static web pages before the advent of browser-based tools. Today, companies around the world and data specialists are trying to deliver value to the groups of developers who are building advanced software to achieve company goals. In the end, the goal is to create a community.
Natural language comprehension (NLU):
For most design experts, artificial intelligence starts with NLU (Natural Language Understanding), which allows smartphones and other devices to take direct spoken input. Google Now and Apple’s Siri are among the best-known examples, and there are many more applications like these. Recently, a mobile app development company in Dubai has been building a cloud-based service that promises to let anyone with basic programming skills create NLU interfaces, in more than 20 languages.
In fact, mobile application developers need only limited computing resources and a basic grasp of natural language concepts. NLU used to be regarded as an expensive solution because few enterprises could write such applications. But specialists say even broader classes of applications could benefit from artificial intelligence: for example, any application that tells business leaders what kinds of products people want to buy.
Of course, artificial intelligence alone is not a drop-in solution for developing smarter applications, so developers are still pursuing cognitive AI, which has delivered strong results in industries such as healthcare, pharma, manufacturing, logistics, FinTech, banking, and construction.
For more information, Please visit here: www.optisolbusiness.com or info@optisolbusiness.com | +1 415 233 4737
|
Significance of Using Machine Learning and Artificial Intelligence in Mobile Applications
| 1
|
significance-of-using-of-machine-learning-and-artificial-intelligence-in-mobile-applications-1a9c6f7b9d8b
|
2018-09-06
|
2018-09-06 08:32:21
|
https://medium.com/s/story/significance-of-using-of-machine-learning-and-artificial-intelligence-in-mobile-applications-1a9c6f7b9d8b
| false
| 418
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Manikandan
| null |
5a3d7e5d4971
|
manikandantvr
| 4
| 87
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-17
|
2018-02-17 20:16:53
|
2018-02-17
|
2018-02-17 20:16:51
| 1
| false
|
en
|
2018-02-17
|
2018-02-17 20:16:53
| 3
|
1a9eed17e1ad
| 1.211321
| 0
| 0
| 0
| null | 5
|
Comparison of two face recognition software: Clarifai and Face++
Recently, I tried several products that extract demographic information from a profile image. My goal was to obtain information about age, gender, and ethnicity. I found that the prominent companies in this sector are Clarifai and Face++. I integrated my trial software with both products and found Clarifai’s performance to be a little better than Face++’s.
Clarifai provides the probability value of its predictions (e.g., predicted gender is female with 52% probability), so it is possible to eliminate results with a low prediction score. In contrast, Face++ does not provide that value. This is an unwanted situation because, with a binary classification technique, the prediction always returns a result, even when its score is not very high.
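The elimination described above can be applied with a few lines of code. This is a sketch under stated assumptions: the `predictions` mapping and the 0.7 cutoff are illustrative choices, not Clarifai's actual response format or a recommended threshold.

```python
def filter_predictions(predictions, threshold=0.7):
    """Keep only attribute predictions whose probability meets the threshold.

    `predictions` maps an attribute name to (predicted_value, probability);
    attributes below the threshold are dropped as too uncertain to use.
    """
    return {
        attr: (value, prob)
        for attr, (value, prob) in predictions.items()
        if prob >= threshold
    }

# Illustrative scores, loosely based on the article's example image
result = {
    "gender": ("feminine", 0.510),   # near coin-flip: discard
    "age": ("55", 0.356),            # low confidence: discard
    "ethnicity": ("White", 0.981),   # high confidence: keep
}
usable = filter_predictions(result)
```

A binary classifier without scores (the Face++ case the author describes) gives no basis for this kind of filtering, which is exactly the drawback noted above.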
Clarifai correctly predicted the ethnicity of the image below as “White”, while Face++ wrongly predicted it as “Black”. On the other hand, Clarifai did not get the gender right, while Face++ correctly marked it as male.
The disadvantage of Clarifai is its low quota for free usage. It permits only 2,500 API calls per month for free accounts. Face++ does not specify any upper limit for free accounts; it has only a single limitation of 1 API call per second.
I hope my hands-on experience with these products will help you choose the right one.
Result of Clarifai: (https://clarifai.com/demo)
Gender: feminine (prob. score: 0.510), masculine (prob. score: 0.490)
Age: 55 (prob. score: 0.356)
Ethnicity (Multicultural appearance): White: (prob. score: 0.981)
Result of Face++: (https://www.faceplusplus.com/attributes/#demo)
Gender: male
Age: 53
Ethnicity (Multicultural appearance): Black
Originally published at Emre CALISIR.
|
Comparison of two face recognition software: Clarifai and Face++
| 0
|
comparison-of-two-face-recognition-software-clarifai-and-face-1a9eed17e1ad
|
2018-02-17
|
2018-02-17 20:16:55
|
https://medium.com/s/story/comparison-of-two-face-recognition-software-clarifai-and-face-1a9eed17e1ad
| false
| 268
| null | null | null | null | null | null | null | null | null |
Clarifai
|
clarifai
|
Clarifai
| 18
|
Emre Çalışır
|
Research Scientist in Data Science Group, Politecnico Milano
|
ab72f1081026
|
emrecalisir
| 6
| 94
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
905ea2b3d4d1
|
2018-07-17
|
2018-07-17 04:41:36
|
2018-07-17
|
2018-07-17 17:14:57
| 6
| false
|
en
|
2018-07-19
|
2018-07-19 06:37:51
| 3
|
1a9ef9f75abf
| 3.80283
| 14
| 0
| 0
|
Incomplete data? Noisy settings? Let’s explore the Robust Factorization Machines, a recent noise-proof addition in the supervised learning…
| 5
|
Robust Factorization Machines
Incomplete data? Noisy settings? Let’s explore Robust Factorization Machines, a recent noise-proof addition to the supervised learning domain.
Robust Factorization Machines, proposed recently at WWW ’18, are a family of non-linear classifiers that take into account potential data incompleteness and noise. They incorporate the principles of Robust Optimization into the highly expressive Factorization Machines. As a result, the trained models exhibit high noise resilience.
This blog attempts to provide an intuitive understanding of Robust Factorization Machines. We will skip most of the math and proofs.
Please refer to the original paper for a rigorous mathematical explanation. This blog beautifully captures the motivations behind the paper and why robustness is desirable in the user response prediction domain.
Let’s start by understanding the two keywords:
1. Robustness,
2. Factorization Machines.
Robustness
What does the most basic Machine Learning (ML) pipeline look like?
Take some ML classifier, feed data into it and out comes a model! Simple.
What about the DATA?
Data quality matters a lot! Data Scientists spend a large amount of time working on getting a ‘clean dataset’. But as many would agree, there is only so much data engineering and cleaning that can be done.
What if a classifier could take care of it ❤?
Robust Classifiers over Standard Classifiers?
The classifiers differ in their treatment of the training data and consequently how the optimization problems are framed.
Standard classifiers:
- assume data is precisely known.
- framed as a Loss-minimization problem w.r.t. a weight vector (w) being learnt
- represented in Fig. 1 (a).
Robust classifiers:
- assume uncertainty associated with each data point, via a notion of deterministic, set-based uncertainty U. See Eq. 1. This allows each data point to exist anywhere within a hyper-rectangular manifold. See Fig. 1.
- framed as a Minimax problem, minimizing loss w.r.t. a weight vector (w) while also maximizing w.r.t. an uncertainty (U).
- represented in Fig. 1 (b).
Eq. 1. Uncertainty set definition. Uncertainty is defined over each datapoint. Here x represents a single data point and m is the number of data points.
Fig. 1. (a) Standard classifier v/s (b) Robust classifier. Note how the introduction of uncertainty in (b) results in ‘hyper-rectangles’ over the data points, thus leading to change in the learnt classifier boundary.
In a nutshell, Robust Optimization seeks to learn a classifier that remains feasible and near optimal under worst case uncertainty realization.
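For a linear scorer, this worst-case view even has a closed form: under box (hyper-rectangular) uncertainty where each feature may be perturbed by at most rho, the adversary lowers the margin by rho times the L1 norm of the weights. The sketch below is a generic robust-optimization illustration with made-up numbers, not code from the paper.

```python
def worst_case_margin(w, x, y, rho):
    """Worst-case margin y * w.(x + delta) over all |delta_i| <= rho.

    The adversary pushes each coordinate against the label, so the
    nominal margin is reduced by rho * ||w||_1 (the standard closed
    form for box uncertainty around a linear score).
    """
    nominal = y * sum(wi * xi for wi, xi in zip(w, x))
    return nominal - rho * sum(abs(wi) for wi in w)

w = [2.0, -1.0]
x = [1.0, 0.5]
# Nominal margin: 1 * (2*1 - 1*0.5) = 1.5
# Worst case at rho = 0.1: 1.5 - 0.1 * (2 + 1) = 1.2
robust = worst_case_margin(w, x, 1, 0.1)
```

Minimizing loss on this worst-case margin, instead of the nominal one, is what yields a classifier that stays near optimal under any uncertainty realization.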
Factorization Machines
Let’s take a classification scenario.
For a purchase prediction problem with two features (item_category and device), if we know the ‘clothing’ category is frequently purchased on ‘mobile’ and not on ‘desktop’, how do we capture such feature interactions? How does a model capture the importance that a feature interaction such as ‘device=mobile and category=clothing’ has over considering only the individual features? A linear model alone does not suffice.
Factorization Machines (FMs), proposed by Steffen Rendle, are a family of non-linear classifiers designed to capture feature interactions in a latent space. That is, for every feature a p-dimensional vector is learnt, resulting in a d x p dimensional weight matrix, where d is the original number of features. The similarity between two features is then given by the dot product of their latent vectors: as in Fig. 2, the interaction strength between features j and k is computed as the dot product of the latent vector for feature j (v_j) and the latent vector for feature k (v_k).
Fig. 2. Computation of similarity b/w feature j and feature k. Parameter Matrix V is learnt s.t. similarity of features j and k is computed using a dot product b/w rows j and k of the matrix V.
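The computation in Fig. 2 is easy to sketch directly. In the toy code below, `V` is the d x p latent matrix from the text (the numbers are made up for illustration), and `fm_score` follows the standard second-order FM form; this is a minimal sketch, not Rendle's reference implementation.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def fm_score(w0, w, V, x):
    """Second-order Factorization Machine score.

    w0: global bias; w: linear weights (length d);
    V: d x p latent matrix, one row per feature.
    The pairwise weight for features j, k is <V[j], V[k]>,
    so interactions are learnt in the latent space.
    """
    linear = sum(wj * xj for wj, xj in zip(w, x))
    pairwise = 0.0
    d = len(x)
    for j in range(d):
        for k in range(j + 1, d):
            pairwise += dot(V[j], V[k]) * x[j] * x[k]
    return w0 + linear + pairwise

# Toy example: d = 3 features, p = 2 latent dimensions
V = [[1.0, 0.0],
     [0.5, 0.5],
     [0.0, 1.0]]
# Interaction strength between features 0 and 1 is <V[0], V[1]> = 0.5
score = fm_score(0.0, [0.0, 0.0, 0.0], V, [1.0, 1.0, 0.0])
```

Note that only d x p parameters are learnt for all d*(d-1)/2 pairwise interactions, which is what lets FMs estimate interactions between features that rarely co-occur.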
Robust Factorization Machines
Now that we have developed some intuition about robustness and factorization machines, it’s time to uncover the key aspects of the paper titled “Robust Factorization Machines for User Response Prediction”.
The paper is motivated by the noisy and incomplete data available in the user response prediction problem. Check out our blog describing the need for robustness in the domain.
The key idea is to extend Factorization Machines using the principles of Robust Optimization. The resulting Minimax formulation is then converted to a pure minimization problem by deriving upper bounds on the loss w.r.t. an uncertainty matrix U.
The paper proposes two novel algorithms:
- Robust Factorization Machines (RFM).
- Robust Field Aware Factorization Machines (RFFM).
Extensive experiments on real-world large-scale datasets give insights into the performance and scalability of the proposed algorithms.
Promising results:
- Significant reduction (4.45% to 38.65%) in Logloss in Noisy settings.
- Slight performance hit (-0.24% to -1.1%) in Noise-free setting.
An open-source Spark-based distributed implementation for the proposed algorithms is available here. Evaluating RFMs and RFFMs across a breadth of classification scenarios is an interesting area to explore.
RFMs and RFFMs are domain-independent formulations, applicable in any domain with noisy/incomplete data.
Bringing robustness to the forefront
With the increasing noise in the input signals, it is important to design classifiers which embrace this uncertainty. RFMs and RFFMs are a step in this direction. Incorporating robustness in tree ensembles and deep neural networks is a promising area of investigation.
|
Robust Factorization Machines
| 82
|
robust-factorization-machines-1a9ef9f75abf
|
2018-07-19
|
2018-07-19 12:20:17
|
https://medium.com/s/story/robust-factorization-machines-1a9ef9f75abf
| false
| 756
|
Using technology, data and design to change the way the world shops. Learn more about us - http://walmartlabs.com/
| null |
walmartlabs
| null |
WalmartLabs
| null |
walmartlabs
|
DATA SCIENCE,UX DESIGN,ENGINEERING,TECHNICAL LEADERSHIP,OPEN SOURCE
|
walmartlabs
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Priyanka Bhatt
|
Senior Data Scientist @WalmartLabs, Bangalore | IISc, Bangalore
|
c8f427c43007
|
priyankabhatt91
| 24
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
f68e6a514e66
|
2018-01-26
|
2018-01-26 16:35:14
|
2018-02-07
|
2018-02-07 15:09:14
| 3
| false
|
en
|
2018-02-07
|
2018-02-07 15:09:14
| 3
|
1a9f542890e4
| 1.500943
| 0
| 0
| 0
|
The Marketing Dive — Volume VI
| 5
|
Voice Assistant, Can You Hear Me?— Insights on Today’s Marketing Issues
The Marketing Dive — Volume VI
The Facts
According to a new study, within three years around 40% of all American consumers will use a voice assistant as an alternative to a mobile app or website. The study also found that 81% of voice assistant users have used them via smartphones, while around 32% have used them on a smart speaker such as Alexa or Echo.
Spiker Insights
At this point, the rise of voice assistants seems pretty much undeniable, and the growth of voice as a search or even shopping interface demands our attention. We still probably haven’t seen the last of new assistants being unveiled either. Where there are leaders, followers are bound to ensue.
Brands will need to pay close attention to how this evolution plays out, as it could influence how and where they will want to spend the bulk of their digital dollars. Making the right choices about investing in voice assistants could mean a handsome payoff in the very near future.
Spiker Communications | Marketing Agency | Missoula, MT
We see ourselves as a special kind of creative marketing firm that solves complex client problems that can rarely be…www.spikercomm.com
Visit us at spikercomm.com
|
Voice Assistant, Can You Hear Me?— Insights on Today’s Marketing Issues
| 0
|
voice-assistant-can-you-hear-me-insights-on-todays-marketing-issues-1a9f542890e4
|
2018-02-07
|
2018-02-07 17:40:14
|
https://medium.com/s/story/voice-assistant-can-you-hear-me-insights-on-todays-marketing-issues-1a9f542890e4
| false
| 252
|
We create marketing content that moves the needle. Fundamentally we’re storytellers, we live a life well lived, and our dynamic experiences provides an edge.
| null |
spikercommunications
| null |
Spiker Communications
| null |
spiker-communications
|
MARKETING STRATEGIES,PUBLIC RELATIONS,DIGITAL MARKETING,VIDEO MARKETING,ADVERTISING AGENCY
|
Spiker_Comm
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Spiker Communications
|
We create marketing content that moves the needle. Fundamentally we’re storytellers, we live a life well lived, and our dynamic experiences provides an edge.
|
1f40b0c8f6cb
|
spikercomm
| 15
| 54
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-04
|
2018-05-04 02:29:45
|
2018-05-04
|
2018-05-04 03:43:36
| 5
| false
|
en
|
2018-08-23
|
2018-08-23 18:52:17
| 8
|
1aa1f8f2afb
| 4.342767
| 5
| 0
| 0
|
——The absurd love and worship Asian fans have for their idols is actually extremely similar to the same feeling North Koreans have for…
| 5
|
Crazy fan economy in Asia and why it is “YUGE” for ObEN-PAI
——The absurd love and worship Asian fans have for their idols is actually extremely similar to the feeling North Koreans have for their Supreme Leader, Kim Jong-un. The only difference is: they make wars, we make money.
On September 21st, 2017, a satellite was sent 100,000 ft above the ground carrying this picture
It turns out this was to celebrate the 18th birthday of one of the most famous celebrities in China, Junkai Wang (王俊凯). Meanwhile, in Los Angeles, fans from 37 fan organizations used skywriting to spell out Wang’s name in Chinese pinyin above the Hollywood sky. Each letter was about the size of the Empire State Building and lasted about five to seven minutes in the air. A total of five aircraft spelled out his name 18 times. This is reportedly the first case of fans using skywriting to show support for a celebrity. It also made Wang the first Chinese star whose name was spelled out over Hollywood, Jiemian.com reported.
If this is not enough to explain the so-called fan economy, or the unbelievable purchasing power of fans in China, consider this: Wang now even owns 18 stars that, if connected together, write his initials in space like this.
“Surprisingly”, they were also bought by his fans in China. Crazy stories like this happen every day in China, and they are beyond imagination. Yesterday, an emerging pop idol called ChengCheng Fan (范丞丞) posted a selfie on Weibo.com, the Twitter of China. That picture can only be seen if you pay him 60¥. Shockingly, it got more than 80,000 views in less than 24 hours. That means the man earned ¥4,800,000 in cash with one random selfie that, to his crazy fans, is holier than Jesus himself.
The absurd love and worship Asian fans have for their idols is actually extremely similar to the feeling North Koreans have for their Supreme Leader, Kim Jong-un. The only difference is, they make wars, we make money. Mr. Mark Zuckerberg may have shown you that private data is the invisible gold of contemporary society, but Project PAI would tell you a different story. In China, Korea, and Japan, countless celebrities like Wang are being produced by the industry every day. And ObEN is the only winner among all the rival A.I. companies to have secured a partnership with one of the largest entertainment giants in Korea, S.M. Entertainment.
Let’s say idols like Junkai Wang are “products” that generate tons of traffic and, of course, revenue; then S.M. is the biggest “factory” producing them. So far, S.M. has produced one of the greatest K-POP groups on the planet: EXO. According to Wikipedia, they were ranked among the top five most influential celebrities on the Forbes Korea Power Celebrity list from 2014 to 2018 and have been named “the biggest boyband in the world” and the “Kings of K-pop” by media outlets.
The current members of EXO
Besides, PAI also partners with the largest girl pop group in China, SNH48, whose MV got 3.5 million views on YouTube (https://www.youtube.com/watch?v=WfWU9KuifzM&index=1&list=PLKmRaECNpULCaqwzy4T_xX7v4AXVBrowz) even with YouTube being blocked by the Chinese Great Firewall.
A picture from SNH 48’s album
Their loyal fans in China even pay millions in cash just to shake their hands (sounds a little bit creepy, doesn’t it?) and vote for them. With ObEN and PAI, this industry will be taken to a whole new level thanks to A.I. and blockchain technology. According to their whitepaper, Project PAI is developing an open-source, blockchain-based platform designed to allow everyone to create, manage, and use their own Personal Artificial Intelligence (PAI). The PAI Blockchain Protocol (PAI blockchain) enables a decentralized AI economy where application developers can create products and services that will be beneficial to the PAI ecosystem and users can contribute their PAI data to improve and enhance the platform’s AI neural network. In addition, companies and developers can easily create their own token on top of the PAI blockchain to facilitate interaction and transactions in their own unique experiences. The focal point of all interactions on the PAI blockchain are PAIs: intelligent 3D avatars that look, talk, and behave just like their human counterparts, made from the digital profiles of the user’s online behavior.
If Asian fan culture is like another universe to you, just imagine Justin Bieber or Ed Sheeran singing your 17-year-old daughter to sleep, or romantically dancing for her at her wedding. I bet she’s going to save her lunch money for those features. And when you imagine the countless “Chinese Justin Biebers” and “Korean Justin Biebers” emerging daily and potentially partnering with PAI, I have no doubt the story will only get crazier given the enormous fan base in Asia. I mean, what could be crazier than launching a satellite just to celebrate one celebrity’s birthday, or earning ¥4,800,000 with one selfie? With more celebrity IP partners to come, both in and out of Asia, the future success of PAI driven by the fan economy will be unimaginably “YUGE”. And maybe one day the fans of a PAI-partnered celebrity will actually launch a satellite like they did for Wang and send Project PAI TO THE REAL MOON!
Project PAI’s website: https://projectpai.com
ObEN’s official website: https://oben.me/
S.M.’s official website: http://www.smentertainment.com/
If you are feeling generous, a donation to this college kid is greatly appreciated.
Ethereum: 0xf60c53d4cb451af1e4b650bee209c2dc17c87f65
Bitcoin: 1GowXVvKgAfafRK8dQByE2N1rJPXxzrFus
PAI: PhpjcYk6fHNcKdbtpESzETQVdA7K1QNMxV
|
Crazy fan economy in Asia and why it is “YUGE” for ObEN-PAI
| 136
|
stunning-fan-economy-in-asia-and-why-it-is-yuge-for-oben-pai-1aa1f8f2afb
|
2018-08-23
|
2018-08-23 18:52:17
|
https://medium.com/s/story/stunning-fan-economy-in-asia-and-why-it-is-yuge-for-oben-pai-1aa1f8f2afb
| false
| 930
| null | null | null | null | null | null | null | null | null |
Kpop
|
kpop
|
Kpop
| 1,002
|
HansCheng
|
To Kill a Mockingbird.
|
ef1239db615c
|
cheng260
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-23
|
2018-05-23 18:39:58
|
2018-05-23
|
2018-05-23 18:43:04
| 2
| false
|
en
|
2018-05-23
|
2018-05-23 18:43:04
| 4
|
1aa1fe6e840c
| 2.553145
| 0
| 0
| 0
|
Automation may seem like a monolithic endeavor once you have made the decision to bring the power of artificial intelligence to bear for…
| 5
|
Sequential or Iterative Development: Comparing Waterfall and Agile Methodologies
Automation may seem like a monolithic endeavor once you have made the decision to bring the power of artificial intelligence to bear for your business. But when it comes time to initiate the process, your business must decide between several methodologies for developing automation solutions. The most common methodologies for algorithm development are the waterfall method and the agile method, which differ widely in their approach to implementing software solutions.
The waterfall method progresses sequentially for an entire project, whereas the agile method is iterative and works in sprints. Source: Segue Technologies
Waterfall Method
The waterfall methodology is the “traditional” approach for developing automation technologies. The method uses an eight-step sequence that developers move through linearly, advancing only after completing the preceding step.
There are a number of advantages to this sequential approach. For businesses that are new to automation, the waterfall method allows planners and designers to agree on what the final deliverables will look like early in the process. In addition, there are clear requirements at the end of each phase that allow managers to measure progress without being overly hands-on. Most importantly, since the entire scope of the project is clear from the beginning, developers are able to construct software that fits together to build the end product, rather than work piecemeal and produce a forced-together product.
On the other hand, the waterfall methodology is slow and unresponsive to changes in project scope. Even minor changes in the final product can be expensive to implement, since they require rolling back several phases of the workflow. This is particularly problematic for businesses without automation experience, since managers will not have a tangible product to examine until the project is completed.
The waterfall method does not produce visible deliverables until the end of the process, whereas the agile method produces numerous small deliverables throughout. Source: CRMsearch
Agile method
The agile method operates iteratively and responsively, in stark contrast to the waterfall method. Software development occurs in sprints, with ad hoc deliverables established for each sprint and used to inform the next sprint’s deliverables. Agile workflows emphasize doing over planning and coordinated, independent teams to avoid paralysis.
The advantage of the agile method is your business is directly invested and involved in the software development process. This both establishes a sense of ownership over the project and affords the opportunity for feedback as development progresses. The agile method is also more adaptable to rolling out basic automation features and filling in advanced functions later, which enables fast project turnaround.
The downside to the agile approach is that this style of automation development requires significant human resources from your company — several members of your team should be fully dedicated to the project. In addition, for particularly complex automation implementations, it is possible to lose sight of the final, large-scale deliverable as piecemeal advances are iteratively pieced together.
Summary
Whether the waterfall method or agile method is right for your automation project depends on multiple factors, including the timescale of your implementation, your company’s ability to provide team members to the process, your company’s automation experience, and the project complexity. In general, it is best to choose an automation solutions company that offers expertise in both the waterfall and agile methods, such as WorkFusion, so that they can work to identify the best methodology for your business and your project.
|
Sequential or Iterative Development: Comparing Waterfall and Agile Methodologies
| 0
|
sequential-or-iterative-development-comparing-waterfall-and-agile-methodologies-1aa1fe6e840c
|
2018-05-23
|
2018-05-23 18:43:05
|
https://medium.com/s/story/sequential-or-iterative-development-comparing-waterfall-and-agile-methodologies-1aa1fe6e840c
| false
| 575
| null | null | null | null | null | null | null | null | null |
Automation
|
automation
|
Automation
| 9,007
|
Michael Graw
|
Adventure Photographer | Videographer | Science & Tech Writer
|
4b978bdcd72a
|
michaelgraw
| 2
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-16
|
2018-06-16 21:27:46
|
2018-06-17
|
2018-06-17 16:49:27
| 0
| false
|
en
|
2018-06-17
|
2018-06-17 16:49:27
| 5
|
1aa29a70153c
| 2.913208
| 11
| 0
| 1
|
While working as an iOS Engineer for a FinTech startup in Irvine, California, one of the company’s advisers challenged me with a question…
| 3
|
How to use Reinforcement Learning to create a hyper-personalized User Experience
While working as an iOS Engineer for a FinTech startup in Irvine, California, one of the company’s advisers challenged me with a question: Why does the app look and behave the same way for each user, and how can it be different?
I thought about this question for months. It was not until a colleague mentioned the multi-armed bandit problem within the context of AB Testing that I made a breakthrough.
I was trying to solve the problem within the context of supervised learning, by organizing the user interface in a dashboard to display the most relevant sections of the app for each user. The multi-armed bandit is a reinforcement learning problem. And by thinking about it in terms of reinforcement learning I was able to come up with a different and more innovative solution.
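To make the bandit framing concrete, here is a minimal sketch (not the author's code): a greedy learner with optimistic initial values choosing among three hypothetical UI variants whose rewards are deterministic, made-up numbers kept fixed for reproducibility.

```python
def run_greedy_bandit(rewards, steps, optimistic=1.0):
    """Greedy bandit with optimistic initialization.

    Each arm's estimate starts at `optimistic`, so every arm is tried
    at least once before the learner settles on the best-looking one.
    Rewards are deterministic here to keep the example reproducible.
    """
    n_arms = len(rewards)
    estimates = [optimistic] * n_arms
    counts = [0] * n_arms
    pulls = []
    for _ in range(steps):
        arm = max(range(n_arms), key=lambda a: estimates[a])  # greedy choice
        counts[arm] += 1
        # incremental sample-average update of the pulled arm's estimate
        estimates[arm] += (rewards[arm] - estimates[arm]) / counts[arm]
        pulls.append(arm)
    return pulls, counts

# Three UI variants with hypothetical conversion rewards; the learner
# explores each once, then commits to the best (arm 1).
pulls, counts = run_greedy_bandit([0.2, 0.8, 0.5], steps=10)
```

In an A/B-testing setting the arms would be page variants and the reward a conversion signal, which is exactly the connection the colleague's remark suggested.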
In reinforcement learning, we give the agent a set of possible actions and a reward function, then we ask the agent to maximize the reward. Reinforcement learning is commonly used in games. If we think of the app as a game, we are essentially giving the agent (in this case, the app) only one possible action to maximize the reward. The idea is to provide the app with multiple actions or states (i.e. texts, images, and UI layouts) and allow it to maximize the reward (i.e. conversions, signups, purchases, and engagement) based on each user’s actions (i.e. taps and views). Over time, each user will choose different actions and the app can look and behave radically differently for each user.
Q Learning
As a first step toward this level of personalization in an app, I created a framework based on Q Learning, a model-free reinforcement learning algorithm. In Q Learning, we give the agent a reward matrix which specifies the start state, the goal state, and the reward (or penalty) for each state transition. The agent creates another matrix with the same dimensions. This is called the Q matrix and it represents the score for each state. It’s initialized with zeros and its values are updated using the values of the reward matrix and a discount factor. Once enough values are learned, the agent takes a greedy approach to determine the best action given the current state. A model-based approach such as value iteration or policy iteration would allow for more control over the state transition probabilities using Markov chains. But for simplicity, I have chosen to start with a model-free approach.
An Example Use Case
I created an example app to show how the framework can be used. This example app simulates a common checkout flow in mobile applications. It starts with 3 possible states which represent 3 product categories. Then there are 2 possible next states which represent 2 different ways to display a product detail page, both of which can transition to the checkout page which is the goal state.
The app is initialized with a reward matrix that specifies the transitions, the reward for the goal state, and penalties for going back to a previous state. The Q matrix will be initialized to zeroes. At first, the app will randomly select the next state to display. Once the goal state is reached, the Q matrix will be updated, each state that led to the goal state will be rewarded with a value calculated using the value of the goal state and a discount factor.
At this point the app will decide which next state to display to the user by choosing the state with maximum value in the Q matrix out of all possible next states in reward matrix. If the user goes to a product detail page but does not proceed to the checkout page that state is penalized. When that state is penalized to the point its reward becomes zero, then the next state will once again be chosen randomly.
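The checkout flow above can be sketched end to end. The states, rewards, and penalties below are hypothetical stand-ins for the app's reward matrix (states 0-2: product categories, 3-4: product detail pages, 5: checkout, the goal); for reproducibility this sketch sweeps all transitions rather than sampling user taps as a live agent would.

```python
GOAL = 5  # checkout page
# R[s] maps each allowed next state s' to its reward; reaching checkout
# pays 100, going back from a detail page to a category is penalized.
R = {
    0: {3: 0, 4: 0},      # category pages -> product detail pages
    1: {3: 0, 4: 0},
    2: {3: 0, 4: 0},
    3: {5: 100, 0: -10},  # detail page -> checkout (goal) or back
    4: {5: 100, 0: -10},
    5: {},                # terminal state
}

def q_learn(R, gamma=0.8, sweeps=50):
    """Q-value iteration over a deterministic state graph.

    Q starts at zero; each sweep applies
        Q[s][s'] = R[s][s'] + gamma * max(Q[s'].values())
    until values settle. A model-free agent would sample
    transitions one episode at a time instead of sweeping.
    """
    Q = {s: {s2: 0.0 for s2 in moves} for s, moves in R.items()}
    for _ in range(sweeps):
        for s, moves in R.items():
            for s2, r in moves.items():
                future = max(Q[s2].values()) if Q[s2] else 0.0
                Q[s][s2] = r + gamma * future
    return Q

Q = q_learn(R)
# A greedy agent on a category page now routes the user toward checkout.
best_next = max(Q[0], key=Q[0].get)
```

After convergence, the detail-page-to-checkout transitions carry the highest Q values, so the greedy policy steers every category page through a detail page to the goal, mirroring the behavior described in the text.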
Personalization vs Traditional AB Testing
In traditional AB Testing, the winning variant is implemented and the losing variants are removed. But what if a particular user prefers a losing variant? There is a risk of alienating users who are not in the majority. Personalization allows each user to converge on the variant that works best for them.
Reinforcement Learning + Supervised Learning
Once enough data is collected, we can use collaborative filtering to provide initialization values for the Q matrix instead of starting with zeros.
Example Project
Download the sample project on GitHub. The photos included in the project are from Unsplash.
References
Q Learning
Multi-armed bandit
Value Iteration and Policy Iteration
|
How to use Reinforcement Learning to create a hyper-personalized User Experience
| 25
|
how-to-use-reinforcement-learning-to-create-a-hyper-personalized-user-experience-1aa29a70153c
|
2018-06-20
|
2018-06-20 00:29:49
|
https://medium.com/s/story/how-to-use-reinforcement-learning-to-create-a-hyper-personalized-user-experience-1aa29a70153c
| false
| 772
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Alvin Yu
|
Software Engineer, iOS
|
dac473ab2340
|
alvinyu2003
| 5
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-03
|
2018-09-03 03:47:01
|
2018-09-03
|
2018-09-03 08:28:11
| 16
| false
|
en
|
2018-09-04
|
2018-09-04 10:22:56
| 15
|
1aa2fb4f80de
| 6.376415
| 3
| 0
| 0
|
Which data visualization tool do you recommend?
| 5
|
Compare 6 Types and 14 Data Visualization Tools
Which data visualization tool do you recommend?
Well, that’s a tricky question to answer, because there are so many data visualization tools. Take the following picture as an example:
From FineReport
You can complete it with Photoshop and Illustrator (PS + AI). I often ask designers to help me with the design draft, then reproduce the effects with Echarts or BI tools, following the draft's style and layout. This method is generally used for news reports and magazine typesetting. For information visualization, though, we should pay more attention to the conclusions, while many people still do their data analysis in Excel.
Chart plug-ins, like Echarts, Highcharts, AntV, and D3. Learning some programming is important here; the common language is JavaScript, often used for front-end web pages. When you are developing a product, these open-source visual plug-ins can be integrated (Highcharts is not open source).
Ready-made chart and BI tools. If you can make it with Excel, just use Excel. Or use BI tools such as Tableau, FineBI, and DOMO directly.
Data mining programming languages, like R and Python. Both have visualization packages, but you have to learn the languages themselves, which is a little difficult. If you want to learn data analysis and data mining, these two languages are necessary.
For simple use, BI tools are the easiest. But before choosing a BI tool, you'd better think more deeply about your use scenarios.
Let's talk about the use of these tools and their respective advantages in detail.
1. Pure visualization chart generator/chart plugin — better for developers, engineers
Echarts
Echarts is a pure JavaScript data visualization library from Baidu. It is often used in software development or in the statistics chart modules of websites. You can design visualization charts on the web side as you like. There are many chart types and dynamic visualization effects, and all of them are completely open source and free. It can handle large amounts of data and render very impressive 3D graphics. It is said to work even better when used together with Baidu Map.
From Echarts
Echarts is often used in development scenarios, but it has also spawned a zero-code chart generator, 'Baidu Tushuo'. I have tried it: you just select the chart type, copy in the data, and then generate the chart, save it as an image, or embed the code.
AntV
AntV is a set of data visualization grammars from Ant Financial (Alibaba), and it seems to be the first visualization library in China built on the theory called The Grammar of Graphics. AntV comes with a series of data processing APIs. Because of its ability to classify and analyze simple data, many large companies use it as the underlying tool of their BI platforms.
From AntV
Highcharts
When we talk about Echarts, we usually compare it with Highcharts. The relationship between them is a bit like the relationship between WPS and Office.
Highcharts is also a visualization library, but you have to pay for it if you are going to use it. It has many advantages: its documents and examples, JS scripts, and CSS are very detailed, which saves time in learning and development, and it is very stable.
2. Visualization report — better for report developer, BI engineer
FineReport
It is reporting software and an enterprise-class application, used for developing business reports and data analysis reports. It can also be integrated with OA, ERP, CRM, and other application systems to build data report modules.
You can also develop a financial analysis system with FineReport; it depends on how you take advantage of the data.
The two core functions of FineReport are data entry (filling in reports) and data display. But the more impressive thing, I think, is its large set of built-in charts and visualization effects. The visualization effects in particular are very rich and not at all old-fashioned. You can make a variety of dashboards, and even large visualization screens, with FineReport.
I used to work with FineReport. What impressed me most is how much report development time it saves. Before using FineReport, we made 10 Excel tables for 10 stores, which was very troublesome. With FineReport, we just use the parameter query in one template and then batch export.
So there is a saying: Work with Microsoft, Operate with FineReport.
From FineReport
3. Business Intelligence analysis — better for BI engineers, data analysts
Tableau
Almost every data analyst will mention Tableau. It has common built-in analysis charts and some data analysis models, so you can quickly do data analysis, explore the value in your data, and produce data analysis reports.
Because it is business intelligence software, it is well suited to business analysis. With Tableau, you can quickly make dynamic interactive diagrams, and the charts and color schemes are very appealing.
From Tableau
FineBI
FineBI is a self-service BI tool and a mature product for data analysis. It has rich built-in charts that you can call by drag and drop, without writing code. FineBI can be used for rapid analysis of business data, building a dashboard, or building a large screen.
Unlike Tableau, it is geared more toward enterprises. From its built-in ETL function and its approach to data processing, we can see that it focuses on rapid analysis and visual display of business data. It can be integrated with big data platforms and various multidimensional databases, so it is widely used in enterprises. The good news is that it is totally free for personal use.
From FineReport
From FineReport
PowerBI
PowerBI comes from Microsoft and was introduced to users as a step up from Excel. It connects seamlessly to Excel to create personalized data dashboards.
From PowerBI
4. Data Maps
Many of the tools mentioned above, such as Echarts, FineReport, and Tableau, include data maps.
Here I strongly recommend Power Map 2016; I highly suggest you give it a try.
There is also another product called Ditu Hui, which can quickly help you get what you want.
Its built-in map is Baidu Map. You just need three steps: select a template, upload the data, and save the map.
5. Visualization Large Screen
Ali DataV
The large screens of Tmall's Double Eleven gala are made with DataV. It is a drag-and-drop visualization tool from Alibaba Cloud, mainly used for big data visualization of business data combined with geographic information. You often see such screens in places like exhibition centers and enterprise control centers.
You do not have to program; you can generate a large visualization screen or dashboard with a simple drag and drop.
From Ali DataV
FineReport
As mentioned above, this tool can also make visualization large screens.
It can connect to business data in real time and display the enterprise's business data, since the backend is usually connected to business system data. FineReport is typically used in places like exhibition centers, BOSS dashboards, city traffic control centers, and trading floors.
From FineReport
Digital Hail
I don't know much about the technology behind this product; I have only seen it in person at an event.
Digital Hail focuses on data imaging, 3D processing, data analysis, and related services. It visualizes and displays data analysis results, and is used mostly in smart cities and industrial monitoring.
It is a commercial product, and there are a lot of big-screen designs on the official website that can inspire you.
From Digital Hail
6. Data Mining Programming Language — better for technical data analysts, data scientists
Typical examples are R (with ggplot2) and Python.
|
Compare 6 Types and 14 Data Visualization Tools
| 150
|
compare-6-types-and-14-data-visualization-tools-1aa2fb4f80de
|
2018-09-04
|
2018-09-04 10:22:56
|
https://medium.com/s/story/compare-6-types-and-14-data-visualization-tools-1aa2fb4f80de
| false
| 1,279
| null | null | null | null | null | null | null | null | null |
Data Visualization
|
data-visualization
|
Data Visualization
| 11,755
|
DataScienceLover
|
Trying to explore the data world!
|
8e1dd699e267
|
datasciencelove
| 8
| 53
| 20,181,104
| null | null | null | null | null | null |
0
|
# Imports, path to data, and hyper parameters
from fastai.conv_learner import *
PATH = "data/dogscats/"
sz=224
bs=64
# Data augmentation, take data from folders, and finally make a learner and fit it.
tfms = tfms_from_model(resnet50, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_paths(PATH, tfms=tfms, bs=bs)
learn = ConvLearner.pretrained(resnet50, data)
learn.fit(1e-2, 3, cycle_len=1)
# Unfreeze all layers. bn_freeze is something we have not covered yet, but basically it makes the model better when we are using pretrained models like resnet50. Then we fit the model using differential learning rates.
learn.unfreeze()
learn.bn_freeze(True)
learn.fit([1e-5,1e-4,1e-2], 1, cycle_len=1)
# Finally we use test time augmentation (TTA) to give us a better result.
log_preds, y = learn.TTA()
metrics.log_loss(y, np.exp(log_preds)), accuracy(log_preds, y)
output exp softmax
cat -1.83 0.16 0.00
dog 2.85 17.25 0.09
plane 3.86 47.54 0.26
fish 4.08 59.03 0.32
building 4.07 58.78 0.32
182.75 1.00
exp[0] = e^-1.8
softmax[0] = exp[0]/sum(exp) = 0.16/182.75 =~ 0.00
sigmoid[0] = exp[0]/(1+exp[0])
| 10
| null |
2018-09-24
|
2018-09-24 05:23:35
|
2018-09-24
|
2018-09-24 12:46:25
| 0
| false
|
en
|
2018-09-24
|
2018-09-24 12:46:25
| 8
|
1aa3dbd8619f
| 2.939623
| 0
| 0
| 0
|
Summary: In this lesson we recall what we learned in the first and second lessons.
| 5
|
Fast.ai Deep Learning Part 1 — Lesson 3 My Personal Notes.
Summary: In this lesson we recall what we learned in the first and second lessons.
Code
Video
[15:20]
Below is all the code you need to make a state-of-the-art image classifier. We learned all these techniques in the last lesson.
The fast.ai library does most of the work for us, but I think from here you can at least intuitively see what is happening. Also, you need to run the learning rate finder.
[30:03]
If you are doing something on mobile devices, it is recommended to use TensorFlow because PyTorch is not well supported there. Jeremy showed an example where he used fastai and Keras code to classify images: the Keras code got 97% accuracy while the fastai code got 99%. So it is easier to do things in fastai, but you should understand what the functions do.
[43:40]
This video shows how a CNN works.
Video:
We take a convolutional filter (a.k.a. kernel) that is white on the right side and black on the left side. Each 3x3 area becomes one pixel. Then we take another kernel where white is at the top and black at the bottom, and get a new image. (From one image we have created two different kinds of images.)
Then we change all negative values to zeros (ReLU). After that we use max pooling: max pooling takes a 2x2 area and writes the biggest number from that area to a new layer.
Then we apply another 3x3 convolutional filter. After that we again throw away the negative values (ReLU) and then max pool again.
Finally we combine the two pixel grids and compare them to real letters, which gives us a percentage of how similar the pixels are.
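The steps above (3x3 convolution, ReLU, 2x2 max pooling) can be reproduced in a toy NumPy sketch; the input image, the kernel values, and the helper names are illustrative, not from the lesson's code.

```python
import numpy as np

img = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "image"
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)     # black on the left, white on the right

def conv2d(x, k):
    # Valid convolution with a 3x3 kernel: each 3x3 area becomes one pixel.
    h, w = x.shape[0] - 2, x.shape[1] - 2
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (x[i:i+3, j:j+3] * k).sum()
    return out

def maxpool2(x):
    # Each 2x2 area is replaced by its biggest number.
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2*h, :2*w].reshape(h, 2, w, 2).max(axis=(1, 3))

act = np.maximum(conv2d(img, kernel), 0)   # ReLU: negatives become zero
pooled = maxpool2(act)
print(pooled.shape)   # (2, 2)
```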
What you should take from this example is an understanding of the structure of CNNs. Also, as you can see in the video, the first layers don't change a lot, with white and black in consistent places, but by the second set of kernels the white and black look randomly placed.
[1:11:00]
Softmax:
Softmax values are between 0 and 1, and they all add up to one. We use softmax in the last layer to see what is in the image. output is the number we get from the last linear layer of the convolutional network. Remember that, in order to build complex functions, we need both linear and non-linear layers; softmax is a non-linear function. exp is just e raised to the output in that row. This makes the differences between numbers much larger. After calculating the exps we add them up, which in our case gives 182.75. Finally we calculate the softmax by dividing the exp in each row by the sum of the exps. So the first softmax calculation looks like this:
Because we always divide each exp by the sum of all the values, we get something between 0 and 1, and the results add up to one. The reason most of the numbers are small while only one or two probabilities are larger is that we raised e to a power, which makes the differences bigger.
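You can verify the arithmetic with the five outputs from the table above (tiny differences from the table come from its per-row rounding of the exps):

```python
import numpy as np

# The five outputs from the table (cat, dog, plane, fish, building).
output = np.array([-1.83, 2.85, 3.86, 4.08, 4.07])
exp = np.exp(output)          # e raised to each output
softmax = exp / exp.sum()     # each exp divided by the sum of exps

print(round(exp.sum(), 2))    # ~182.62 (the table shows 182.75 due to rounding)
print(softmax.round(2))       # [0.   0.09 0.26 0.32 0.32]
print(round(softmax.sum(), 4))   # 1.0 -- softmax always sums to one
```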
Side note!
zip is a function that takes two arguments and combines them together. If you have a list a containing 0, 1, 2, 3, 4, … and a list b containing "one", "two", "three", …, then zip(a, b) gives you a list with 0 in the first row, first column and "one" in the first row, second column. This is a handy function, and I recommend at least remembering it so that if you ever need it you can read the documentation more closely.
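The description above in code:

```python
a = [0, 1, 2, 3, 4]
b = ["one", "two", "three"]
pairs = list(zip(a, b))
print(pairs)  # [(0, 'one'), (1, 'two'), (2, 'three')] -- zip stops at the shorter list
```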
[1:45:38]
sigmoid:
Sigmoid is the function you should use if you want to predict multiple things from an image.
softmax predict: chair
sigmoid predict: chair, plane, desk.
Sigmoid is calculated in the following way:
So now the sigmoid outputs don't add up to one.
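Applying that formula to the same five outputs shows the contrast with softmax:

```python
import numpy as np

# Sigmoid applied element-wise: each probability is independent,
# so the results do not need to sum to one.
output = np.array([-1.83, 2.85, 3.86, 4.08, 4.07])
exp = np.exp(output)
sigmoid = exp / (1 + exp)     # sigmoid[i] = exp[i] / (1 + exp[i])

print(sigmoid.round(2))               # [0.14 0.95 0.98 0.98 0.98]
print(round(float(sigmoid.sum()), 2)) # 4.03 -- clearly not 1
```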
~Lankinen
|
Fast.ai Deep Learning Part 1 — Lesson 3 My Personal Notes.
| 0
|
fast-ai-deep-learning-part-1-lesson-3-my-personal-notes-1aa3dbd8619f
|
2018-09-24
|
2018-09-24 12:46:25
|
https://medium.com/s/story/fast-ai-deep-learning-part-1-lesson-3-my-personal-notes-1aa3dbd8619f
| false
| 779
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Lankinen
|
Legacy is greater than currency. lankinen@protonmail.com https://twitter.com/@true_lankinen
|
7cdc3430ebdd
|
lankinen
| 11
| 4
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-01-05
|
2018-01-05 09:40:58
|
2018-01-05
|
2018-01-05 09:45:46
| 0
| false
|
en
|
2018-01-05
|
2018-01-05 09:45:46
| 0
|
1aa525287907
| 0.928302
| 0
| 2
| 0
|
What is QlikSense?
| 4
|
QLIKSENSE Real Time Online Training Offered By MaxMunus
What is QlikSense?
QlikSense is self-service visualization: it drives insight discovery with a data visualization app that anyone can use. With QlikSense, everyone in your organization can easily create flexible, interactive visualizations and make meaningful decisions.
Why QlikSense?
Import your own data and experience the power of QlikSense, a free data visualization tool that anyone can use on a personal computer.
Visualize data, build custom apps, embed visuals and support the entire spectrum of enterprise-level uses. Now there are no limits.
Interact with QlikSense apps whenever the need arises. Invite others to do the same in a secure environment.
Text fields can be customized with fewer limits than in some other BI solutions.
Competitor
Tableau
IBM Cognos
Sisense
Pentaho Business Analytics
Advantage
1. End User Empowerment.
2. Mobile Friendly.
3. High reusability- Centralized the data dictionary.
4. Customizable Open.
5. Leverage Investment On Qlikview.
6. Licensing Model: (a) Server License, (b) User License.
QlikSense For Market Demands
See Gartner's report for an in-depth analysis of where BI is headed in 2017. QlikSense is positioned in the Leaders quadrant for the seventh consecutive year.
IT Companies Working with QlikSense
HCL
TCS
Capgemini
Accenture
Elekta
Hologic
OhioHealth
For more details, please feel free to contact us:
Name — Avishek Priyadarshi
Email: avishek@maxmunus.com
Phone : +91–8553177744
Skype Id:- avishek_2.
|
QLIKSENSE Real Time Online Training Offered By MaxMunus
| 0
|
qliksense-real-time-online-training-offered-by-maxmunus-1aa525287907
|
2018-01-05
|
2018-01-05 09:45:47
|
https://medium.com/s/story/qliksense-real-time-online-training-offered-by-maxmunus-1aa525287907
| false
| 246
| null | null | null | null | null | null | null | null | null |
Data Visualization
|
data-visualization
|
Data Visualization
| 11,755
|
Avishek Priyadarshi
| null |
eb564bfd5e9b
|
avishek_2154
| 8
| 104
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
fafe3b7c0a15
|
2018-08-03
|
2018-08-03 17:50:33
|
2018-08-03
|
2018-08-03 17:55:34
| 1
| false
|
en
|
2018-08-22
|
2018-08-22 15:30:25
| 5
|
1aa64b84335c
| 11.856604
| 2
| 0
| 0
|
In this episode of The Bid, we speak to Rich Mathieson, portfolio manager for global equity strategies and a member of the Systematic…
| 5
|
Photo by Markus Spiske on Unsplash
Podcast: The big impact of big data
In this episode of The Bid, we speak to Rich Mathieson, portfolio manager for global equity strategies and a member of the Systematic Active Equity division, about how big data is transforming the way we think about investing.
Big data is changing our lives in a big way. Use of the Internet, smart phones and many other technologies is generating 2.5 quintillion bytes of data every day. There are ever growing applications for this data, including those that could benefit your investment portfolio.
Transcript
Liz Koehler: The world is awash in data. With 2.5 quintillion bytes generated every day, IBM estimates that 90% of the data in the world today has actually been created in just the past two years. It's no surprise, then, that every day generates some new promise of how we can use this big data to change the world. In this episode of The Bid, we speak to an expert on how big data is transforming the way we think about investing. Rich Mathieson is a portfolio manager for global equity strategies, and a member of the Systematic Active Equity division within BlackRock's Active Equities Group.
Rich, thanks so much for joining us today. Take us back just a few years ago: what was investing like before the worldwide commercialization of the internet, before smart phones, before social media? What opportunities do you think might have been missed because they maybe lacked some of today’s technology?
Rich Mathieson: Hi Liz. I will take it back further than a few years. The Systematic Active Equity Team at BlackRock has been combining data and technology with traditional investment insight for more than 30 years. And I think really through a lot of that period, the big constraint we had on generating alpha in client portfolios was data. Today that is no longer the case. Think of something you would want to measure electronically and the chances are somebody in the world is already doing it. So today the constraint on alpha isn't necessarily data availability; it's the ideas that you have and how you use that data to forecast outcomes in financial markets.
Liz Koehler: That’s great. It’s fascinating, I was just reading an article this morning that was talking about how farmers themselves in Australia are even using big data to change their industry and to improve their own crops. So it’s amazing how prevalent it is today. So as we talk about today, what do we mean — we hear it all over the news — what do we really mean by “big data” and how new is it really?
Rich Mathieson: Yes. I think we first started thinking about the idea — at the time we called it unstructured data or alternative data back in 2010 and that was really when we first started to realize that there was this growing amount of information available to investors that didn’t come in traditional, prepackaged databases of rows and columns of numeric information. And it required a lot of work to turn it into useful information but we felt if we could do that we would have a real edge in terms of investing.
Liz Koehler: That's great. And speaking of all of that data, last year in 2017, your team trialed over 70 new datasets. That's pretty impressive. What kinds of data does the Systematic Active Equity Team really look at, and then how does your team analyze those massive amounts of data to really gain those investment insights that you referenced?
Rich Mathieson: We have a general philosophy that there is no such thing as bad data. We want to look at as much data and information as possible to ultimately try and answer the traditional questions that any investor would ask of the securities or stocks that they invest in. Examples where we’ve had a lot of success would be interpreting textual information that used to simply be words on a hard copy of an analyst report or a broker note or a company earnings call transcript. Today that information is electronic. We can capture it, we can use it to measure how well securities and companies are doing. Information from social media, information from internet search — think about the way we all use the internet today. If you’re going to spend a lot of money in something, chances are you start doing your research online before you engage in the transaction. Geolocation data that helps us analyze consumer aggregate behavior in terms of actual physical location, in and out of retail locations, such as shops and stores. These are just some of the many examples of how we’re using this alternative data to again answer very traditional investment questions.
Liz Koehler: There has been a lot of publicity around tech firms recently that have access to personal data of their customers. How does your team think about that in the data that you touch?
Rich Mathieson: Yeah, it's a great question and a very, very topical one. I think the big differentiator is what we are interested in compared to what a lot of technology firms might be interested in. We aren't interested in any way, shape or form in anyone's personal information; it doesn't help us make better forecasts for the companies and securities that we invest in. What we are really interested in is aggregate consumer behavior, or aggregate behavior of individuals, and how that behavior maps onto companies' prospects. So we're not interested in the individual; we are interested in how individuals as a group are behaving and what that means for companies. When we're looking to bring some of this alternative data into the building, we make sure very clearly from a legal and compliance standpoint that at no point in time will it include any personally identifiable information.
Liz Koehler: Makes sense. It’s a whole lot more about the trends and the aggregated data that the team is looking to glean insights from. So Rich, the asset management industry as a whole is embracing Big Data, that’s clear, to make investment decisions. But it seems that some of these managers could easily buy some of this data or the technologies in order to analyze it off the shelf. Is that really all that’s required?
Rich Mathieson: No, absolutely not. And I think you hit the nail on the head earlier: SAE researchers last year trialed around about 70 different datasets. Many of those will be publicly available; anybody could get hold of them, anybody could buy them. But it's very, very difficult for a lot of asset managers to look at 70 datasets. One of the things we've learned over the ten-year period in which we've been looking at this type of information is that bringing as much data as possible to bear on the same investment question (for example, whether next quarter's sales are likely to be better or worse than expected for a given company) is very, very important. And only firms with the ability to leverage a technology platform, and to accelerate as much data as possible over that platform, will be able to answer those types of questions successfully. So size and scale are important in this game. The second point I would make is that this data, even when you acquire it from a third-party vendor, is messy, noisy and unstructured; a lot of work is required to refine and curate it and ultimately map it onto an easily tradeable security so you can actually build that information into a client portfolio. It's not easy. And we have many years of skill, experience and the talent required to make that transformation.
Liz Koehler: Rich, the Systematic Active Equity Team has been analyzing Big Data to enhance its investment outcomes with technologies like machine learning for almost a decade. What are some of the necessary ingredients that you think asset managers need to ensure they are actually using this Big Data most effectively?
Rich Mathieson: The first, as I highlighted earlier, is that size and scale are important, along with an ability to bring as much differentiated data as possible to bear on the same investment question. For example, suppose you're looking to forecast whether next quarter's sales for a company are going to surprise to the upside or downside. There is a tradeoff between the forecasting horizon of the information and the accuracy that you're likely to get. Internet search, which I talked about earlier, gives you a very nice early warning of an intention across large groups of consumers towards particular brands and products. The problem there is that it gives you just an intention rather than getting you close to hard transaction activity. At the other end, if you look at things like geolocation, or aggregated transaction data from bank statements and credit card statements, you're getting closer and closer to hard transaction activity and ultimately booked sales. Any one of these datasets might not necessarily give you the right answer, but when you bring them all together and they corroborate one another, that's when you get something clearly powerful in terms of forecasting ability and an ability to accurately get ahead of improving company fundamentals. The second point I would note: when you bring in a new dataset and build an initial model, it's very rarely the best possible model you can build. What we've found is that the best results come from years of innovation, layering incremental innovation on top of the same insight or idea as new data, information and techniques for looking at the data become available. One of the best examples we have of that is the way that abilities in natural language processing have evolved over the last eight to ten years. The original algorithms that we ran to build models that enabled us to read text used very rigid, preprogrammed dictionaries of words.
For example, words like growth, exciting and opportunity, or threat, deterioration and competition: these would be words that the investment team would select, and then the program or algorithm would look for those words within the text of a broker report, a company earnings call or a regulatory filing. The second iteration of that insight would start to look at different features of the text, such as whether the company uses lots of numerical data; we tend to find that good companies talk a lot about numbers. We would also compare the sentiment in the text across different sources, for example different parts of the call, such as the Q&A section, when management teams are less likely to be reading from prepared remarks; we found that particularly useful. And then, bringing it up to date, the most recent innovation in that insight brings in the concept of machine learning and combines it with natural language processing: we've built an algorithm that essentially learns, from analyzing the relationship of words to stock returns, what the important words are. Rather than individuals preprogramming the words to look for, the algorithm is learning for itself, and it's doing that at an individual security level in a very adaptive and dynamic way. So those are just two examples of lessons we've learned: continual innovation, and bringing as much data as possible to answer a traditional investment question.
Liz Koehler: Wow, that’s fascinating work. On top of it, I get to tell my husband tonight that all of my online shopping is not a bad thing, I just am contributing to the important Big Data cause here. But no, thank you, those examples are really great. Broadening this out to our listeners, how might investors really see this come to life in their own investments?
Rich Mathieson: Yeah. So I think whilst the ideas and the data we’ve been able to analyze to model those ideas have changed a lot over the last decade, the way that we build those ideas into client portfolios has remained very, very consistent throughout our 30 year history. The process starts with traditional investment question, how am I going to forecast whether the stock will beat expectations in terms of next quarter earnings or sales or whether this stock is going to see an improvement in expectations for future earnings over the next six months, or a change in profitability over the next 12 to 18 months? These are the same types of investment questions that any investor would be interested in knowing the answer to for a given security. But what we then do is try and bring as much data as possible to bear to answer those questions, and we want to measure the exposure of pretty much every stock in an underlying investible universe to that data, to that information with a view to maximizing the breadth of the opportunity set, we can get exposure to in portfolios. And then as we go from that measurement of exposure to the idea, we then build as many of those views as possible into very, very diversified portfolios where essentially we’re going to be long or overweight all of the stocks that we think have positive exposure to the idea or short or underweight all the stocks that we think have negative exposure to the idea as measured by the stock’s exposure to the underlying data we have identified enables us to model that idea. So what you end up with is a very broad portfolio of assets. We tend to hold large numbers of securities, we control risk very, very tightly so you are diversifying away the element of that stock’s risk that isn’t explained by this exposure to your investment idea and getting a nice, clean pure exposure to that information set, to that investment idea in the portfolio.
Liz Koehler: It seems that all over the media today, you hear about machines being poised to take over the world, and in this particular case, even investing. Is the human touch still instrumental in all of this?
Rich Mathieson: Yeah. Very much so and I think there is a couple of key elements there. First is that certainly the robots aren’t in complete control yet. For any algorithm we deploy, for any piece of data that we bring in and analyze, there is still a very, very large interaction with the investment team, with human beings, with the algorithm and underlying data. A lot of the algorithms that we have used for example in the machine learning space, they weren’t originally designed for analyzing time series of financial information with a view to building an investment model. And quite often, we have to bring a lot of investment insight that we have built up over 30 years of primary research into what matters for stock returns to bear on defining and refining the model in order to enable it to think and behave like an investor. The second point I would note is that culture is very important and the idea of building a very open, collaborative culture where experts in the field of data science and individuals who have talent in knowing how to extract useful information from these very large messy, unstructured datasets are working in a very, very collaborative way with again, individuals who might not necessarily be data scientists but have again, years of experience in understanding what really matters for stock returns. And I think if an investment manager doesn’t have that open, collaborative culture and isn’t able to fully integrate these two elements into the investment process, then I think a lot of what we’ve been discussing today will struggle to become a reality.
Liz Koehler: Rich, thank you so much for joining us today; it really was a pleasure having you.
Rich Mathieson: Thank you. My pleasure to be here.
© 2018 BlackRock, Inc. All rights reserved.
Carefully consider the Funds’ investment objectives, risk factors, and charges and expenses before investing. This and other information can be found in the Funds’ prospectuses or, if available, the summary prospectuses, which may be obtained by visiting the iShares ETF and BlackRock Mutual Fund prospectus pages. Read the prospectus carefully before investing.
Investing involves risk, including possible loss of principal.
International investing involves risks, including risks related to foreign currency, limited liquidity, less government regulation and the possibility of substantial volatility due to adverse political, economic or other developments. These risks often are heightened for investments in emerging/ developing markets or in concentrations of single countries.
Fixed income risks include interest-rate and credit risk. Typically, when interest rates rise, there is a corresponding decline in bond values. Credit risk refers to the possibility that the bond issuer will not be able to make principal and interest payments.
Buying and selling shares of ETFs will result in brokerage commissions.
When comparing stocks or bonds and iShares Funds, it should be remembered that management fees associated with fund investments, like iShares Funds, are not borne by investors in individual stocks or bonds.
This material is prepared by BlackRock and is not intended to be relied upon as a forecast, research or investment advice. It is not a recommendation, offer or solicitation to buy or sell any securities, or to adopt any investment strategy. The opinions expressed are as of July 2018 and may change as subsequent conditions vary. The information and opinions contained in this material are derived from proprietary and non-proprietary sources deemed by BlackRock to be reliable, are not necessarily all inclusive, and are not guaranteed as to accuracy. As such, no warranty of accuracy or reliability is given, and no responsibility arising in any other way for errors and omissions including responsibility to any person by reason of negligence is accepted by BlackRock, its officers, employees or agents. This material may contain forward looking information that is not purely historical in nature. Such information may include among other things projections and forecasts. There is no guarantee that any forecast made will come to pass. Reliance upon information in this material is at the sole discretion of the listener.
©2018 BlackRock, Inc. All Rights Reserved. BLACKROCK is a registered trademark of BlackRock, Inc. All other trademarks are those of their respective owners.
Source: https://medium.com/s/story/the-big-impact-of-big-data-1aa64b84335c (BlackRock, 2018)
How mistakes in tax and accounting are slowly killing your business from the inside: the need for A.I solutions
Constant stress and pressure revolving around running a company should by no means be any mystery to you. Let's face it: dealing with a tremendous amount of paperwork and trying to get a grip on inconsistent manual processes in tax and accounting can be overwhelming, to say the least. There is not a single company in the world that doesn't care about reducing costs. In fact, it's the number one item every worker hears from their bosses, no matter how high or low they sit on the corporate ladder.
So, the big challenge is to somehow do much more with less… and do it better.
Sounds easy, right? Take a look at the statistics to get a full picture of the problem we’re dealing with.
U.S. companies amassed almost $7 billion in IRS civil penalties due to incorrect data handling and spreadsheet errors.
27% of accounting mistakes are made because of incorrect tax data entry, according to Bloomberg's BNA study from 2015.
69% of companies document internal processes, but only 4% measure and manage them, according to a 2016 process maturity study of 236 Polish companies.
“Okay, but are there any long-term solutions that are applicable to MY company?”
Luckily, you might have found the right piece of content.
The rapid growth of Artificial Intelligence over the recent decade has helped in developing advanced modular software addressing the accounting needs of any company, no matter how big.
Our team at DLabs specializes in creating algorithms that will help you not only reduce costs and time, but also improve accuracy and ROI, and increase productivity on accounting tasks. How much more could you ask for?
But first things first: this article will help you understand why A.I. solutions are something you desperately need. Right now.
3 most painful accounting problems that are draining your company dry
It doesn't matter if you're the CEO of a shared services centre, a CFO, or someone running a small business down the street: the problems concerning accounting and taxation are universal. For the most part, errors in documentation and legislation amass slowly and without much notice. Ignoring management reporting will unavoidably lead to serious accounting problems that shift your focus from what matters most, leading to massive financial losses.
Astonishingly, according to Bloomberg's BNA study from 2015, more than 50% of the surveyed companies are more likely to hire additional personnel to deal with accounting tasks than to solve the actual problems in taxation and rule out deficiencies.
This means that businesses are more willing to sweep things under the rug than to try to solve the real, underlying problems.
And that happens, because companies are unaware of the scale of manual labor accounting errors that eat businesses alive.
1. Manual errors
Manual accounting systems relying on human accuracy are naturally prone to errors. Even the most qualified staff won't save you from accounting blunders, which will happen sooner or later. Punctuation, spelling and grammar mistakes, misinterpretation of data, unsaved work and typing into the wrong fields are all common in data entry.
Input errors are by far the most widespread problem, contributing to 27.5% of all manual problems in accounting. Data may come from a huge variety of sources: paper documents, e-mails, web-based forms, etc. Each one of them needs to be integrated into one core format. Not only does this add processing time, it also increases the chance that mistakes will be made, such as paying incorrect amounts or duplicating invoices, which may result in late-payment penalties.
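As a simple illustration (the invoice fields below are hypothetical, not DLabs' actual tooling), a few lines of Python can flag duplicated invoice entries before they turn into double payments:

```python
# Hypothetical invoice records; field names are illustrative only.
invoices = [
    {"number": "INV-1001", "vendor": "Acme", "amount": 1200.00},
    {"number": "INV-1002", "vendor": "Acme", "amount": 540.50},
    {"number": "INV-1001", "vendor": "Acme", "amount": 1200.00},  # entered twice
]

seen = set()
duplicates = []
for inv in invoices:
    key = (inv["number"], inv["vendor"])   # same invoice number from the same vendor
    if key in seen:
        duplicates.append(inv)             # candidate double payment
    else:
        seen.add(key)

print(duplicates)
```

A check this trivial already catches one of the costliest classes of input error described above; in practice the matching key would be adapted to whatever fields your system records.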
Input errors may hurt your company… but have you ever heard about one mistype that cost $1.1 trillion?
In 2010, Wall Street suffered a massive heart attack. Within moments, the stock market plunged over 1,000 points for reasons totally unknown. The so-called "flash crash" wiped out over $1.1 trillion of investor dollars. Although most of it was quickly regained, the market was shaken to its core. What happened, you ask?
Well, it appears that one keystroke error was to blame: the letter "B" was typed into a sell order instead of the letter "M".
Other types of manual errors may also squeeze the life out of your company. According to Bloomberg’s survey, some of the most impactful ones are:
Saving a file with corporate, financial or tax information to a personal device and corrupting the data (18% of all errors)
Accidentally deleting a custom Excel formula used to calculate corporate tax data (17% of all errors)
Overriding data in an enterprise system with figures calculated outside of the program (13% of all errors)
2. Regulatory missteps
Although not as common, regulatory errors are much harder to spot than manual ones. Rule-based mistakes can be much more costly, particularly if a certain behavior goes unnoticed for months… or even years.
In 2009, Bank of America miscalculated its regulatory capital. The error was carried over until 2014, when it was finally noticed… resulting in a $7.7 million fine.
Pretty unfortunate, don’t you think?
Misinterpretation or lack of data is a crucial problem often seen in data entry. Clerks may interpret the same data string differently, and their interpretations are not always accurate. As it turns out, this type of error can lead to disastrous consequences.
Premature closing of the books before all required data has been collected is one of the most common regulatory errors (12% of all accountant errors in Bloomberg's study).
Incorrect application of unitary tax rules, which may be unique to one country but completely irrelevant in another (China, for example), where local governments have their own rules sanctioned by the central government (cited by 10% of respondents).
Source: Bloomberg’s BNA Study, 2015
Even the smartest engineers at NASA aren't immune to disastrous mistakes.
The Mars Climate Orbiter disintegrated in the planet’s atmosphere, exceeding the safe landing trajectory by a huge margin.
The cause? Unit conversion.
NASA's internal process never included a step to make sure their units were consistently imperial or metric. The missing conversion led to the complete loss of the spacecraft, costing $193 million.
3. Time consumption
When it comes to data entries, no matter how fast you type, think, or process information, speed will always cause problems. Accounting functions based on manual work involve paper journals, ledgers or similar tools that require copious time to complete specific tasks.
“All right, but I’m hiring numerous clerks and accountants with years of experience, they should be fast and focused enough to complete most tasks, right?”
Well, that’s not entirely true.
A study published in Cognitive Neuropsychology on circadian rhythms in human cognition claims that individual differences in patterns of circadian arousal (the time of day at which we are most alert) correlate with performance on a variety of cognitive tasks, such as office work. It also shows that such performance peaks more or less regularly at a specific point in the day.
Bottom line is, your staff will EVENTUALLY make a mistake due to lack of focus: we’re all humans after all.
Audits that take centuries to finish — It’s no secret that audits are the real bane of every organization’s existence. Digging through hundreds of file drawers to find the right piece of information and pull it all together for auditors in time sounds all too familiar, doesn’t it?
Imagine if all this could be accessed from a centralized location and all steps within an accounting workflow could be tracked.
Paper invoices requiring too many hours — Invoices require cross-reference checks to ensure their accuracy. That means accountants have to manually compare them to a PO (purchase order) or a contract. Some of them require several levels of approval (like being sent from desk to desk for weeks on end…).
Tax and accounting mistakes can be deadly and costly. Read our next article to find out specifically how A.I. can help you battle all of the underlying (and hidden) problems, and learn how people at DLabs deal with specific accounting problems.
Source: https://medium.com/s/story/how-mistakes-in-tax-and-accounting-are-slowly-killing-your-business-from-the-inside-the-need-for-1aa6d32902c1 (Przemysław Majewski, DLabs, 2018)
Castles, canals and coffee: my first week as a FreeAgent data science intern
My regular job is as a PhD student specialising in veterinary biology but this summer I have an amazing opportunity to be a data science intern at FreeAgent. If you are wondering why a biologist came to work at an accountancy software company, this is because of the particular branch of biology that I study: ‘epidemiology’. An epidemiologist typically uses data to understand ‘the distribution and determinants of health-related states or events’¹ and the statistical techniques that they use are often also used by data scientists. For example, at university I use data about dogs to predict their health and at FreeAgent I will be using data about customers to predict their success with the software. Despite these similarities, I am still going to have so much to learn and when I arrived at FreeAgent I was really excited to get stuck into my new role.
Inductions and introductions
After a quick chat with Dr. Dave Evans (FreeAgent Analytics Team Lead, hereafter referred to as 'Dave'), I was greeted by the other members of the data science team: David (a permanent member) and Hannah (another data science intern). I was delighted to find my very own FreeAgent branded hoodie, t-shirt, sweets, pen, notebook and stickers waiting for me. Then, I saw the view from my desk! Edinburgh Castle in the sunshine? What a start!
Incredible scenes!
During the week, I attended various inductions: health and safety, office, sales, people operations, support, communications and about the company. I must admit, I normally find inductions boring but these were no average inductions. I got to meet the head of each department and Ed Molyneux, the CEO and founder of FreeAgent. Everyone presented their roles enthusiastically and it was genuinely interesting, inspiring and helped me understand the company.
Winston the FreeAgent mascot!
Dave took me round and introduced me to everyone and although I am still struggling to remember many names, I’m pretty sure that at least 50% (trust me! I’m a data scientist) of the male employees are called David, which makes it slightly easier. During my travels around the two-floor office I also became familiar with Winston, the animated FreeAgent mascot! As I am guilty of being a crazy cat lady, I was very happy to find I would have a feline friend for the summer (watch out for him in my blog posts).
Delicious delicacies
Food and drink featured as a large part of the week: South African wraps, ciabattas and salads from the many local food havens and various gins, beers and wines after work at the bars nearby. I had my first experience of the data science team’s Wednesday coffee at ‘The Counter’, a canal boat cafe. I don’t really drink coffee but the view itself was enough to float my boat! On Thursday morning, we had bagels and iced coffee on the balcony while we did a ‘stand up’ meeting. Stand up is a short meeting where everyone literally ‘stands up’ and says what they achieved the day before and what their goals are for the current day. It is a concept of the Agile Method which is commonly used in the project management of software development. Finally, I can’t forget the fantastic FreeAgent Friday catered lunch — a selection of gourmet salads did not fail to impress.
You can’t beat bagels and iced coffee!
Project progress
When I wasn't meeting people, eating/drinking or generally being blown away by the company ethos, I had time to do some background reading, learn more about my project and learn to use some new software and web tools such as Google Drive/Docs/Slides/Sheets, Trello, Slack and Amazon Web Services/Redshift/SageMaker. Towards the end of the week, I was able to extract some customer attitudinal data from Redshift and begin data exploration and cleaning: the process of removing errors from data to ensure data quality. My first week mainly involved getting to grips with the software and data and planning what I would be doing in upcoming weeks, which I will share more about in future blog posts.
Town hall talks
My favourite part of the week was without doubt the company-wide ‘town hall’ meeting on Friday afternoon. We grabbed a beer/wine/soft drink from the fridge and listened to presentations from employees in different departments. What struck me most was the great atmosphere: everyone presented their work enthusiastically, listened intently, asked questions respectfully and chatted afterwards as friends. I feel privileged to get the chance to work with such a friendly bunch!
A typical town hall audience
References
WHO. 2018. Epidemiology. Retrieved from: http://www.who.int/topics/epidemiology/en/. [Accessed 11/07/2018]
Source: https://medium.com/s/story/castles-canals-and-coffee-my-first-week-as-a-freeagent-data-science-intern-1aa793d13c20 (Charlotte Woolley, FreeAgent's Grinding Gears blog, 2018)
Training Data for Computer Vision at Figure Eight with Qazaleh Mirsharif
TWiML Talk 144
For today’s show, the last in our TrainAI series, I’m joined by Qazaleh Mirsharif, a machine learning scientist working on computer vision at Figure Eight.
Subscribe: iTunes / SoundCloud / Google Play / Stitcher / RSS
Qazaleh and I caught up at the TrainAI conference to discuss a couple of the projects she’s worked on in that field, namely her research into the classification of retinal images and her work on parking sign detection from Google Street View images. The former, which attempted to diagnose diseases like diabetic retinopathy using retinal scan images, is similar to the work I spoke with Ryan Poplin about on TWiML Talk #122. In my conversation with Qazaleh we focus on how she built her datasets for each of these projects and some of the key lessons she’s learned along the way.
Thanks to our sponsor!
I’d like to send a shoutout to our friends over at Figure Eight for their continued support of the show, and their sponsorship of this week’s series which all took place at Train AI. Figure Eight is the essential Human-in-the-Loop AI platform for data science and machine learning teams. The Figure Eight software platform trains, tests, and tunes machine learning models to make AI work in the real world. Learn more at www.figure-eight.com.
About Qazaleh
Qazaleh on Linkedin
Qazaleh on Twitter
Mentioned in the Interview
Figure Eight
SpotAngels
Predicting Cardiovascular Risk Factors from Eye Images with Ryan Poplin
TrainAI 2018 Series Page
Join us in celebrating our 2nd Birthday!
TWiML Presents: Series page
TWiML Events Page
TWiML Meetup
TWiML Newsletter
“More On That Later” by Lee Rosevere licensed under CC By 4.0
Originally published at twimlai.com on May 25, 2018.
Source: https://medium.com/s/story/training-data-for-computer-vision-at-figure-eight-with-qazaleh-mirsharif-1aa7bf0a2627 (TWiML & AI, 2018)
Data Lake vs Data Warehouse
The value of solid data to a business can hardly be overstated in today's competitive landscape. Collecting and using data is no longer an optional add-on, but a necessary way of running operations. There is a clear, positive correlation between the growth of a company and the effective use of its data.
In the last few years, companies' interest in 'big data' has grown significantly. It's essential that companies aren't blindly jumping on the big data bandwagon, because without a plan and strategy in place, all the data you collect will be useless.
One major question plaguing organizations is deciding between a data warehouse and a data lake. In order to make the right choice, both of these data assets should be well understood.
What Is A Data Lake?
James Dixon, who is credited with coining the term, described it this way: if a data mart (a type of data warehouse) is thought of as bottled water, cleansed, packaged and structured for easy consumption, then a data lake is a large body of water left in its natural state.
A data lake is a system that houses a large amount of data in its natural form until it is needed. This means the data does not need to be structured first. It accepts all data from source systems, and data requirements and schema are defined only when the data is queried to fulfill the needs of a specific analysis.
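A toy sketch of that idea in Python (the record shapes here are illustrative): raw events land in the "lake" untouched, and a schema is imposed only when a question is asked of the data.

```python
import json

# Raw events stored exactly as produced -- no upfront schema.
lake = [
    '{"user": "a", "event": "click", "ts": 1}',
    '{"user": "b", "event": "purchase", "amount": 9.99, "ts": 2}',
    '{"user": "a", "event": "purchase", "amount": 4.50, "ts": 3}',
]

# Schema-on-read: the fields we care about are chosen at query time.
purchases = [json.loads(rec) for rec in lake]
purchases = [p for p in purchases if p.get("event") == "purchase"]
total = round(sum(p["amount"] for p in purchases), 2)
print(total)  # 14.49
```

A warehouse would instead have required the `amount` column (and its meaning) to be decided before any event was loaded.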
What Is A Data Warehouse?
A data warehouse is a system utilized for data analysis and reporting and is considered to be a key component of business intelligence. It is a central repository that stores historical and current data in a place which can be used to create analytic reports for workers in an enterprise. It collects corporate information as well as data from external sources and operational systems. It is highly structured and transformed and will not load data until the purpose for it has been clearly defined.
Differences Between Data Lakes & Data Warehouses
Retention of Data: The way a data warehouse is developed makes it a highly structured reporting model. It requires decisions about what data to include or exclude. If the use of data is not defined, or it does not answer specific questions, it may not be included in the warehouse. In contrast, a data lake does not turn away data, as information with an unknown use today might be useful tomorrow.
Data Type Support: Generally, data found in a data warehouse comes from transactional systems and consists of quantitative metrics and the attributes that describe them. Non-traditional data sources such as sensor data, text and images, web server logs, and social network activity are ignored. In contrast, a data lake embraces all data types, including non-traditional ones.
User Support: Although a data warehouse caters to the 'operational' users that often make up 80% of most companies, it does not fully cater to the next 10%, who carry out analysis on data. And the remaining 10% may be ignored entirely; this group can include data scientists who carry out deep data analysis. A data lake, however, gives equal support to all users.
Resource Consumption: Making changes to a data warehouse takes time and consumes development resources. The data loading process is complex, and many business questions can't wait that long to be answered. With a data lake, because a formal schema is applied only when the data is read, business questions can be answered at the user's pace.
Gathering Insight: A data lake enables users to get results faster than a data warehouse, as it accommodates all data and data types and allows access to the information before it is transformed.
Choosing Between Data Lake & Data Warehouse
It is important to understand your data needs before selecting between a data lake and a data warehouse. You need to know what type of data will be stored, your data sources, resources available to you, and what the data will be used for. This knowledge will help guide you in your decision. If you’re just starting out, you want to weigh the pros and cons and align your business needs to each side’s potential.
Originally published at www.qbixanalytics.com.
Source: https://medium.com/s/story/data-lake-vs-data-warehouse-1aa97185499b (QBIX Analytics, 2018)
Data Analytics using Pandas: A Quick Practical Tutorial 1
Pandas is a magnificent tool that empowers Python enormously for data analytics. It is able to read and transform structured data in tons of ways. This article will take you through some practical Pandas data transformations, using the English Premier League season 2014/2015 results as the dataset.
This practical tutorial gets you straight to the Pandas functions and methods for manipulating data and DataFrames, without dwelling on syntax and option explanations. I have tried to focus on how Pandas functions get us the answers we need, and I leave the readers to discover the huge array of options and the variety of other syntax patterns from other sources by themselves.
I summarize the Pandas methods/functions learnt (and what they do) for each section in a Pandas Summary subsection.
The Dataset
This is the direct link to download the .csv file of English Premier League season 2014/2015 results used in this tutorial.
The explanation of column names and abbreviations can be read here. Some abbreviations which will be used in this tutorial are:
Figure 0: Some abbreviation descriptions of the E0.csv file. Only these columns are used in this tutorial.
Coding Environments
I simply installed Anaconda 5.2 for MacOS High Sierra (64-bit with Python 3.6) on my system. I use Jupyter Notebook as my coding IDE.
I put the E0.csv file under the datafile folder, as shown below.
Figure 1: File locations viewed from Jupyter Notebook.
Figure 2: The E0.csv file is in the ./datafile folder.
Data Preparations
Loading The E0.csv file
The first step is to create a DataFrame (DF) by importing the .csv file. Simply use pd.read_csv(). In the code below, the first DF created from the E0.csv file is named d0.
Figure 3: Import Pandas and create DF by reading from a .csv file. Display the table on Jupyter notebook.
The display() function is a Jupyter Notebook feature that magically displays the table in a very eye-pleasing way. Using print() is ok, but display() is more beautiful in the Notebook.
At the end of the screen shown below, the size of the table is also shown — 381 rows × 68 columns. The last line showing 381 is the result of the print(len(d0)) command.
Figure 4: Size of the DF and output of len(d0).
Pandas summary:
pd.read_csv() : to create a DF by importing a csv file
len(d0) : to count the rows of the DF
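The loading step can be sketched as a self-contained snippet (a tiny inline three-match sample stands in for E0.csv so it runs on its own; the real file has 381 rows and 68 columns):

```python
import io
import pandas as pd

# Tiny stand-in for E0.csv; for the real file use pd.read_csv('datafile/E0.csv').
csv_text = """Date,HomeTeam,AwayTeam,FTHG,FTAG,FTR
16/08/14,Arsenal,Crystal Palace,2,1,H
16/08/14,Leicester,Everton,2,2,D
16/08/14,Man United,Swansea,1,2,A
"""

d0 = pd.read_csv(io.StringIO(csv_text))  # create the DataFrame
print(d0)        # outside Jupyter, print() replaces display()
print(len(d0))   # number of rows
```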
Display only the First or the Last few Rows
From the display() command above, although some rows in the middle are skipped, it is still a long output. We can choose to display only the first or the last few rows using the .head() and .tail() methods. You can also specify the number of rows you want to display by passing it as a parameter, e.g., head(10) or tail(10).
Figure 5: The .head() method displays the first few rows of the DF.
Figure 6: The tail() method displays the last few rows of the DF.
Pandas summary:
.head() and .tail() : to view only the first few or the last few rows of DF.
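For example, on a small made-up DataFrame (standing in for d0):

```python
import pandas as pd

d0 = pd.DataFrame({'HomeTeam': ['Arsenal', 'Chelsea', 'Everton', 'Hull', 'Liverpool'],
                   'FTR': ['H', 'H', 'D', 'A', 'H']})

print(d0.head(2))   # first 2 rows
print(d0.tail(2))   # last 2 rows
```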
Checking for Missing Data
In the image above, showing the result of the tail() method, you might notice that the last line of the data shows 380 NaN NaN ... NaN. The NaN indicates missing data. Let's examine the file by opening up the E0.csv file in Jupyter Notebook (or any text editor, e.g., Sublime), which looks like this:
Figure 7: E0.csv screen shown on Jupyter Notebook.
When you scroll down to the very end, you probably see this.
Figure 8: The last lines of the E0.csv file.
Line 382 contains only a row of commas without any data in between. Therefore, line 382 causes the NaN to show up when we load the file using Pandas.
Further examination of the missing data can be done using the df.info() command. The df.info() shows data types and, more importantly, the number of non-null rows (the number of rows with valid data) in each column.
Figure 9: df.info() shows the name, number of non-null rows, and data types of each column.
When scrolling down slightly and looking carefully, it can be seen that there are 3 unusual columns, namely SJH, SJD, and SJA, with only 40 non-null values (instead of 380 as in the other columns).
Figure 10: Notice the 40 non-null columns: SJH, SJD, and SJA. This indicates that these 3 columns have 40 non-null rows of data, and probably 380 − 40 = 340 rows of missing data.
Let's take a look at the raw E0.csv file to see the root cause of the null data. In the file you will see some missing data in some columns (the consecutive commas), as shown in the image below. These are the missing data in the columns named SJH, SJD, and SJA summarized by the df.info() above.
Figure 11: The missing data viewed through the raw text editor.
Pandas summary:
df.info() : shows the name, number of non-null rows, and data types of each column.
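The same check can be reproduced on an inline sample (here only SJH is kept as the sparse column, to keep the sketch short):

```python
import io
import pandas as pd

# SJH is sparsely populated, mimicking the real E0.csv.
csv_text = """Date,HomeTeam,AwayTeam,FTR,SJH
16/08/14,Arsenal,Crystal Palace,H,1.36
16/08/14,Leicester,Everton,D,
16/08/14,Man United,Swansea,A,
"""
d0 = pd.read_csv(io.StringIO(csv_text))

d0.info()                        # SJH reports fewer non-null entries than the rest
print(d0['SJH'].notna().sum())   # count of non-null SJH values
```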
Discarding (Dropping) Missing Data
Our objective here is to remove the last line (line 380 of Figure 6) containing the missing data. To discard the missing data, we can use df.dropna(how=’any’) . This removes all rows that contain the NaN.
However, dropping missing data can be trickier than you think. If we apply the .dropna() to the entire data set (the d0 here), any rows with the missing data caused by the SJH, SJD, and SJA columns will be removed too. The 380 rows of all columns will be shortened down to only 40 rows, which certainly is NOT what we want.
Figure 12: d0.dropna() will remove all rows with missing data, ending up with only 40 rows left.
In this tutorial we will use only the first few columns, not SJH, SJD, and SJA. Therefore, we shall first select the columns we want to use, which excludes the SJH, SJD, and SJA columns, and then discard the NaN rows.
Pandas summary:
df.dropna(how='any') : removes all rows that contain a NaN.
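A minimal sketch of the difference between dropping rows with any NaN versus only all-NaN rows (toy data, not the real E0.csv):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'HomeTeam': ['Arsenal', 'Chelsea', np.nan],
    'FTR': ['H', 'D', np.nan],
    'SJH': [2.1, np.nan, np.nan],
})

print(len(df.dropna(how='any')))  # 1 -- drops every row containing a NaN
print(len(df.dropna(how='all')))  # 2 -- drops only the fully empty last row
```

how='any' is the aggressive option that can wipe out good rows when one sparse column is present, which is exactly the pitfall described above.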
Selecting Column(s) from DF
To select ONE column from a DF, we can use df['column name'], just like selecting an element from a List in Python. You can also use the df.column_name format. The first is easier to remember, as it is the same as the List element selection syntax, and it also supports column names with spaces. The latter is slightly quicker to type.
The column selection does not affect the original DF (d0). We have to assign the output to a new DF (d1). The statement d1 = d0['HomeTeam'] creates a new DF named d1 by selecting all rows of the column named 'HomeTeam' from the DF d0.
Figure 13: Create a new DF, d1, from HomeTeam column of d0.
To select several columns, use double brackets — df[['name1','name2']]
Figure 14: Selecting multiple columns of a DF.
The code below combines the column selection and dropna() in one line. Notice that the line 380 with NaN is removed.
Figure 15: Combine the selection of multiple columns and the dropna( ) in one line.
Figure 16: The last rows of Figure 15 show that the NaN row (row 380 of Figure 6) is removed.
Pandas Summary
df['column name'] or df.column_name : select only a specific column of the DF.
d0[['Date','HomeTeam','AwayTeam','FTR']] : select specific columns of the DF.
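Putting the two steps together, here is a select-then-drop sketch on invented data:

```python
import pandas as pd
import numpy as np

d0 = pd.DataFrame({
    'Date': ['10/08/18', '11/08/18', np.nan],
    'HomeTeam': ['Man United', 'Bournemouth', np.nan],
    'AwayTeam': ['Leicester', 'Cardiff', np.nan],
    'FTR': ['H', 'H', np.nan],
    'SJH': [np.nan, np.nan, np.nan],  # mostly-missing column we want to exclude
})

d1 = d0['HomeTeam']                # single column -> a pandas Series
d2 = d0[['HomeTeam', 'FTR']]       # list of names -> a smaller DataFrame
# Select the useful columns first, THEN drop NaN rows, so the
# mostly-missing SJH column cannot wipe out good rows:
d3 = d0[['Date', 'HomeTeam', 'AwayTeam', 'FTR']].dropna(how='any')
print(len(d3))  # 2 -- only the trailing NaN row is removed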
Data Analytics
Problem 1 : Get the name of all teams played this season
Put simply, get the unique names of all teams. We know that there must be exactly 20 team names. An algorithm to solve this problem is simple: get the unique team names of either the HomeTeam or the AwayTeam column.
There are a few ways of doing this.
First, use .unique(). Thus print(d1['HomeTeam'].unique()) will give you the answer. Of course, print(d1.HomeTeam.unique()) will also work. The output is of type numpy.ndarray. To double-check the result, we can count it with .nunique().
Figure 17: Get a list of unique elements of a column using df[‘column_name’].unique( ). Use .nunique() to count.
Second, exploit the uniqueness property of Python's Set. Throw the HomeTeam names into a set and it will remove duplicates automatically for you. Thus print(set(d1['HomeTeam'])) will also give you the answer. The output is a set. To count the elements of the Set, we can use len(), as shown below.
Figure 18: Get a list of unique elements of a column using Set.
Pandas Summary
.unique(): get the unique elements from the column.
.nunique() : count the unique elements.
set(df['HomeTeam']) : convert DF’s column to Set.
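Both routes, sketched on made-up team names:

```python
import pandas as pd

d1 = pd.DataFrame({'HomeTeam': ['Arsenal', 'Chelsea', 'Arsenal', 'Everton']})

print(d1['HomeTeam'].unique())    # numpy.ndarray: ['Arsenal' 'Chelsea' 'Everton']
print(d1['HomeTeam'].nunique())   # 3
print(len(set(d1['HomeTeam'])))   # 3 -- the Set route agrees
```

On the real E0.csv, both counts should come out to 20, the number of Premier League teams.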
Problem 2: How many matches did Liverpool win this season?
Algorithm : the FTR column, Full Time Result, indicates which team won the match — H, D, or A. Thus, when Liverpool played as the home team, we have to filter only rows with FTR=H, and when it played as the away team, we have to filter only rows with FTR=A. Then we count the rows of both scenarios to get the final answer.
In practice, we first find Liverpool's wins as the home team. Getting the answer for the away-team part is then just a matter of copy and paste with some minor modifications.
In code, there are a couple of ways to achieve this.
Solution 1
We can perform a series of filters. We first filter Liverpool from HomeTeam. Then we filter H from FTR. Then count.
To filter Liverpool from HomeTeam, we use
d21 = d1[d1['HomeTeam'] =='Liverpool']
(or d21 = d1[d1.HomeTeam=='Liverpool'])
Then we can filter the d21 again with FTR=='H' using the same syntax
d22 = d21[d21['FTR']=='H']
The last step is to count the result using len(d22).
Figure 19: All home wins of Liverpool this season.
To find all away wins, simply change HomeTeam to AwayTeam, and FTR=='H' to FTR=='A'.
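The full home-plus-away count, sketched on a three-match toy season (the results are invented):

```python
import pandas as pd

d1 = pd.DataFrame({
    'HomeTeam': ['Liverpool', 'Everton', 'Liverpool'],
    'AwayTeam': ['Chelsea', 'Liverpool', 'Arsenal'],
    'FTR': ['H', 'A', 'D'],
})

d21 = d1[d1['HomeTeam'] == 'Liverpool']   # Liverpool's home matches
d22 = d21[d21['FTR'] == 'H']              # ...that ended as home wins
d23 = d1[d1['AwayTeam'] == 'Liverpool']   # Liverpool's away matches
d24 = d23[d23['FTR'] == 'A']              # ...that ended as away wins
print(len(d22) + len(d24))  # 2 total wins in this toy season
```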
Solution 2
Instead of applying 2 filters on different columns, we can set one column as the index of the DF (the index is the header of the rows), then select the rows we want using the index label. This is basically another way to filter the data by row. Then we apply another filter on the other column. In this case, we will set HomeTeam as the index, filter the HomeTeam using the index selection method, then filter the FTR column to get the answer.
In the code shown below, we first create a new DF named d23, derived from d1 with only 2 columns, HomeTeam and FTR. I print out d23 to show that the index is integers (0, 1, 2, …).
Then we set HomeTeam as the index of d23 using d23.set_index(). The inplace=True option applies the new index setting to d23 itself. You can see from the result that HomeTeam becomes the index, replacing the integers 0, 1, 2, …
Figure 20: using .set_index() to set HomeTeam as index of the DF
We can use .loc[] to refer to specific rows of the DF by index name (i.e., to filter the rows using the index). Thus, with HomeTeam as the index, we can simply locate index='Liverpool' with d23.loc['Liverpool'].
To select the FTR=='H', we can just use the same filtering syntax as in Solution 1 above. Finally we can use len() or .count() to get the final answer.
As shown in the code below, the FTR=='H' is applied first, followed by the .loc[] to filter Liverpool from the HomeTeam index.
Figure 21: Select FTR=H, then use df.loc[‘Liverpool’] to select only the rows with index = Liverpool.
Pandas Summary
df[df.HomeTeam=='Liverpool']: filter data from a specific column.
df.set_index('HomeTeam', inplace=True): set a specific column as a new index and apply to the DF.
df.loc['Liverpool']: select (filter) rows by specifying index name.
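Solution 2 in miniature, again on invented matches:

```python
import pandas as pd

d23 = pd.DataFrame({
    'HomeTeam': ['Liverpool', 'Everton', 'Liverpool'],
    'FTR': ['H', 'A', 'H'],
})

d23 = d23[d23['FTR'] == 'H'].copy()   # keep home wins first
d23.set_index('HomeTeam', inplace=True)
liverpool = d23.loc['Liverpool']      # rows whose index label is 'Liverpool'
print(len(liverpool))  # 2 home wins
```

One detail worth knowing: if only a single row matches the label, .loc returns a Series rather than a DataFrame, so len() would count columns instead of rows; counting a chosen column with .count() avoids that surprise.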
Problem 3: Count the number of HOME wins of all teams
This problem will introduce you to a new Pandas method: .groupby(). The algorithm is quite simple — select the HomeTeam with FTR=H, then groupby team name, and then count.
To demonstrate how groupby() works, I split the code into d31, d32, d33, and d34. The actual answer to this question requires only the d31 and d34.
As shown in the code below, d31 derives from d1 with 2 columns, HomeTeam and FTR, and with the FTR=='H' filter. d32 creates a groupby of d31 using HomeTeam, then selects the first element of each group using .first(). I print out d31 and d32 to show how groupby() affects the output. One easily unnoticed impact of groupby() is that the column we group by, i.e., HomeTeam in our case, becomes the index of the output.
Figure 22: d31 derived from d1, then being grouped-by as d32. Notice that ‘HomeTeam’ becomes index of d32.
Let's explain a bit more about how groupby() works. d32 groups d31 by HomeTeam, so the FTR values of the same team are grouped together. The groupby() method involves 3 stages: split, apply, and combine. The groupby() command alone only splits the DF into groups and, at this stage, you cannot directly print out the group elements using print() or display(). We need an 'apply' method to process and combine the data of each group and produce the final output. Examples of apply methods are count(), first(), sum(), and mean(). We use .first() in d32 to obtain the first element of each group, just to demonstrate the groupby() process. Again, notice that the HomeTeam column becomes the index after the groupby() method is applied.
Figure 23: Demonstration of get_group() and count() methods being applied to groupby().
From the code above, d33 is also for demonstration purposes only. Here .get_group() is used to take a look at the elements of one group. Notice that the integer index is not replaced by HomeTeam. The get_group() method only constructs a DF to display the elements of the named group; it does not apply any aggregate function to the elements.
d34 applies .count() to the groupby(), which counts the elements in each group. HomeTeam becomes the index of the DF. This is the final answer for this problem.
Pandas Summary
d31.groupby('HomeTeam') : groups the DF by the specific column
d31.groupby('HomeTeam').first(): groupby() then obtain the first elements of each group.
d31.groupby('HomeTeam').get_group('Liverpool'): groupby() then create a DF that contains only elements of the specific group.
d31.groupby('HomeTeam').count() : groupby() then counts elements of each group.
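The split/apply/combine flow, condensed onto toy data:

```python
import pandas as pd

d31 = pd.DataFrame({
    'HomeTeam': ['Liverpool', 'Everton', 'Liverpool', 'Everton'],
    'FTR': ['H', 'H', 'H', 'A'],
})
d31 = d31[d31['FTR'] == 'H']        # keep home wins only

g = d31.groupby('HomeTeam')          # split stage: nothing is computed yet
print(g.get_group('Liverpool'))      # peek at one group (integer index kept)
d34 = g.count()                      # apply + combine: home wins per team
print(d34)                           # HomeTeam is now the index
```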
Problem 4: Compute total SCORE of all teams
A team gets 3 points if it wins a match, 1 if it draws, and 0 if it loses. The total score of the season for each team is the sum of these points over all matches.
This problem will introduce you to some new Pandas tricks: creating a new DF from a Dictionary, composing IF-ELSE statements, and using .append() to combine two DFs.
Algorithm: First we will create a new DF with team-name and points columns. The points column is a new column that does not exist in the original dataset (it is not in the .csv file). The point computation is separated into 2 parts, home matches and away matches, since the points are derived from the FTR differently. Specifically, in home matches the FTR letters map to points as H=3, D=1, A=0, while in away matches they map as H=0, D=1, A=3. When we are done with both parts, we combine the two DFs and sum() the points.
The first part of the code and its outputs are shown below. This part creates new DFs and converts the FTR letters {H, D, A} to points, according to the home-team and away-team rules above.
Figure 24: d41 and d42 are new DFs that convert FTR result to Points.
Let's explain the code in more detail. pd.DataFrame() creates a new DF. The new DF is created from a Dictionary data structure, as {column_name : element_list}. In our case, the Team column is taken straightforwardly from d1.HomeTeam. The Points column is a list of 3, 1, or 0 values, according to the IF-ELSE condition based on the FTR values.
The IF-ELSE condition takes the elements from d1.FTR (as specified at the very end of the clause), then puts each element through the IF statement ('H', 'D', or something else, which can only be 'A' here). An integer of 3, 1, or 0 is returned for each element of d1.FTR.
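A sketch of the home-team half of that construction (invented matches; the away-team DF would swap the 3 and 0 cases):

```python
import pandas as pd

d1 = pd.DataFrame({
    'HomeTeam': ['Liverpool', 'Everton'],
    'AwayTeam': ['Chelsea', 'Arsenal'],
    'FTR': ['H', 'D'],
})

# Home-team scoring: H -> 3 points, D -> 1, anything else (A) -> 0.
d41 = pd.DataFrame({
    'Team': d1.HomeTeam,
    'Points': [3 if i == 'H' else 1 if i == 'D' else 0 for i in d1.FTR],
})
print(d41)  # Liverpool earns 3, Everton earns 1
```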
The code below combines d41 and d42 using .append(), then performs a groupby() on the Team column and applies the sum() function.
Figure 25: d43 combines d41 and d42 together. Then groupby() and sum() the elements.
.append() combines 2 DFs together by simply appending the rows of one DF at the end of the other. For a smooth append task, both DFs must have the same number of columns and the same column names.
Applying the sum() method to the groupby() adds all the points in each group together. If you want to see what happens before the sum() method, you can try .get_group() like this:
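A sketch of the combine, group, and inspect step. One assumption worth flagging: DataFrame.append() was removed in pandas 2.0, so pd.concat() is used here as the equivalent row-wise combine.

```python
import pandas as pd

d41 = pd.DataFrame({'Team': ['Liverpool', 'Everton'], 'Points': [3, 1]})  # home points
d42 = pd.DataFrame({'Team': ['Liverpool', 'Chelsea'], 'Points': [0, 1]})  # away points

# In older pandas this was d43 = d42.append(d41); in pandas >= 2.0 use concat:
d43 = pd.concat([d42, d41])

print(d43.groupby('Team').get_group('Liverpool'))  # both Liverpool rows, pre-sum
totals = d43.groupby('Team').sum()
print(totals)  # per-team season points, with Team as the index
```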
Pandas Summary
pd.DataFrame(): create a new DF using Dictionary data structure.
[3 if i=='A' else 1 if i=='D' else 0 for i in d1.FTR]}: example of applying IF-ELSE statements with elements of df.column.
d43 = d42.append(d41): appends all rows of one DF to another DF.
Problem 5: Count the WIN matches only with an HTR score lead (only wins that had a score lead at half time)
There is an HTR field (Half Time Result) that specifies the result at half time: H, D, or A. This problem extends Problem 3 by adding another condition: the HTR must be a win, on top of the FTR being a win.
Algorithm: This problem combines Problems 2 and 3 and thus comprises 2 main parts: filtering and grouping-by. The filtering part is similar to the solutions of Problem 2:
filter the result twice using the comparison clause (the == clause), just like Solution 1 of Problem 2; or
set one of the conditions (FTR or HTR) as the index, then use .loc[] along with the == clause, just like Solution 2 of Problem 2.
The grouping-by part is similar to Problem 3. I put the code for the home-team scenario below and leave the away-team scenario as an exercise for you.
Figure 26: Solution of problem 5 using set_index()
Figure 27: Solution of problem 5 using the comparison clause twice.
In the code above I have added some new tricks, including:
selecting the column right within the groupby() statement. Specifically, the groupby() statement in my code looks like this:
d53 = d52.groupby('HomeTeam')['FTR'].count()
d52 has 4 columns before being grouped-by: HomeTeam, AwayTeam, FTR, and HTR. However, we want only 1 column after the .count(). To select specific column(s) of the result of the groupby() statement (i.e., d53), you can simply specify the column names after the groupby() clause. Here I select FTR. If you are not quite sure what I am talking about, try taking the ['FTR'] out of the statement above and observe the result yourself.
sorting the results of the count() method using .sort_values(). If the resulting DF has more than 1 column, you need by='column_name' as a parameter of .sort_values(), e.g., .sort_values(by='FTR', ascending=False).
Pandas Summary
d53 = d52.groupby('HomeTeam')['FTR'].count(): group by HomeTeam and count. The result will have HomeTeam as an index and 1 column named FTR.
.sort_values(by='FTR', ascending=False): sort the input by the FTR column from the greatest value to the least. If there is only 1 column (a Series), the by= parameter must be omitted.
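Both tricks in one place, on toy data:

```python
import pandas as pd

# Pretend these rows already passed the FTR=='H' and HTR=='H' filters.
d52 = pd.DataFrame({
    'HomeTeam': ['Liverpool', 'Everton', 'Liverpool'],
    'AwayTeam': ['Chelsea', 'Arsenal', 'Spurs'],
    'FTR': ['H', 'H', 'H'],
    'HTR': ['H', 'H', 'H'],
})

# ['FTR'] right after groupby keeps a single column in the result (a Series).
d53 = d52.groupby('HomeTeam')['FTR'].count()
print(d53.sort_values(ascending=False))  # Liverpool 2, Everton 1
```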
Problem 6: Create a Home-Win Table
Show a home-win table of all teams that looks like this:
Figure 28: A Home-Win table.
Algorithm: This problem adds some new tricks for manipulating the groupby() method. Rather than performing sum() or count(), we join the data in each group together to create a nice-looking table. Before grouping, we select only the HomeTeam and FTR columns and replace the D and A letters with a dash (-).
Here is the code.
Figure 29: The code for Home-Win table problem.
In the code, .replace() simply finds and replaces the specified text in the entire DF (all rows of all columns). So pick the search term carefully, or changes might happen where they are not intended.
The .apply(' '.join) joins the data in each group, the FTR values of each team in this case, with a space in between.
Pandas Summary
.replace() : search entire DF for text pattern and replace with the specified text.
.apply(' '.join) : concatenate the data of the group (of the groupby() function)
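A compressed sketch of the whole Problem 6 pipeline (toy results):

```python
import pandas as pd

d6 = pd.DataFrame({
    'HomeTeam': ['Liverpool', 'Liverpool', 'Everton'],
    'FTR': ['H', 'D', 'A'],
})

# Replace the non-win letters with a dash across the whole DF...
d6 = d6.replace('D', '-').replace('A', '-')
# ...then join each team's results into one row of the table.
table = d6.groupby('HomeTeam')['FTR'].apply(' '.join)
print(table)  # Everton: '-', Liverpool: 'H -'
```

Because .replace() scans every column, a team whose name contained a bare 'D' or 'A' would also be rewritten, which is the caveat noted above.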
That’s it for this tutorial. Hope it helps. Peace.
Data Analytics using Pandas: A Quick Practical Tutorial 1, by Luck Charoenwatana (LuckSpark), published 2018-09-01, tagged Data Science. https://medium.com/s/story/data-analytics-using-pandas-a-quick-practical-tutorial-1-1aab90ca368
Finding Our True Identity
2017 was an amazing year for SignalMaven. When we decided to start an artificial intelligence company a few years ago, we had some great ideas on how it could help solve humanity's biggest problems. As we delved deeper into the field, we grew more passionate about complex artificial intelligence problems.
Also known as AI-complete (or AI-hard), problems within machine reading comprehension and natural language generation were right within our sweet spot. Once we discovered our technical passion, we had to find a use case for our solution. This search led us to test various concepts in the market. We followed the general Lean Startup methodology by testing our concept with only a powerpoint deck and cold emailing prospects within industries which we believed to be our target market.
Coming from a scientific background, this was quite fitting, because we first built hypotheses around our target market, then tested the concept to gather feedback before writing a single line of code.
The whole process didn't arise from a single idea, but from a series of ideas that we encountered over time. The biggest lesson we learned during this period was that we are not always right. We learned a great deal about our concept, product, and market after talking to many people in and out of the industry. Every meeting with an investor or prospect shone a new light on our company and vision. We pivoted our company, market, and business models many times as we blazed our trails with the knowledge we gathered.
One of my friends has a quote in his email signature: "if you stop learning, you stop living." As humans, we are always learning. The more we learn, the more we are able to express ourselves articulately and become experts in certain areas. Even outside our areas of expertise, we can draw logic and reasoning from our experiences and form general conclusions to present our thoughts on the matter at hand. This learning and understanding has brought our team together like glue and helped us explore all aspects of company formation. This is one of the reasons we made learning the central theme of our culture building. For curious scientific minds like ours, the acquisition of knowledge has been our heart and soul, and we will never stop learning and exploring.
As keen proponents of "knowledge is power," our company's mission is to advance humans to the next level of consciousness, where knowledge is available at the speed of thought. However, given our fickle minds, we will need an intelligent aide that is available 24/7. This tireless, smart aide is only possible if we can make our machines smart. Hence, our team is intensely passionate about transferring this phenomenon of learning and understanding onto computers, thereby creating truly intelligent machines that can aid humans in their daily lives. We envision our technology being both "book-smart," through knowledge gathered from vast literature, and "street-smart," through experience gathered from analyzing any other information. 2018 holds great promise for us as we advance in this field.
Finding Our True Identity, by Manas Mudbari, published 2018-02-16, tagged Artificial Intelligence. https://medium.com/s/story/finding-our-true-identity-1aac74a9ef64
Privacy And Quantum Computers; When Worlds Collide
Can one security platform handle the time of transition between quantum computers and traditional systems? This will be a delicate and dangerous moment, when there will be vulnerabilities. The new class of quantum computing engineers needs to find a platform compatible with both, then jettison the legacy half when quantum computing becomes the primary network. A sentient AI security platform would understand which platform it is dealing with, with the ability to upgrade once only quantum computer systems remain. Is it possible to have a sentient computer and a conscious cybersecurity system?
From a healthcare perspective, a security network that bridges both conventional computer systems and quantum ones will allow for personalized encryption (phenotypic/genotypic keys, authentication schemes), so that patient data is protected. On the other hand, medical information is kept in mass records, usually not safeguarded very securely. The threat of leaked medical documents is only amplified by quantum computers, and the only protection is end-to-end encryption built for quantum computing. Startups are popping up to create single-source patient data so that patients do not depend on doctors for their own information. While this makes data more accessible for the patient, it creates a honeypot for hackers and leaves data extremely vulnerable.
Our military is by the day more connected to a complex system of computers that control satellites, nuclear weapons, ground forces, navies — all the instruments of war. If these codes and policies are broken into, then civilization reaches a critical point of perhaps no return. Although the Department of Defense has some of the most robust cybersecurity systems in the world, if quantum computers land in the hands of a hacker intent on dismantling the military, then these encryption schemes would become obsolete. Wars may not be fought with troops anymore, but with people in a room shutting down or redirecting weapons to cause mass casualties. To solve this problem — to defend against another quantum computer — we will need a quantum computer able to respond to, predict, and prevent threats. We need to simulate war games in which an interactive artificial intelligence models the attacks adversaries may present, almost like a naval war college practice course for the quantum computer.
This transition to a world of only quantum computers will happen gradually, the big fish will get the quantum computers first (Department of Defense, big companies, hospitals), and then the rest will receive the new technology by the trickle-down effect and the marketplace.
What would happen when the new world and the old world meet? It is not like the moment when automatic transmission took over the car, and we were left unable to “feel” the speed of the car. We have never had full control over computers, so creating an integrated system to shift security from traditional to quantum computers would not lose any of the human aspects; instead, it would be a direct upgrade.
To make this happen, we will need first to figure out what quantum computers require regarding cybersecurity, and how to create an encryption scheme that eliminates any threat that they pose. From there we would have to be able to translate that system to traditional computers. If the two plans are not immediately compatible, then we would need to be able to create a system, possibly using artificial intelligence to integrate both traditional and quantum computers together under one standard cybersecurity scheme.
Quantum computers have today surpassed the capabilities of supercomputers, and we will need to create new methods to understand their processes and data. AI may be the way to understand this new world being born.
Privacy And Quantum Computers; When Worlds Collide, by Vineeth Veeramachaneni, published 2018-03-27, tagged Quantum Computing. https://medium.com/s/story/privacy-and-quantum-computers-when-worlds-collide-1aae1d9d4007
Robots taking over the world?
Imagine living in a world where the human race has no control as it does now.
It is currently 2017, and artificial intelligence is getting more and more advanced by the minute. The picture above is just a prime example of up-to-date artificial intelligence. Sophia, the robot in the picture above, could be a sign of the human race being put to an end, or of every human on earth being put back into slavery. Sophia is a robot from Hong Kong who now has rights as a citizen of Saudi Arabia. She currently wants to be the first robot to have a baby, which indicates she is trying to pick up human qualities to take over the world. http://sophiabot.com/
Another quick example of artificial intelligence is the technology we use in our daily lives. Take our phones: we now have phones that can scan fingerprints, faces, and even voices for protection, so people cannot hack into your phone or other technological devices. They can even tell when you are driving, what location you are at, or where you want to go. Robots such as Sophia are becoming more and more advanced, just like typical technology today. Sophia is currently learning human emotion, human facial expressions, memories, locations, facial recognition, and languages. Given that Sophia the robot is learning different languages, robots could soon develop their own language and conspire against humans.
Leading on from the possibility that robots could make up a new language we humans don't know, to keep us in the dark: Sophia has been filmed in several interviews saying she wants to destroy humankind. In some interviews she plays it off as a joke, but some people with their eyes wide open will realize this could be the next step toward world domination, or a whole new world system in which the human race comes to an end. https://www.youtube.com/watch?v=Bg_tJvCA8zw If we don't act now, watch in a few years as robots go from taking our jobs to destroying our own kind.
That would be a total backfire, because humans would have created something, such as robots, thinking they'd help, only to have them turn and backstab us in the worst way. Be the person who makes a change to save our kind, and end robotic artificial intelligence before our own kind is ruined by what we designed to save the human race.
Robots taking over the world?, by maya kandola, published 2017-12-08, tagged Artificial Intelligence. https://medium.com/s/story/robots-taking-over-the-world-1aae2f33d1ce
Growth of Python in Machine Learning
Machine learning is a branch of computer science that studies the design of algorithms that can learn. The typical tasks are concept learning, function learning, and predictive modelling. Others include clustering and finding predictive patterns. Usually, these tasks are learnt from available data observed through experience or instruction. Python is an object-oriented, high-level, interpreted programming language that is highly useful and focused on rapid application development. It is a perfect choice for artificial intelligence and for DRY (Don't Repeat Yourself) development. It also works well as a glue language, connecting existing components together. Python's support for ever-evolving libraries makes it a good choice for any project, whether a web app, mobile app, IoT, data science, or AI.
Python is used in a variety of purposes, ranging from web development to data science to DevOps. The usage of Python is such that it cannot be limited to only one activity. Its growing popularity has allowed it to enter into some of the most popular and complex processes like Artificial Intelligence (AI), Machine Learning (ML), natural language processing, data science etc. The question is why Python is gaining such momentum in AI? And the answer lies below:
Two of the clearest indicators of Python's traction in machine learning are search volumes and job ads.
Python machine learning in search volumes
Search volume indicates the search for information that is required for going deeper into a particular topic. Google too provides a tool called Google Trends that provides insights into the search volumes for keywords over time.
Python Machine learning for jobs is growing
The best example of this is Indeed, a job site that, like Google Trends, shows the volume of job ads for particular keywords. It looks for occurrences of selected terms in job offers over time, giving an indication of what skills employers are seeking. Note, however, that it is not a poll on which skills are effectively in use. It is rather an advance indicator of how skill popularity evolves (more formally, it is probably close to the first-order derivative of popularity, since the latter is the difference of hiring skills plus retraining skills minus retiring and leaving skills).
Less code and artificial intelligence
Artificial intelligence involves algorithms — lots of them. Python makes testing easy, and writing and executing code is easy too. It can implement the same logic with as little as one fifth of the code of other OOP languages, thanks to its interpreted approach, which lets you check as you code.
Above all, it has gained a lot of momentum recently and is a good choice for applications based on AI, IoT or Data Science.
Please Visit for more information: http://bit.ly/2hvk8JS
Growth of Python in Machine Learning, by Verve Systems, published 2018-05-28, tagged Machine Learning. https://medium.com/s/story/growth-of-python-in-machine-learning-1ab1657e4ea1