Dataset columns (type and observed range of values; ⌀ appears to mark columns that contain nulls):

audioVersionDurationSec: float64, 0–3.27k ⌀
codeBlock: string, lengths 3–77.5k ⌀
codeBlockCount: float64, 0–389 ⌀
collectionId: string, lengths 9–12 ⌀
createdDate: string, 741 classes
createdDatetime: string, length 19 ⌀
firstPublishedDate: string, 610 classes
firstPublishedDatetime: string, length 19 ⌀
imageCount: float64, 0–263 ⌀
isSubscriptionLocked: bool, 2 classes
language: string, 52 classes
latestPublishedDate: string, 577 classes
latestPublishedDatetime: string, length 19 ⌀
linksCount: float64, 0–1.18k ⌀
postId: string, lengths 8–12 ⌀
readingTime: float64, 0–99.6 ⌀
recommends: float64, 0–42.3k ⌀
responsesCreatedCount: float64, 0–3.08k ⌀
socialRecommendsCount: float64, 0–3 ⌀
subTitle: string, lengths 1–141 ⌀
tagsCount: float64, 1–6 ⌀
text: string, lengths 1–145k
title: string, lengths 1–200 ⌀
totalClapCount: float64, 0–292k ⌀
uniqueSlug: string, lengths 12–119 ⌀
updatedDate: string, 431 classes
updatedDatetime: string, length 19 ⌀
url: string, lengths 32–829 ⌀
vote: bool, 2 classes
wordCount: float64, 0–25k ⌀
publicationdescription: string, lengths 1–280 ⌀
publicationdomain: string, lengths 6–35 ⌀
publicationfacebookPageName: string, lengths 2–46 ⌀
publicationfollowerCount: float64
publicationname: string, lengths 4–139 ⌀
publicationpublicEmail: string, lengths 8–47 ⌀
publicationslug: string, lengths 3–50 ⌀
publicationtags: string, lengths 2–116 ⌀
publicationtwitterUsername: string, lengths 1–15 ⌀
tag_name: string, lengths 1–25 ⌀
slug: string, lengths 1–25 ⌀
name: string, lengths 1–25 ⌀
postCount: float64, 0–332k ⌀
author: string, lengths 1–50 ⌀
bio: string, lengths 1–185 ⌀
userId: string, lengths 8–12 ⌀
userName: string, lengths 2–30 ⌀
usersFollowedByCount: float64, 0–334k ⌀
usersFollowedCount: float64, 0–85.9k ⌀
scrappedDate: float64, 20.2M–20.2M ⌀
claps: string, 163 classes
reading_time: float64, 2–31 ⌀
link: string, 230 classes
authors: string, lengths 2–392 ⌀
timestamp: string, lengths 19–32 ⌀
tags: string, lengths 6–263 ⌀
audioVersionDurationSec: 0 | codeBlock: null | codeBlockCount: 0 | collectionId: null | createdDate: 2018-03-22 | createdDatetime: 2018-03-22 21:39:14 | firstPublishedDate: 2018-03-23 | firstPublishedDatetime: 2018-03-23 16:55:21 | imageCount: 0 | isSubscriptionLocked: false | language: en | latestPublishedDate: 2018-03-23 | latestPublishedDatetime: 2018-03-23 17:05:12 | linksCount: 1 | postId: 19d785cea1bd | readingTime: 2.245283 | recommends: 0 | responsesCreatedCount: 0 | socialRecommendsCount: 0 | subTitle: "As of last year, Facebook has 2.2 billion (2,200,000,000) active users. These people share their dreams, aspirations, political…" | tagsCount: 3 | text:
Facebook : Darkness inside
As of last year, Facebook has 2.2 billion (2,200,000,000) active users. These people share their dreams, aspirations, political affiliations, personal moments and personal choices (like DC or Marvel) with their family and friends. Facebook sells or uses this data to run targeted campaigns. Contrarily, people joined Facebook in the first place because it promised safety for users' data, unlike other companies in the early internet.
Being an advertising company, Facebook has a conflict of interest between its business model and its users' privacy and data security.
The last few years saw the rise of deep learning and AI technologies, which are evolving exponentially. These technologies give Facebook many ways to exploit user data and adversely affect human society. (I write this as an AI practitioner.)
Scenario 1: Real election results and strategy (this is possible with current technology). In the US, 66% of adults use Facebook on a daily basis. These people share their political affiliations in posts and comments. With cloud GPU infrastructure in place, Facebook can easily profile a person's affiliation to political parties with NLP and deep learning. They already know this, and it is as easy as counting the total number of users on Facebook. Thus, Facebook can predict the results of polls with 100% accuracy and also influence them. This was not possible one or two years ago. Think of a situation where an insider or engineer with access to this data gives it to politicians or the wrong people.
Scenario 2: Running pro/anti policy propaganda (this is possible with current technology). As discussed earlier, Facebook already knows a person's sentiment toward a policy. In small African and Asian countries where the government is not powerful or capable enough to detect election meddling, Facebook could run a propaganda campaign on all the citizens of a country. For example, suppose you are a Donald Trump supporter. Your newsfeed carries at least two articles every day about Russian meddling in the US elections plus a pro-global-trade article. This would slowly influence a person and change his ideology.
Scenario 3: Total human control (this is not possible as of now, but many steps are being taken in this direction, as pointed out by François). Facebook has notoriously run many experiments in the past on user behaviour and mood. Facebook knows everything about its users and also controls their newsfeeds, so it has both the data and the tools to manipulate people completely. Within a few years, AI will be quite capable of running this large-scale control problem, and humans would never know it is happening, as the AI would be strong enough. Human thinking is very limited and naive when it comes to processing large amounts of data; AlphaGo Zero is a very good example of how powerful AI can be.
In all these scenarios, users won't know it is happening until something really bad happens. Once AI algorithms get good enough, it will be impossible to detect any such incident. This poses a grave and real threat to democracies around the world.
Solution: Facebook should make its newsfeed and trending-news algorithms public. It should tell its users concretely how it is using their data. It should make all its experiments and uses of data public.
If Facebook doesn't do this, then the only way out is for privacy-respecting companies like Apple to enter social networking. They should build an end-to-end encrypted social network, or take a big step in that direction, so that nobody, including insiders, can misuse the data.
Data should be treated like money in the stock market: no one should exploit it, and there should be an open and clean system in place.
title: Facebook : Darkness inside | totalClapCount: 0 | uniqueSlug: facebook-darkness-inside-19d785cea1bd | updatedDate: 2018-03-23 | updatedDatetime: 2018-03-23 17:05:13 | url: https://medium.com/s/story/facebook-darkness-inside-19d785cea1bd | vote: false | wordCount: 595 | publication* fields: null | tag_name: Artificial Intelligence | slug: artificial-intelligence | name: Artificial Intelligence | postCount: 66,154 | author: Jyothir Aditya | bio: INquisitive, Futurist, FOSS member | userId: a6ece48dbdde | userName: elomas | usersFollowedByCount: 3 | usersFollowedCount: 14 | scrappedDate: 20,181,104 | claps, reading_time, link, authors, timestamp, tags: null
audioVersionDurationSec: 0 | codeBlock: null | codeBlockCount: 0 | collectionId: null | createdDate: 2017-11-28 | createdDatetime: 2017-11-28 19:56:35 | firstPublishedDate: 2017-12-03 | firstPublishedDatetime: 2017-12-03 14:46:19 | imageCount: 3 | isSubscriptionLocked: false | language: en | latestPublishedDate: 2018-04-06 | latestPublishedDatetime: 2018-04-06 15:46:14 | linksCount: 1 | postId: 19d7ac5b247a | readingTime: 1.938679 | recommends: 3 | responsesCreatedCount: 0 | socialRecommendsCount: 0 | subTitle: "Gain flexibility" | tagsCount: 5 | text:
Overlapping text annotations
Gain flexibility
When it comes to text annotation, sometimes you need to annotate entities or fragments of text contained within others, or simply overlapping others.
One of our goals at tagtog is to allow users to quickly train machine learning models. Often your training data is not great in terms of quality or quantity; however, you can still achieve quick results with acceptable accuracy, and in some cases that might be enough to solve your problem. From there you can iterate.
Overlapping annotations increase flexibility and allow you to make the most out of your data.
Let's take a closer look at this type of annotation.
For example: Toyota Corolla, where Toyota is a Company and Toyota Corolla is a Vehicle Model.
One annotation containing another. Each entity type is represented by one color.
The user needs to read the text, so the point is to visualize the annotations without disturbing the user while reading. We did that by not increasing the line height or the spacing between words.
That was easy! Let's go on to the next example.
Three entities annotated, two annotations overlapping
From the visual side, as in the previous case, we didn't break the text structure, to provide a nice reading experience. Annotations:
Reddit as a Company
Reddit closed $50 million in funding as Investment
$50 million in funding at a $500 million valuation as Valuation
Most of the text annotation tools out there do not support such annotations, and with them the process of creating an annotated corpus can be stricter, slower and more expensive.
You can also handle other scenarios, such as annotations contained within the exact same text span. This is convenient when one annotation represents more than one concept. For example:
Sample of customer feedback. Two annotations (first in pink, second in yellow) within the same span represent two entities.
In this case the span brake adjuster represents both a Vehicle Part and a Failing Part.
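One common way to store annotations like these is standoff annotation: each entity keeps character offsets into the untouched text, so spans can nest or overlap freely. This is a sketch of that idea (not tagtog's actual storage format); the labels and offsets come from the Reddit example above.

```python
# Standoff annotation sketch: spans reference character offsets in the
# original text, so overlapping and nested annotations coexist naturally.
text = "Reddit closed $50 million in funding at a $500 million valuation"

annotations = [
    {"label": "Company",    "start": 0,  "end": 6},   # "Reddit"
    {"label": "Investment", "start": 0,  "end": 36},  # "Reddit closed $50 million in funding"
    {"label": "Valuation",  "start": 14, "end": 64},  # "$50 million ... valuation"
]

def span_text(text, ann):
    """Recover the surface text covered by an annotation."""
    return text[ann["start"]:ann["end"]]

# Two spans overlap when each one starts before the other ends.
overlapping = [
    (a["label"], b["label"])
    for i, a in enumerate(annotations)
    for b in annotations[i + 1:]
    if a["start"] < b["end"] and b["start"] < a["end"]
]
```

Because the text itself is never modified, the same representation also covers the "two annotations on the exact same span" case: two entries with identical offsets but different labels.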
Summing up:
Using overlapping or contained annotations can help you focus on the value of your data and reduce your dependency on rigorous guidelines, hierarchies and other painful steps that are very often not required to get results.
And YES, it is possible to display these annotations and make the annotation journey a pleasant and efficient one.
With this text annotation tool you can generate training data at scale. You can use it for free at tagtog.net; send us your feedback!
title: Overlapping text annotations | totalClapCount: 96 | uniqueSlug: overlapping-text-annotations-19d7ac5b247a | updatedDate: 2018-04-06 | updatedDatetime: 2018-04-06 15:46:15 | url: https://medium.com/s/story/overlapping-text-annotations-19d7ac5b247a | vote: false | wordCount: 368 | publication* fields: null | tag_name: Annotations | slug: annotations | name: Annotations | postCount: 243 | author: 🍃tagtog | bio: A text annotation tool to train #AI. Easy. 🔗tagtog.net | userId: 72d1dec46312 | userName: tagtog | usersFollowedByCount: 11 | usersFollowedCount: 19 | scrappedDate: 20,181,104 | claps, reading_time, link, authors, timestamp, tags: null
audioVersionDurationSec: 0 | codeBlock: null | codeBlockCount: 0 | collectionId: null | createdDate: 2018-06-06 | createdDatetime: 2018-06-06 10:43:06 | firstPublishedDate: 2018-06-06 | firstPublishedDatetime: 2018-06-06 10:43:01 | imageCount: 2 | isSubscriptionLocked: false | language: en | latestPublishedDate: 2018-06-06 | latestPublishedDatetime: 2018-06-06 10:43:07 | linksCount: 10 | postId: 19d81917d482 | readingTime: 4.017296 | recommends: 0 | responsesCreatedCount: 0 | socialRecommendsCount: 0 | subTitle: null | tagsCount: 5 | text:
Businesses Are Investing in Tech to Improve CX but Falling Short
A new Mitel survey of 5,000 adults from the UK, France, Germany, the United States, and Australia indicates a measurable disconnect between the advancements organisations think they are making to deliver exceptional customer experience and how customers actually view their commercial interactions. Specifically, fewer than half of respondents believe the technology needed to deliver the perfect online buying experience is available. This stands in stark contrast to findings of a previous Mitel survey in which 90 percent of IT decision-makers optimistically reported progress in improving customer experience through the use of technology.
While a clear sign of the growing pains associated with digital transformation (DX) initiatives underway globally, the new survey also uncovers an opportunity for technology to play a key role in defining and keeping pace with changing buyer behavior and preferences. In fact, over half of those surveyed believe machine-to-people interactions will positively transform the customer experience (CX).
Vertical Visionaries, Leaders and Followers
As customer experience becomes increasingly critical for businesses to remain relevant and compete, Mitel’s survey shows significant differences in customer satisfaction across vertical industries. Growing use of cloud communications and applications, combined with emerging technologies like the Internet of Things (IoT), artificial intelligence, chatbots, and natural language processing (NLP), are creating new ways for companies to nurture and build customer relationships. Winning companies will be those that are able to differentiate their brands by delivering seamless experiences across physical and digital environments, devices and channels. Currently, some segments are doing better than others.
Financial Services and Hospitality top the charts for customer service: In the UK, less than two fifths (38%) of respondents said they were very satisfied with the customer experience on average across all industries. Financial services, a beneficiary of the recent uptick in global investments in fintech, receives the highest marks in customer service with 43% of survey respondents describing the service as great, followed by hospitality at 42%.
Physical retail isn’t dead, but the customer experience is: Just over 60% of shopping done by UK respondents still takes place in a physical store, though that number is shifting. When asked about the challenges faced by today’s bricks-and-mortar retail outlets, 60% of respondents in the UK say the fact retail stores are struggling has more to do with the customer experience they provide, not products. Nearly half (49%) said “customer service just doesn’t exist any longer.”
A seamless omnichannel approach is critical for this market. Chatbots can be used to manage simple tasks, while IoT and team collaboration tools open up new avenues for communications across media, whether it’s voice, email, SMS, web chat, social media or a website.
Speed is the game in sports and entertainment: In the fast-paced world of sports and entertainment, immediate and clear communication is a necessity. Almost half (47%) of respondents point to simplicity and speed as the most important factor in a good customer service experience in the UK, slightly higher than the global average (45%).
Responsiveness vital in healthcare: Healthcare organisations receive the lowest marks from respondents in all countries when it comes to customer service. Nearly a fifth of UK respondents (18%) described their experience as 'unsatisfying'. Simplicity and speed, as well as responsiveness (i.e. getting an answer to a request quickly), are equally important here; both were named by 30% of respondents as most important.
Additional insights from the data indicate:
Bots, AI and machines can fill the customer service gap: Consumers appear to be increasingly comfortable with machine-to-people interactions when shopping online, with 83% saying they are satisfied dealing with automated processes. Most do not want to interact with a person while shopping online unless the service is very complicated, or they’re having difficulties finding what they’re looking for. Over half in the UK (53%) said if they could “shop without speaking to a person, that would be a great thing” and 51% agreed that machine-to-people interactions will positively transform customer experience. Even so, physical retailers need to balance the use of technology. Consumers do expect people to efficiently help them when shopping in a physical storefront.
“As physical and digital worlds begin to seamlessly intersect, how effectively a company serves its customers across both domains determines tomorrow’s winners and losers,” said Jon Brinton, Senior Vice President of Customer Experience Solutions at Mitel.
“By supplementing existing applications and investments with new technologies such as AI, team collaboration and IoT, companies can better communicate and collaborate internally and externally and begin to proactively deliver the level of customer experience buyers expect.”
Mitel’s study is the latest in its Business Insights Survey Series, which builds on previous research from August 2017 where more than 75 percent of IT decision-makers said they planned to tie together devices, emerging technologies, and communications and collaboration capabilities within two years to enable machine-to-people interactions to improve customer experience. This body of work expands on the concept established by Mitel in 2016 of “Giving Machines a Voice” to enable IoT and other machine triggers to launch real-time communications workflows that can improve how companies work and collaborate. Exploring a different angle, this survey examines how consumers view customer experience in shopping for goods and services across market segments, including retail, hospitality, sports and entertainment, healthcare, financial services and utilities.
For more results and a closer look at regional or country-specific data, download the white paper.
About Mitel
A global market leader in business communications powering more than two billion business connections, Mitel (Nasdaq:MITL) (TSX:MNW) helps businesses and service providers connect, collaborate and provide innovative services to their customers. Our innovation and communications experts serve more than 70 million business users in more than 100 countries.
title: Businesses Are Investing in Tech to Improve CX but Falling Short | totalClapCount: 0 | uniqueSlug: businesses-are-investing-in-tech-to-improve-cx-but-falling-short-19d81917d482 | updatedDate: 2018-06-06 | updatedDatetime: 2018-06-06 10:43:09 | url: https://medium.com/s/story/businesses-are-investing-in-tech-to-improve-cx-but-falling-short-19d81917d482 | vote: false | wordCount: 963 | publication* fields: null | tag_name: Apac | slug: apac | name: Apac | postCount: 286 | author: UC Today | bio: Unified Communications Stories | userId: bd51979d153c | userName: uctoday | usersFollowedByCount: 13 | usersFollowedCount: 74 | scrappedDate: 20,181,104 | claps, reading_time, link, authors, timestamp, tags: null
audioVersionDurationSec: 0 | codeBlock: null | codeBlockCount: 0 | collectionId: null | createdDate: 2018-10-01 | createdDatetime: 2018-10-01 08:30:30 | firstPublishedDate: 2018-09-07 | firstPublishedDatetime: 2018-09-07 06:09:14 | imageCount: 2 | isSubscriptionLocked: false | language: en | latestPublishedDate: 2018-10-01 | latestPublishedDatetime: 2018-10-01 08:38:10 | linksCount: 12 | postId: 19d88f5aeca1 | readingTime: 6.587107 | recommends: 0 | responsesCreatedCount: 0 | socialRecommendsCount: 0 | subTitle: "Project Management is a specialist’s job. One that can be mastered only with years of practice. But by calling it a specialist’s job, I am…" | tagsCount: 5 | text:
The Role of Bots and AI in Project Management
Project Management is a specialist’s job, one that can be mastered only with years of practice. But by calling it a specialist’s job, I am in no way undermining the fact that a typical project manager needs to wear multiple hats at work.
Essentially, you are accountable for the success and failure of the project. So it is not enough to have the functional and domain knowledge; you need to be good with people as well.
To sum it up, a Project Manager’s scope of work can be summarised across three broad areas:
Planning and Strategizing: Define the scope, build an execution plan — prioritize and delegate accordingly and manage the budget.
Managing the Team: Facilitating commitment and productivity, dealing with obstacles and motivating the team members.
Managing Expectations: Aligning project to business goals, managing stakeholders and communicating project status.
What is Project Management AI?
AI and bots are about reliably offloading a task from a human to a machine counterpart. AI is what makes machines intelligent, and a bot is simply the software that performs the task on the user’s behalf.
With that basic understanding, let us try to shed some light on what a Project Management AI system would be like.
Ideally, it should be able to handle the day-to-day management and administration of the project without any human input. It should be able to automate simple tasks and develop an understanding of key elements. Then use this understanding to uncover insights, make recommendations and perform more complex tasks.
Now here is the paradox: apart from the processes, Project Management has a strong human element to it, which in my opinion is the most critical part.
Considering this, can AI and bots make inroads into a project manager’s life?
Gartner predicts that by 2030, as much as 80% of the routine work — which represents the bulk of human hours spent in today’s PPM disciplines, can be eliminated as a result of collaboration between humans and smart machines.
Today’s project management practices rely heavily on human input. All data points must be collated, organized and consumed by human beings.
But that’s not the optimum use of the human mind’s intuitive abilities. Innovation and critical thinking should be the key attributes of a project manager’s role as routine work gets managed by machines.
So humans need not be apprehensive about machines replacing them; rather, they should look at this as an opportunity to improve productivity.
While machines can tirelessly forage data, look for patterns and variances to offer data-driven actions and recommendations, humans can back this up with their intuition and soft skills to shepherd the team towards the project goal.
So how can this collaboration work?
Unlike before, a project manager’s responsibilities today lean towards those of a coordinator and coach, and less of a dictator. This requires project managers to be proactive; if they are alerted to an impending problem, they can stay well ahead of the game.
This stands in contrast to how most project managers actually spend their time today: defining and collating information for decision making. This information-gathering role of the PMO should be reduced and replaced by smart machines that link goals, strategies, and the potential and actual investments that support them.
Simple bots can perform a variety of tasks, like updating task sheets, setting up reminders, generating project reports and much more. Add AI to the mix, however, and you have a far more advanced system that can deliver advice, not just data.
Some of these use cases could include:
Project Collaboration:
Collaboration across various stakeholders is a major concern, especially in large teams and more so if the team is spread across different time zones.
Transparency on who is working on what, and being able to communicate across different groups add context to collaboration and scrape out inefficiencies. Such a setup gives the PMO a lot more control to ensure that nothing falls through the cracks.
So how can a bot help here?
Chatbots can help you stay in sync with your team. For example, a chatbot like Meekan can help you schedule team meetings. All you need to do is ask Meekan to book a meeting slot, and it will match everyone’s calendars and quickly find common free times.
Need to reschedule in case someone drops out at the last moment? Again you just need to ask and Meekan will help you find an alternate time. All this while avoiding the back and forth of emails. Plus the interactions are more fun and engaging.
Data Consistency:
The often unmentioned challenge with project teams in any organization is the suitability and quality of data.
Some teams enter minimal to no data, and even the most disciplined teams might make errors that render the data unreadable by machines.
Given the widespread usage of chat applications, chatbots can connect with team members at the end of their workday and gently ask them to input the status of the tasks assigned to them. Add to this a few layers of metadata, and the AI engine can check data consistency and provide meaningful advice to improve the quality of the data a user is inputting.
For example, Ayoga ActBot powered by the Applozic chat framework sends timely alerts to project members getting them to fill timesheets, respond to RFIs and update their work progress through a familiar chat interface.
As an extension to this, the bot can collate all the entries and publish daily status reports with a breakdown of all the tasks the team members are working on and any major roadblocks they are facing.
Task Management:
Employees at SMBs are expected to handle different tasks according to the demands of the project. While juggling multiple tasks isn’t easy, it becomes all the more difficult when everything is done manually.
Without proper monitoring, employees moving in and out of tasks leads to loss of accountability and unplanned resource allocation. For instance, you wouldn’t want your key engineer to be pulled away onto other projects, nor would you want him to handle trivial tasks. Similarly, you would also want to know about the performance of every team member and how aligned their deliverables are to the overall project goal.
Now as AI develops an understanding of sprints and task descriptions, new metrics can be revealed that weren’t available earlier. For example, in software projects, bots can monitor every change made to the source code and link it to the developer and task involved. The bot can then report bugs in any line of code, the person who made the commit and the task that relates to it. This will allow for real, actionable indicators of individual and team performance.
Stratejos is a chatbot that can do most of this and assist you with team coordination. It can identify if a sprint or project is about to run into trouble and help you resolve it. It can also help you improve in real time by monitoring practices and providing training content for the problems that might occur.
Risk Predictions:
Remember the Tom Cruise starrer Minority Report? The movie is about a special police unit called “PreCrime” that could look into the future to detect a crime and then stop it from occurring.
Now imagine you had a list of likely delays, risks and problems even before they occurred. Wouldn’t it make your life as a project manager a lot easier?
You might be thinking this is too far-fetched. Perhaps it is, but by no means is it improbable. It is only a matter of aggregating the right pieces of data and mining them to predict possible outcomes. A simple example: monitoring the time spent on a task can help you predict whether it will meet its deadline.
As an extension of this, AI can unobtrusively collect metadata and look at how team members do their jobs. Many industries like credit scoring, counter-terrorism, banking, and finance are already doing it to predict events before they happen.
A lot of predictions can be made by analyzing your team’s behavior and habits. These predictions can be purely operational, like predicting likely delays and probable quality issues, or softer, like low team morale, personal issues and so on.
Exciting time ahead
The future is that of human-computer complementarity. A lot can be done just by getting the division right and automating certain processes and then training people to work alongside computers.
Imagine AI doing all the mundane tasks and then assigning the rest to the right team member based on their skills and expertise. It doesn’t stop there: machines can predict shortcomings, generate actionable reports and share best practices at every step. This will be powerful and useful.
The best part: it is not really as far-fetched as it may seem. The recipe is simple; all this can be achieved with a mix of standard software development, opinionated views on how projects run, and machine learning technologies.
Project Management AI is going to have a huge impact on how projects run and that is for the better. Teams taking advantage of this will definitely have an edge over those that don’t. And that’s something to be excited about.
Author Bio: Satadeep Biswas is a marketing and tech geek working out of Kommunicate’s Bangalore office. If you don’t find him in the corner cabin trying out the latest SaaS in the market, you might well find him at one of the football fields around the city. He covers topics about the good and bad of tech and its implication for businesses. You can find more articles by Satadeep on applozic.com and kommunicate.io.
Originally published at blog.proofhub.com on September 7, 2018.
title: The Role of Bots and AI in Project Management | totalClapCount: 0 | uniqueSlug: the-role-of-bots-and-ai-in-project-management-19d88f5aeca1 | updatedDate: 2018-10-03 | updatedDatetime: 2018-10-03 08:35:22 | url: https://medium.com/s/story/the-role-of-bots-and-ai-in-project-management-19d88f5aeca1 | vote: false | wordCount: 1,644 | publication* fields: null | tag_name: Startup | slug: startup | name: Startup | postCount: 331,914 | author: Satadeep Biswas | bio: null | userId: 4de47414fc9d | userName: satadeep | usersFollowedByCount: 4 | usersFollowedCount: 3 | scrappedDate: 20,181,104 | claps, reading_time, link, authors, timestamp, tags: null
audioVersionDurationSec: 0 | codeBlock: null | codeBlockCount: 0 | collectionId: 4689c8214177 | createdDate: 2018-07-19 | createdDatetime: 2018-07-19 20:50:30 | firstPublishedDate: 2018-07-19 | firstPublishedDatetime: 2018-07-19 19:43:00 | imageCount: 4 | isSubscriptionLocked: true | language: en | latestPublishedDate: 2018-09-12 | latestPublishedDatetime: 2018-09-12 17:11:36 | linksCount: 3 | postId: 19d91b0db45e | readingTime: 7.103774 | recommends: 0 | responsesCreatedCount: 0 | socialRecommendsCount: 0 | subTitle: "Test statistics (including p-value) is a must-know concept in finance and data science. Process of test statistics can be used to help us…" | tagsCount: 5 | text:
Statistical Hypothesis Testing
Test statistics (including the p-value) are a must-know concept in finance and data science. The process of test statistics can be used to help us make calculated decisions. When statisticians analyse a pattern and want to prove a claim, they start by finding a sample that represents the population under test. Before performing an experiment on the sample, scientists have an idea of what the expected results should be. Please read the Disclaimer before proceeding.
The test statistics process can be used to determine the needs of a population in a country, which projects to fund, the future strategies of a large organisation, whether a medicine has any effect on a disease, and so on.
What Is The Process Of Test Statistics?
As outlined in the article “Hypothesis Analysis Explained”, the process of test statistics consists of five steps:
Start by stating your expected claim, known as the Null Hypothesis.
Outline the minimum significance level/confidence level before you can reject the claim.
Calculate your sample’s mean and standard deviation.
Calculate your test statistic.
Finally, based on the outcome, state your conclusion about the claim.
The chosen test statistic depends on the distribution of your sample. This article focuses on three types of test statistics: the T statistic, the Z statistic and the F statistic. Each of these test statistics has its own distribution table, which can be used to find the p-value and compare expected and observed results.
The diagram highlights the five steps; I will explain each of them in detail below.
1. State Your Claim
There are two hypotheses in any test:
Null Hypothesis — the claim assumed true for the model; this is what we want to test.
Alternate Hypothesis — what we accept if the Null Hypothesis is not true; this is what we believe is true.
Suppose your colleague statistician claims that the average number of software bugs in a system will reduce by 10% after all IT staff have been given training on system testing. This is the Null Hypothesis. The Alternate Hypothesis is that the number of bugs is not reduced by 10% post-training.
You can then test this claim, the Null Hypothesis, using test statistics.
2. Determine your significance level:
Your significance level indicates how confident you need to be in your calculated results to support your claim. It is known as alpha. Usual values for alpha are 1% or 5%; a lower alpha implies that you require more certainty in the results. The chosen confidence level also forms the foundation of risk-management credit metrics, for example PFE at 95%.
The p-value is the probability of obtaining results at least as extreme as those observed, assuming the Null Hypothesis is true; the claim is rejected when the p-value falls below the significance level.
3. Once a sample is chosen to represent a population, its mean and standard deviation are calculated.
For example, to test the claim that training reduces bugs, you could train 50 of the 1,000 developers in your company and test whether the number of bugs is reduced by 10%. The sample of 50 developers represents the population of 1,000 developers. Once you have the observed results, the questions to ask are:
Was this by chance?
Can you trust the sample you chose?
This brings us to the core of the article: Calculating Statistical Test
There are three well-known test statistics: T, Z and F. Each has its own properties, formula and usage.
T Statistical Hypothesis Test:
Used for testing the means of small samples.
Sample follows: Student’s t distribution for the Null Hypothesis to be true.
Sample Size: Less than 30.
Population Standard Deviation: Unknown.
Formula To Calculate the T Stat:
(Sample Mean − Hypothesised Population Mean) / (Sample Standard Deviation / √(Sample Size))
Example: You have a sample of 10 cars and you want to estimate the average fuel consumption of all cars in the town. Your hypothesised claim is that, on average, cars consume 10 liters of fuel per day. Let’s also say that you require 99% confidence. You can then compare the hypothesised mean with the sample mean and work out whether you need to reject the Null Hypothesis based on the t distribution table at 99%.
Z Statistical Hypothesis Test:
Used for testing means of large samples.
Sample follows: Normal distribution under the Null Hypothesis
Sample Size: Greater than 30
Requires that the sampling conditions are reliable (independent, randomly drawn observations)
Population Standard Deviation: Known
Formula To Calculate Z Stat:
[Sample Mean — Hypothesised Population Mean]/[Standard Deviation Of Population/ SquareRoot(Sample Size)]
Example: Assume you have collected a sample of 50 men to estimate the proportion of people wearing blue shirts in the population. Let's also say that you want to be 95% confident in the conclusion. You can then compare the hypothesised mean with the sample mean and work out whether you need to reject the Null Hypothesis based on the Z-distribution table at 95%.
Standard deviation / sqrt(sample size) is the standard error of the sample mean.
This is the dispersion of the sample mean around the population mean, and it depends on the sample size. The formula indicates that the larger the sample size, the smaller the standard error and the closer the sample value is likely to be to the population value.
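The Z-stat formula can be computed in a few lines. The numbers below are invented for illustration, and the 95% critical value comes from Python's standard-library normal distribution:

```python
# Z test sketch with made-up numbers (not real survey data).
from math import sqrt
from statistics import NormalDist

n = 50                 # sample size (> 30, so a Z test applies)
sample_mean = 0.34     # observed mean in the sample
pop_mean_h0 = 0.30     # hypothesised population mean
pop_sd = 0.12          # population standard deviation (assumed known)

# Z = (sample mean - hypothesised mean) / (population sd / sqrt(n))
standard_error = pop_sd / sqrt(n)
z_stat = (sample_mean - pop_mean_h0) / standard_error

# Two-tailed test at 95% confidence: reject H0 if |z| > 1.96.
critical = NormalDist().inv_cdf(0.975)
reject_h0 = abs(z_stat) > critical
print(round(z_stat, 3), round(critical, 2), reject_h0)
```

Here the z statistic (about 2.36) falls outside ±1.96, so this hypothetical null would be rejected.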
T vs Z Test Statistics
It is important to note that the Z and T statistics differ in which standard deviation appears in the denominator: the T statistic uses the sample standard deviation, whereas the Z statistic uses the population standard deviation.
F Statistical Hypothesis Test:
Used for comparing variances of two populations. Variance is the sum of the squared deviations of each observation from its group mean divided by the error degrees of freedom.
If you want to test a joint hypothesis, then t or z tests are not enough. The F test can be used to compare properties of two samples:
Sample follows: F Distribution
Sample Size: Any size
Sample Standard Deviation: Unknown
Formula:
[Variance of sample 1]/[Variance of sample 2]
Unlike t or z tests, which can assess only one regression coefficient at a time, the F-test can assess multiple coefficients simultaneously.
Example: You want to compare variability of software bugs in two IT systems in your company.
You can use F-statistics to test the overall significance for a regression model, to compare the fits of different models, to test specific regression terms, and to test the equality of means.
Use the F-value and the F-distribution to calculate the probability, known as the p-value. If the probability is low enough, we can conclude that our data is inconsistent with the null hypothesis.
The F-test in a regression model compares the fit of different linear models. The F-test of overall significance is a specific form of it: it compares the full model against a model that contains no predictors, known as an intercept-only model.
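The bug-variability example can be sketched as a simple variance-ratio F test. The bug counts are invented, and the critical value is taken from a standard F table (df = (7, 7), 5% significance):

```python
# F test sketch: comparing bug-count variability in two hypothetical IT systems.
from statistics import variance

system_a = [3, 7, 2, 9, 4, 6, 8, 1]
system_b = [5, 6, 4, 5, 6, 5, 4, 6]

# F statistic: ratio of the two sample variances (larger variance on top).
f_stat = variance(system_a) / variance(system_b)

# Critical value from an F table: df = (7, 7), alpha = 0.05.
f_critical = 3.787
reject_equal_variances = f_stat > f_critical
print(round(f_stat, 2), reject_equal_variances)
```

System A's bug counts are far more spread out, so with these made-up numbers the equal-variance hypothesis is rejected.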
5. State your Claim:
For a two-tailed test (more in Hypothesis Analysis Explained), for example when the Null Hypothesis states that a value lies within a range, calculate the test statistic and then check whether the calculated value falls within the acceptance range from the distribution table.
Let's assume we calculated 0.50 as the Z statistic value. From the Z-distribution table, the acceptance range at 95% confidence is -1.96 to 1.96.
As -1.96 < 0.50 < 1.96, we fail to reject the Null Hypothesis.
This approach can also be applied in regression analysis to test whether x and y have any relationship.
Example:
If you have mean and standard deviation of a sample and you are asked to calculate 95% confidence interval of two tail test then:
1. Calculate standard error first.
Remember it’s standard deviation / sqrt( sample size)
2. Choose your test statistic: Is it a T, Z or F test problem? From what we learnt above, if we know the population standard deviation and the sample size is greater than 30, then it is a Z test.
A Z test means your sample statistic follows a normal distribution. Use the Z-distribution table to find the value that leaves 2.5% in each tail; this gives ±1.96. Anything outside this range invalidates your null hypothesis.
From your significance level (say 5%), since this is a two-tailed test, look up the distribution table for ±2.5% (5%/2 = 2.5%; we divide alpha by 2 because the test has two tails).
If it is a Z test, look up 2.5% in the Z table. This then gives us the threshold values of our test statistic:
z+ = 1.96 and z- = -1.96
If the calculated z statistic > 1.96 or <-1.96 then reject the Null Hypothesis.
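That decision rule can be derived in a few lines of Python; the standard library's `statistics.NormalDist` supplies the inverse CDF, so no table lookup is needed:

```python
# Deriving the two-tailed critical values from the significance level.
from statistics import NormalDist

alpha = 0.05                               # 5% significance level
tail = alpha / 2                           # 2.5% in each tail
z_low = NormalDist().inv_cdf(tail)         # lower threshold
z_high = NormalDist().inv_cdf(1 - tail)    # upper threshold
print(round(z_low, 2), round(z_high, 2))   # -1.96 1.96
```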
(For comparison, a one-tailed test at 5% would use 1.645, since P(Z > 1.645) = 1 - 0.95 = 5%.)
Feed in respective values to your test statistics formula:
Z Test Formula: [Sample Mean - Hypothesised Population Mean]/[Standard Deviation Of Population/ SquareRoot(Sample Size)]
[Standard Deviation Of Population/ SquareRoot(Sample Size)] is the standard error of the sample mean.
For us to accept the null hypothesis, the calculated Z statistic needs to fall within the range -1.96 to 1.96.
Can we use R squared to check if relationship is statistically significant?
R-squared measures the strength of the relationship between the dependent and independent variables. In my article “How Do I Predict Time Series?”, I explained how R-squared is calculated. While R-squared provides an estimate of the strength of the relationship between your model and the response variable, it does not provide a formal hypothesis test for this relationship. The overall F-test determines whether this relationship is statistically significant.
If the P value for the overall F-test is less than your significance level, you can conclude that the R-squared value is significantly different from zero.
Lastly, A Brief Outline Of Chi Square
If the sample statistic follows a chi-square distribution, then use the chi-squared test to find the p-value.
The steps remain the same as above except the formula of test statistics is:
([sample size — 1] x sample variance) / hypothesised population variance.
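A minimal sketch of that statistic, with an invented sample and hypothesised population variance:

```python
# Chi-square test statistic for a variance claim (illustrative numbers only).
from statistics import variance

sample = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2, 11.9]
hypothesised_pop_var = 0.04       # H0: population variance = 0.04

# chi^2 = (n - 1) * sample variance / hypothesised population variance
n = len(sample)
chi_sq = (n - 1) * variance(sample) / hypothesised_pop_var
print(round(chi_sq, 1))
```

The resulting statistic would then be compared against a chi-square table with n − 1 degrees of freedom, exactly as in the Z and T procedures above.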
Summary
In this article, I outlined the basics of test statistics, including the T, Z and F statistics. We saw that test statistics can help us make calculated decisions. The process of hypothesis analysis is explained here: Hypothesis Analysis Explained
Hope it helps.
|
Statistical Hypothesis Testing
| 0
|
part-2-test-statistics-t-z-f-19d91b0db45e
|
2018-10-03
|
2018-10-03 21:05:46
|
https://medium.com/s/story/part-2-test-statistics-t-z-f-19d91b0db45e
| false
| 1,697
|
This blog aims to bridge the gap between technologists, mathematicians and financial experts and helps them understand how fundamental concepts work within each field. Articles
| null | null | null |
FinTechExplained
| null |
fintechexplained
|
FINANCE,RISK MANAGEMENT,TECHNOLOGY,DATA SCIENCE,FINTECH
| null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Farhad Malik
|
Explaining complex mathematical, financial and technological concepts in simple terms. Contact: f_m55@hotmail.com
|
d9b237bc89f0
|
farhadmalik84
| 113
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
f0d2cb72198b
|
2018-03-20
|
2018-03-20 07:20:13
|
2018-03-20
|
2018-03-20 07:20:58
| 1
| false
|
fr
|
2018-03-20
|
2018-03-20 07:20:58
| 0
|
19d991aec880
| 1.566038
| 1
| 0
| 0
|
Le discours ambiant des influenceurs du web est aujourd’hui axé sur les promesses et les dangers de l’intelligence artificielle en se…
| 3
|
Artificial intelligence, business and R.O.I.
The prevailing discourse of web influencers today centres on the promises and dangers of artificial intelligence, focusing on ethical, philosophical, even metaphysical questions. These reflections are obviously essential and must be carried through to completion in order to set a framework for the harmonious development and exploitation of these technologies. However,
From the company's point of view, we must return to the fundamentals: business and return on investment.
Artificial intelligence solutions are exploited today in three main areas:
automation
prediction
customer/user experience
For the company, artificial intelligence must be regarded as a series of tools made available to it for reaching business objectives. The implementation of these tools must be managed in project mode, integrating all the internal (and possibly external) competencies concerned by the project. Energy and attention must be directed at the concrete achievement of an objective defined in advance.
It is for this reason that the key performance indicators must be defined very clearly and shared by the members of the project group.
The KPIs must be quantified, legible and understandable by all of the company's staff.
In the field of automation, this may mean improving the quality of service (waiting times, quick access to useful information, etc.) or freeing up man-hours to bring additional (ideally differentiating) added value to the market.
The predictive capabilities of artificial intelligence, for their part, can be applied to a great many areas of the company. From inventory forecasting to equipment maintenance to the detection of marketing trends, the list of applications is limited only by the creative capacity of their designers and by the R.O.I. of the solutions imagined.
User experience (UX) and customer experience (CX) will also undergo a profound transformation. Companies today have the possibility of integrating artificial intelligence solutions at every stage of the sales cycle: acquisition, conversion, retention. These may be chatbots, CRM tools incorporating AI, virtual assistants, and so on.
The multitude of possible AI projects makes prioritisation necessary.
A precise analysis of the characteristics of each project makes it possible to establish this prioritisation, and R.O.I. is certainly one of the most important of these characteristics.
|
Intelligence artificielle, business et R.O.I.
| 27
|
intelligence-artificielle-business-et-r-o-i-19d991aec880
|
2018-09-17
|
2018-09-17 20:56:23
|
https://medium.com/s/story/intelligence-artificielle-business-et-r-o-i-19d991aec880
| false
| 362
|
Une approche opérationnelle de l’intelligence artificielle en entreprise
| null | null | null |
demain.ai
|
hello@demain.ai
|
demain-ai
|
INTELLIGENCE ARTIFICIELLE,ARTIFICIAL INTELLIGENCE,TRANSFORMATION
|
demain_ai
|
Intelligence Artificielle
|
intelligence-artificielle
|
Intelligence Artificielle
| 478
|
demain.ai
|
Pure-Player de l’IA, orienté business — Paris
|
415499267e97
|
demain_ai
| 6
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-06
|
2017-11-06 05:58:25
|
2017-11-06
|
2017-11-06 05:59:44
| 1
| false
|
en
|
2017-11-06
|
2017-11-06 05:59:44
| 0
|
19da735d3d8c
| 4.592453
| 0
| 0
| 0
|
How will artificial intelligence and big data impact the sales profession?
| 1
|
THE SALES PROFESSION AND A.I.
How will artificial intelligence and big data impact the sales profession?
It may be different than you think…
— — —
Recent studies reveal that 75% of all financial service companies will be using some form of artificial intelligence within the next 12 months. You see these headlines every day and are already familiar with the jobs that face immediate risk: compliance, customer support, research and trading. But these “first-to-fall” professions total only about 15% of global employees. The truth is that A.I. will eventually disrupt all jobs in the financial sector. And in time…all jobs. In the end, no industry will be exempt from what is coming.
But what’s being glossed over by these alarmist headlines is that not every job will be “disrupted” equally. And therein lies your key to winning in the new world.
NEW FIRE
You are living through the automation of labor. This is a revolution on par with the discovery of fire, electricity or the wheel. Human intellect is eradicating the inefficiency of human labor. And you are watching it in real-time. Most people have not wrapped their head around the implications of this. No matter how you earn your money today, these developments will impact your career, either directly or indirectly, in the next few years.
But freedom from labor should be a welcome development. Unless all you have to offer the marketplace is your personal labor.
The formula for success has changed. The old equation revolved around trading your time for money. This new ability for machinery and software to execute rote tasks better than humans has left most folks fearful and confused. Their cheese has been moved. But the way forward for such folks is simpler than you think. All you really need to do is understand the new success formula. And that will lead you to your new cheese.
The real issue isn’t the technology itself. It is that machine learning has invalidated the classical human construct of “go to school-get a job-move up the ladder.” The old formula was labor-based. But the internet has democratized information and software has leveled the benefits of “hard work.” So now, you can no longer compete on labor because labor has been commoditized.
So what should you do now?…
Let’s re-frame this question as, “What can humans do, that is beyond the capability of machine-run, artificial intelligence?”
THAT is the proper question to be asking yourself. So let’s explore it.
ALCHEMY
Since personal labor doesn’t matter anymore, the key differentiation between man and machine has now switched to non-linear constructive capabilities. This is more commonly known as…”creativity.” This is now about that human ability to take from one industry and cross-reference the formula into another to create new demand. This is the new “lead-to-gold” alchemy in this technology-driven world.
For example, when an engineer-by-training applies European sports-car designs to driverless electric cars, that leads to a $20 billion net worth. Forget about the plans for solar propulsion or landing humans on Mars. These conceptual efforts are very real — and something no artificial intelligence can challenge anytime soon. Visionary execution will become the new godliness. Like turning a hamburger stand in San Bernardino into the world’s largest franchise network, with a $90 billion valuation. Such things are simply not in the cards for any software just yet.
So A.I. is not going to replace the creative end of the business equation anytime soon. And that occupies a big surface area of global business…especially SMALL business, which is much closer to the end-consumer than the faceless multi-national corporations. Multiple areas of the creative spectrum will be untouchable for years by algorithms — and we’ll explore many of them in upcoming articles. But right now, we will focus on one particular department within your current organization. It is the area that every company maintains as their “tip of the spear” with customers…. Sales.
A GLASS OF WINE
A.I.’s ability to winnow real opportunity from tire-kickers by profiling data is downright amazing. So that’s not the problem. The real issue is on the other end of the sales equation — the prospect.
The labor part of the sales process can absolutely be offloaded to big data and algorithms. You will have no problem convincing me that the vetting of opportunity can be automated effectively. You MAY even be able to convince me — over a glass of wine — that the sales pitch itself could be automated.
But THAT is the problem …the glass of wine part.
As a discipline, “Sales” is actually the trading of currency for value. And whether you like it or not, people trade on humanity — not on data. Where the Vulcans get confused is their assumption that the best technology always wins. But again humans trade on emotion, not logic. Every true sales professional understands that making that human connection is the most important step in closing any sale.
Every week, I watch some of the most successful people in the world make business decisions “from the gut.” Legendary hedge fund managers have told me they look at the hard data…and then weigh that against their “intuition.” Business is not solely about data or it would have been over a long time ago. If it were all just bits and bytes then that mysterious formula for unlocking sales revenue would already be taught in every University. No, there is another piece. Something more amorphous. Something that is NOT rooted in quantum coding.
I provide global money managers with technology tools to objectively gauge their broker performance. They’re given all the data they need to assess their brokers. And yet many also want a direct trading line into one specific firm whom they “trust”…. “as a backup”….”just in case.” It’s never the same firm. Yet that FIX line is always predicated upon one specific person at that recipient shop…it’s never the general firm itself.
LIVE BY THE SWORD
Trust has always been the basic requirement for human interaction. The reason we shake hands with our RIGHT hand is because in Medieval times, that was the sword-wielding hand. Extending your sword-hand outward, was a gesture of trust. That gesture still endures today. But think about that. It’s not logical anymore. And yet it is human. Therein lies the rub of artificial intelligence.
Right now, the sales process is too ingrained in humanity itself to be outsourced fully to technology just yet. Maybe that changes someday. But not in this generation. Because existing bonds of trust can last a career, if not a lifetime.
So big data will continue to be gathered and A.I. will continue to refine it. Don’t bet against it. But remember that there is another side to the technology coin — and it has a human face.
Because emotion is a very real thing.
|
THE SALES PROFESSION AND A.I.
| 0
|
the-sales-profession-and-a-i-19da735d3d8c
|
2017-11-06
|
2017-11-06 05:59:44
|
https://medium.com/s/story/the-sales-profession-and-a-i-19da735d3d8c
| false
| 1,164
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
MarcAngelosNYC
|
Marc is a 20 year sales professional in New York City. He hustles hard and shares thoughts on how you can best prepare for the future.
|
ec84eeec3943
|
MarcAngelosNYC
| 7
| 12
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
32881626c9c9
|
2018-09-23
|
2018-09-23 02:23:15
|
2018-09-23
|
2018-09-23 02:29:04
| 1
| false
|
en
|
2018-09-24
|
2018-09-24 11:13:33
| 2
|
19de1ba3338b
| 5.362264
| 0
| 0
| 0
|
If you cannot do an RCT for everything, how can you figure causation from observational data?
| 3
|
Correlation and causation — Part 2: How to infer causation from observational data.
Big Data is Observational!
If you cannot do an RCT for everything, how can you figure causation from observational data?
Let’s take it right where we left on the last post. If you have not read it, I recommend doing it first. We finished with the idea that an RCT is not proper to every question. But if you cannot do an RCT for all questions, how then CAN you prove causation from observational data?
There are criteria you can use to help you infer causation from observational data. To use them, you also have to use reason.
I will introduce to you a framework presented first by Sir Austin Bradford Hill in 1965. A Professor Emeritus of Statistics at the University of London, in his Presidential Address to the Royal Society of Medicine, he stated:
However, before deducing causation and taking action, we shall not invariably have to sit around awaiting the results of that research [of how a change exerts that influence].
HILL AB. THE ENVIRONMENT AND DISEASE: ASSOCIATION OR CAUSATION? J R SOC MED. 1965;58(5):295–300. DOI:10.1177/003591576505800503.
A causal framework
Here are the criteria Sir Hill proposed:
Strength
Consistency
Specificity
Temporality
Biological gradient
Plausibility
Coherence
Experiment
Analogy
I will address them one-by-one.
Strength
This aspect is intuitive, and we do that every day. You do not need to burn your hand more than once to know not to touch a hot surface. One split second and your brain already determined the cause of the pain.
However, as intuitive as it may be, this criterion is also controversial. It is controversial because there are uncountable causal effects in the health sciences that are not very strong. Yes, they are harder to detect and even to accept, but they are causal.
We can use as an example the sunburn. The effect of the sun rays on your skin is not nearly as perceptible as the effect of the boiling surface, but it is not any less valid. If we give too much emphasis to strength, we may miss critical cause-effect links.
Despite that weakness, this concept is sound. It makes sense because if the effect of a particular variable in the results is strong, it is harder to be explained by other observed or unobserved confounding variables.
Consistency
Here is another intuitive criterion. If the effect of the variable under consideration is consistent in different studies, done by different people, at separate locations, and in different times, it is less likely to be a spurious association. One found by chance.
Admittedly, this is important — albeit not strictly necessary — to strengthen some body of evidence in a particular direction. Nonetheless, neither consistency nor any of the other criteria can be used in isolation to conclude causation from observational data.
Specificity
Specificity may not be as intuitive as the first two. Here, it has nothing to do with the specificity characteristic of a diagnostic test (the number of people with negative results who genuinely do not have the condition under study).
Specificity is used here in its ‘English’ meaning: “The quality of belonging or relating uniquely to a particular subject.” (Oxford dictionary)
In other words, one variable is related to one particular effect, and not a whole bunch of them.
Again, we need to be careful here. We know that cigarettes can cause lung cancer. They can also cause bladder cancer, clogged arteries in your legs, and heart attacks, among others. One variable can positively influence several others. Sometimes, the many different effects are all mediated by the same or a similar mechanism inside different cells.
In Sir Hill's own words:
In short, if specificity exists we may be able to conclude without hesitation; if it is not apparent, we are not thereby necessarily left sitting irresolutely on the fence.
SIR AUSTIN B. HILL
Temporality
We have talked about that one already. It is essential and not always easy to decide what came first. To ascertain causation from observational data you need to be confident of which came first.
Biological gradient
The biological gradient is an interesting concept. You used it to decide that the sun can burn your skin: the more you expose yourself to the sun, the worse the sunburn.
This notion is true of MANY biological processes. The more you smoke, the more likely to have lung cancer. The more alcohol you drink, the more likely it will be to develop cirrhosis of the liver. The longer your heart is without blood, the more it suffers and loses function.
This is not a characteristic of ALL biological processes though. But its presence makes indicating causation from observational data much more straightforward.
Plausibility
Someone once told me that “the road to hell is paved with biological plausibility.” And it is true. Our brains are amazing, and it seems we can explain anything.
Some connections may not be plausible because we lack knowledge. There are others that are not true that we can still explain.
If you have your hypothesis BEFORE looking at the data, as you should, it helps not to “create plausibility” in trying to explain unexpected results.
It is still vital to have plausibility if you want to find out causation from observational data.
Coherence
Is the evidence from your observational study congruent with what is known about the pathology of the disease or any other data available? If so, that is coherent, and it matters.
If your results fly in the face of everything that is known one of two alternatives is true:
You are wrong.
Everyone else is wrong.
That is not to say that it is impossible for everyone else to be wrong. History of science is full of examples. These are the revolutionary discoveries that impact the world profoundly. But most science is evolutionary.
Before changing it all, you have to show YOU are right. More evidence from more studies will be needed to build coherence. And, please, have an internally coherent argument.
Experiment
Although an RCT may not always be possible, sometimes experimentation CAN be done. Let us say that you noticed that workers around a specific type of dust have more of one particular lung condition. Let us also assume that you can significantly cut dust inhalation by introducing a respirator. Finally, let us say you do it. Did that reduce the incidence of the lung condition?
If so, your hypothesis was probably right.
I believe we will have to develop our skills to look at all the data we have gathered with different eyes. Looking for ways to find answers by using Natural Experiments, Instrumental Variables, and Regression Discontinuity design. That may be the crux of how to harness the real power of what we call Big Data.
Analogy
Different variables often affect the results through similar courses. Analogies may help our understanding. By comparing a new predictor with an old one, about which we know more, it may be easier to decipher the new connections.
In summary of how to ‘prove’ causation from observational data
Sir Hill himself stated that these are not hard-and-fast rules for deciding causation from observational data. Additionally, it should be clear that not all conditions need be present and that their presence does not guarantee causation.
Because there is no universal rule, I affirmed this requires reasoning. It also requires knowledge of the subject at hand, and the capacity to break with convention — as good reasoning often does.
So, although correlation does not mean causation, we can infer causation from correlation based on a set of criteria and sound reasoning.
Finally, I want to say that no statistical test can be used as a substitute for thinking here. And statistical analyses often confuse some aspects of this deduction. A classic example is when one study’s p-value is ≤ 0.05, and another is not. Some will interpret that to be incoherent. It is likely not.
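The p-value point can be illustrated numerically: two hypothetical studies with the same effect size, where only the larger study crosses the 0.05 line. The numbers are invented, and a simple z approximation stands in for a full analysis:

```python
# Two invented studies with the SAME effect size but different sample sizes:
# one reaches p < 0.05 and the other does not, yet they are not incoherent.
from math import sqrt
from statistics import NormalDist

def two_tailed_p(effect, sd, n):
    """Two-tailed p-value for a mean difference via a z approximation."""
    z = effect / (sd / sqrt(n))
    return 2 * (1 - NormalDist().cdf(abs(z)))

p_small = two_tailed_p(effect=0.5, sd=2.0, n=40)    # smaller study, p ~ 0.11
p_large = two_tailed_p(effect=0.5, sd=2.0, n=100)   # larger study,  p ~ 0.01
print(round(p_small, 3), round(p_large, 3))
```

The two studies estimate the identical effect; only their precision differs, which is why a significant and a non-significant result can be perfectly consistent.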
To me, these analyses of available data with smart designs and sound reasoning are exhilarating. Big data will need us to be better at them and apply them more often. These are fascinating times!
This post was previously published at theepidemiologist.ca.
|
Correlation and causation — Part 2: How to infer causation from observational data.
| 0
|
correlation-and-causation-part-2-how-to-infer-causation-from-observational-data-19de1ba3338b
|
2018-09-25
|
2018-09-25 00:47:48
|
https://medium.com/s/story/correlation-and-causation-part-2-how-to-infer-causation-from-observational-data-19de1ba3338b
| false
| 1,368
|
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
| null |
datadriveninvestor
| null |
Data Driven Investor
|
info@datadriveninvestor.com
|
datadriveninvestor
|
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
|
dd_invest
|
Data Science
|
data-science
|
Data Science
| 33,617
|
Marcello Schmidt
|
Physician and PhD candidate in Clin Epi
|
3848186733a2
|
schmidt.marcello
| 1
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-31
|
2018-05-31 05:46:19
|
2018-05-30
|
2018-05-30 13:16:29
| 0
| false
|
en
|
2018-05-31
|
2018-05-31 05:48:14
| 3
|
19deae63eff6
| 6.116981
| 0
| 0
| 0
|
There is no denying to the fact that Artificial Intelligence is going to have a tremendous effect in our daily life. In fact, its…
| 4
|
Hot Topics in Artificial Intelligence for Thesis and Research
There is no denying the fact that Artificial Intelligence is going to have a tremendous effect on our daily life. In fact, its applications have already made their presence felt across the world. In the coming time, this technology will be implemented in almost every application and device. All the big companies of the world are investing heavily in the research and development of AI. Talking about academics, there are a number of topics in artificial intelligence for thesis and research. Before going into that, let us discuss what exactly artificial intelligence is.
Artificial Intelligence
Artificial Intelligence is the technology in which human thinking and intelligence are imposed in machines and computer systems so as to create intelligent systems that can act and work like human beings. Simply meaning, AI implements human intelligence in machines. The concept of AI is based on the use of specially designed algorithms and revolves around the following traits:
Knowledge
Problem Solving
Learning
Planning
Reasoning
Perception
An AI system comprises an agent and the environment. An agent is the one that perceives its environment through sensors and acts in that through actuators. The agents, also known as intelligent agents, are of the following four types:
Simple Reflex Agents
Model-Based Reflex Agents
Goal-Based Agents
Utility-Based Agents
Types of Artificial Intelligence
Artificial Intelligence can be classified in a number of ways based on algorithms, implementation, and the tasks to be performed. One such classification is Weak AI vs Strong AI.
Weak AI — Weak AI, also known as narrow AI, is designed for one narrow task.
Strong AI — Strong AI has human cognitive abilities to perform unfamiliar tasks.
American professor Arend Hintze has categorized Artificial Intelligence into the following four types:
Type 1: Reactive Machines
Type 2: Limited Memory
Type 3: Theory of Mind
Type 4: Self-Awareness
Applications of Artificial Intelligence
There are a number of applications of Artificial Intelligence some of which are:
AI plays an important role in the gaming industry. AI techniques and algorithms are used for pathfinding in many video games.
Speech Recognition is another application of AI in which intelligent systems can understand and recognize human talks.
AI has played a great role in healthcare particularly in finding the right treatment for a particular problem like cancer.
Artificial Neural Network is used by various financial institutions for financial trading.
The applications of robotics are also increasing in areas where there is a risk to human life.
Thesis and Research Areas in Artificial Intelligence
Students looking for hot topics in artificial intelligence for thesis and research can work on any one of the following areas in artificial intelligence. Here is the list of interesting areas in artificial intelligence for thesis and research:
Machine Learning
Robotics
Artificial Neural Network
Natural Language Processing
Computer Vision
Sentiment Analysis
Biometrics
Data Mining
Machine Learning
Machine Learning is currently the major application of AI, and a hot topic for thesis and research in its own right. It is widely believed that machine learning and artificial intelligence are one and the same thing, but that is not so: Machine Learning is an approach to Artificial Intelligence, even though the two terms are often used interchangeably. Machine Learning gives systems the ability to learn automatically from experience without being explicitly programmed. Students looking for hot topics in artificial intelligence can definitely find one in machine learning. The whole idea of machine learning is based on algorithms, which are categorized as:
Supervised Learning
Unsupervised Learning
Reinforcement Learning
In machine learning, the algorithms receive an input value and use historical data to predict the output. Deep Learning is a subfield of Machine Learning, just as Machine Learning is a subfield of Artificial Intelligence.
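The idea of predicting an output from historical (input, output) pairs can be sketched with a toy supervised-learning algorithm. The 1-nearest-neighbour classifier and the data below are invented for illustration, in plain Python:

```python
# A minimal supervised-learning sketch: 1-nearest-neighbour classification.
def predict(train, query):
    """Return the label of the training point closest to the query."""
    def dist(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Historical (input features, output label) pairs the algorithm "learns" from.
train = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
         ((5.0, 5.5), "large"), ((5.3, 4.9), "large")]

print(predict(train, (1.1, 0.9)))   # query near the "small" cluster
print(predict(train, (5.1, 5.2)))   # query near the "large" cluster
```

This is supervised learning in miniature: labelled history in, predicted label out.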
Robotics
Robotics is another popular field for research in artificial intelligence. It is a mixture of mechanical, electronic, and computer science engineering. It deals with the construction of robots that can act and work like human beings. A robot generally contains sensors to perceive the environment and actuators to interact with it. Robots can be used in areas where it is difficult for human beings to operate. Robotics finds application in manufacturing industries, space exploration, healthcare, the military, and many more. AI plays an important role in robotics in perception, reasoning, learning, and decision making; these are the binding pillars of human-robot interaction. Robots have the ability to learn from their experience. Choose this field if you have an interest in robots and how they work.
Artificial Neural Network
It is yet another trending research area in AI. An artificial neural network is a computational model based on the structure and functions of biological neural networks; it imitates the working of the human brain. A biological neural network consists of nerve cells, known as neurons, connected to other nerve cells via axons, through which a neuron communicates with other neurons. In an artificial neural network (ANN), multiple nodes represent the neurons, and these nodes are connected to each other through links just as neurons are connected through axons. A weight is associated with each link. There are two types of ANN topologies:
Feedforward ANN
Feedback ANN
An ANN has three interconnected layers. The first layer consists of the input neurons, which send data to the second (hidden) layer; the hidden layer processes the data and passes its results to the third layer, which holds the output neurons. ANNs have various applications, including computer vision, speech recognition, medical diagnosis, and machine translation, and there are various ideas for thesis and research in this area.
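The layered structure described above can be sketched in a few lines of NumPy. The layer sizes and random weights here are illustrative only; a trained network would learn its weights (e.g. by backpropagation):

```python
import numpy as np

# Forward pass of a tiny feedforward ANN: input layer -> hidden layer
# -> output layer, with weighted links between adjacent layers.

rng = np.random.default_rng(0)

def sigmoid(z):
    """Squash activations into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])    # 3 input neurons
W1 = rng.normal(size=(3, 4))      # link weights: input -> hidden (4 neurons)
W2 = rng.normal(size=(4, 2))      # link weights: hidden -> output (2 neurons)

hidden = sigmoid(x @ W1)          # hidden-layer activations
output = sigmoid(hidden @ W2)     # output-layer activations
```

Each `@` multiplication applies the per-link weights, which is exactly the role the weighted links play in the description above.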
Natural Language Processing
Natural Language Processing is a field of artificial intelligence that gives computers the ability to analyze and interpret human language, including human speech. The techniques for interpreting language use statistical and rule-based algorithms, and modern NLP combines statistics with deep learning. Natural Language Processing is used in spell checking, sentiment analysis, translation, financial markets, and more. There are two main components of Natural Language Processing:
Natural Language Understanding(NLU)
Natural Language Generation(NLG)
Following are the main steps in Natural Language Processing:
Lexical Analysis
Syntactic Analysis
Semantic Analysis
Discourse Integration
Pragmatic Analysis
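As a toy illustration of the first step, lexical analysis, here is a minimal tokenizer; real NLP toolkits such as NLTK and spaCy perform far richer analysis, but the first step looks much like this:

```python
import re

# Toy lexical analysis: split raw text into normalized word tokens.

def lexical_analysis(text):
    """Lowercase the text and extract word tokens (letters/apostrophes)."""
    return re.findall(r"[a-z']+", text.lower())

tokens = lexical_analysis("Natural Language Processing isn't magic!")
# tokens -> ['natural', 'language', 'processing', "isn't", 'magic']
```

The later steps (syntactic, semantic, discourse, pragmatic analysis) all operate on token streams like this one.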
Computer Vision
Computer Vision is an important research area in Artificial Intelligence. The field aims to give computer systems the ability to see and interpret the visual world the way humans do. The process of computer vision is based on the following three steps:
Image acquisition
Image processing
Image analysis and manipulation
The main applications of computer vision include augmented reality, image restoration, motion recognition, biometrics, forensics, face recognition, and robotics, to name a few. The main algorithms used in computer vision include:
Viola-Jones algorithm
Lucas-Kanade algorithm
Adaptive thresholding
Kalman filter
Image Colorization
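To make one of these concrete, here is a minimal NumPy sketch of adaptive thresholding, in which each pixel is compared with the mean of its local neighborhood rather than a single global threshold (OpenCV provides this as `cv2.adaptiveThreshold`; the tiny synthetic image below is made up for illustration):

```python
import numpy as np

def adaptive_threshold(img, block=3, c=0.0):
    """Mark a pixel 255 if it is brighter than its local mean minus c."""
    h, w = img.shape
    pad = block // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros_like(img, dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            local_mean = padded[i:i + block, j:j + block].mean()
            out[i, j] = 255 if img[i, j] > local_mean - c else 0
    return out

# A tiny synthetic "image": a bright square on a dark background.
img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 200
binary = adaptive_threshold(img)
```

With c = 0 only pixels brighter than their neighborhood (the square's border) are kept, which is why adaptive thresholding responds to local contrast instead of absolute brightness.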
Students looking for thesis topics in artificial intelligence can find an interesting one in computer vision.
Sentiment Analysis
Sentiment Analysis, also known as opinion mining, is a process that measures people’s opinions through natural language processing, linguistics, and text analysis, with data mining techniques used to extract and capture the data. Sentiment analysis is important for businesses and brands that want to find out what customers think of them. Tools for sentiment analysis include:
Meltwater
Hootsuite
Tweetstats
Marketing Grader
Google Alerts
Pagelever
Semantria
Rapidminer
Sentiment Analysis mainly finds its application in social media monitoring to have an overview of the public opinion. Valuable insights can be extracted from the social data.
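At its simplest, opinion mining can be approximated with a hand-made sentiment lexicon. The word lists below are illustrative only; real tools like those above combine lexicons with statistical and deep-learning models:

```python
# Toy lexicon-based sentiment scorer: count positive vs. negative words.

POSITIVE = {"great", "love", "excellent", "good", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "awful"}

def sentiment(text):
    """Return 'positive', 'negative', or 'neutral' for a piece of text."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this brand, the service is excellent"))
```

Running a scorer like this over a stream of social-media posts is the essence of the monitoring use case described above.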
Biometrics
Biometrics is a technology used for identification and access control that works by recognizing the physical and behavioral traits of an individual, and it is another good research area in artificial intelligence. Biometric identifiers fall into two categories: physiological characteristics, such as fingerprints, facial features, and voice, and behavioral characteristics, such as gait and other gestures. The technology is widely used in corporate and public security systems.
A biometric system includes:
A sensor to collect data and change it into a useful format
A biometric template
A decision process
The main applications of biometrics technology include logical access control, physical access control, time and attendance, and surveillance.
Data Mining
Data Mining is the process of extracting information from data by identifying patterns in large datasets, and it is one of the newer technologies underpinning artificial intelligence and machine learning. Its main application in AI is to process and evaluate collected data, and AI and data mining techniques are used in combination to solve problems of classification, segmentation, association, and prediction. Statistics play a crucial role in the data mining process. Data mining has many applications, especially in financial markets, and there are a number of related thesis topics in artificial intelligence.
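One classic data-mining task named above, association, can be sketched by counting how often item pairs co-occur across transactions, the idea underlying algorithms such as Apriori. The transactions below are made up for illustration:

```python
from itertools import combinations
from collections import Counter

# Toy association mining: which item pairs frequently occur together?

transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
]

pair_counts = Counter()
for t in transactions:
    for pair in combinations(sorted(t), 2):
        pair_counts[pair] += 1

# Support of a pair = co-occurrence count / number of transactions.
support = {p: c / len(transactions) for p, c in pair_counts.items()}
```

Pairs whose support exceeds a chosen minimum become candidate association rules, the pattern-identification step described above.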
These are the trending areas in artificial intelligence for an M.Tech thesis and research. Here is a list of the latest thesis and research topics in artificial intelligence:
A classification technique for face spoof detection using artificial neural networks
An iris detection and recognition system using classification and the GLCM algorithm
A pattern detection system using textual feature analysis and classification
Plant disease detection using GLCM and KNN classification in neural networks
A technique for prediction analysis in data mining
A sentiment analysis technique using an SVM classifier in data mining
Heart disease prediction using a classification technique
These are the topics on which you can work. You can contact us for any kind of thesis-related help on any of these topics.
Originally published at www.writemythesis.org on May 30, 2018.
[Record: "Hot Topics in Artificial Intelligence for Thesis and Research", Write Mythesis, published 2018-05-31, tag: Artificial Intelligence, https://medium.com/s/story/hot-topics-in-artificial-intelligence-for-thesis-and-research-19deae63eff6]
The Future of AI is Here — The Digital Transformation People
For the last five years, we’ve been discussing the future of AI with many organizations, including Intel, at the forefront of that conversation. However, while ensuring the world gets to experience new heights and unparalleled technologies, organizations failed to notice that the future is upon us.
This realization dawned on me when I attended AI Devcon in San Francisco as an Intel Partner. The conference was hosted by several opinion leaders, such as Naveen Rao, the VP and GM of artificial intelligence at Intel. The conference helped me gain insight into the impact of our efforts and brought me to the realization that it is time other organizations jumped on the AI bandwagon initiated by Intel.
Why Do You Need to Start Using AI Now
There are multiple reasons you should start leveraging AI, and the market’s growth potential is chief among them. The AI-related market, which currently stands at $3 trillion, is expected to grow to over $8 trillion in the next five years, an increase of more than 160 percent over that period.
Besides market growth, there is also the need to improve the customer experience. Customers now expect the best experience possible, and organizations that can offer it will come out on top. So, if you want to ensure the right experience for your customers, I can’t stress enough the importance of jumping onto the AI bandwagon.
Examples of AI Implementation
AI is currently being implemented across a number of industries, and there are numerous use cases that set a standard when leveraging AI. Here we look at some of those use cases, and how organizations can pull from them for their own industries.
Improving Patient Outcomes
Medical data is extremely difficult to measure and analyze, which is why only a data science setup with the capabilities to host this complicated data can actually garner presentable insight. There are numerous cases of Intel working with medical institutions to improve patient outcomes through the collection and analysis of data. By gaining insight that was previously not available, medical experts can now use AI to give patients the right treatments at the right time. This helps improve patient outcome and increases the credibility of data science and AI tools.
Image Rendering in Filmmaking
The use of AI in image rendering for filmmaking improves the viewer’s experience and helps deliver a flawless picture. AI can be used to improve the graphic representation of living animals: data from the movement and stimulation of real animals is reproduced inside the film to create an honest representation of how living creatures move.
Real-time AI Music
Real-time AI music is now a reality and Intel’s Movidius sits at the forefront of such advances. The technology has been credited with using set responses to add value to music and create a rhythmic tone. The model gathers insights and creates responses based on the frequencies of the content.
Machine Learning at AWS
Amazon has been leveraging machine learning to provide customers with suggestions and better understand their needs. Amazon is also using machine learning to create innovations in devices such as Alexa and Amazon Go. Amazon Sagemaker, which is at the forefront of Amazon’s machine learning initiative, brings machine learning to the cloud to benefit developers and enterprises.
Use of AI by Ferrari
The use of AI in a Ferrari is geared towards helping achieve the following functions:
Helping drivers achieve faster times in race circuits.
Helping engineers pioneer desired responses from the engine of the car.
These insights have been garnered through intelligent data sets achieved through drone technology.
What is the Basis of AI Machine Learning?
Machine learning is based on four types of learning:
Supervised Learning: Most of what we see deployed in the world today is supervised learning, where machines are fed labeled data and learn to map inputs to known outputs.
Transfer Learning: Knowledge gained from one dataset or task is transferred to another, giving organizations the leverage to make machines learn from far fewer examples.
Unsupervised Learning: Learning without labeled data, that is, without predefined target variables. Unsupervised learning has transformed the concept of machine learning.
Reinforcement Learning: The machine learns from a practically unlimited stream of experience, extracting actionable insight from the rewards it receives. Model-based reinforcement learning builds on this method.
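A toy sketch of the reinforcement setting: an epsilon-greedy agent that learns, from reward alone, which of two options pays off more. The payout probabilities below are invented for illustration:

```python
import random

# Epsilon-greedy two-armed bandit: learn from experience which arm pays more.

random.seed(42)
true_payout = [0.3, 0.7]   # hidden from the agent (made-up values)
estimates = [0.0, 0.0]     # the agent's running reward estimates
counts = [0, 0]
EPSILON = 0.1              # fraction of steps spent exploring

for step in range(2000):
    if random.random() < EPSILON:            # explore a random arm
        arm = random.randrange(2)
    else:                                     # exploit the best estimate
        arm = max(range(2), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    # Incremental mean update of the chosen arm's estimate.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

# After training, the agent's estimate for arm 1 (the better machine)
# should exceed its estimate for arm 0.
```

The agent is never told the payout probabilities; it recovers them purely from the reward signal, which is the core idea behind reinforcement learning.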
AI and Ethics
There will be a big discussion on ethics when AI starts making its own decisions. Whether these decisions comply with the ethical standards humans currently follow remains to be seen. Take the example of a self-driving car: it may have to decide between colliding with a pedestrian and swerving into a nearby brick wall. What’s interesting is that these decisions will be made in real time, and how AI handles such ethical trade-offs is something that will define our future.
The growth of AI over the next 50 years is easy to envision. All that is needed to propel it forward is solid infrastructure, software, and facilities. The infrastructure will be provided by hardware vendors and the developer community. Once developers have the necessary infrastructure, we will see a broader implementation of AI across the globe.
Read more articles tagged: AI, Featured, Machine Learning
Originally published at www.thedigitaltransformationpeople.com on July 4, 2018.
[Record: "The Future of AI is Here — The Digital Transformation People", The Digital Transformation People, published 2018-07-04, tag: Machine Learning, https://medium.com/s/story/the-future-of-ai-is-here-the-digital-transformation-people-19df10d7ae90]
k-Nearest Neighbors: Who are close to you?
If you go to college, you probably have participated in at least a couple of student organizations. I’m starting my 1st semester as a graduate student at Rochester Tech, and there are more than 350 organizations here. They are sorted into different categories based on the student’s interests. What defines these categories, and who says which org goes into what category? I’m sure if you asked the people running these organizations, they wouldn’t say that their org is just like someone else’s org, but in some way you know they are similar. Fraternities and sororities have the same interest in Greek Life. Intramural soccer and club tennis have the same interest in sports. The Latino group and the Asian American group have the same interest in cultural diversity. Perhaps if you measured the events and meetings run by these orgs, you could automatically figure out what category an organization belongs to. I’ll use student organizations to explain some of the concepts of k-Nearest Neighbors, arguably the simplest machine learning algorithm out there. Building the model consists only of storing the training dataset. To make a prediction for a new data point, the algorithm finds the closest data points in the training dataset — its “nearest neighbors.”
How It Works
In its simplest version, the k-NN algorithm considers exactly one nearest neighbor: the training data point closest to the point we want to make a prediction for. The prediction is then simply the known output for this training point. The figure below illustrates this for the case of classification on the forge dataset:
Here, we added three new data points, shown as stars. For each of them, we marked the closest point in the training set. The prediction of the one-nearest-neighbor algorithm is the label of that point (shown by the color of the cross).
Instead of considering only the closest neighbor, we can also consider an arbitrary number, k, of neighbors. This is where the name of the k-nearest neighbors algorithm comes from. When considering more than one neighbor, we use voting to assign a label. This means that for each test point, we count how many neighbors belong to class 0 and how many neighbors belong to class 1. We then assign the class that is more frequent: in other words, the majority class among the k-nearest neighbors. The following example uses the five closest neighbors:
Again, the prediction is shown as the color of the cross. You can see that the prediction for the new data point at the top left is not the same as the prediction when we used only one neighbor.
While this illustration is for a binary classification problem, this method can be applied to datasets with any number of classes. For more classes, we count how many neighbors belong to each class and again predict the most common class.
Implementation From Scratch
Here’s the pseudocode for the kNN algorithm to classify one data point (let’s call it A):
For every point in our dataset:
calculate the distance between A and the current point
sort the distances in increasing order
take k items with lowest distances to A
find the majority class among these items
return the majority class as our prediction for the class of A
The Python code for the function is here:
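A sketch consistent with the walkthrough that follows (a `knnclassify` function taking the inputs A, dataSet, labels, and k described below) might look like this; the demo data at the bottom is made up for illustration:

```python
import numpy as np

def knnclassify(A, dataSet, labels, k):
    """Classify input vector A by majority vote among its k nearest neighbors."""
    # Euclidean distance from A to every training example.
    diff = dataSet - A
    distances = np.sqrt((diff ** 2).sum(axis=1))
    # Indices of training points sorted by increasing distance.
    sorted_idx = distances.argsort()
    # Vote among the k lowest-distance items.
    classCount = {}
    for i in range(k):
        label = labels[sorted_idx[i]]
        classCount[label] = classCount.get(label, 0) + 1
    # Decompose the dict into (label, votes) tuples and sort by the
    # 2nd item, largest first.
    ranked = sorted(classCount.items(), key=lambda item: item[1], reverse=True)
    return ranked[0][0]

# Tiny made-up training set: two points per class.
dataSet = np.array([[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]])
labels = ["A", "A", "B", "B"]
pred = knnclassify(np.array([0.1, 0.1]), dataSet, labels, 3)
```

The labels vector has as many elements as dataSet has rows, matching the contract described in the walkthrough below.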
Let’s dig a bit deeper into the code:
The function knnclassify takes 4 inputs: the input vector to classify called A, a full matrix of training examples called dataSet, a vector of labels called labels, and k — the number of nearest neighbors to use in the voting. The labels vector should have as many elements in it as there are rows in the dataSet matrix.
We calculate the distances between A and the current point using the Euclidean distance.
Then we sort the distances in an increasing order.
Next, the lowest k distances are used to vote on the class of A.
After that, we take the classCount dictionary and decompose it into a list of tuples and then sort the tuples by the 2nd item in the tuple. The sort is done in reverse so we have the largest to smallest.
Lastly, we return the label of the item occurring the most frequently.
Implementation Via Scikit-Learn
Now let’s take a look at how we can implement the kNN algorithm using scikit-learn:
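A sketch matching the steps described below, using scikit-learn’s `KNeighborsClassifier` on the iris dataset (the `random_state` value is an assumption added for reproducibility):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load the iris dataset and split it into training and test sets.
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=0)

# Fit a 5-nearest-neighbors classifier on the training set.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

# Predict on the test set and measure generalization accuracy.
y_pred = knn.predict(X_test)
score = knn.score(X_test, y_test)
print("Test set accuracy: {:.2f}".format(score))
```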
Let’s look into the code:
First, we load the iris dataset.
Then, we split our data into a training and test set to evaluate generalization performance.
Next, we specify the number of neighbors (k) to 5.
Next, we fit the classifier using the training set.
To make predictions on the test data, we call the predict method. For each data point in the test set, the method computes its nearest neighbors in the training set and finds the most common class among them.
Lastly, we evaluate how well our model generalizes by calling the score method with test data and test labels.
Running the model should give us a test set accuracy of about 97%, meaning the model predicted the class correctly for 97% of the samples in the test dataset.
Strengths and Weaknesses
In principle, there are two important parameters to the KNeighbors classifier: the number of neighbors and how you measure distance between data points.
In practice, using a small number of neighbors like three or five often works well, but you should certainly adjust this parameter.
Choosing the right distance measure is somewhat tricky. By default, Euclidean distance is used, which works well in many settings.
One of the strengths of k-NN is that the model is very easy to understand, and often gives reasonable performance without a lot of adjustments. Using this algorithm is a good baseline method to try before considering more advanced techniques. Building the nearest neighbors model is usually very fast, but when your training set is very large (either in number of features or in number of samples) prediction can be slow. When using the k-NN algorithm, it’s important to preprocess your data. This approach often does not perform well on datasets with many features (hundreds or more), and it does particularly badly with datasets where most features are 0 most of the time (so-called sparse datasets).
In Conclusion
The k-Nearest Neighbors algorithm is a simple and effective way to classify data. It is an example of instance-based learning, where you need to have instances of data close at hand to perform the machine learning algorithm. The algorithm has to carry around the full dataset; for large datasets, this implies a large amount of storage. In addition, you need to calculate the distance measurement for every piece of data in the database, and this can be cumbersome. An additional drawback is that kNN doesn’t give you any idea of the underlying structure of the data; you have no idea what an “average” or “exemplar” instance from each class looks like.
So, while the k-nearest neighbors algorithm is easy to understand, it is not often used in practice, due to slow prediction and its inability to handle many features.
Reference Sources:
Machine Learning In Action by Peter Harrington (2012)
Introduction to Machine Learning with Python by Sarah Guido and Andreas Muller (2016)
— —
If you enjoyed this piece, I’d love it if you hit the clap button 👏 so others might stumble upon it. You can find my own code on GitHub, and more of my writing and projects at https://jameskle.com/. You can also follow me on Twitter, email me directly or find me on LinkedIn.
[Record: "k-Nearest Neighbors: Who are close to you?", James Le in Cracking The Data Science Interview, published 2018-08-25, tag: Machine Learning, https://medium.com/s/story/k-nearest-neighbors-who-are-close-to-you-19df59b97e7d]
‘Sandra’ Podcast Looks at the Problem of Human Labor in the Information Age
Digital assistants are now ubiquitous in modern life. We rely on them to find out the weather, set appointments in our calendar, search for when movies are playing and a lot more. We trust them with personal information about our private lives, and though maybe it’s occurred to us to think about the safety of giving them so much information, I doubt it’s stopped many. But forget about hackers trying to break in: what if those computerized voices weren’t bots at all, but humans we were just handing our information to? Writers Kevin Moffett and Matthew Derby of The Silent History have taken that possible future as their inspiration for a new podcast, with chilling results. I talked to them over the phone about the cultural divide in technology today and about our tendency to believe machines over people; excerpts of that conversation are included below.
Sandra is a new scripted podcast from Gimlet Media written by Kevin Moffett and Matthew Derby and voiced by a phenomenal cast. Helen (Alia Shawkat at her most vulnerable) stars as a small town woman eager to start her new job and escape the clutches of a failed marriage to Donny (Christopher Abbott of Girls fame), a man who can’t seem to keep his life together.
Helen gets hired by a tech company famous for their amazing digital assistant, Sandra. Sandra is so responsive the users feel she’s practically human-like in her interactions. But this technology is hiding a dirty little secret: it’s powered entirely by people. Helen becomes a Sandra operator, answering the user’s questions about anything they ask, and when she talks to people, they hear a synthesized female voice as interpreted by the inimitable Kristen Wiig.
One of the ideas Moffett was interested in is how people react differently to an automated voice. “Some people talk to her like she’s a person, and other people treat her like a smart microwave”, he says. One character even trusts Sandra for medical advice but refuses to go to a real doctor. It’s a strange world we’re living in when people are more trustful of machines than people because they perceive machines as always correct. It allows us to isolate ourselves and can lead us down dangerous paths.
As a Sandra, Helen is emboldened by being a voice of wisdom to the users and discovers she has more power than she thought. She’s persuaded by her boss (Ethan Hawke in an over-the-top role) to get her husband to sign divorce papers and move on with her life. Everything seems to be going well until Helen forms a quasi-friendship with one of the Sandra users named Tad.
Helen thinks Tad is a good person because of the information available to her online, but in fact, the reality turns out to be very different; a reminder that the internet allows us to rewrite our public self in sometimes dangerous ways. When Helen divulges more information to him than she perhaps should, the consequences are potentially life-threatening.
Writer Matthew Derby went into this narrative wanting to explore the notion of the private self versus the public self and the rapidly changing borderline between the two. “We are all invested in the construction and maintenance of our public and private personas” he said. The dangers of believing everything you see and read about a person online are very real.
Instead of what could be a screed against evil corporations and data mining, Sandra aims to tell a more personal story about human labor in the information age. As with the digital assistants currently in existence in our world, realism and humanness is highly prized in Sandra. But because Sandra is run by actual people, the line between the operators’ lives and Sandra’s persona gets messy almost immediately. Helen’s boss pushes her to incorporate more of herself into her job because the realness of Sandra is what the users value. At work, Helen hears a couple argue and suggests that they break up, and her boss is excited by that. He tells her that she did what no one else had the courage to do: a machine can give advice that people would never trust coming from another person. The problem with that, writer Kevin Moffett said, “is that when you add humans to the system, you get all their messy biases and preferences, and all their racism and sexism.” Using real people turns out to be both Sandra’s greatest strength and biggest weakness.
I asked Derby if he thought we’re headed towards a scenario like what happens in Sandra and he referenced a Wired article talking about humans being paid to do simple tasks like taking soup cans off the shelf to teach robots how to do it. “These things are already happening,” he said. “We just pushed it into a speculative context.” Sandra takes the idea that we are all just cogs in a machine to its logical next step by placing human workers in the service of the ultimate digital assistant. By utilizing human reactions Sandra is able to learn and advise its users much better than a machine ever could.
The meat of the storyline is sandwiched around Helen’s various online interactions with the Sandra users that are often funny and realistic at the same time. Listening to little boys trying to trick Sandra into saying the word “butthole” while she is telling them about a bird’s cloaca felt like a glimpse into childhood. In another telling instance, an off-work Helen meets an ice-cream truck driver who cannot turn off the music in his truck. She recommends he try Sandra, and he refuses her help saying that he doesn’t need a device because he makes his own decisions and spouting conspiracy theories about what the computers do with all the information. Her charms win him over and Sandra reads the truck’s manual online giving the driver the information he needed in seconds.
Characters like her husband, Donny, and the ice cream truck driver represent another more subtle theme of the show about the cultural divide between the educated class who use technology and the ones who don’t due to ignorance or mistrust. Derby said he wanted to explore the cultural divide between what is called the “silicon necklace” that starts in New York and wraps around the country. Helen’s use of technology versus the ignorance of it with all the characters outside her work show how isolated these two cultures are.
The fact that Sandra uses human labor is a secret to their user base, but Helen has no problem discussing that aspect of her job with Donny. Within the fiction of the show, this suggests that she thinks Donny would never use this technology, or that if he did, no one would believe his story about Sandra because he’s from the uneducated class.
This part of the storyline, unfortunately, felt like a nagging plot hole because in what world do open secrets online stay secret for long? Ultimately, the show comes together with very strong performances all around, and the unlikelihood of the secret of Sandra didn’t diminish my enjoyment of a fantastic drama with strong character arcs, humor, and tight pacing.
Following the serialized fiction success of Homecoming, Gimlet Media sure looks like it has another winner on its hands. All seven episodes of Sandra are available now.
[Record: "‘Sandra’ Podcast Looks at the Problem of Human Labor in the Information Age", Joshua Dudley, published 2018-08-06, tag: Artificial Intelligence, https://medium.com/s/story/sandra-podcast-looks-at-the-problem-of-human-labor-in-the-information-age-19e02656bf51]
Cryptocurrency Mining Is Hindering SETI’s Search for Alien Life
Cryptocurrency mining has already taken a toll on the overall GPU supply across the globe. And GPU manufacturers like Nvidia are trying their best to keep the GPUs in stock for the gamers who are actually struggling to upgrade their PCs. Now, it looks like cryptocurrency mining is killing the search for alien life as well.
That’s right — scientists monitoring the universe for potential broadcasts by extra-terrestrial beings are struggling to get the right hardware due to the surge in demand from crypto-miners. As reported by BBC, SETI (Search for Extraterrestrial Intelligence) researchers want to expand operations at two observatories, but are not able to acquire the right graphics chips as they are in short supply.
Dr. Dan Werthimer, one of the researcher, said, “We’d like to use the latest GPUs… and we can’t get ’em. That’s limiting our search for extra-terrestrials, to try to answer the question, ‘Are we alone? Is there anybody out there?” He further noted that it is a new problem and they haven’t faced this kind of issue prior to the crypto-mania.
Cryptocurrency mining is a resource-intensive task, and it involves connecting computers to a global network and using them to solve complex computations. On successful completion, the process will reward you back with cryptocurrency. And sadly, GPUs are required by radio-astronauts to process large amounts of data as well.
The Radio-astronauts are not only on the look-out for any potential living beings on other planets, but also listen to the general sounds of the universe — as much is possible — to understand or solve mysteries and make discoveries. Tuning in to various points in the universe to look for any signals is naturally very intensive and therefore graphics chips are used to augment the computing power.
That being said, it is worth noting that Nvidia is already planning to launch a dedicated mining card in the coming months, which is expected to solve a lot of these problems going forward.
|
Cryptocurrency Mining Is Hindering SETI’s Search for Alien Life
| 0
|
cryptocurrency-mining-is-hindering-setis-search-for-alien-life-19e0d2663248
|
2018-05-18
|
2018-05-18 06:58:39
|
https://medium.com/s/story/cryptocurrency-mining-is-hindering-setis-search-for-alien-life-19e0d2663248
| false
| 348
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Fazil
|
Hi Am a Digital Marketing Student and loves Writing Blogs On Niches Like Health and Money Earning Programs. My website is http://earnlikepro.info
|
836a3ffa4c7d
|
fazil4fazi
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
c0061b44f28b
|
2018-06-28
|
2018-06-28 13:08:32
|
2018-06-28
|
2018-06-28 14:24:57
| 1
| false
|
id
|
2018-08-31
|
2018-08-31 04:03:24
| 2
|
19e11ff5ac42
| 7.54717
| 0
| 0
| 0
|
Your Points Will Be Reset to Their Initial Values!
| 5
|
The System Has Crashed. Would You Like to Start Over?
Your Points Will Be Reset to Their Initial Values!
*This note was previously included in a discussion titled “Google Glass Failure and The Horizon of Sight” on 10 Feb 2017.
In 2010, before Google released its Google Glass promotional video, Keiichi Matsuda produced the short film “HYPER-REALITY”. Told from a first-person point of view, the film follows an ordinary worker, Juliana Restrepo, through her daily routine: taking public transport after work, buying household necessities, and tending her virtual pet. Juliana's view differs from the one our naked eyes would capture. In her eyes there are screens displaying social media notifications, guide lines drawn along the streets she wants to take, a browser, and a personal assistant, an artificial intelligence that understands the commands for operating those screens. The view is also crowded with colorful advertisements and virtual objects layered over the device-free scene. Juliana's eyes can be imagined through a simple analogy: a smartphone screen pressed flush against the retina.
In the short film, the audience never learns the shape of the device Juliana wears, but we can tell the scene is artificial from the closing act, in which the device's system fails, reality suddenly collapses, and Juliana is thrown back onto the view of her bodily eyes.
Google's attempt to bring Google Glass to the public may well be something we have been waiting for: information so close that it sits right before our eyes. But the attempt is not without problems. Even though it sits close to the eye, basic operations are still controlled by the hands (touch, swipe, and tap) and the voice (binary commands and text input). The eye remains an organ that cannot be fully controlled the way the hands can, hands which are able to shape material objects.
Early development involving public trials met with various criticisms: health risks, a fragile device, a rather steep price, and privacy concerns. Whether Google Glass and similar devices will become commonplace or remain a mere dream, the author is interested in the device's ability to transform the view of the surrounding world and in its possible failure through bugs, fraud, and errors. On the latter in particular, the author tries to explore the concepts of view and sight through Juliana Restrepo and a few stories that may be relevant.
Juliana Restrepo's world is a flawless view in which the dingy shopfront walls of her city are wrapped in surfaces generated by the device. The people around her look like NPCs (Non-Playable Characters) in The Sims, who can be talked to and, once befriended, whose status can be viewed. In her world, Juliana is an account at level 99, anxious about the points and bonuses she holds.
Throughout Keiichi Matsuda's short film, we watch Juliana interact with virtual screens, call the system operator, play games, and ask a search engine, “who am I?”. Juliana asks the device whether she can do a restart and is told that she will lose the points and bonuses she has earned so far. In other words, the world would begin again from scratch. That rollback is no trivial option for Juliana. When asked to confirm, she answers “no”. Juliana then heads to a supermarket to buy some necessities, and at the storefront a virtual dog appears. As explained in an interview, Juliana's dog also has basic needs such as eating, drinking, playing, bathing, and defecating. The dog's primary needs can be met when Juliana buys various items linked to the app that provides the pet. All along the way, her screen keeps showing advertisements: fitness courses, beauty products, and products supplying the dog's needs.
Juliana's virtual view then runs into trouble, and she contacts the operator, which is an artificial intelligence. At first the operator says the system is fine and that Juliana's points are safe with them. Several times the view breaks into noise, and the operator's appearance keeps changing. Juliana's anxiety grows when the operator mistakenly greets her as Emilio. The noise becomes more disruptive, then for a moment the system seems fine again. Juliana grows more doubtful, while the operator keeps assuring her that her account is fine, though she must go through identity confirmation via a biometric test, which can be done at the device's nearest operational branch.
At the end of the film, Juliana exits the supermarket and finds herself virtually ‘wounded’. She panics at once, her breath ragged. The system then dies, so all the points and bonuses she has collected vanish, and the city looks drab without its dazzling graphic overlays. Juliana crosses the street toward a statue and boots her device's system up again from the beginning.
This failure of the system to sustain the simulation run by the device is an experience humans have already had and can indeed be found in everyday life. Juliana Restrepo's system failure is fairly similar to the blue screen on our computers, a smartphone whose screen goes dead, or a storage device hit by a virus. But our case and Juliana's differ, since the view itself is implicated. A failure of the view can also be found in the case of a neighborhood watchman named Budi on night patrol in his village. On his rounds, from the guard post to the soccer field, half awake, he is startled when he glances toward the bushes and a banana tree. As readers we know there is a white figure (a ghost) there, but the watchman wants to reassure himself, rubbing his eyes again and again until the view becomes perfectly clear, and then he screams and runs off in terror.
In Homer's epic poem the Odyssey, Odysseus, king of Ithaca, returns from the Trojan War and makes the journey home to Ithaca. Odysseus and his twelve ships drift to an island full of lotus plants after being battered by a storm. There they meet the Lotophagi (lotus-eaters), who give Odysseus's crew a fruit that makes whoever eats it forget the journey home. With great effort, Odysseus drags them back to their ships. Later they enter a cave and meet Polyphemus, a man-eating giant, son of Poseidon and Thoosa, who lives in a community with giants of his own kind.
Odysseus and his crew enter the cave just as Polyphemus returns home; he seals the entrance with a huge boulder and before long eats two of Odysseus's men. The next morning he eats two more and leaves the cave to tend his sheep. After returning in the evening, he eats another two. Odysseus cannot stand idle. As night falls, he offers the strong wine he acquired on his journey. Drunk, Polyphemus asks Odysseus his real name and promises him a gift (namely, being eaten like his crew) if Odysseus will tell it. Cunningly, Odysseus answers, “Outis,” meaning “nobody”. The promise must now be honored, but unluckily for Polyphemus, he must eat “nobody”. With that, as if suffering an engine failure, he falls asleep in his drunkenness. Seizing the moment, Odysseus heats a wooden stake and drives it into Polyphemus's eye. Polyphemus screams to his fellows that nobody has hurt him, but the giants cannot make sense of his absurd words; besides, they are rambling drunk themselves.
The next morning, Polyphemus leaves the cave to check on his sheep and to make sure Odysseus goes nowhere. Odysseus and his crew hide beneath the sheep, and Odysseus escapes once Polyphemus lets the flock out. In his flight, Odysseus reveals his identity, and Polyphemus calls on Poseidon for revenge.
Polyphemus's promise, taken as an algorithm, hits an error on the input “nobody”, resulting in the collapse of language as a tool of communication. When that happens in reality, paralysis strikes the body that has spent its whole life naming things. The same paralysis strikes the watchman and Juliana Restrepo, who repeatedly try to confirm a reality that is collapsing. The watchman rubs his eyes over and over; Juliana seeks assurance from the system operator that everything is fine. Juliana's ragged breathing can be considered akin to Polyphemus collapsing and raving about eating nobody. But their cases differ in the direction the view is thrown after the collapse. Polyphemus tumbles into a hole in the language system, which cannot render “nobody” as something in the view available to the eye, whereas the failure of the augmentation device returns Juliana to the view of the eye.
The author ventures that the view is something flawed from the start: a complex language is needed to conjure the things the sensory organs cannot perceive. To return to a normal position, where the view can be taken as whole, Polyphemus must rationalize “nobody” as a trick that does not really exist, but that happens only afterward. Juliana Restrepo, conversely, can manage it simply by assuming that at some point everything will return to the way it was. One can imagine that this process of being thrown and then restored to normal could be a recurrence that keeps on happening.
The view we perceive every day without a device is the place into which we are thrown after castration, our severing as a fetus from the placenta. Sloterdijk, in his book “Spheres Vol. 1: Bubbles”, in the chapter “Requiem for a Discarded Organ”, shows that in the beginning the fetus in the womb is an entity shaped like a dark sphere, accompanied by the placenta. The cutting of the fetus from its placenta is a necessity for entering the world, culture, human civilization. In some societies the placenta is regarded as a twin sibling; after the cutting, it must be buried. The loss the fetus suffers can be set beside Orpheus's loss of Eurydice, which becomes an emptiness no longer possible afterward, the support of reality and of Orpheus's songs thereafter.
Eurydice was strolling through a meadow of tall grass when she met a satyr, one of the sexual predators of Greek mythology. Eurydice escaped the satyr but stumbled into a nest of snakes and was bitten on the foot. She died, and some time later Orpheus found her. Orpheus mourned her death, took up his lyre, and played songs so sorrowful that the gods and the nymphs grieved with him. Orpheus then resolved to bring Eurydice back to the upper world. Hades and Persephone allowed him to enter Aornum on the condition that, on the journey back, Orpheus walk ahead of Eurydice without looking behind him. He set off for the upper world, but in his anxiety, upon reaching the surface, he looked back, and Eurydice vanished forever.
The cutting of the placenta before the fetus may join the human world is that backward glance at Eurydice. The fetus is born and sees for the first time, is expected to walk on two legs and grow up. Sloterdijk then quotes the Bible, where Job questions God in an accusation:
Did you not pour me out like milk and curdle me like cheese? […] Why then did you bring me out of the womb? I wish I had died before any eye saw me
— (Job 10:10 & 18)
Then Sloterdijk quotes lines from Rainer Maria Rilke:
Be dead in Eurydice, always — climb, with more song,
climb with more praise, back up into pure relation.
— (Sonnets to Orpheus, bagian 2, no. 13)
The loss of the fetus and of Orpheus here is the foundation that supports their subsequent reality. Who is this placenta? Sloterdijk, still in “Requiem for a Discarded Organ”, explains through the story of a patient and an analyst. The patient comes to the analyst with, say, assorted relationship troubles, somatic symptoms, and sexual disorders; he is invited to lie back on the couch, the session ends, and he has said nothing. Several sessions that month pass the same way, and then he says, “We haven't gotten anywhere. But it seems to be getting better.” He says nothing more. A few months later, still in his silence, he stands up at the end of a session and says that he feels well, believes he has recovered, then thanks the analyst and goes home. This silent journey of patient and analyst Sloterdijk calls a monadic companionship: the two travel together, with the analyst imagined as an old psychoanalyst (or hermit) who supports the patient. The analyst knows how to be present without intervening in the other's existence unless permitted, merely through a presence that is at once separate and attentive.
From the exploration above of view and reality, the author gets the impression that the unaided view of the eye, as experienced by Juliana Restrepo, the watchman, Polyphemus, Orpheus, and the fetus, is built upon a castration that may at some point collapse. The collapse of the ordinary, normal view, one not even fully noticed, always becomes a moment of evaluation in their existence. For Juliana's case, however, the author still needs more material to investigate further the phenomenon of a view generated by a device.
|
The System Has Crashed. Would You Like to Start Over?
| 0
|
sistem-runtuh-apakah-anda-ingin-memulai-lagi-19e11ff5ac42
|
2018-08-31
|
2018-08-31 04:03:24
|
https://medium.com/s/story/sistem-runtuh-apakah-anda-ingin-memulai-lagi-19e11ff5ac42
| false
| 1,947
|
Collected Essays and Reviews on Books, Films, Musics, and Cultures
| null | null | null |
ISH Review
|
ish.review@gmail.com
|
ish-review
|
BUKU,FILM,MUSIK,RESENSI,EVENTS
| null |
Google
|
google
|
Google
| 35,754
|
Haris Wirabrata
|
Masih senang jadi amatir
|
3d868af3ee4b
|
hwirabrata
| 55
| 43
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
2de7af531645
|
2018-03-19
|
2018-03-19 17:25:38
|
2018-03-19
|
2018-03-19 17:31:09
| 1
| false
|
en
|
2018-03-26
|
2018-03-26 15:50:28
| 1
|
19e15245914f
| 0.992453
| 2
| 0
| 0
|
Unlock the hidden value of your real estate photos
| 5
|
Out Of The Den And Into The Wild
For the last six months we have been going full steam ahead, building the first neural network to improve residential real estate valuations using your property photos.
We are proud to release v1.0 of the Foxy API. This will provide access to our most popular feature, the house2vec neural network, among others. By making simple REST calls, you will upload a set of images and receive an image feature vector to incorporate into your own machine learning algorithms.
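As a rough sketch of what consuming such a response might look like on the client side (the field names `property_id` and `embedding` below are assumptions for illustration, not the documented Foxy API schema):

```python
import json

# Hypothetical JSON body returned after uploading a set of images:
# an identifier plus the image feature vector produced by the network.
SAMPLE_RESPONSE = json.dumps({
    "property_id": "ma-001",
    "embedding": [0.12, -0.45, 0.88, 0.03],
})

def parse_feature_vector(body: str) -> list[float]:
    """Extract the image feature vector for use in downstream ML models."""
    payload = json.loads(body)
    return [float(x) for x in payload["embedding"]]

vec = parse_feature_vector(SAMPLE_RESPONSE)
```

The returned vector would then be concatenated with tabular features (beds, baths, square footage) as input to your own valuation model.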
Our private beta models are limited to Massachusetts properties. National level models will be available in the near future. If you want to see the magic yourself, sign up for our beta here.
We want the API to work well for your needs, so we will prioritize flexible improvements over stability in this initial release. In the future, we may change the functionality currently offered by the API.
This is only the beginning for Foxy! We couldn’t be more excited to work with the community to deliver a robust network and API that balances user needs and accuracy.
Thank you for the incredible support from industry, family and friends!
Subscribe to this blog for updates to our neural models and future additions to the API!
|
Out Of The Den And Into The Wild
| 3
|
out-of-the-den-and-into-the-wild-19e15245914f
|
2018-03-26
|
2018-03-26 15:50:31
|
https://medium.com/s/story/out-of-the-den-and-into-the-wild-19e15245914f
| false
| 210
|
Stories from Foxy AI. Unlock the hidden value of your real estate images
| null | null | null |
Be The Fox
|
Vin@foxyai.com
|
be-the-fox
|
ARTIFICIAL INTELLIGENCE,REAL ESTATE,COMPUTER VISION,DEEP LEARNING,MACHINE LEARNING
|
bethe_fox
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Vin Vomero
|
Founder, Foxy AI
|
84f98c0eaf92
|
VinVomero
| 4
| 12
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-23
|
2018-02-23 10:40:04
|
2018-02-23
|
2018-02-23 10:42:51
| 0
| false
|
en
|
2018-02-23
|
2018-02-23 10:42:51
| 0
|
19e27fd211cb
| 1.830189
| 1
| 0
| 0
|
The birth of SmartO was destined by life itself. There have already been various services, electronic organizers, schedulers, desktop and…
| 5
|
Retire strings on fingers! (The Birth of SmartO)
The birth of SmartO was destined by life itself. There have already been various services, electronic organizers, schedulers, desktop and mobile apps with these features… There are also voice assistants like Cortana, Siri and Google Assistant.
But none of them have been able to help us solve the everyday problems we might have.
Let’s hear it from Ildar Mukhametzhanov, founder of the project:
“…When I started my house renovation I had to buy supplies for the construction crew quite often. I was constantly bombarded with requests to get this or that, since the workers often forgot to get the materials themselves. Suddenly they would be out of fasteners, or I would forget to buy some extra hardware on my way back. It was a really tough task: I had to get all the items from the hardware store in one go, and on top of that the crew members were constantly hounding me for other stuff.
Those endless errands still come back to me in nightmares. Then I got the idea to create a reminder system that would not only tell you what to buy, but also build an optimal route to the store with the best price.
This idea brewed in my head for some time, layering new ideas on top of the old ones… I kept running into new cases where I couldn't help but say: “I need a smart app for this!”. People close to me contributed cases, and some business acquaintances tipped in. I realized that I was on the threshold of something big, something that people really want!
I decided to move from ideas to action: I ran and analyzed some focus groups. They suggested new ideas and features. Then I held focus groups with business owners and managers, who also gave me important feedback.
My fellow entrepreneurs told me it would be great to have this app for business as well, because the user and the service provider are “two sides of the same coin”. People can tell companies what they want, and the companies will improve the quality of their services and goods. This way both clients and companies come out ahead!
Later I got the idea of implementing features useful to service providers, like CRM systems. The entrepreneur focus group approved this idea, and users liked that they could post reviews about service providers and get relevant information about a particular store, restaurant or car service…
Later still, I had the idea to monetize the app's features to bring profit to the users, and I think no one would say “no” to additional income! That's how I arrived at the concept for an app with a unique set of features. I realized that it was up to me to develop this wonderful app!”
So begins the story of SmartO, the super app of the 21st century.
|
Retire strings on fingers! (The Birth of SmartO)
| 50
|
retire-strings-on-fingers-the-birth-of-smarto-19e27fd211cb
|
2018-04-05
|
2018-04-05 18:54:34
|
https://medium.com/s/story/retire-strings-on-fingers-the-birth-of-smarto-19e27fd211cb
| false
| 485
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
SmartO Project
|
SmartO is a free smartphone app with an open source and powerful expert system on blockchain with a unique set of features and clear monetization for everyone.
|
2bae9691f6ad
|
SmartOProject
| 35
| 15
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
1ba9fb672398
|
2018-05-22
|
2018-05-22 00:04:34
|
2018-05-22
|
2018-05-22 00:28:13
| 3
| false
|
en
|
2018-05-22
|
2018-05-22 05:40:40
| 11
|
19e31aa15d5b
| 3.066981
| 3
| 0
| 0
|
A few of us Domain developers recently attended this half-day Microsoft event in the Sydney CBD. The two main speakers of the event, Erik…
| 3
|
Microsoft Event: Mobile + AI
A few of us Domain developers recently attended this half-day Microsoft event in the Sydney CBD. The two main speakers of the event, Erik Polzin and Colby Williams, are based in the Bay Area of the US, and they joined Microsoft via the Xamarin acquisition. A few other Microsoft employees of the Asia Pacific region were also in attendance, and each spoke briefly about their favourite mobile app that uses artificial intelligence (AI), as well as their favourite app in general. None of them mentioned the Domain app, to our disappointment.
The event was broken down into three segments:
Excite
Here we first learnt about the history of Microsoft.
The speaker talked about how AI has changed the way we go about our everyday lives. From using Google Assistant to perform tasks, to Spotify creating automated and personalised playlists.
I then learnt a new industry buzzword: Appification, which of course means that a person is qualified to sell apples, e.g.
Person looking for a job at the supermarket (or Apple Store): “Yes I’ve got the appification for this job.”
(but seriously, I think this ‘word’ just means that everything is an app now)
Speaking of apps, we were made aware of how important the initial impressions are when you first install an app. If the user experience is not good, chances are the user would not use the app a second time.
We were then shown a clip of Minority Report, which demonstrated how a lot of things in that futuristic movie actually exist in the present day.
The first part was wrapped up with a Turing test. Two songs were played, one made by a human and one made by an AI program. The room was pretty much split 50/50 on which song was which, which showed that AI is capable of making music comparable to music made by a human.
Explore
“Our industry does not respect tradition, it only respects innovation” — Satya Nadella
This segment was mostly about some of the innovative services that Microsoft offers. We learnt that a lot of money (billions!) is invested into Microsoft's R&D department.
We learnt a little bit about Azure Functions, the equivalent of AWS Lambda, and how it makes it possible to scale just the compute-intensive part of your app.
We were also made aware of Cosmos DB, a NoSQL database on Azure.
Next up was a live demo on how to set up a build server using App Center. It supports a wide variety of platforms, including iOS, macOS and Android.
Cognitive Services was next on the agenda. The services include:
Vision
Speech
Language
Knowledge
Search
We then saw a live demo of Seeing AI. By pointing the camera to a person, the app was able to estimate that person’s age and facial expression. It was also able to read a barcode and determine the product. Most impressively, it could read out text handwritten on a whiteboard.
Finally we briefly looked at Bot Framework and how it can integrate with Cognitive Services, to wrap up the Explore segment.
Experiment
After a quick break, a live demo of QnA Maker was shown. There were a few technical issues with the demo, which shows the importance of having video walkthroughs as a backup measure.
Another live demo (customvision.ai) was shown, which utilised image recognition with machine learning. Once again, there were some minor technical difficulties with the demo. As the room was full of developers, I think we all sympathised with the presenter when things didn't work as intended. “That's OK, we believe you,” one gentleman called out, which showed the camaraderie between us tech people (and also that it was close to 5pm).
Overall, it was a very informative afternoon. The key takeaway for me is the importance of machine learning, which the Cognitive Services rely upon. It echoes what our CTO said at an internal meeting: we should all strive to learn more about ML, as it is the present and future of our industry.
|
Microsoft Event: Mobile + AI
| 5
|
microsoft-event-mobile-ai-19e31aa15d5b
|
2018-05-22
|
2018-05-22 05:50:16
|
https://medium.com/s/story/microsoft-event-mobile-ai-19e31aa15d5b
| false
| 667
|
Domain.com.au is a leading Australian Real Estate Marketing and Technology firm. ASX:DHG
| null |
domain.com.au
| null |
Tech @ Domain
| null |
domain-tech
|
DEVOPS,NODEJS,REACTJS,AGILE,MOBILE APPS
|
Domaincomau
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Tony Law
|
Software Dev
|
7b57a34e3368
|
twllaw
| 2
| 4
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
969906acbdde
|
2018-05-05
|
2018-05-05 17:25:43
|
2018-05-07
|
2018-05-07 06:29:56
| 7
| false
|
en
|
2018-05-07
|
2018-05-07 11:08:24
| 2
|
19e32eeb6ab0
| 5.129245
| 4
| 0
| 0
|
One of the toughest part of every data scientist’s journey is to really understand what happens behind the hood of popular libraries like…
| 3
|
Writing Multivariate Linear Regression from Scratch
One of the toughest parts of every data scientist's journey is to really understand what happens under the hood of popular libraries like scikit-learn when implementing various machine learning algorithms.
This is part of my multi-post series on implementing various machine learning algorithms from scratch, without using any machine learning libraries like scikit-learn, PySpark etc.
The first algorithm that I am going to discuss is the most basic: multivariate linear regression.
What is linear regression? Linear regression is a simple prediction technique that predicts a dependent variable (Y) using its linear relationship to the independent variables (X).
For example, say I have data on the number of rooms in houses. Using the given input (number of rooms) I want to predict the price of a house. That is not easy, but by the end of this article you will surely get somewhere. When I have many more features (size of bedrooms, number of rooms, distance from the city centre, etc.) to predict the price of a house, I call it multivariate linear regression.
Now let me show you a glimpse of the data using a scatter plot. This graph will give you a basic understanding of what the data looks like when plotted.
In the graph above you can see some blue dots, where each dot represents a value with respect to the X and Y axes. Now I want to draw a line on the graph that fits the maximum number of blue dots. Technically speaking, I want to find the slope of a line such that the distance from each dot to the line is minimal. This is called minimising the root mean square error.
Hopefully I am clear so far. Moving ahead with my current data set: I have a few columns named “size of room”, “number of bedrooms” and “price”. Based on size and number of bedrooms I want to predict the price. For your ease I have broken the entire process into steps for easy learning.
STEP 1. Reading and Normalising the data.
Before we start on any problem we need to read and analyse the data. At first glance this step seems very easy, but it can be very painful if not handled with care.
Why do we need to normalise the data? Because some of our features might be in a range of 0–1 and others in 0–1000. Feeding the data in as it is can lead to a wrong fit.
Reading and Normalising the data
Normalize: rescale the data so that features are on comparable scales.
Before Normalising
After Normalising
If you notice, the data is now scaled to a limited range, which makes it easy to visualise on a scatter plot.
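The normalisation step can be sketched in plain Python as z-score scaling: subtract each feature's mean and divide by its standard deviation (a common choice; the sample sizes below are made-up values, not the article's data set):

```python
def normalize(column: list[float]) -> list[float]:
    """Z-score normalization: subtract the mean, divide by the std deviation."""
    n = len(column)
    mean = sum(column) / n
    std = (sum((x - mean) ** 2 for x in column) / n) ** 0.5
    return [(x - mean) / std for x in column]

# Example feature column (house sizes in square feet, hypothetical values)
sizes = [2104.0, 1600.0, 2400.0, 1416.0, 3000.0]
scaled = normalize(sizes)
```

After this transform each feature has mean 0 and unit variance, so no single feature dominates the gradient updates later on.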
Before we jump into this, we need to understand the role of hyperparameters in our model. These matter when we need to tune our model in order to minimise the cost function. Here our model is nothing but the mathematical equation of a straight line, y = mx + c, where x is the given set of inputs, m is the slope of the line, c is the constant, and y is the output (which is predicted).
The learning rate and the number of iterations are the hyperparameters that play a vital role in tuning our model. The learning rate controls how much we adjust the weights of our model with respect to the loss gradient, and the number of iterations is how many times we tune it.
Analysis: in linear regression, we have the training set and the hypothesis. We already have the training set as above, and our hypothesis will be:
Equivalent to y = mx + c
Where θ’s are the parameters and h(x) is y-predicted.
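The hypothesis translates directly into code: θ0 is the intercept and the remaining θ's weight each feature (a minimal sketch, with made-up θ values for illustration):

```python
def hypothesis(theta: list[float], x: list[float]) -> float:
    """h(x) = θ0 + θ1*x1 + θ2*x2 + ...  (x holds the features, without a bias term)."""
    return theta[0] + sum(t * xi for t, xi in zip(theta[1:], x))

# With θ = [1, 2, 3] and features [4, 5]: h = 1 + 2*4 + 3*5 = 24
y_pred = hypothesis([1.0, 2.0, 3.0], [4.0, 5.0])
```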
STEP 4. Calculate the cost function
The objective of the machine learning exercise is to find values of these θ's so that the function h shown above is close to the actual values for the training examples. In mathematical terms, we want to minimize the squared difference between h(x) and the corresponding value of y. We will call this our cost function.
In simple words, it is a function that assigns a cost to instances where the model deviates from the observed data. In this case, our cost is the sum of squared errors. The goal of any supervised learning exercise is to minimize whatever cost we choose. Our cost function can be written as the equation below, and this is how we calculate it.
Cost function equation
This equation is nothing but the sum of the squared differences between the predicted y and the actual y, divided by twice the number of examples in the data set. Our goal is to minimize the function J.
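That equation, J(θ) = (1/2m) Σ (h(xᵢ) − yᵢ)², is a few lines of plain Python (shown on a tiny made-up data set rather than the article's housing data):

```python
def cost(theta: list[float], X: list[list[float]], y: list[float]) -> float:
    """J(θ) = (1/2m) * Σ (h(x_i) - y_i)²  where h is the linear hypothesis."""
    m = len(y)
    total = 0.0
    for xi, yi in zip(X, y):
        pred = theta[0] + sum(t * xj for t, xj in zip(theta[1:], xi))
        total += (pred - yi) ** 2
    return total / (2 * m)

# Toy data where y = 2x exactly, so θ = [0, 2] gives zero cost
X = [[1.0], [2.0], [3.0]]
y = [2.0, 4.0, 6.0]
j = cost([0.0, 2.0], X, y)  # perfect fit: J = 0
```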
Plotting J on a graph will give you a clearer understanding of this function.
Cost(Squared Error) Vs. No of Iterations
It can be seen from the graph above that the cost is already near its minimum at around the second iteration, so iterating over our training data twice gets us close to the minimum cost.
Let me quickly summarise what we have learnt so far.
Now let us see what this cost function looks like.
Cost function
Step 5. Gradient Descent.
What does it mean, and where does it come from? As the cost function graph shows, our ultimate goal is to move toward the bottom-most point of the graph. One way to achieve this is the gradient descent algorithm. As its name suggests, we iterate the procedure below until convergence.
Gradient Descent Algorithm
Here α is the learning rate, and we multiply it with the derivative, or gradient, of J. Gradient descent is not confined to linear regression; it can be used in any model where iterative optimisation to find the minimum of a function comes into play.
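The update rule θⱼ := θⱼ − α · (1/m) Σ (h(xᵢ) − yᵢ) · xᵢⱼ can be sketched in plain Python, updating all θ's simultaneously each iteration (a minimal version on a tiny made-up data set, not the article's exact code; the α and iteration count are arbitrary choices):

```python
def gradient_descent(X: list[list[float]], y: list[float],
                     alpha: float = 0.1, iterations: int = 1000) -> list[float]:
    """Batch gradient descent for linear regression with a bias term."""
    m = len(y)
    n = len(X[0])
    theta = [0.0] * (n + 1)            # θ0 (bias) plus one weight per feature
    for _ in range(iterations):
        # prediction error h(x_i) - y_i for every example under the current θ
        errors = [theta[0] + sum(t * xj for t, xj in zip(theta[1:], xi)) - yi
                  for xi, yi in zip(X, y)]
        grad = [sum(errors) / m]        # gradient w.r.t. the bias (x0 = 1)
        for j in range(n):
            grad.append(sum(e * xi[j] for e, xi in zip(errors, X)) / m)
        # simultaneous update of every parameter
        theta = [t - alpha * g for t, g in zip(theta, grad)]
    return theta

# Toy data where y = 2x: θ should converge toward [0, 2]
theta = gradient_descent([[1.0], [2.0], [3.0]], [2.0, 4.0, 6.0])
```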
Gradient Descent function
Let's check our obligatory analysis for the code above.
Gradient Descent
From the graph above, our aim is to start from an initial point and iterate in such a way that we finally land on the minimum point of the graph. This is achieved by tuning our model's learning rate and number of iterations.
So before I wind up, let us summarise our learnings so far:
Understanding the data set
Normalizing the data set
Training our model with normalized data
Calculating the cost function
Minimizing the cost function.
You can access the complete code and the data set here
Thank you for your patience ….. Claps (echoing)
|
Writing Multivariate Linear Regression from Scratch
| 24
|
writing-multivariate-linear-regression-from-scratch-19e32eeb6ab0
|
2018-06-14
|
2018-06-14 13:08:15
|
https://medium.com/s/story/writing-multivariate-linear-regression-from-scratch-19e32eeb6ab0
| false
| 1,081
|
All you need know about data science from scratch
| null | null | null |
Data Science 101
|
jainanchit51@gmail.com
|
data-science-101
|
DATA SCIENCE,MACHINE LEARNING,NEURAL NETWORKS,DEEP LEARNING,PYTHON
|
jainanchit51
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Anchit Jain
|
Machine learning engineer. Loves to work on Deep learning based Image Recognition and NLP. Writing to share because I was inspired when others did.
|
71f78d2dc770
|
jainanchit51
| 22
| 13
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-29
|
2018-09-29 05:24:46
|
2018-09-29
|
2018-09-29 05:48:01
| 1
| false
|
en
|
2018-09-29
|
2018-09-29 05:48:01
| 7
|
19e6b6db69cd
| 1.939623
| 0
| 0
| 0
|
Omnity Ico Review
| 5
|
OMNITY
Omnity Ico Review
OMNITY is a search engine for scientists, students, researchers and others. Very often, students and scientists searching for specialised information do not find what they need, because popular search engines do not surface it on the first page of results.
Omnity is built on fundamental advances in semantic search technology: it creates landscapes of relationships based on meaning, derived from semantic signatures of whole documents. In this way, the knowledge contained in entire documents can be deeply connected purely through shared ideas.
Intellectual Property Issue of Omnity
Omnity uses a four-part strategy to protect its intellectual property, based on a combination of proprietary computer code, trade secrets, trademarks, and patents. For computer code, Omnity has developed more than 400,000 lines of code in Haskell, Java, Python, Ruby and JavaScript. For trade secrets, Omnity has developed a series of language filters and natural-language-processing steps that allow semantically rich cross-comparisons with minimal cross-domain leakage of knowledge. Its trademarks include "Omnity", the Omnity logo and the Omnity "Knowledge, Connected" tagline. Omnity also holds Omnity.io and the Omnity domain name. As for patents,
Features of Omnity
Collect
Semantically ingests millions of federal documents from most major agencies
Documents can also be uploaded manually through a drag-and-drop interface
Private batch ingestion is available (please contact sales@omnity.io)
Connect
Documents are connected through similar semantic signatures
Text is processed through a fusion of natural language processing, machine intelligence and graph math
Three years of stealth R&D, backed by more than 500,000 lines of code and $9 million in funding
Protect
Linguistic sequences track semantic signatures of documents that change over time, place and copyright
Digital rights management for text-based content in engineering, finance, law, medicine, science in over 100 languages.
Hyper-hierarchical document storage for self-healing distributed document networks
Discovery Bots
"Discovery Bots" are search agents launched by dragging and dropping an entire file as the query
Automatic searches run on a schedule, at user-selectable intervals
Discovery reports are periodically emailed to users
Features of Omnity token
The Early Acceptance Program (EAP) allows the token's value to increase as it is used and resold internally (depending on cumulative value added)
Tokens can be used to launch Discovery Bots (search agents that periodically discover content selected by the user)
Tokens can also be used to pay for the batch import of large documents
Industries that Omnity targets
FINANCE (Find the right investment, fast.)
LEGAL (Find the right argument, fast.)
RESEARCH (Place research in context, fast.)
ADVOCACY AND POLICY (Express your voice and connect with impact.)
For more information, you can read the links and social media below:
Website: https://www.omnity.io/
Whitepaper: https://www.omnity.io/static/ico/Omnity-Whitepaper.pdf
Facebook: https://www.facebook.com/OmnitySearch/
Telegram: https://t.me/omnityio
Twitter: https://twitter.com/omnity_io
Author: Sixa
Bitcointalk profile link: https://bitcointalk.org/index.php?action=profile;u=2225617
|
OMNITY
| 0
|
omnity-19e6b6db69cd
|
2018-09-29
|
2018-09-29 05:48:01
|
https://medium.com/s/story/omnity-19e6b6db69cd
| false
| 461
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Instinct01
|
CRYPTONEWS
|
f25aec5aebe7
|
prakosobagas709
| 6
| 16
| 20,181,104
| null | null | null | null | null | null |
0
|
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeperableConv2D(nn.Module):
def __init__(self, in_dims, out_dims, kernel_size=3, stride=1, padding=0, dilation=1, bias=False):
super(SeperableConv2D, self).__init__()
self.conv1 = nn.Conv2d(in_dims, in_dims, kernel_size,
stride, padding, dilation, groups=in_dims, bias=bias)
self.pointwise = nn.Conv2d(in_dims, out_dims, kernel_size=1,
stride=1, padding=0, dilation=1, groups=1, bias=bias)
def forward(self, x):
x = self.conv1(x)
x = self.pointwise(x)
return x
def fixed_padding(inputs, kernel_size, rate):
kernel_size_effective = kernel_size + (kernel_size-1) * (rate-1)
pad_total = kernel_size_effective-1
pad_beg = pad_total // 2
pad_end = pad_total - pad_beg
padded_inputs = F.pad(inputs, (pad_beg, pad_end, pad_beg, pad_end), mode='reflect')
return padded_inputs
class SeperableConv2D_same(nn.Module):
def __init__(self, in_dims, out_dims, kernel_size=3, stride=1, dilation=1, bias=False):
super(SeperableConv2D_same, self).__init__()
self.conv1 = nn.Conv2d(in_dims, in_dims, kernel_size, stride, 0, dilation, groups=in_dims, bias=bias)
self.pointwise = nn.Conv2d(in_dims, out_dims, 1, 1, 0, 1, 1, bias=bias)
def forward(self, x):
x = fixed_padding(x, self.conv1.kernel_size[0], self.conv1.dilation[0])
x = self.conv1(x)
x = self.pointwise(x)
return x
class Block(nn.Module):
def __init__(self, in_dims, out_dims, reps, stride=1, dilation=1, start_with_relu=True, grow_first=True):
super(Block, self).__init__()
if out_dims != in_dims or stride != 1:
self.skip = nn.Conv2d(in_dims, out_dims, 1, stride=stride, bias=False)
self.skipbn = nn.BatchNorm2d(out_dims)
else:
self.skip = None
self.relu = nn.ReLU(inplace=True)
rep = []
filters = in_dims
if grow_first:
rep.append(self.relu)
rep.append(SeperableConv2D_same(in_dims, out_dims, 3, stride=1, dilation=dilation, bias=False))
rep.append(nn.BatchNorm2d(out_dims))
filters = out_dims
for i in range(reps - 1):
rep.append(self.relu)
rep.append(SeperableConv2D_same(filters, filters, 3, stride=1, dilation=dilation, bias=False))
rep.append(nn.BatchNorm2d(filters))
if not grow_first:
rep.append(self.relu)
rep.append(SeperableConv2D_same(in_dims, out_dims, 3, stride=1, dilation=dilation, bias=False))
rep.append(nn.BatchNorm2d(out_dims))
if not start_with_relu:
rep = rep[1:]
if stride != 1:
rep.append(SeperableConv2D_same(out_dims, out_dims, 3, stride=2))
self.rep = nn.Sequential(*rep)
def forward(self, x):
out = self.rep(x)
if self.skip is not None:
skip = self.skip(x)
skip = self.skipbn(skip)
else:
skip = x
out += skip
return out
class Xception(nn.Module):
def __init__(self, in_dims=3, pretrained=False):
super(Xception, self).__init__()
# entry flow
self.conv1 = nn.Conv2d(in_dims, 32, 3, stride=2, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(32)
self.relu = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(32, 64, 3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(64)
self.block1 = Block(64, 128, reps=2, stride=2, start_with_relu=False)
self.block2 = Block(128, 256, reps=2, stride=2, start_with_relu=True, grow_first=True)
self.block3 = Block(256, 728, reps=2, stride=2, start_with_relu=True, grow_first=True)
# middle blocks
self.block4 = Block(728, 728, reps=3, stride=1, start_with_relu=True, grow_first=True)
self.block5 = Block(728, 728, reps=3, stride=1, start_with_relu=True, grow_first=True)
self.block6 = Block(728, 728, reps=3, stride=1, start_with_relu=True, grow_first=True)
self.block7 = Block(728, 728, reps=3, stride=1, start_with_relu=True, grow_first=True)
self.block8 = Block(728, 728, reps=3, stride=1, start_with_relu=True, grow_first=True)
self.block9 = Block(728, 728, reps=3, stride=1, start_with_relu=True, grow_first=True)
self.block10 = Block(728, 728, reps=3, stride=1, start_with_relu=True, grow_first=True)
self.block11 = Block(728, 728, reps=3, stride=1, start_with_relu=True, grow_first=True)
self.block12 = Block(728, 728, reps=3, stride=1, start_with_relu=True, grow_first=True)
self.block13 = Block(728, 728, reps=3, stride=1, start_with_relu=True, grow_first=True)
self.block14 = Block(728, 728, reps=3, stride=1, start_with_relu=True, grow_first=True)
self.block15 = Block(728, 728, reps=3, stride=1, start_with_relu=True, grow_first=True)
self.block16 = Block(728, 728, reps=3, stride=1, start_with_relu=True, grow_first=True)
self.block17 = Block(728, 728, reps=3, stride=1, start_with_relu=True, grow_first=True)
self.block18 = Block(728, 728, reps=3, stride=1, start_with_relu=True, grow_first=True)
self.block19 = Block(728, 728, reps=3, stride=1, start_with_relu=True, grow_first=True)
self.block20 = Block(728, 1024, reps=2, dilation=2, start_with_relu=True, grow_first=False)
self.conv3 = SeperableConv2D_same(1024, 1536, 3, stride=1, dilation=2)
self.bn3 = nn.BatchNorm2d(1536)
self.conv4 = SeperableConv2D_same(1536, 1536, 3, stride=1, dilation=2)
self.bn4 = nn.BatchNorm2d(1536)
self.conv5 = SeperableConv2D_same(1536, 2048, 3, stride=1, dilation=2)
self.bn5 = nn.BatchNorm2d(2048)
self._init_weights()  # weight-initialisation helper (not shown in this excerpt)
if pretrained:
self._load_xception_weights()  # loads pretrained Xception weights (not shown in this excerpt)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.conv2(x)
x = self.bn2(x)
x = self.relu(x)
x = self.block1(x)
low_level_feat = x
x = self.block2(x)
x = self.block3(x)
x = self.block4(x)
x = self.block5(x)
x = self.block6(x)
x = self.block7(x)
x = self.block8(x)
x = self.block9(x)
x = self.block10(x)
x = self.block11(x)
x = self.block12(x)
x = self.block13(x)
x = self.block14(x)
x = self.block15(x)
x = self.block16(x)
x = self.block17(x)
x = self.block18(x)
x = self.block19(x)
x = self.block20(x)
x = self.conv3(x)
x = self.bn3(x)
x = self.relu(x)
x = self.conv4(x)
x = self.bn4(x)
x = self.relu(x)
x = self.conv5(x)
x = self.bn5(x)
x = self.relu(x)
return x, low_level_feat
class ASPP_Module(nn.Module):
def __init__(self, in_dims, out_dims, rate):
super(ASPP_Module, self).__init__()
self.atrous_conv = nn.Conv2d(in_dims, out_dims, 3, stride=1, padding=rate, dilation=rate)
self.relu = nn.ReLU(inplace=True)
self.bn = nn.BatchNorm2d(out_dims)
self._init_weights()
def forward(self, x):
x = self.atrous_conv(x)
x = self.relu(self.bn(x))
return x
def _init_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight)
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
class DeepLabv3_plus(nn.Module):
def __init__(self, in_channels=3, num_classes=21, pretrained=False):
print(f'Constructing Deeplabv3+ with {in_channels} input channels and {num_classes} classes')
super(DeepLabv3_plus, self).__init__()
self.xception_features = Xception(in_dims=in_channels, pretrained=pretrained)
rates = [1, 6, 12, 18]
self.aspp1 = ASPP_Module(2048, 256, rate=rates[0])
self.aspp2 = ASPP_Module(2048, 256, rate=rates[1])
self.aspp3 = ASPP_Module(2048, 256, rate=rates[2])
self.aspp4 = ASPP_Module(2048, 256, rate=rates[3])
self.global_avg_pool = nn.Sequential(nn.AdaptiveAvgPool2d((1, 1)), nn.Conv2d(2048, 256, 1, stride=1))
self.conv1 = nn.Conv2d(1280, 256, 1)
self.bn1 = nn.BatchNorm2d(256)
self.conv2 = nn.Conv2d(128, 48, 1)
self.bn2 = nn.BatchNorm2d(48)
self.last_conv = nn.Sequential(nn.Conv2d(304, 256, 3, stride=1, padding=1),
nn.BatchNorm2d(256),
nn.Conv2d(256, 256, 3, stride=1, padding=1),
nn.BatchNorm2d(256),
nn.Conv2d(256, num_classes, 1, stride=1))
def forward(self, x):
x, low_level_feat = self.xception_features(x)
x1 = self.aspp1(x)
x2 = self.aspp2(x)
x3 = self.aspp3(x)
x4 = self.aspp4(x)
x5 = self.global_avg_pool(x)
x5 = F.upsample(x5, size=x4.size()[2:], mode='bilinear', align_corners=True)
x = torch.cat((x1, x2, x3, x4, x5), dim=1)
x = self.conv1(x)
x = self.bn1(x)
x = F.upsample(x, scale_factor=4, mode='bilinear', align_corners=True)
low_level_feat = self.conv2(low_level_feat)
low_level_feat = self.bn2(low_level_feat)
x = torch.cat([x, low_level_feat], dim=1)
x = self.last_conv(x)
x = F.upsample(x, scale_factor=4, mode='bilinear', align_corners=True)
return x
| 30
| null |
2018-08-05
|
2018-08-05 16:01:12
|
2018-08-06
|
2018-08-06 15:21:26
| 0
| false
|
en
|
2018-08-06
|
2018-08-06 15:21:26
| 0
|
19e729bb2af6
| 7.10566
| 0
| 2
| 0
|
Seperable Conv 2D Block
| 2
|
DeepLabv3+ Pytorch code explained line by line (sort of)
Seperable Conv 2D Block
Parameters: in_dims (dimension of input tensor), out_dims (dimension of output tensor), kernel_size, stride, padding, dilation, bias (defaults to False, since the following BN layer makes a bias redundant)
Create Conv layer which produces output of same dimensions, set parameter groups as in_dims (equivalent to having in_dims number of kernels, each convolved with a single channel of the input tensor and concatenated)
Create pointwise layer which produces output of out_dims with kernel size of 1, stride of 1, padding of 0, dilation of 1 and groups of 1 (equivalent to normal conv layer, single kernel for every input channel)
A Seperable Conv layer can be thought of as a computationally cheaper Conv layer with far fewer parameters.
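To make the efficiency claim concrete, here is a small parameter-count comparison (a sketch using the same depthwise + pointwise split as SeperableConv2D; the 64→128 channel sizes are arbitrary examples):

```python
import torch.nn as nn

# A standard 3x3 conv from 64 to 128 channels.
standard = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)

# The same mapping as a depthwise + pointwise pair, as in SeperableConv2D.
depthwise = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=64, bias=False)
pointwise = nn.Conv2d(64, 128, kernel_size=1, bias=False)

def count(m):
    return sum(p.numel() for p in m.parameters())

print(count(standard))                      # 64 * 128 * 3 * 3 = 73728
print(count(depthwise) + count(pointwise))  # 64 * 3 * 3 + 64 * 128 = 8768
```

Roughly an 8x reduction in weights for this configuration, at the cost of a slightly less expressive layer.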
Seperable Conv 2D Same Block
Similar to the Seperable Conv block, except that the conv layer is created with 0 padding instead of taking padding as a parameter
Input is padded using fixed_padding, which determines padding via the dilation and kernel size of the first conv layer before being sent to the conv and pointwise layers.
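The padding arithmetic inside fixed_padding can be checked by hand; a small standalone sketch of the same computation:

```python
def same_pad(kernel_size, rate):
    # Mirrors fixed_padding: effective kernel = k + (k - 1) * (rate - 1)
    k_eff = kernel_size + (kernel_size - 1) * (rate - 1)
    pad_total = k_eff - 1
    pad_beg = pad_total // 2
    return pad_beg, pad_total - pad_beg

print(same_pad(3, 1))  # (1, 1): ordinary 'same' padding for a 3x3 kernel
print(same_pad(3, 2))  # (2, 2): dilation 2 doubles the required padding
```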
Xception Block
Parameters: in_dims (dimension of input tensor), out_dims (dimension of output tensor), reps (number of xception-resnet conv blocks), stride, dilation, start_with_relu (whether to start xception-resnet conv block with relu activation), grow_first (whether to convert dimension of input tensor from in_dims to out_dims at the start of the block or end)
If out_dims is not equal to in_dims, or stride is not 1, the input must be passed through a 1×1 Conv + BN so that its shape matches the output tensor of the xception-conv block, allowing the skip connection to be added
If grow_first is set to True (convert dimension to out_dims at the start), ReLU + SeperableConv2D_same + BN added to become the first few layers of the block, which produces an output tensor with out_dims dimensions
Add (number of reps-1) ReLU+SeperableConv2D+BN blocks, each output tensor retaining the same number of dimensions
If grow_first is set to False (convert dimensions to out_dims at the end), ReLU+SeperableConv2D+BN block added as final block, converting dimensions from in_dims to out_dims
If start_with_relu is set to False, remove the first ReLU layer.
If stride is greater than 1, add a final SeperableConv2D layer with stride=2, producing a tensor with half the spatial size of the input while retaining the number of channels
Xception Network
ASPP Module
Parameters: in_dims, out_dims, rate (dilation)
Conv layer with padding=rate, dilation=rate, stride=1 and kernel size=3 (dilation provides greater contextual details for network to learn) ReLU activation and BN layer
_init_weights initializes the weights of Conv layers with Kaiming-normal initialization, and sets the weights of BN layers to 1 and their biases to 0
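A quick sketch of why padding=rate preserves spatial size for the 3×3 atrous convs (using smaller channel counts than the real 2048→256 module, purely for illustration):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 32, 16, 16)  # toy tensor; the real module uses 2048 channels
for rate in (1, 6, 12, 18):
    conv = nn.Conv2d(32, 8, kernel_size=3, stride=1, padding=rate, dilation=rate)
    out = conv(x)
    # Dilated 3x3 span is 2*rate + 1, so output size = 16 + 2*rate - (2*rate + 1) + 1 = 16.
    print(out.shape)  # torch.Size([1, 8, 16, 16])
```

This is what lets the four ASPP branches be concatenated: all of them keep the same spatial resolution while sampling context at different scales.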
DeepLabv3+ Model
Parameters: in_channels (channels of input images, RGB=3), num_classes (number of gt classes), pretrained (whether to use the ImageNet pretrained weights for the Xception backbone)
Initialize Xception backbone
Initialize ASPP modules for 1, 6, 12, 18 dilation/rates, each taking a 2048-dimension input tensor and producing a 256-dimension output tensor.
Initialize global average pool with a conv layer with kernel size of 1, converting from 2048-dimension to 256-dimension
Initialize Conv+BN layers that takes the output tensors from the ASPP modules as input and produces a 256-dimension output
Initialize Conv+BN layers that takes the lower level features from the Xception backbone and produces a 48-dimension output
Last Conv Sequential layer contains a Conv+BN layer that takes an input tensor (output tensor from conv1+bn1 layers and output tensor from conv2+bn2 layers) and produces an output tensor with num_classes dimensions, with 2 intermediary Conv+BN layers
Obtain outputs from Xception backbone (output tensor and tensor containing lower level features)
Obtain output tensors from each ASPP module from the output tensor produced by the Xception backbone and obtain global average pool value, which is upscaled to match the dimensions of the ASPP modules outputs
Concatenate outputs from the ASPP modules and the upscaled global average pool and pass it through the Conv1+BN1 layers and upscale it bilinearly
Pass low level features through Conv2+BN2 layers
Concatenate low level features, upscaled concatenated ASPP modules + global average pool and pass tensor through last conv sequential layer and finally upscale output tensor bilinearly
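The magic numbers in the model definition follow from simple channel arithmetic (a sanity check, not part of the original post):

```python
# Four ASPP branches (rates 1, 6, 12, 18) plus the global-average-pool branch,
# each producing 256 channels, are concatenated along the channel axis.
aspp_out = 5 * 256
print(aspp_out)    # 1280, the in_channels of conv1

# The decoder concatenates the 256-channel ASPP projection with the
# 48-channel reduced low-level features from the backbone.
decoder_in = 256 + 48
print(decoder_in)  # 304, the in_channels of last_conv
```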
|
DeepLabv3+ Pytorch code explained line by line (sort of)
| 0
|
deeplabv3-pytorch-code-explained-line-by-line-sort-of-19e729bb2af6
|
2018-08-06
|
2018-08-06 15:21:26
|
https://medium.com/s/story/deeplabv3-pytorch-code-explained-line-by-line-sort-of-19e729bb2af6
| false
| 1,883
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Ryan Aidan
| null |
22d6e2e4e0f4
|
aidanaden
| 5
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-02
|
2018-08-02 14:14:54
|
2018-08-02
|
2018-08-02 15:06:55
| 4
| false
|
en
|
2018-08-02
|
2018-08-02 16:32:08
| 11
|
19e7d49da4f3
| 3.677358
| 0
| 0
| 0
|
Rhodinet
| 4
|
Artificial Intelligence AI, Uyo City meetup Saturday 4th August, 2018.
Rhodinet
AI Saturdays Cycle 2 started on 1 August 2018, and participant applications are still open. Apply here. To apply as an ambassador, click here.
About
Known as AI6, it is a structured study group organised in 50+ cities across the globe, including Bangalore, Lagos, Toronto, Singapore and Sunnyvale. It is community-driven and free-to-attend*.
AI Saturdays (AI6) is catalysed by Nurture.AI. The first cycle of AI6 began in January 2018 and ended in May 2018. Over 5,000 participants from 50 cities worldwide were part of the AI6 movement. Motivated by the overwhelming demand and positive feedback from the AI6 community, we decided to organise a second cycle for AI6 so we can continue delivering value to those who want to learn AI.
We will be kickstarting the second cycle of AI6 in August 2018 and are currently accepting ambassador applications.
Clockwise from top left: Jabalpur, Barcelona, Lagos, Taipei
This is a story from Rowen Lee AI6 Global Coordinator
“To be honest, I didn’t expect AI6 to explode to such a scale. What impressed me even more is the amazing AI6 ambassadors who volunteered time and brainpower to nurture an AI community in their city. Not to forget the cheery participants who were passionate about learning AI despite it being a difficult subject.
Approaching the end of cycle 1, I hunted down some AI6 ambassadors and participants for a chat. I talked to an asian ambassador who has a love of teaching; a determined student from a Nigerian city that lacks AI infrastructure; an enthusiastic American researcher who, despite working full time, commits to AI6 each week. Here are some of my takeaways:
High attrition rates were the main problem: participant count in most chapters gradually dwindled to a handful. Solutions vary across chapters: Bangalore aims for 70% coding and 30% theory; Barcelona introduced quizzes using kahoot.it and hands-on coding sessions; Jabalpur requires its participants to pre-read materials before each lesson to keep everyone at the same pace; Lagos inserts interactive discussion sessions between lecture videos.
There is an appreciation for AI6 as a global community. Posts in the AI6 Medium publication and the AI6 Slack channel have facilitated the exchange of AI knowledge, learning materials and even tips for organising study groups. The core team hopes to continue this in the next cycle.
Catering to a diverse audience can be challenging as AI6 participants come from diverse backgrounds; they range from yoga teachers, medical practitioners, PhD students, programmers to engineering students. There will be two different tracks in the next cycle to address this problem.
AI6 2.0!
Taking into account feedback received from the pilot cycle, the core team will roll out the following to improve the learning experience in AI Saturdays. And yes, cycle 2 is still free and open for anyone to attend!
Cycle 2 Rollouts
Everyone from AI Saturdays has a story to tell — how would you tell yours?
We hope to create a community of intermediate learners on the AI6 forum, where everyone could participate in discussions to facilitate learning.
Participants and ambassadors of this track should have completed the Deep Learning Specialization or Fast.ai (or an equivalent). Those who wish to join the intermediate track but do not meet the prerequisites can catch up during the pre-cycle (July 2018). We recommend using this month to complete BOTH Andrew Ng's Machine Learning course and the Deep Learning Specialization on Coursera.
Exciting times ahead
The “perfect” time to start (or continue) your AI journey is NOW.
Think of AI as the top of a pyramid of needs. Yes, self-actualization (AI) is great, but you first need food, water and shelter (data literacy, collection and infrastructure).
Asking the right questions and building the right products
This is only about how you could, not whether you should (for pragmatic or ethical reasons).
The promise of machine learning tools
‘Wait, what about that Amazon API or TensorFlow or that other open source library? What about companies that are selling ML tools, or that automatically extract insights and features?’
All of that is awesome and very useful. (Some companies do end up painstakingly custom-building your entire pyramid so they can showcase their work. They are heroes.) However, under the strong influence of the current AI hype, people try to plug in data that’s dirty & full of gaps, that spans years while changing in format and meaning, that’s not understood yet, that’s structured in ways that don’t make sense, and expect those tools to magically handle it. And maybe some day soon that will be the case; I see & applaud efforts in that direction. Until then, it’s worth building a solid foundation for your AI pyramid of needs.
|
Artificial Intelligence AI, Uyo City meetup Saturday 4th August, 2018.
| 0
|
meetup-for-uyo-city-saturday-4th-august-2018-19e7d49da4f3
|
2018-08-02
|
2018-08-02 18:41:02
|
https://medium.com/s/story/meetup-for-uyo-city-saturday-4th-august-2018-19e7d49da4f3
| false
| 789
| null | null | null | null | null | null | null | null | null |
Education
|
education
|
Education
| 211,342
|
Imo Okon
|
Front-end Web Developer
|
2a1c918cc9a3
|
rhodinett
| 10
| 31
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
3666ec396666
|
2018-08-08
|
2018-08-08 16:44:40
|
2018-08-08
|
2018-08-08 12:00:00
| 1
| true
|
en
|
2018-08-08
|
2018-08-08 17:01:56
| 2
|
19eacb572d8f
| 7.10566
| 30
| 2
| 0
|
Employee emails contain valuable insights into company morale — and might even serve as an early-warning system for uncovering malfeasance
| 5
|
What Your Boss Could Learn by Reading the Whole Company’s Emails
Employee emails contain valuable insights into company morale — and might even serve as an early-warning system for uncovering malfeasance
Illustration: Irene Rinaldi
By Frank Partnoy
When Andrew Fastow, the former chief financial officer of Enron, finishes a public-speaking gig these days, a dozen or so people from the audience are typically waiting to talk to him. Some ask about his role in the scandal that brought down the energy company. Others want to know about his six years in prison. After a 2016 event in Amsterdam, as the crowd was thinning out, Fastow spotted two men standing in a corner. Once everyone else had left, they walked up to him and handed him a laminated chart.
The men were there on behalf of KeenCorp, a data-analytics firm. Companies hire KeenCorp to analyze their employees’ emails. KeenCorp doesn’t read the emails, exactly — its software focuses on word patterns and their context. The software then assigns the body of messages a numerical index that purports to measure the level of employee “engagement.” When workers are feeling positive and engaged, the number is high; when they are disengaged or expressing negative emotions like tension, the number is low.
The two men in Amsterdam told Fastow that they had tested the software using several years’ worth of emails sent by Enron’s top 150 executives, which had become publicly available after the company’s demise. They were checking to see how key moments in the company’s tumultuous collapse would register on the KeenCorp index. But something appeared to have gone wrong.
The software had returned the lowest index score at the end of 2001, when Enron filed for bankruptcy. That made sense: Enron executives would have been growing more agitated as the company neared insolvency. But the index had also plummeted more than two years earlier. The two men had scoured various books and reports on Enron’s downfall, but it wasn’t clear what made this earlier date important. Pointing to the sudden dip on the left side of the laminated chart, they told Fastow they had one question: “Do you remember anything unusual happening at Enron on June 28, 1999?”
The so-called text-analytics industry is booming. The technology has been around for a while — it powers, among other things, the spam filter you rely on to keep your inbox manageable — but as the tools have grown in sophistication, so have their uses. Many brands, for instance, rely on text-analytics firms to monitor their reputation on social media, in online reviews, and elsewhere on the web.
Text analytics has become especially popular in finance. Investment banks and hedge funds scour public filings, corporate press releases, and statements by executives to find slight changes in language that might indicate whether a company’s stock price is likely to go up or down; Goldman Sachs calls this kind of natural-language processing “a critical tool for tomorrow’s investors.” Specialty-research firms use artificial-intelligence algorithms to derive insights from earnings-call transcripts, broker research, and news stories.
Does text analytics work? In a recent paper, researchers at Harvard Business School and the University of Illinois at Chicago found that a company’s stock price declines significantly in the months after the company subtly changes descriptions of certain risks. Computer algorithms can spot such changes quickly, even in lengthy filings, a feat that is beyond the capacity of most human investors. The researchers cited as an example NetApp, a data-management firm in Silicon Valley. NetApp’s 2010 annual report stated: “The failure to comply with U.S. government regulatory requirements could subject us to fines and other penalties.” Addressing the same concern in the 2011 report, the company clarified that “failure to comply” applied to “us or our reseller partners.” Even a savvy human stock analyst might have missed that phrase, but the researchers’ algorithms set off an alarm.
Granted, the study scoured old filings; the researchers had the benefit of hindsight. Still, a skeptical investor, armed with the knowledge that NetApp had seen fit to make this change, might have asked herself why. If she’d turned up an answer, or even just found the change worrying enough to sell her stock, she’d have saved a fortune: Embedded in that small edit was an early warning. Six months after the 2011 report appeared, news broke that the Syrian government had purchased NetApp equipment through an Italian reseller and used that equipment to spy on its citizens. By then, NetApp’s stock price had already dropped 20 percent.
While text analytics has become common on Wall Street, it has not yet been widely used to assess the words written by employees at work. Many firms are sensitive about intruding too much on privacy, though courts have held that employees have virtually no expectation of privacy at work, particularly if they’ve been given notice that their correspondence may be monitored. Yet as language analytics improves, companies may have a hard time resisting the urge to mine employee information.
One obvious application of language analysis is as a tool for human-resources departments. HR teams have their own, old-fashioned ways of keeping tabs on employee morale, but people aren’t necessarily honest when asked about their work, even in anonymous surveys. Our grammar, syntax, and word choices might betray more about how we really feel.
Take Vibe, a program that searches through keywords and emoji in messages sent on Slack, the workplace-communication app. The algorithm reports in real time on whether a team is feeling disappointed, disapproving, happy, irritated, or stressed. Frederic Peyrot, one of Vibe’s creators, told me Vibe was more an experiment than a product, but some 500 companies have tried it.
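A scoring pass of that kind is not deep technology. The sketch below is invented for illustration (it is not Vibe's algorithm, and the marker lists are made up), but it shows the basic shape: tally mood-laden keywords and emoji across a team's messages and report the dominant mood:

```python
# Toy keyword/emoji mood scoring over team chat messages.
# The mood categories and marker sets are invented for illustration.
MOOD_MARKERS = {
    "happy": {"great", "thanks", "awesome", "🎉", "😄"},
    "stressed": {"deadline", "asap", "overloaded", "😫"},
    "irritated": {"again", "broken", "ugh", "😠"},
}

def team_mood(messages):
    """Count mood markers across messages; return (dominant mood, counts)."""
    counts = {mood: 0 for mood in MOOD_MARKERS}
    for msg in messages:
        for token in msg.lower().split():
            token = token.strip(",.!?")  # drop trailing punctuation
            for mood, markers in MOOD_MARKERS.items():
                if token in markers:
                    counts[mood] += 1
    return max(counts, key=counts.get), counts

mood, counts = team_mood([
    "Build is broken again 😠",
    "Need this ASAP, deadline is tomorrow 😫",
    "Ugh, the deploy failed",
])
print(mood, counts)
```

Real products presumably weight, smooth, and anonymize far more carefully, but the input and output are the same: messages in, a team-level mood signal out.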
Keeping tabs on employee happiness is crucial to running a successful business. But counting emoji is unlikely to prevent the next Enron. Does KeenCorp really have the ability to uncover malfeasance through text analysis?
That question brings us back to June 28, 1999. The two men from KeenCorp didn’t realize it, but their algorithm had, in fact, spotted one of the most important inflection points in Enron’s history. Fastow told me that on that date, the company’s board had spent hours discussing a novel proposal called “LJM,” which involved a series of complex and dubious transactions that would hide some of Enron’s poorly performing assets and bolster its financial statements. Ultimately, when discovered, LJM contributed to the firm’s undoing.
According to Fastow, Enron’s employees didn’t formally challenge LJM. No one went to the board and said, “This is wrong; we shouldn’t do it.” But KeenCorp says its algorithm detected tension at the company starting with the first LJM deals.
Today, KeenCorp has 15 employees, half a dozen major clients, and several consultants and advisers — including Andy Fastow, who told me he had been so impressed with the algorithm’s ability to spot employees’ concerns about LJM that he’d decided to become an investor. Fastow knows he’s stuck with a legacy of unethical and illegal behavior from his time at Enron. He says he hopes that, in making companies aware of KeenCorp’s software, he can help “prevent similar situations from occurring in the future.”
I was skeptical about KeenCorp at first. Text analysis after the fact was one thing, but could an analysis of employee emails actually contain enough information to help executives spot serious trouble in real time? As evidence that it can, KeenCorp points to the “heat maps” of employee engagement that its software creates. KeenCorp says the maps have helped companies identify potential problems in the workplace, including audit-related concerns that accountants failed to flag. The software merely provides a warning, of course — it isn’t trained in the Sarbanes-Oxley Act. But a warning could be enough to help uncover serious problems.
Such early tips might also become an important tool to help companies ensure that they are complying with government rules — a Herculean task for firms in highly regulated fields like finance, health care, insurance, and pharmaceuticals. An early-warning system, though, is only as good as the people using it. Someone at the company, high or low, has to be willing to say something when the heat map turns red — and others have to listen. It is hard to imagine Enron’s directors heeding any warning about the use of complex financial transactions in 1999 — the bad actors included the CEO, and we know that whistle-blowers at the company were ignored.
The potential benefits of analyzing employee correspondence must also be weighed against the costs: In some industries, like finance, the rank and file are acutely aware that everything they say in an email can be read by a higher-up, but in other industries the scanning of emails, however anonymous, will be viewed as intrusive if not downright Big Brotherly.
But it is managers who might have the most to fear from text-analysis tools. Viktor Mirovic, KeenCorp’s CFO, told me that the firm’s software can chart how employees react when a leader is hired or promoted. And one KeenCorp client, he said, investigated a branch office after its heat map suddenly started glowing and found that the head of the office had begun an affair with a subordinate.
When I asked Mirovic about privacy concerns, he said that KeenCorp does not collect, store, or report any information at the individual level. According to KeenCorp, all messages are “stripped and treated so that the privacy of individual employees is fully protected.” Nevertheless, Mirovic concedes that many companies do want to obtain information about individuals. Those seeking that information might turn to other software, or build their own data-mining system.
Text analysis is a fledgling technology. It remains unclear how often such tools might suggest a problem when none exists, and not all wrongdoing will register on a heat map, no matter how finely tuned.
Still, a market will surely emerge for services claiming that they can find useful information in our work emails. Adam Badawi, a colleague of mine at UC Berkeley, uses natural-language algorithms to assess regulatory filings. He predicts that text analytics will become part of legal-and-compliance culture as the tools grow more sophisticated. Firms will want to protect themselves from liability by examining employee communications more comprehensively, particularly with respect to allegations of bias, fraud, and harassment. “This is something companies are hungry for,” Badawi told me.
In an ideal world, employees would be honest with their bosses, and come clean about all the problems they observe at work. But in the real world, many employees worry that the messenger will be shot; their worst fears stay bottled up. Text analytics might allow firms to gain insights from their employees while intruding only minimally on their privacy. The lesson: Figure out the truth about how the workforce is feeling not by eavesdropping on the substance of what employees say, but by examining how they are saying it.
|
What Your Boss Could Learn by Reading the Whole Company’s Emails
| 179
|
what-your-boss-could-learn-by-reading-the-whole-companys-emails-19eacb572d8f
|
2018-08-25
|
2018-08-25 01:41:51
|
https://medium.com/s/story/what-your-boss-could-learn-by-reading-the-whole-companys-emails-19eacb572d8f
| false
| 1,830
|
Syndicated stories from The Atlantic.
| null | null | null |
The Atlantic
| null |
the-atlantic
|
POLITICS,TECHNOLOGY,RACE,CULTURE
|
TheAtlantic
|
Technology
|
technology
|
Technology
| 166,125
|
The Atlantic
|
Politics, culture, business, science, technology, health, education, global affairs, and more.
|
969cde9116a3
|
TheAtlantic
| 61,762
| 83
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-17
|
2018-06-17 23:31:53
|
2018-06-19
|
2018-06-19 16:36:31
| 1
| false
|
en
|
2018-06-19
|
2018-06-19 16:36:31
| 7
|
19ee243ab108
| 3.633962
| 1
| 0
| 0
|
Earlier this month, I had the opportunity to learn more about a type of AI called Reinforcement Learning as part of my final project for…
| 5
|
Can Reinforcement Learning Help With Sustainable Development Goals?
Earlier this month, I had the opportunity to learn more about a type of AI called Reinforcement Learning as part of my final project for the ‘AI for International Development’ course at TechChange. My goal was to understand how Reinforcement Learning could be used to help us achieve the Sustainable Development Goals. Why? As advancements in AI continue at an accelerated pace, I believe we need to do more to ensure that technological progress benefits everyone. That starts with knowing the problems we need to solve and the questions to ask about the available tools (e.g. AI).
This post is for those in the humanitarian sector who are interested in learning about AI, and for developers who have the technical expertise we need to solve some of the toughest humanitarian and development problems. It’s not meant to be an exhaustive overview of Reinforcement Learning or of how it could be used to achieve the SDGs, but a brief introduction that I hope will inspire humanitarian and tech experts to work together.
Here is an excerpt from my conversation with Cyrill Glockner, Director of Technical Product Management at Bonsai AI, a Reinforcement Learning startup.
—
What is Reinforcement Learning?
Reinforcement Learning is a type of Machine Learning based on the concept of learning to accomplish a goal by interacting with an environment without prior knowledge of it. An agent (e.g. a player in a game, or a robot) performs actions in an environment and receives rewards for getting closer to a defined goal. As the agent is rewarded for good behavior, it learns to make good decisions for a given state in the environment. In practice, a large number of iterations may be required to train an agent, which makes it impractical and potentially risky to use the real world as a training environment. Instead, we use simulations that mimic the real world as closely as possible; once the agent is trained in the simulated world, we can apply it in the real world. For example, we can teach a robotic arm to stack blocks on top of each other, or teach a robot to turn off a valve in a hazardous environment.
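The loop described above (the agent acts, the environment responds, and a reward signal updates the agent's estimates) can be sketched in a few lines of Python. This is a toy tabular Q-learning example, not production code; the corridor environment and every hyperparameter are illustrative:

```python
import random

# Toy 1-D "corridor" environment: states 0..4, the agent starts at
# state 0 and earns a reward only for reaching state 4.
N_STATES = 5
ACTIONS = (-1, +1)  # step left / step right

def step(state, action):
    """Apply an action; reward 1.0 only when the goal state is reached."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if random.random() < epsilon:   # explore occasionally
                action = random.choice(ACTIONS)
            else:                           # exploit, breaking ties randomly
                action = max(ACTIONS, key=lambda a: (q[(state, a)], random.random()))
            next_state, reward, done = step(state, action)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            # Q-learning update: move the estimate toward
            # reward + discounted best future value
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

q = train()
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # after training, every state should prefer stepping right (+1)
```

The same structure scales up: replace the corridor with a physics simulation and the table with a neural network, and you have the training setup used for robotic arms and game-playing agents.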
How is Reinforcement Learning different from other types of Machine Learning?
Most of today’s broadly used AI applications are based on supervised learning: they rely on the availability of labeled data, which takes significant effort to collect and label. Reinforcement Learning does not require labeled data; instead, it needs an environment that can be used for training, such as a simulation.
Reinforcement Learning also allows us to explore new actions to take in a given environment. For example, the agent that Google DeepMind developed to play the game Go surprised expert Go players with moves that were not known to humans before. The agent was able to explore a larger action/state space than human Go players and came up with better moves.
Reinforcement Learning should not be confused with creative intelligence; it is simply a wider and deeper exploration of potential actions using the same reward function as human players (i.e. winning the game).
What are examples of current use cases for Reinforcement Learning?
Reinforcement Learning can be applied to any use case that requires decision making in a somewhat unpredictable environment. Examples include robotics, machine control and tuning (e.g. HVAC), supply-chain optimization (e.g. AI decides how much of a specific item to order, from where, and when), and crop-yield optimization. In almost all cases, Reinforcement Learning requires a simulated environment for training.
How can Reinforcement Learning be used to achieve SDGs?
Reinforcement Learning can help us achieve some or all of the Sustainable Development Goals by training robots, drones, and smart devices to make more optimal decisions in complex use cases. For example:
SDG 3: Good health and well-being
Develop and train smart drones to autonomously deliver medicine and other supplies to remote and potentially dangerous areas.
SDG 9: Industry, innovation and infrastructure
Train robots to learn how to work effectively with human workers.
SDG 12: Responsible consumption and production
Build smart environments that can grow food autonomously anywhere on the planet. We could use Reinforcement Learning to optimize decision making for water, nutrition, light exposure, seed selection and other factors.
SDG 14: Life below water
Pattern recognition can track marine-life migration, population levels, and fishing activities to enhance sustainable marine ecosystems and combat illegal fishing. Reinforcement Learning could also be used to create an autonomous ocean-cleaning UUV that takes plastic out of the ocean without the need for human control.
Those are just a few examples of potential applications of Reinforcement Learning to help us achieve the SDGs. While the potential is significant, increased automation can also affect humanity in negative ways, from perpetuating discrimination and inequality to amplifying bias. When using AI, I recommend reflecting on the potential ethical issues your solution might cause and reviewing human-rights principles before starting the project. In summary, we need to work together to ensure that we create solutions that benefit everyone and proactively address the negative impacts of AI on humanity (e.g. SDG 8: Decent work and economic growth).
—
For further reading on the subject of Reinforcement Learning and how it can be applied, check out the following resources:
Reinforcement Learning
Sustainable Development Goals
Simulators: The Key Training Environment for Applied Deep Reinforcement Learning
|
Can Reinforcement Learning Help With Sustainable Development Goals?
| 50
|
can-reinforcement-learning-help-with-sustainable-development-goals-19ee243ab108
|
2018-06-19
|
2018-06-19 20:40:26
|
https://medium.com/s/story/can-reinforcement-learning-help-with-sustainable-development-goals-19ee243ab108
| false
| 910
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Leila Toplic
| null |
473c4126caf0
|
leilatoplic
| 10
| 8
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-12
|
2018-06-12 19:28:13
|
2018-06-12
|
2018-06-12 19:28:41
| 0
| false
|
en
|
2018-06-12
|
2018-06-12 19:28:41
| 3
|
19efae70a9f5
| 1.841509
| 0
| 0
| 0
|
This first appeared as an SIIA Blog.
| 5
|
Google’s AI Principles are a Step Forward
This first appeared as an SIIA Blog.
Last week, Google released a blog post setting out seven ethical principles to guide its work in artificial intelligence. The principles are:
Be socially beneficial. This is essentially a social welfare test under which Google will move ahead with an AI project only when “the overall likely benefits substantially exceed the foreseeable risks and downsides.” Moreover, in certain cases, Google will make their technologies “available on a non-commercial basis.”
Avoid creating or reinforcing unfair bias. This commits Google to conduct disparate impact analyses and to “seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.”
Be built and tested for safety. Google will use “strong safety and security practices to avoid unintended results that create risks of harm.” This includes in appropriate cases, continuing to “monitor their operation after deployment.”
Be accountable to people. Google will provide “opportunities for feedback, relevant explanations, and appeal.” It will subject its AI technologies “to appropriate human direction and control.”
Incorporate privacy design principles. Google will embrace good privacy practices, including to “give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.”
Uphold high standards of scientific excellence. Google will embrace the traditional scientific standards of “open inquiry, intellectual rigor, integrity, and collaboration.” It “will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.”
Be made available for uses that accord with these principles. Google will seek to limit harmful applications of its technologies and assess likely harm, in particular by assessing whether the primary purpose and likely use of an AI technology is related to or adaptable to a harmful use.
Much of the public discussion of the announcement has focused on Google’s application of these principles to particular cases, in particular to its decision that it would not pursue projects related to “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”
But the framework is the important thing. Google has proposed ethical guidelines that largely mirror the recommendations from SIIA in its Ethical Principles, released in November 2017. They focus the company on the important task of developing and implementing its technology in a way that comports with widespread ethical norms.
Is this enough? Of course not! The proof of the pudding is in the eating, and so much will depend on how Google implements these thoughtful principles. It must also follow up on its pledge for public accountability in ways that preserve its own integrity and independence while reassuring the public and policymakers that it is a responsible steward of this powerful new technology. But endorsing this ethical framework is a clear positive step forward.
|
Google’s AI Principles are a Step Forward
| 0
|
googles-ai-principles-are-a-step-forward-19efae70a9f5
|
2018-06-12
|
2018-06-12 19:28:41
|
https://medium.com/s/story/googles-ai-principles-are-a-step-forward-19efae70a9f5
| false
| 488
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Mark MacCarthy
|
SVP for Public Policy, Software & Information Industry Association; Adjunct Professor, Communication, Culture & Technology Program, Georgetown University
|
4317359941eb
|
maccartm
| 8
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-03-02
|
2018-03-02 08:04:39
|
2018-03-02
|
2018-03-02 08:07:32
| 1
| false
|
th
|
2018-03-02
|
2018-03-02 08:07:32
| 2
|
19f0ee2d1aa6
| 0.833962
| 0
| 0
| 0
|
What is AI? It’s a term we’ve heard for a very long time, perhaps more often in recent years. What did you picture the first time you heard it…
| 3
|
When We Talk About AI, What Comes to Mind?
What is AI? It’s a term we’ve heard for a very long time, perhaps more often in recent years. What did you picture the first time you heard it? Many people think of the intelligent robots we’ve seen on television and in films, something that has nothing to do with us. But in truth we use AI in daily life almost without realizing it, because it has seeped into every kind of activity: the technology industry, manufacturing, medicine, transportation, and many other fields, even communications, marketing, sales, and customer service. Cutting-edge products born of AI still make up only about 5% of the world market, but from here on AI will matter to every business sector for the next 5–20 years. So, can you now spot where that 5% of AI actually sits in your own life?
Where does AI come from?
The term AI stands for Artificial Intelligence. It is a branch of computer science that experts and researchers have worked hard to develop, striving to make it intelligent, well suited to its tasks, and rich in capability. The field did not begin only a few years ago: the idea has roots going back to ancient Greece and has been developed and handed down over hundreds of years to reach the present day. We believe that from 2018 onward, artificial intelligence will play an ever-greater role, until you can hardly remember how we ever lived without this innovation.
How does AI work? It is computer software developed to have its own logic of thought, an intelligent stand-in for a human that can work and reason its way through problems. That intelligence can reason, learn, plan, and display other abilities, such as processing the data we feed it or automatically producing results from existing data; in effect, it imitates the neural networks of the human brain. Today AI is highly accurate, with errors rarely found, and it can finish work within a set time and keep working around the clock, 24/7.
In short, AI will unavoidably come to play a role in everyone’s life, because it is the most remarkable invention in human history.
AI is not as far away as you think
You must have used it at least once in your life. Have you ever used a voice assistant such as Apple Siri, Google Now, or Microsoft Cortana, or voice typing in LINE? Have you heard of conversational actions on smart speakers such as Google Home or Amazon Echo? All of these have AI as part of the system, letting us give commands and get responses for everything from streaming music and listening to the radio to controlling home appliances (turning them on and off and adjusting the temperature) and other functions. They can also manage our schedules, remind us of appointments, hail a ride, recommend restaurants, and check the weather and traffic before we travel. All of this comes from AI development.
There are also chatbots, assistants that answer customers’ basic questions at any hour, so business runs continuously without interruption. In Thailand, the Wongnai app uses a chatbot that can converse and point you to restaurants; it has risen to No. 1 in Thailand, opening a new chapter as a full lifestyle platform where you can pay through e-payment apps or have food delivered via LINE MAN. So yes, we really have had the chance to use AI in our lives.
In the next article we will look at the future of AI, examples of business sectors worth watching, and the reasons businesses need to start getting to grips with this technology.
Written by: Matana Wiboonyasake | Digital Marketing Executive | Aware Corporation Inc.
|
When We Talk About AI, What Comes to Mind?
| 0
|
เมื่อเราพูดถึง-ai-คุณนึกถึงอะไร-19f0ee2d1aa6
|
2018-03-02
|
2018-03-02 08:07:33
|
https://medium.com/s/story/เมื่อเราพูดถึง-ai-คุณนึกถึงอะไร-19f0ee2d1aa6
| false
| 168
| null | null | null | null | null | null | null | null | null |
Technology
|
technology
|
Technology
| 166,125
|
Matana Wiboonyasake
| null |
cde935fb72a5
|
p.matanaw
| 3
| 4
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-16
|
2017-11-16 08:54:51
|
2017-11-16
|
2017-11-16 09:08:05
| 5
| false
|
en
|
2017-11-16
|
2017-11-16 09:08:05
| 10
|
19f1f8137602
| 4.878616
| 1
| 0
| 0
|
Note: See the original article here!
| 5
|
How to Quickly Explain OCR to Your Boss
Note: See the original article here!
So you’ve decided to integrate mobile OCR for your use case. That’s a great start. You’ll soon see improvements in your processes or increased engagement in your marketing campaign. But what if you need to convince your boss that mobile OCR really is going to make the difference? Are you ready to explain OCR?
You should get all the facts first. OCR is simple to explain, but the real power of mobile OCR solutions such as Anyline goes much further. Here’s a quick guide to help you understand and explain mobile OCR to anyone, including your boss!
What are the Basics of OCR?
What Does OCR Mean?
OCR stands for optical character recognition. The most basic explanation for what that means is:
“Technology that can read text and numbers.”
Mobile OCR does a little bit more than this but it’s a good starting point. You can use OCR to scan handwritten or printed text, and then turn it into a digital output such as a PDF or text file. More importantly, your scanned text and numbers will become machine readable. Once your text or code becomes machine readable, it can be interpreted by computers.
How Does it Work?
There are numerous ways to perform OCR. If you’re considering a mobile solution, such as Anyline, then you can probably guess that your mobile device’s camera is involved.
To put it simply, you point your device camera at your scan target and Anyline takes care of the rest. Our mobile OCR relies on a neural network to process and interpret the characters you want it to recognize.
A neural network is a form of artificial intelligence that can be trained to perform certain tasks. Anyline’s neural network is designed to correctly identify all written characters. Anyline’s neural network can scan and record text much faster than a human and is more accurate over multiple scans.
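To make the pixels-in, characters-out idea concrete, here is a deliberately oversimplified sketch that matches a scanned glyph against fixed templates by counting differing pixels. Real engines such as Anyline use trained neural networks rather than hand-made templates; the 5×3 glyphs below are invented for illustration:

```python
# Each character is a 5x3 binary pixel grid ("#" = ink, "." = blank).
TEMPLATES = {
    "0": ("###",
          "#.#",
          "#.#",
          "#.#",
          "###"),
    "1": (".#.",
          "##.",
          ".#.",
          ".#.",
          "###"),
    "7": ("###",
          "..#",
          ".#.",
          ".#.",
          ".#."),
}

def pixel_distance(a, b):
    """Hamming distance between two glyphs: count of differing pixels."""
    return sum(pa != pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def recognize(glyph):
    """Return the template character closest to the scanned glyph."""
    return min(TEMPLATES, key=lambda ch: pixel_distance(TEMPLATES[ch], glyph))

# A noisy "7" with one flipped pixel still resolves to the right character.
noisy_seven = ("###",
               "..#",
               ".#.",
               "##.",   # one pixel flipped here
               ".#.")
print(recognize(noisy_seven))
```

A neural network replaces the fixed templates with learned features, which is what lets it cope with fonts, handwriting, glare, and skew that rigid templates cannot.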
Of course, the simple version of this for your boss or other team members is:
“It uses a mobile device’s camera to read your text or code.”
What Do We Do with the Scans?
After giving a basic explanation of what OCR is and how it works, the only part of the puzzle left to explain is your scan data.
If your process already involves collecting data, this part is easy. You can collect your scan data directly after a scan is performed and replace your previous reporting process with mobile OCR. If you’re adding scanning to your processes, there are a number of benefits that scanning has over manual data entry.
Scanning is fast. Scanning and uploading your data can be 20 times faster than manual data reporting and entry.
Scanning is more accurate. Mobile devices never get tired or confused when scanning, so none of your readings will be misreported. They offer the same level of scanning quality on a consistent basis. Anyline’s scan quality is 99% under laboratory conditions.
Scanning with Anyline is secure. While not all mobile OCR solutions can offer secure scanning as a standard feature, Anyline does. None of your scan data needs to be uploaded for processing, and you can even perform scans offline.
How Can it Help your Business?
Once you explain the nuts and bolts of what OCR is, you may still be required to explain how it will improve business. And depending on your industry, the reasons may vary.
If you need to scan license plates, you can explain how mobile OCR is the perfect tool for scanning from any location, day or night. If you need to scan passports, let your boss know that Anyline lets you scan offline for increased security.
You can adapt Anyline for other industries too. Here are some of the general benefits that your boss should know about.
Technical Benefits
Anyline works on the majority of Android and all iOS devices. You can use your own phone as a scanner rather than spending money on dedicated scanners and the training to use them.
You can easily style the Anyline SDK to match the look and feel of your app. This will help to create a seamless user experience within your product.
You can also use Anyline in all conditions. This is thanks to the offline scanning mode and the torch access function that are built into Anyline apps. You can try scanning the utility meter in your basement or cellar to see for yourself.
Business Benefits
Mobile OCR can do more than just improve your processes. You can use it to position yourself as an innovation leader within your industry. Anyline’s mobile OCR is cutting edge technology and offers a brand new experience to your users.
You can also use Anyline to gather information on user behavior. This will help you to learn about your customers and how to create a positive user experience for them.
Best of all, Anyline is reliable. We pride ourselves on providing a solution that never quits and works for as many use cases as possible. You can check out our demo apps if you want to get a taste of what to expect.
How Can You Take Advantage of Mobile OCR?
You’ll be glad to hear that Anyline provides a simple mobile OCR SDK that you can easily add to your project. You can get started for free and test the SDK in just a few hours. You can get your free SDK download here.
The SDK includes dedicated modules for some of the most common mobile OCR use cases like MRZ scanning, utility meter reading, and license plate scanning. If you have questions about adapting Anyline to your use case, just let us know. We’re happy to help!
Ready to Explain Mobile OCR?
At this stage, you should know everything you need to know to quickly explain OCR. If you want a one-line answer that covers almost everything, you can tell your boss:
“Mobile OCR is scanning technology that uses a device’s camera to read text and numbers.”
If you need more help explaining what mobile OCR or Anyline does to improve processes, write us an email at sales@anyline.com. We’re happy to talk to your boss for you!
QUESTIONS? LET US KNOW!
If you have questions, suggestions or feedback on this, please don’t hesitate to reach out to us via Facebook, Twitter or simply via hello@anyline.com! Cheers!
Originally published at anyline.com.
|
How to Quickly Explain OCR to Your Boss
| 1
|
how-to-quickly-explain-ocr-to-your-boss-19f1f8137602
|
2018-03-13
|
2018-03-13 12:55:22
|
https://medium.com/s/story/how-to-quickly-explain-ocr-to-your-boss-19f1f8137602
| false
| 1,072
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Michael Organ
| null |
682a85d157f2
|
michael_70156
| 3
| 2
| 20,181,104
| null | null | null | null | null | null |
0
|
install.packages("tidyverse")
library(tidyverse)

# Subsample 1,000 rows so individual points stay readable
ggplot(diamonds %>% sample_n(1000), aes(x = price, y = carat, color = cut)) +
  geom_point()

# Full data set: "alpha" sets transparency, "size" sets point size
ggplot(diamonds, aes(x = price, y = carat, color = cut)) +
  geom_point(alpha = 0.1, size = 0.1)

ggplot(economics, aes(x = date, y = psavert)) +
  geom_line(size = 0.3)

# Without normalization, the uempmed line sits so close to the x-axis
# that it is barely visible, giving a skewed graph
ggplot(economics, aes(x = date)) +
  geom_line(aes(y = uempmed)) +
  geom_line(aes(y = unemploy))

# With normalization: dplyr's mutate() creates two new variables
# ("duration" and "norm_unemployed") as ratios of each series to its
# period mean. Since date is common to both lines, it stays in ggplot();
# each geom_line() then gets its own y aesthetic and a color to tell
# the two series apart.
ggplot(economics %>% mutate(duration = uempmed / mean(uempmed),
                            norm_unemployed = unemploy / mean(unemploy)),
       aes(x = date)) +
  geom_line(aes(y = duration), color = "gold4") +
  geom_line(aes(y = norm_unemployed), color = "purple")

# uempmed vs. unemploy, with date mapped to color to mark time
ggplot(economics, aes(x = uempmed, y = unemploy, color = date)) +
  geom_point()

ggplot(diamonds %>% sample_n(1000), aes(x = carat, y = price)) +
  geom_point(aes(color = cut)) +
  scale_color_brewer()

ggplot(diamonds, aes(x = carat, y = price, color = cut)) +
  geom_point(size = 0.3) +
  scale_color_brewer(palette = "Spectral") +
  geom_smooth(method = "loess", se = FALSE) +
  ggtitle('Diamonds data set: "Spectral" Color palette + Smooth method "loess" ')

ggplot(diamonds, aes(x = carat, y = price, color = cut)) +
  geom_point(size = 0.3) +
  scale_color_brewer(palette = "Spectral") +
  geom_smooth(method = "lm", se = FALSE) +
  ggtitle('Diamonds data set: "Spectral" Color palette + Smooth method "lm" ')

ggplot(diamonds, aes(x = carat, y = price, color = cut)) +
  geom_point(size = 0.3) +
  scale_color_brewer() +
  geom_smooth(method = "loess", se = FALSE) +
  ggtitle('Diamonds data set: Default Color palette + Smooth method "loess" ')
| 14
| null |
2017-10-21
|
2017-10-21 21:44:00
|
2017-10-21
|
2017-10-21 21:47:14
| 10
| false
|
en
|
2017-10-21
|
2017-10-21 22:11:02
| 0
|
19f31836dce1
| 3.533019
| 7
| 0
| 0
|
Plotting : Exploratory data analysis
| 3
|
ggplot , dplyr, pipes: Few tips and tricks
Plotting : Exploratory data analysis
Dotplots: diamonds data
Before doing any plots, ensure that your R installation has "tidyverse" installed. The lines below install it (if needed) and load the library:
install.packages("tidyverse")
library(tidyverse)
Using the diamonds data (part of the tidyverse package), plot price vs. carat and color the points according to cut. You may want to use sample_n() to get a subsample:
Now plot all datapoints. To make it look better, try setting a small point size and transparency (alpha) value :
Economics data
Using the economics data, plot the personal savings rate (psavert) over time:
Plot median unemployment duration (uempmed) versus the number of unemployed (unemploy) over time. Normalize both variables so that they are both visible on the plot. (Hint: one easy scheme is to normalize each with respect to its period mean, i.e. something like var / mean(var)):
Plot median duration of unemployment (uempmed) vs. number of unemployed people (unemploy), using a different aesthetic to mark time:
Smoothers & Colors
Using the diamonds data, plot “carat” vs. “price” across cuts using an RColorBrewer palette. For clarity, a subsample of 1000 points is taken:
Using the diamonds data, plot “carat” vs. “price” and add smoothers across cuts. Use an RColorBrewer palette and try tinkering with the aesthetics to get a nice figure:
|
ggplot , dplyr, pipes: Few tips and tricks
| 56
|
ggplot-1-19f31836dce1
|
2018-03-21
|
2018-03-21 02:26:52
|
https://medium.com/s/story/ggplot-1-19f31836dce1
| false
| 605
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Manas Thakre
|
Biotech Engineer on papers and the most religious Apple FanBoy : 25% time spent on thinking, 25% on arguments, 25% struggling with self & 25% on technology
|
bdbb024605ad
|
manasthakre
| 0
| 4
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-10-04
|
2017-10-04 21:55:35
|
2017-10-04
|
2017-10-04 22:47:49
| 2
| false
|
en
|
2017-10-04
|
2017-10-04 22:47:49
| 1
|
19f34f85e344
| 2.515409
| 2
| 0
| 0
|
I started a company called Artificial Intelligence Software Solutions (https://www.ai-ss.org) with which I aim to provide AI solutions to…
| 3
|
“Elon Musk is an idiot”
Is AI dangerous or its implementation?
I started a company called Artificial Intelligence Software Solutions (https://www.ai-ss.org) with which I aim to provide AI solutions to people and businesses. I am getting my hands dirty with Machine Learning (mainly deep neural networks) and Natural Language Processing. Whenever I say something of that sort to someone, I have heard them mention Elon Musk way too frequently. There have been times when I was so tired of holding a conversation about Musk that I have said “Elon Musk is an idiot” and walked away.
Terminator was the first film I saw and it dampened my imagination of a friendly robot I had gathered from Small Wonder. Maybe the same thing happened to Musk but maybe he watched it too many times. It is not completely unjustified to say that he has been fearmongering a little too much instead of talking about legitimate things to fear regarding AI. It is important to discuss his discourses because there are legitimate reasons to have fears in the AI domain.
The main problem I have with Musk is that he presents Artificial Intelligence to the public as a villain. That seems wrong to me. AI is a tool that can be used for the benefit of human society. It could be used for the destruction of our lives too, but it doesn’t have to be. Hence, it is clear that AI in itself is not something to be afraid of. It is the implementation that we should be afraid of.
The letter he signed with other experts regarding “killer robots” makes sense. If AI is used to decide who lives and who dies, even a single mistake means loss of a human life. Moreover, the ease with which governments will start killing people without remorse makes me uneasy. However, again, it is an implementation of AI we must avoid.
We don’t have to avoid AI altogether. Musk is wrong to say “Artificial Intelligence is our biggest existential threat” and to have it printed in newspapers thanks to his status. Maybe he is dumbing it down for people, but I would really appreciate his work in this area if he said “Artificial Intelligence used immorally by humans is our biggest existential threat”. If he starts understanding the difference between these two sentences, he will gain more support from the AI community to tackle what needs to be tackled. People in the field will know that he is not just scared but is making sense with his fears.
Two things are important to regulate regarding AI: loss of old jobs and weaponization of AI. When companies start using AI, many workers will be replaced and their lives will be hard. For this problem, there must be regulation ensuring that proper compensation is given to laid-off workers by the benefiting company, so that they have a safety net while they retrain for a new type of job. Regarding weaponization, it should be clear to everyone how dangerous that is. The escalation of an arms race and even a few mistakes by an AI would be disastrous. There must be another regulation ensuring that AI is never used with any weapon or anything else that causes harm to humans.
AI needs regulation, but because of the implementation, not the tool/technique itself. Saying otherwise is either fearmongering or an act of idiocy.
|
“Elon Musk is an idiot”
| 2
|
elon-musk-is-an-idiot-19f34f85e344
|
2018-02-07
|
2018-02-07 23:33:43
|
https://medium.com/s/story/elon-musk-is-an-idiot-19f34f85e344
| false
| 565
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Sailesh Dhungana
|
Programmer and AI Enthusiast. Princeton ‘15.
|
5f8f3243795b
|
saileshdhungana
| 41
| 41
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-13
|
2018-07-13 20:05:01
|
2018-07-13
|
2018-07-13 20:30:51
| 6
| false
|
en
|
2018-07-13
|
2018-07-13 20:30:51
| 6
|
19f425b146b2
| 2.104717
| 1
| 0
| 0
|
NBAI.io is Now Online!
| 3
|
Weekly Update July 13th: NBAI foundation website online. Nebula AI at StartupFest, And Future Cooperation with LinkCoin
NBAI.io is Now Online!
The NBAI foundation website is online! Have a look at the website!
Team Updates
This week, the AI team continued to optimize the distributed AI model mechanism. The team also made progress in its data encryption research and carried on DAI App testing on the mining machine.
The Blockchain team is hard at work optimizing the AI computing client of Nebula AI, as well as working on the integration of the client system. This week, the team has also been focused on developing the new Quant AI.
The Platform team’s focus this week was to get the Foundation website and NBAI blog online, both of which you can now visit! They have also entered the testing phase for our upcoming NBAI Store.
Community and Event News
Nebula AI is participating in the KuCoin listing competition. Read more about how you can vote in our blog post. Voting ends July 15.
Nebula AI at StartupFest
Our Director of Global Marketing, Ivan, represented Nebula AI at the StartupFest event in Montreal. StartupFest is Canada’s leading annual “must attend” event that brings together aspiring founders, ground-breaking innovators and veteran entrepreneurs from around the world.
Nebula AI also attended AIFest, a two-day event which explores the future of AI and its impact on society and business.
Future Cooperation Between Nebula AI and LinkCoin
On July 12th, the LinkCoin team visited the Montreal office of Nebula AI. They discussed future cooperation in technology development, and marketing.
Charles at the Blockchain Innotech Summit in Hong Kong
On July 13, Charles Cao attended the Blockchain Innotech Summit in Hong Kong, China. The event gathered blockchain companies from around the world to discuss the development of blockchain in the Asia-Pacific.
|
Weekly Update July 13th: NBAI foundation website online.
| 50
|
weekly-update-july-13th-nbai-foundation-website-online-19f425b146b2
|
2018-07-13
|
2018-07-13 20:30:51
|
https://medium.com/s/story/weekly-update-july-13th-nbai-foundation-website-online-19f425b146b2
| false
| 306
| null | null | null | null | null | null | null | null | null |
Weekly Report
|
weekly-report
|
Weekly Report
| 544
|
Nebula-AI
|
Nebula AI is a Montreal based decentralized blockchain platform integrated with Artificial intelligence and sharing economics.
|
138d9f98bcf0
|
nebulaai
| 52
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
9d7f1bd16b14
|
2018-09-14
|
2018-09-14 06:56:06
|
2018-09-14
|
2018-09-14 13:36:42
| 2
| false
|
en
|
2018-09-15
|
2018-09-15 02:25:33
| 1
|
19f519b26c07
| 1.451258
| 1
| 0
| 0
|
Nobody has the time to dig through chunks of information to find what they really need. With Ream, you can find information faster —…
| 5
|
Search less, do more
Nobody has the time to dig through chunks of information to find what they really need. With Ream, you can find information faster — whether it’s a news article, inspirational talk, a professional who can help with a task, tutorial video, and more.
We are working to bring you more personalized information that is relevant to you, based on your interests, the content you interact with, and the conversations you participate in. And all of that comes with a nifty smart interface that makes sifting through results and finding what you need much more efficient.
Here’s how it works.
AI-powered curated feed
Our home feed uses AI to suggest content tailored to your particular interests and to how you engage with it. Suggested filters help you narrow down to what you need immediately.
If you don’t know what you’re looking for right away, just scroll to discover more content. There, content is categorized — by news, people, videos, blogs, and services — so you can locate the exact piece of content you need.
Voice-First, Conversational Interface
We have designed a voice-first interface to simplify how you interact with our system. Humans have been developing the art of conversation for thousands of years; a skill we use everyday and continue to improve for the rest of our lives.
Ream acts as your own Professional Assistant, ready to help with your tasks. To get started, just say “Hey Ream, I need a photographer” or “I am looking for somebody to design a logo”.
|
Search less, do more
| 1
|
less-searching-more-doing-19f519b26c07
|
2018-09-15
|
2018-09-15 02:25:33
|
https://medium.com/s/story/less-searching-more-doing-19f519b26c07
| false
| 283
|
Ream simplifies your professional life with Artificial Intelligence. Optimized to work with Amazon Echo, Google Home, Apple HomePod. It has a Conversational Interface to help you with your everyday tasks.
| null | null | null |
Ream
|
r@ream.ai
|
ream
|
ARTIFICIAL INTELLIGENCE,FUTURE OF WORK,VOICE ASSISTANT,PERSONAL ASSISTANT
|
reamapp
|
Automation
|
automation
|
Automation
| 9,007
|
Hilal Agil
|
Co-Founder & CEO of Ream, co-founder of Nothnegal, thinker
|
3afd0908fe16
|
hilaarl
| 69
| 383
| 20,181,104
| null | null | null | null | null | null |
0
|
// example.cpp: a minimal pybind11 extension module
#include <pybind11/pybind11.h>

int add(int i, int j) {
    return i + j;
}

namespace py = pybind11;

// PYBIND11_PLUGIN is the module-definition macro used here
// (newer pybind11 releases replace it with PYBIND11_MODULE)
PYBIND11_PLUGIN(example) {
    py::module m("example", "pybind11 example plugin");
    m.def("add", &add, "A function which adds two numbers");
    return m.ptr();
}
cmake_minimum_required(VERSION 2.8)
project(example)

include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
include_directories(SYSTEM ${CONAN_INCLUDE_DIRS})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_RELEASE ${CMAKE_RUNTIME_OUTPUT_DIRECTORY})
$ cmake .. -DCMAKE_BUILD_TYPE=Release
$ cmake --build .
$ python
>>> import example
>>> example.add(2, 3)
5
| 8
| null |
2018-01-21
|
2018-01-21 14:04:30
|
2018-01-21
|
2018-01-21 21:36:39
| 1
| false
|
en
|
2018-01-21
|
2018-01-21 21:36:39
| 2
|
19f5c5779a5f
| 2.373585
| 3
| 0
| 0
|
Undoubtedly , Python is the most popular programming language to build Deep Learning models. It supports myriads of different open source…
| 5
|
Pybind11 : A Handy tool for Deep Learning Practitioners
Undoubtedly, Python is the most popular programming language for building Deep Learning models. It supports myriad open source data science libraries that ease any programmer’s life with flexible, easy-to-understand APIs for rapid prototyping.
When it comes time to turn a rapid prototype into a scalable product, performance matters. C++ is the magic word that comes to the rescue here: from C++ comes performance, and from Python, simplicity and speed of development. Mixing them brings the best of both worlds into our development environment. Pybind11 is one such great solution: it creates Python bindings to C++ libraries, so you can expose any of their APIs and call them from the Python environment.
Let’s say you want to write a fast GPU matrix multiplication library in C++ for faster convolutions, and create Python bindings to use it in a Python environment.
Or take another example: writing simple Python bindings with pybind11 that expose your C++ deep model class and a C++ save function that serializes the model to disk in binary format using Cereal.
The relationship between Python and C++ can go in both directions:
Extending Python is the process of creating Python extensions programmed in C or C++ in the form of shared libraries (.pyd on Windows, .so on *nix) that can be imported and executed from the Python environment. These binary native extensions typically contain performance-critical code, as well as functionality previously implemented in C/C++ libraries that we want to expose to the Python environment.
Embedding python is the process of creating a native application, developed in C or C++ that can execute python code, using the python interpreter.
Both tasks can be achieved at a “low level” using the CPython Python/C API, but there exist many other tools that try to improve on this process.
Pybind11 is the easiest possible solution to achieve the above goals.
Getting started
Getting started with pybind11 is really simple. I have created a conan package, but as pybind11 is header-only, it is also easy to use by simply cloning the sources and pointing your build at the include directory.
Hello world Python/C++ App
We are just using the code provided in the pybind11 documentation, a simple integer-addition plugin, in a file called example.cpp:
We will be using cmake for building the extension, with the provided CMakeLists.txt:
The only change made to the original CMakeLists.txt is to define the include directories for the pybind11 headers and define the output directory as the current one, so our extension is in the current folder.
On Linux, you could use the following commands to build it. CMake may automatically find Python, but please check the CMake output to ensure that it is finding your desired Python distribution.
And that’s it, we already have the extension ready to be used from python:
References
pybind11 — Seamless operability between C++11 and Python: pybind11.readthedocs.io
|
Pybind11 : A Handy tool for Deep Learning Practitioners
| 99
|
pybind11-a-handy-tool-for-deep-learning-practitioners-19f5c5779a5f
|
2018-05-01
|
2018-05-01 19:10:33
|
https://medium.com/s/story/pybind11-a-handy-tool-for-deep-learning-practitioners-19f5c5779a5f
| false
| 576
| null | null | null | null | null | null | null | null | null |
Python
|
python
|
Python
| 20,142
|
Muzahid Hussain
|
Distributed Systems , Image Processing, AI , Machine Learning , Bayesian Veteran, R&D Engineer , Dassault Systems,Germany ( Solve AI or Die Trying)
|
6f4609ff45c0
|
mhussain.univstuttgart
| 5
| 20
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-10-26
|
2017-10-26 09:55:40
|
2017-10-26
|
2017-10-26 09:56:48
| 1
| false
|
en
|
2017-10-26
|
2017-10-26 09:56:48
| 7
|
19f5f71f8813
| 2.803774
| 1
| 0
| 0
|
For businesses, it has become compulsory to solve the queries and quandaries of the customers to ascertain consumer adhesion along with the…
| 5
|
Why your Business needs a Chatbot , Get a Chatbot Developed Now!
For businesses, it has become essential to solve customers’ queries and problems in order to build consumer loyalty and establish the brand. And just as in earlier times, people have turned to machines to overcome human limitations. This time it is the customer service industry that has been revolutionized, and the innovation responsible is the chatbot. Chatbots are considered the future of customer service and management.
Chatbots scale up your operations. They do not suffer from the limitations of a human agent. Where live agents can handle only 2 to 3 conversations at a time, chatbots can operate without an upper limit. By employing chatbot solutions to complement your human task force, your business can get the boost it requires to enter new markets.
“ We don’t just help companies build bots, we create Remarkable Experiences. ”
“By 2021, Gartner predicts that 40% of new enterprise applications will include AI technologies. AI and Machine Learning promise to solve a plethora of problems faced by enterprises today, from better decision making to increased efficiency and cost savings.”
Design is at the core of what we do, and we know how to leverage the power of this new conversational paradigm. We won’t turn your website or app into a bot, instead we will find and create amazing user experiences that leverage conversations.
The Cubet support bot aggregates data to illustrate your customer-experience journeys in real time, visualizing and pointing out knowledge gaps. Its leading-edge analytics enable businesses to identify and address these gaps and optimize where needed, bridging the gap between consumers and brands.
Features of the ALIS Chatbot
ALIS is a chatbot created by Cubet. Want to know more? http://alis-bot.com/
Conversational bots
Make self-service more human via technology that uniquely combines AI, NLP and Machine Learning.
Multi-Language support
The chatbot platform supports neural language models for multiple languages.
Deploy in Multiple Channels
Build once and instantly make your bot available on multiple channels like websites, Facebook Messenger, etc.
Interactive Smartcards
Don’t be restricted to just text — make your bots expressive with rich interactive cards that can render rich data in a button model, generating a great user experience.
Multiple Industry domain handlers
Bots can be trained by running them through industry-specific domain handlers. Our bots can handle linguistic, colloquial and domain-specific context (Banking, Insurance, Retail, Automotive, Healthcare, Education etc.) for more meaningful responses.
Real-time Analytics
Don’t just track who’s using the bot and what requests are flowing through different channels — more importantly, use the feedback from failed conversations to train the bot to respond better.
Automation technologies are taking over all the spheres of our lives, be it the development of smart cities, smart homes, automated workspaces or technologies like smartphones and digital personal assistants. With every new development, we are moving a step closer to a more connected and digital future. Industry experts are unanimous in their opinion that the chatbot technology is still in its infancy. We are only scratching the surface of what a chatbot-enabled future may look like.
One thing is pretty clear though: chatbots are here to stay, and their development will impact both businesses and consumers. The present implementation of chatbots in the customer service industry offers businesses a doorway to understanding the future uses of chatbots for different aspects of business operations. Moreover, it is only through trial that we can discover innovative ways of implementing this technology further.
If you know exactly what you want, submit your idea and we can start building your chatbot immediately. If you need help coming up with your chatbot strategy, we can assist you
Start a free demo now
Other Blogs:
Introduction to Koa Javascript (Koa Js): PROS and CONS
Infographics — Angular JS Vs React Vs Vue- Which is the best?
Infographics — Comparing Top 3 PHP Frameworks: Laravel vs Yii vs Symfony
LARAVEL 5.5 What’s New?
Advanced features in Laravel 5.4
|
Why your Business needs a Chatbot , Get a Chatbot Developed Now!
| 2
|
why-your-business-needs-a-chatbot-get-a-chatbot-developed-now-19f5f71f8813
|
2018-02-15
|
2018-02-15 12:49:02
|
https://medium.com/s/story/why-your-business-needs-a-chatbot-get-a-chatbot-developed-now-19f5f71f8813
| false
| 690
| null | null | null | null | null | null | null | null | null |
Chatbots
|
chatbots
|
Chatbots
| 15,820
|
Cubet Techno Labs
|
We are a Digital Engineering company, helping to architect your digital dreams to perfections.
|
2a46d4fbe982
|
Cubet_Techno_Labs
| 55
| 5
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
d78a0a8d9f0a
|
2018-07-30
|
2018-07-30 12:02:42
|
2018-07-30
|
2018-07-30 12:08:34
| 1
| false
|
pt
|
2018-07-30
|
2018-07-30 15:09:53
| 0
|
19f70f37bc04
| 2.841509
| 2
| 0
| 0
|
Two Senior CRO Analysts at Netshoes explain how we use this methodology
| 5
|
After all, what is CRO for?
Two Senior CRO Analysts at Netshoes explain how we use this methodology
In any business that involves selling directly to consumers, the conversion rate is one of the most important factors, since it gives us visibility into what percentage of site visits end in a completed transaction. Applying CRO (Conversion Rate Optimization) methodologies lets us understand how to increase this percentage, whether on the home page, on individual product pages, or in catalog search results, for example. The goal is to make more sales without having to increase the number of visits (saving on media campaigns or social networks) and, of course, leaving customers even more satisfied.
We can define the “conversion rate” as the metric that represents the result of a process with several steps — in our case, Netshoes’ result is the sale of products, but if our customer does not complete certain steps, that result does not happen. These steps can be illustrated as:
Home: if the banners, promotions and product recommendations are not good, the customer does not click and does not move on to the next step, failing to deliver the expected result, which is the conversion;
Catalog: if users have trouble finding what they want on our product listing pages, they do not move on to the next step, which also hurts the result;
PDP: if the product detail page does not provide the information the user needs in order to want to buy, the result does not come either;
Cart: the same can be said of the cart — if the information is confusing, there is a risk of abandonment at this step;
Checkout process: it is no different within the checkout process (we count login, sign-up and payment) — if the sign-up process is long, if entering card details is complex, or if the saved delivery address is complicated to change, we risk losing the conversion at the very last step.
All of these steps are measured and constantly analyzed, both for the website and for its smartphone-adapted version, so that we know at which step we need to apply CRO techniques so the consumer leaves the site with the product purchased.
The analysis of which step most needs optimizing is always done collaboratively, involving several areas of the company, from user experience (UX) specialists to product managers. From these analyses come hypotheses about how to improve the metrics and the user experience at that step, and we turn these hypotheses into A/B tests.
A simple example: if 50% of people click on the products displayed in our showcases, that means half of the visitors are not managing to find their products. We can bring in the UX specialists and conclude that one of the factors that most helps users in this experience is our filters. In partnership with the CRO analysts, they will collect data on filter usage (which are used most, such as lowest price, size, or items on sale) and on how those filters are used (who clicks where, in short).
After this filter analysis, the specialists also talk to users to better understand their behavior, expectations and desires as shoppers on a website.
Putting all of this together, we might conclude that a filter allowing multiple selections (size, color, price, brand) makes sense for the consumer, and start putting it into practice with A/B tests: we show one sample of visitors the normal filter already in use at Netshoes, and the other half the multi-select filter. At the end, we evaluate which produced the better result. If the 50% of users with the multi-select filter increased to 51%, 52% or 60%, CRO was applied correctly and the change can be rolled out to the rest of the site.
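The A/B comparison described above (did the multi-select filter lift conversion from 50% to, say, 52%?) can be checked for statistical significance with a standard two-proportion z-test. Here is a minimal Python sketch; the visit counts and rates below are illustrative examples, not Netshoes data:

```python
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers: 50% click-through on the control filter,
# 52% on the multi-select variant, 10,000 visits per arm.
z, p = two_proportion_z(5000, 10000, 5200, 10000)
print(round(z, 2), round(p, 4))
```

With large samples, even a 2-point lift is clearly significant; with small samples the same lift may not be, which is why the test matters before rolling a change out site-wide.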
In practice, implementing CRO means increasing the site’s revenue with less development time, in a continuous process. At Netshoes we run many A/B tests per year, in a constant effort to improve the quality of our service and the financial return for our investors, and to offer an ever-better shopping experience for our customers.
And you? How do you apply CRO methodology in your business?
|
After all, what is CRO for?
| 11
|
afinal-para-que-serve-o-cro-19f70f37bc04
|
2018-07-30
|
2018-07-30 15:09:53
|
https://medium.com/s/story/afinal-para-que-serve-o-cro-19f70f37bc04
| false
| 700
|
Here our team of specialists will share a bit of our passion for tech and also our open source technology. Our mission — as a team and as a company — is to promote continuous innovation, delight customers and develop e-commerce in Latin America. Welcome!
| null | null | null |
NSTech
| null |
nstech
| null | null |
Como Fazemos
|
como-fazemos
|
Como Fazemos
| 22
|
Wellington Silva
| null |
120f380752c7
|
wellingtonaraujoteixeira
| 4
| 4
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
32881626c9c9
|
2018-09-21
|
2018-09-21 06:50:44
|
2018-09-21
|
2018-09-21 07:20:32
| 1
| false
|
en
|
2018-09-28
|
2018-09-28 19:04:39
| 0
|
19f76f855d54
| 1.169811
| 1
| 0
| 1
|
Deep neural networking programming can be confusing to beginners due to some reasons, such as vectorisation, the forward and backward…
| 1
|
Learning Flow of Deep Neural Network
Deep neural network programming can be confusing to beginners for several reasons, such as vectorization and the forward- and backward-propagation logic. Most of the computation is matrix-based (e.g. np.dot when using Python’s NumPy), so it helps to have a visual diagram of the computing flow. The diagram below is an attempt to represent that information.
In the diagram below, each squared block represents a layer in the neural network model. In the upper row, which represents forward propagation, the input of block i is a[i-1] and its output is a[i]. Each block also caches the value z[i]. Within each block there are W[i] and b[i], representing the weights and bias. The forward computation starts at block 1 and ends at block L, where L is the number of layers. In the lower row, which represents backward propagation, the input is da[i] and the outputs are da[i-1], dW[i] and db[i]. The backward computation starts at block L and ends at block 1. After forward and backward propagation, you get a list of dW[i] and db[i].
Here is the sample code to implement this logic:
# ---- one iteration ----
a = [None] * (L + 1)   # activations; a[0] holds the input x
z = [None] * (L + 1)   # cached pre-activations
a[0] = x
# forward propagation (blocks 1 .. L)
for i in range(1, L + 1):
    z[i] = np.dot(W[i], a[i - 1]) + b[i]
    a[i] = g[i](z[i])
# backward propagation (blocks L .. 1)
for i in reversed(range(1, L + 1)):
    dW[i] = …   # gradient of the cost w.r.t. W[i]
    db[i] = ….  # gradient of the cost w.r.t. b[i]
# update parameters, e.g. W[i] -= learning_rate * dW[i]
# ---- end of one iteration ----
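As a concrete, runnable version of the loop above, here is a minimal NumPy sketch of one full iteration for a small network. The layer sizes, the sigmoid activation, and the squared-error loss gradient are illustrative assumptions, since the article leaves g[i] and the gradient formulas unspecified:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
L_layers = 2                  # number of layers (blocks) in the diagram
sizes = [3, 4, 1]             # layer widths: input, hidden, output
x = rng.normal(size=(3, 5))   # 5 training examples, 3 features each
y = rng.normal(size=(1, 5))

# index 0 is unused so W[i], b[i] match the article's 1-based block numbering
W = [None] + [rng.normal(size=(sizes[i], sizes[i - 1])) for i in range(1, L_layers + 1)]
b = [None] + [np.zeros((sizes[i], 1)) for i in range(1, L_layers + 1)]

# forward propagation: a[i-1] -> (z[i] cached) -> a[i]
a = [None] * (L_layers + 1)
z = [None] * (L_layers + 1)
a[0] = x
for i in range(1, L_layers + 1):
    z[i] = np.dot(W[i], a[i - 1]) + b[i]
    a[i] = sigmoid(z[i])

# backward propagation: da[i] -> dW[i], db[i], da[i-1]
m = x.shape[1]
dW = [None] * (L_layers + 1)
db = [None] * (L_layers + 1)
da = a[L_layers] - y          # gradient of 0.5*||a_L - y||^2 w.r.t. a_L
for i in reversed(range(1, L_layers + 1)):
    dz = da * a[i] * (1 - a[i])   # sigmoid'(z) expressed via a = sigmoid(z)
    dW[i] = np.dot(dz, a[i - 1].T) / m
    db[i] = np.sum(dz, axis=1, keepdims=True) / m
    da = np.dot(W[i].T, dz)

# update parameters
lr = 0.1
for i in range(1, L_layers + 1):
    W[i] -= lr * dW[i]
    b[i] -= lr * db[i]
```

Pre-allocating the lists with a dummy index 0 keeps the code aligned with the article’s 1-based block numbering, and each dW[i] has the same shape as W[i], as the update step requires.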
|
Learning Flow of Deep Neural Network
| 1
|
learning-flow-of-deep-neural-network-19f76f855d54
|
2018-09-28
|
2018-09-28 19:04:39
|
https://medium.com/s/story/learning-flow-of-deep-neural-network-19f76f855d54
| false
| 257
|
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
| null |
datadriveninvestor
| null |
Data Driven Investor
|
info@datadriveninvestor.com
|
datadriveninvestor
|
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
|
dd_invest
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Shaoliang Jia
| null |
51e75a1be6b6
|
shaoliang.jia
| 5
| 10
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-14
|
2017-09-14 14:48:15
|
2017-09-14
|
2017-09-14 14:53:51
| 1
| false
|
en
|
2017-09-15
|
2017-09-15 19:32:33
| 7
|
19f779323f88
| 3.724528
| 0
| 0
| 0
|
Artificial intelligence (AI) technology becoming more prominent in the world today, and it even has a big impact on the talent acquisition…
| 5
|
How AI is Impacting the Recruitment Industry
Artificial intelligence (AI) technology is becoming more prominent in the world today, and it is even having a big impact on the talent acquisition industry. It’s important to note that this introduction of new technology greatly affects the future of company procedures and culture, in particular the hiring process and candidate experience.
Recruiters, including human resources (HR), need to have a good understanding of what AI is in order to fully understand what it has to offer, allowing them to leverage all of its capabilities to improve efficiencies and processes. It truly begs the question: how is AI impacting the recruitment industry, and how can recruiters and HR implement it to improve the candidate experience?
REDEFINING “HUMAN” IN HUMAN RESOURCES
The implementation of AI gives the HR department an entirely new meaning. In the past, the time and effort that went into completing repetitive tasks during the onboarding process was lengthy and costly. The time that’s put into attracting, sourcing and acquiring talent would cost the company time and money, all of which are now lessened with the help of AI technology.
Keep in mind that AI can do as much or as little as you want it to. So, you might as well leverage its capabilities to help improve efficiencies, workflows and the hiring process. One way HR can do this is by implementing resume screening tools that find and present the applicants that best fit the role, eliminating the need to read through countless resumes and allowing recruiters to focus on higher-priority objectives. This way, recruiters can direct more attention and effort toward candidates who are more likely to be hired.
AI technology can also speed up the time it takes to complete small, mundane tasks — such as scheduling meetings, sorting emails and sending out reminders. This allows recruiters to dedicate more time to improving engagement with employees, helping to promote a good company culture while optimizing workflows.
Recruiters can also leverage AI technology to improve the candidate experience. Chatbots have been used to keep candidates engaged throughout the recruiting process by providing automated responses to questions, offering value-added information about the company and position, and ensuring that the candidate stays fully engaged and is not lost.
Additionally, AI can help with reporting by sorting through both quantitative and qualitative data and making that information easy to access anywhere, at any time. Before AI, HR had to spend considerable time gathering and sorting the data. Keeping the underlying numbers accurate is therefore crucial to ensure accurate automated reporting.
THE PAST, PRESENT AND FUTURE OF AI
The AI market continues to grow and is set to revolutionize many aspects of human life. In fact, the AI industry has seen $14 billion in investments in the past five years alone. A few of the trailblazers at the forefront of the industry include:
IBM’s Watson Talent claims to “expand human expertise” and “improve people’s impact on the business.” In other words, it’s an extension of HR that helps promote efficiency, productivity and quality. Watson Talent provides solutions not only to HR and the company but also to applicants, giving them the tools necessary to become candidates and potential hires.
Karen.AI reviews and analyzes hundreds of resumes almost instantaneously to find the most qualified candidates for the position. This program also has a chatbot feature that engages with candidates and provides a pre-screening feature that can help cut costs and save time.
Eva Bot is a virtual assistant that sends personalized corporate gifts to customers and candidates after an interview or a meeting with a high-profile candidate. It can also increase customer satisfaction, gather net promoter scores (NPS) on the candidate experience, and incentivize feedback.
FAMA allows companies to screen social media platforms to discover candidates’ online identities. While this may seem intrusive, it gives companies the ability to learn about their candidates on a more personal level.
CONCERNS ABOUT AI
Concerns surrounding AI and its approach, effectiveness and efficiency are common. The biggest concern raised is the invasion of privacy. Since AI is not a living entity, there are worries that these programs cannot discern private and confidential information, do not understand laws and regulations, and may violate those rules without knowing. Under strict human oversight, however, these concerns would largely disappear. Glitches, programming errors and the constant need for software updates are also major areas of concern.
In the same regard, however, AI technology can be easily reprogrammed or patched when something goes wrong, the software fails, or it needs to take on a new task. For humans, re-training or replacing an employee is not as easy and can delay current processes. Because of this, many fear that AI will completely absorb jobs that humans currently do.
If you can understand that AI technology creates efficiency, it may be easier to see the bigger picture. It can become a great asset to any company, if properly used, by saving time, money and resources in the recruiting process. In the end, AI redefines exactly what “human” means in “human resources”.
Not sure where to start? Contact us to learn more about how you can incorporate AI into your recruiting strategy.
This story first appeared on www.ouicruit.com
http://blog.ouicruit.com/how-ai-is-impacting-the-recruitment-industry
How AI is Impacting the Recruitment Industry | updated 2018-05-08 | 934 words | tag: Recruiting | author: Vahid Behzadi | https://medium.com/s/story/how-ai-is-impacting-the-recruitment-industry-19f779323f88
Toy Companies Using Voice Recognition Must Protect Your Child’s Privacy with Embedded AI
With Cloud-Based Solutions Kids Data Is At Risk
Kids Are Communicating With Technology Through Voice
There is nothing more important than ensuring your child’s safety. What that includes in the digital age is vastly different than even 25 years ago. Parents now face privacy safety concerns for their children that no previous generation of adults has had to reckon with.
A Nielsen study found that 45% of children aged 10–12 have a mobile phone with a service plan. A study by Common Sense Media found that 42% of children aged 8 and younger have a tablet. This unparalleled access to the internet exposes children to negative experiences, such as cyberbullying, harmful content, or contact from strangers, putting them at greater risk and increasing the possibility of a security breach.
Are You Listening: Voice Search and Data Collection
What do children do with their mobile devices? According to the Nielsen study, 81% of kids aged 6–12 use them for text messaging, 59% download apps, and 53% access the internet. Additionally, more children are using YouTube Kids’ voice commands to search for content.
The ease with which children can carry out these tasks is amplified by devices such as Amazon Echo and Amazon Echo Dot for kids. The use of voice commands is rapidly increasing; it’s estimated that voice searches will comprise over 50% of all searches by 2020.
And though the Echo Dot Kids speaker provides various child-friendly functions, it also poses a significant privacy risk. Amazon’s Children’s Privacy Disclosure states that the company may capture data, including “name, birthdate, contact information, voice, photos, videos, location, certain activity, and device information”.
Not Just Amazon Echo — Toy Makers, Too
Amazon is not alone in collecting data from voice searches: Google, too, employs teams of data scientists all over the world, which increases the chances of a data breach. But it’s not just the tech giants rolling out speech recognition products: toy makers such as Disney, Mattel, VTech, Jibo, and Kuri also use speech recognition software in their products as they shift toward creating more voice-enabled devices.
Disney has been conducting speech recognition research since 2016 for its Mole Madness game, while Mattel released a “smart Barbie” in 2015. Despite privacy concerns with the doll, which can hold a conversation with a child, it is still sold in some stores, including Walmart. Mattel also planned to release a smart device (like the Amazon Echo) called Aristotle but canceled its release after privacy concerns arose.
Privacy issues are one thing; data breaches are a completely different beast. In 2015, VTech suffered a data breach in which about 5 million parent accounts and 6 million child accounts were compromised, giving hackers access to sensitive personal information such as names, emails, addresses, and passwords.
A Breach of Regulations
Besides the risk of data breaches, an investigation by The Guardian found that the way tech companies store voice recordings is often in violation of the Children’s Online Privacy Protection Act (COPPA), which was created to regulate the collection and use of data from children under 13. Companies are required to obtain parental consent to store children’s recordings, which many failed to do.
Khaliah Barnes, associate director of the Electronic Privacy Information Center (EPIC), told The Guardian, “Recording children in the privacy of the home is genuinely creepy, and this warrants additional investigation by the Federal Trade Commission (FTC) and [US] states.”
Final Thoughts: Big Toy Companies Using Voice Recognition Tech Can Protect Your Child’s Privacy with Embedded Speech Recognition
Having toys with voice recognition software isn’t necessarily bad, as it can help children learn how to communicate properly and even teach them to say “please” and “thank you” (a function of the Amazon Echo Dot). Data matters even more for kids’ toys because, for a variety of reasons, recognizing what children are saying is harder for speech recognition software than understanding adults.
However, toy companies need to be more concerned about children’s privacy and must consider an embedded (offline) speech recognition solution, such as that developed by the team of neuroscientists and artificial intelligence engineers at KidSense.ai.
KidSense.ai is the only COPPA Compliant provider offering an offline embedded speech recognition solution with the highest measures of security and privacy. No data is collected from kids, and parents can use voice-enabled technology with peace of mind.
Toy Companies Using Voice Recognition Must Protect Your Child’s Privacy with Embedded AI | updated 2018-06-14 | 743 words | tag: Speech Recognition | author: Kadho Inc | https://medium.com/s/story/toy-companies-using-voice-recognition-must-protect-your-childs-privacy-with-embedded-ai-19f7fc25878e
Sertanejo Music Trends for 2018 Build By Artificial Intelligence
First things first,
this is not an apocalyptical vision about a day when humans will be replaced by machines.
But ever since the first monkeys started using sticks to pull ants out of trees, tools have been introduced into our lives to accelerate all kinds of processes. And if you are a songwriter, I need to tell you one thing: your job is at risk. Or maybe you just have a new coworker =)
I know it’s easier to think of A.I. in physical and mathematical contexts. But it’s true: a computer can learn a creative process. And my machine, a simple Asus i3 with 8 GB of RAM, learned a little this weekend.
What did I do? Inspired by Matteo Kofler’s publication, where he taught his PC to write poems like Shakespeare, I started thinking, “Wooow, cool! But who likes Shakespeare nowadays?” Moved by that thought, and by the need to put some Brazilian sauce on it, I started to build a Sertanejo (a kind of country music) songwriting robot.
(Yes, I know it needs a name, but you can call it SSW-1 until I create a robot-name generator.)
In simple words, what SSW-1 does is:
Scrape top music lyrics from the web.
Learn from this data how to create new ones.
Looks simple? Thanks to TensorFlow, I can develop a PhD-level project with my not-yet-complete specialization degree. I won’t go too deep, because this is not a scientific post, but TensorFlow is an open source software library from Google for machine learning, and it’s amazing. (Curious about TensorFlow? A little introduction here.)
How deep learning works
I used a Recurrent Neural Network in a deep learning algorithm, training the model on about 1 MB of data (the top 1,000 Sertanejo songs). Despite being small, the results look great and funny. SSW-1 writes legible words, knows a little about semantics, and is learning the structure of a song. Here is an example of what you may hear on Brazilian radios next year:
Mal Passa (part. Menala Marela)
Olha esse sorriso é sempre amor de você
Não vai mais se amar
E se eu te esqueço a vida eu vou te amar
Vou contar os meus sonhes dessa vida eu vou
Pode saber que acabou de volta
De amanheceu e o meu coração
E aí, a gente fazer
Eu quero ser você
Eu sou o sol
Eu tenho meu coração preciso de você
It created a new singer, Menala Marela; this singer does not exist, and none of these sentences appear in the original texts. It seems SSW-1 has picked up something about “sofrência,” the kind of feeling that Sertanejo music carries.
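The pipeline SSW-1 follows (learn from a lyrics corpus, then sample new text) can be sketched without any deep learning at all. The author used a TensorFlow RNN; as a dependency-free stand-in, this toy uses a word-level Markov chain instead, trained on an invented two-line corpus:

```python
# SSW-1's pipeline in miniature: learn from lyrics, then generate new
# ones. The author used a TensorFlow RNN; this stand-in is a simple
# word-level Markov chain, and the tiny corpus below is invented.
import random
from collections import defaultdict


def train(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain


def generate(chain, start, length, seed=0):
    """Random-walk the chain to produce a new line of 'lyrics'."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)


corpus = "eu vou te amar eu vou cantar eu vou te esquecer"
chain = train(corpus)
print(generate(chain, "eu", 6))
```

An RNN replaces the lookup table with a learned state, which is why it can also invent plausible new words and song structure rather than only reshuffling existing ones.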
Conclusion
I have strong evidence of a computer’s ability to learn to write “creatively.” SSW-1 learned this in just 5 hours and with a small amount of data. What could it do with 5x more data and more time to learn?
My intuitive next steps:
Extract more data (maybe 4–5 MB).
Learn more about parameter optimization.
Contact a singer to bring SSW-1’s songs to life.
See you in the SSW-1.1 update!
P.S. 1: I will update this post with my project on GitHub soon!
P.S. 2: Sorry about my English. If anyone can review the text and send corrections by e-mail, I will be deeply grateful :)
Sertanejo Music Trends for 2018 Build By Artificial Intelligence | updated 2017-10-30 | 565 words | tag: Machine Learning | author: Thiago Narasimha | https://medium.com/s/story/the-top-trend-sertanejo-music-for-2018-build-by-i-a-19fa9d7c5ae8

10 new things to read in AI
Is Santa Claus Real?
# medium.com
My team is very serious about Christmas and the gifts we get. So, we wanted to track Santa Claus and know wh…
How you can steal jobs from robots: Increase your critical intelligence
# medium.com
With automation on the rise, we’ll lose the majority of current jobs. Optimists look at history and quote th…
A Renaissance In Radiology
# medium.com
by Cancergeek Tweet exchage from Dr. Topol (Eric Topol) Andrew Ng (@AndrewYNg) and Sherry Reynolds (Sherry R…
Robots Are Getting Scary Good
# medium.com
The internet has been buzzing as to two separate videos have shown up on our favorite social media timeline….
Development Updates
# blog.intelligenttrading.org
Today we have sent out user tokens, which will allow token holders access to the ITT Alpha Bot. Users were s…
Singapore puts fintech in spotlight with AI investment, global partnerships
# medium.com
Singapore has announced a slew of initiatives aimed at driving the development and adoption of new technolog…
Evolution
# medium.com
Memories serve a purpose, a constant reminder to the human consciousness about the evolution. The evolution …
There is No Randomness, Only Chaos and Complexity
# medium.com
https://en.wikipedia.org/wiki/Julia_set Here’s a question the perhaps needs to be asked, but hasn’t been ask…
Did Karl Marx Predict Artificial Intelligence 170 Years Ago?
# medium.com
An almost-unknown piece of his writing offers insight on robotics and AI in today’s world. I spend a lot of …
2017 Taiwan AI and Data Science Conference resource roundup (2017 台灣人工智慧暨資料科學年會資料整理)
# medium.com
Basic info: official site http://datasci.tw; AI conference Facebook page https://www.facebook.com/twaiconf/; Data Science conference Facebook page https://www.fac...
10 new things to read in AI | updated 2018-06-08 | 260 words | tag: Deep Learning | publication: AI Hawk | https://medium.com/s/story/10-new-things-to-read-in-ai-19faa87d781b
The Roadmap to Success & Why You Need One
From the beginning, Iagon has set out to develop a cloud storage platform that would not only compete with some of the industry’s most prominent faces but essentially replace the competition completely.
In this blog piece, we will be exploring the Iagon Roadmap to Success, and discussing why we value an accurate and adaptable blueprint of our internal processes.
As we sit on the precipice of life changing technologies, Iagon seeks to deliver cost-effective and secure cloud storage to the masses. Throughout the course of our journey to the forefront of this revolutionary technology, we have overcome numerous obstacles, while following a roadmap to success that has ensured our continued growth.
Updated as each step occurs, the Iagon Roadmap has been a great asset and source of information for the team, as well as for individuals and corporations that want to track our platform’s development and continued success. It is also important for us to consistently review current industry standards, aligning our platform blueprint and making the changes and enhancements that transform it into a fully vetted roadmap.
The Iagon roadmap details our tentative project plans, outlining milestones and requirements as well as changes and adjustments as they occur. A lot goes into effectively planning platform production; goals change and aspirations expand, causing shifts in requirement specifications.
Driven to compete with industry leaders, Iagon has been cultivating a record of successes. As cloud computing takes off, the Iagon vision of technological superiority has been supported by the completion of time-sensitive requirements compiled on the Iagon Roadmap to Success. Our initial milestones comprised platform design and architecture, platform development, and white paper creation, bringing us to the brink of releasing our pre-ICO beta platform.
Peering into the future, cloud computing is providing, and will continue to provide, end users with increased access to computing power; however, it takes a significant level of development prowess to ensure its potential is fully realized.
Consequently, a roadmap or development blueprint can bring innumerable benefits to the ease of development.
The Roadmap to Success & Why You Need One | updated 2018-02-04 | 376 words | tag: Cloud Computing | author: Navjit Dhaliwal (CEO at Iagon) | https://medium.com/s/story/the-roadmap-to-success-why-you-need-one-19fbd3b98e41
Our week on Facebook: ICYMI
What do you imagine when you hear the term ‘smart city’?
Many will think of flying cars and high-tech holograms helping out civilians.
Unfortunately, it is not quite like that.
Many cities all over the world are classified as ‘smart’, with London and Bristol being marked 80 out of 100 in terms of smart capabilities.
So, what are some of the features of a smart city?
- Inclusive, technology-driven development
Smart cities will set out to use technology, information and data to improve infrastructure and services.
This refers to access to water, electricity, affordable homes, IT connectivity etc.
- Increased mobility
Urban mobility will be enhanced by increased access to public transport (transit-orientated development), and innovative solutions such as Smart Parking, Intelligent Traffic Management and Integrated Multi-Modal Transport.
- Sustainability
Efforts will also be made to generate energy and create compost from waste, and reduce the amount of waste generated from construction, and destruction of buildings. Water resources will also be managed more effectively.
What else do you think would be an important feature of a smart city?
We’ve all heard of self-driving cars. So, what companies are behind it?
Automation is taking over the tech world, and the biggest phenomenon that has everyone talking is self-driving cars.
A mix of artificial intelligence, R&D, and particular software, these cars are being worked on under our noses constantly, and there has been a huge mix of opinion.
With most usually focusing on the debate surrounding safety, we’re going to go into the companies who are actually creating and developing the tech-hot cars.
Apple
Is making a phone the same as a car? We’re not quite sure, but we’d be very surprised if Apple wasn’t getting in on the action.
Last year, they began testing their software in California, and they are in talks with Volkswagen and BMW about technology licensing.
Uber
The exposure for Uber’s self-driving program has recently stemmed from a court case with Waymo, a subsidiary of Google’s parent company Alphabet, after accusations of, in short, stealing ideas.
Samsung
Just this year, the tech giant was given the go-ahead to test its software in Hyundai cars in South Korea.
Given that Samsung, Apple and Google all also manufacture and sell smartphones, one perspective is that the domain of self-driving cars is quickly becoming a new battle in the front for the future of consumer electronics.
Smart cities are the future. So, what are they, and what are the top 10 smart cities in the UK?
It is very easy to think that a smart city is a futuristic land, with flying cars and high tech gadgets on every corner.
It is that in a sense, minus the flying cars.
A smart city is an urban area that uses different types of electronic data collection/sensors to supply information efficiently.
It means they are more resilient, digitally aware and are better at responding to challenges.
So, what are the top 10 smart cities in the UK, and what are their scores out of 100?
10. Sheffield (38.1)
9. Nottingham (51.8)
8. Peterborough (68.2)
7. Leeds (70.5)
6. Milton Keynes (72.5)
5. Manchester (74.2)
4. Glasgow (75.1)
3. Birmingham (77.9)
2. Bristol (80.2)
1. London (80.5)
These ten cities were determined based on the breadth and depth of their smart or future city strategy.
This refers to digital innovation, urban mobility, energy, social care, sustainability and so forth.
So, after reading this, do you see any traits about your city that are ‘smart’?
AI: the change in the future of our world
Artificial Intelligence is disrupting every industry from healthcare, education to law.
Arguably, some people are afraid of AI. Some are enthralled by it.
What are some of the benefits surrounding AI?
More proficient and extended learning.
The presence of AI in everything we do will allow our minds to expand, and in the workplace, we’ll be able to spend more time on tasks that matter.
Making our working lives easier.
AIs are being implemented into businesses to do the repetitive tasks, with more accuracy. In some sectors, they’ll have much larger responsibilities.
But no, they will not be stealing your jobs.
Improved quality of life.
AI will provide us with more efficient healthcare, as well as completing mundane tasks that humans do not enjoy doing. They will improve quality of life.
Our week on Facebook: ICYMI | updated 2018-05-22 | 727 words | tag: Smart Cities | author: Anson McCade | https://medium.com/s/story/our-week-on-facebook-icymi-19fc608c445
5 big data use cases in banking and financial services
For financial institutions, mining big data provides a huge opportunity to stand out from the competition. The data landscape for financial institutions is changing fast. It is not enough to leverage institutional data; it has to be augmented with open data, such as social data, to enhance decision making.
By using data science and machine learning to gather and analyze big data, financial institutions can reinvent their businesses. Financial Institutions are becoming aware of the potential of these technologies and are beginning to explore how data science and machine learning could enable them to streamline operations, improve product offerings, and enhance customer experiences.
Given our experience at Ideas2IT implementing complex use cases for the financial industry, here we break down the top use cases of machine learning and data science in finance.
Catch stock market cheaters
Market surveillance depends on algorithms to identify patterns in trading data that might indicate manipulation and alert staff to investigate.
But the huge volumes of data can cause an enormous number of alerts, many of which are false alarms.
FINRA monitors roughly 50 billion events every day, including stock orders, modifications, cancellations, and trades. It looks for patterns in the events to uncover potential rule violations. But most of the alerts raised are false alarms.
To tackle this issue, FINRA is developing machine learning software that can look beyond the patterns and understand which situations truly deserve to be flagged. In other words, the software will learn which trading patterns lead to legal charges, so it can classify the right ones.
FINRA is planning to test its machine learning software alongside its existing system to compare the results. It has also moved its market surveillance system to AWS cloud, giving it more computing power to analyze data quickly.
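The core of this approach (learn from historical alerts which ones led to real cases, then score new alerts so false alarms can be filtered) can be sketched as a tiny classifier. This is not FINRA's actual model; it is a from-scratch logistic regression on two invented features:

```python
# Sketch of the FINRA idea: learn from past alerts which ones deserved
# escalation, then score new alerts so false alarms can be filtered.
# A tiny logistic regression on two invented features (order-cancel
# ratio, trade-size z-score); real surveillance models are far larger.
import math


def train(samples, labels, epochs=2000, lr=0.5):
    """Fit logistic-regression weights by stochastic gradient descent."""
    w = [0.0] * (len(samples[0]) + 1)          # feature weights + bias
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = w[-1] + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))     # predicted probability
            err = p - y                        # gradient of log loss
            for i, xi in enumerate(x):
                w[i] -= lr * err * xi
            w[-1] -= lr * err
    return w


def escalate_probability(w, x):
    """Probability that an alert is worth a human review."""
    z = w[-1] + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))


# [cancel_ratio, size_zscore] -> 1 if the alert led to a real case
history = [([0.9, 2.5], 1), ([0.8, 3.0], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
w = train([x for x, _ in history], [y for _, y in history])
print(escalate_probability(w, [0.85, 2.8]))   # suspicious-looking alert
print(escalate_probability(w, [0.15, 0.3]))   # routine alert
```

Staff would then review only alerts above a probability threshold, cutting the false-alarm workload.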
Detect phone fraudsters
Customers contact their financial institutions over the phone to check account balances, open new lines of credit, or change account information. In most cases, a call centre agent facilitates the customer’s request. However, agents have few ways to determine whether the person on the phone is the actual customer, and this poses a serious threat to that customer’s information.
In recent years, the scope of call center fraud has become truly staggering. In 2015, one in every 2,000 calls was fraudulent. In 2016, that number jumped to 1 in 937, an increase of 113%.
To solve this problem, Lloyds Banking Group partnered with Pindrop, an AI startup, to detect fraudulent phone calls. Pindrop can identify 147 features of a voice from a phone or Skype call, which helps pinpoint information like the caller’s location. The software will be integrated into Lloyds’ customer service offices; agents will get an alert if a call appears fraudulent so they can pass it to fraud specialists.
Lloyds banking group will introduce the software across the Lloyds Bank, Halifax and Bank of Scotland brands early next year.
Understand customers better
Today banks are using big data to create a 360-degree view of each customer based on how everyone individually uses mobile or online banking, branch banking or other channels.
A good example of this is Danske bank. The bank wanted to predict the needs of their customers and understand them on a more personal level. So, they created an in-house startup, advanced analytics, to transform business units using machine learning and AI.
The team analyzed large volumes of data to identify their customer’s preferred means of communication, such as phone, email, or social media. This valuable information has increased the hit rate of their marketing campaigns four times.
They also built a machine learning model to study the online behavior of their customers and discover situations where customers needed financial advice.
Streamline client payment processing
Reconciling payments is costly and time-consuming, especially when large quantities are involved. Bank of America Merrill Lynch developed a new solution in August 2017 called Intelligent Receivables (IR) to help companies drastically improve their straight-through reconciliation (STR) of incoming payments.
Bank of America Merrill Lynch’s Intelligent Receivables, powered by High Radius’s leading-edge machine learning technology, will help their corporate clients to accelerate the adoption of electronic payments from their end-customers.
IR is a well-suited solution for firms that manage lots of payments where the remittance information is either missing or received separately from the payment.
Reduce financial crimes and parse commercial loan agreements
The Singapore-based OCBC Bank revealed plans to use artificial intelligence and machine learning as part of its efforts to curb financial crimes. The bank plans to use these technologies for anti-money-laundering monitoring and to improve accuracy in detecting suspicious transactions.
OCBC Bank, along with ThetaRay, a fintech company, conducted a proof of concept (POC) at the start of this year. Now the bank plans to begin an extended POC and pre-implementation phase. The algorithm detects anomalies in transaction behavior by examining features such as products, customers, and risks. In the POC stage, the technology analyzed one year of OCBC’s transaction data and decreased the number of alerts that did not need further review by 35%.
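A minimal version of transaction-anomaly screening in the spirit of this example is a z-score test against an account's history; real AML systems combine many such signals with learned models. The amounts below are invented:

```python
# Sketch of transaction-anomaly screening: flag amounts that deviate
# sharply from an account's history. Real AML systems combine many
# such signals with learned models; these amounts are invented.
import statistics


def anomalies(history, new_transactions, threshold=3.0):
    """Flag amounts more than `threshold` std devs from the history mean."""
    mean = statistics.mean(history)
    std = statistics.pstdev(history)
    flagged = []
    for amount in new_transactions:
        zscore = abs(amount - mean) / std if std else 0.0
        if zscore > threshold:
            flagged.append((amount, round(zscore, 1)))
    return flagged


history = [120, 95, 130, 110, 105, 90, 125, 115]
print(anomalies(history, [108, 2500, 99]))
```

Only the outlying transfer is surfaced to an analyst; ordinary amounts generate no alert.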
Interpreting legal and financial documents is a mind-numbing job for legal teams in financial institutions.
JP Morgan Chase & Co built a machine learning program called COIN (contract intelligence) to analyze financial deals. Before the project went live in June 2016, lawyers and legal teams spent 360,000 hours parsing commercial documents. Now the software can review documents in seconds, and it makes fewer errors.
The bank also plans to use COIN for other types of complex legal filings such as credit-default swaps and custody agreements.
Originally published at www.ideas2it.com.
5 big data use cases in banking and financial services | updated 2018-05-04 | 967 words | tag: Fintech | author: Mendha Murugan | https://medium.com/s/story/5-big-data-use-cases-in-banking-and-financial-services-19ff0b051f76
Could data production be the future of work?
A new paper published by researchers from Microsoft, Stanford and Columbia University discusses the importance of thinking about data production as a form of labor rather than as a currency of exchange.
In the early days of the internet, one of the mottos among hackers and technology enthusiasts was that information wants to be free, that is, free of charge. Yet it was precisely this desire and intention that gave rise to what Jaron Lanier, a technology chief at Microsoft, calls “siren servers”: entities that found a way to profit from this “free” environment and grow it beyond imagination. The side effect of this kind of enterprise, however, is what economists call a “monopsony” of data buyers. In other words, it is a scenario that inverts the monopoly system: a market — in this case, a data market — in which a few buyers hold most of the market power.
In the paper “Should We Treat Data as Labor?”, Imanol Arrieta Ibarra, Leonard Goff, Diego Jiménez Hernández, Jaron Lanier and E. Glen Weyl address how data has become the new oil of the world economy, and how this has actually become a problem — witness the scandals around how Facebook distributes the data collected from its users, as well as other cases of supposedly “free” technologies that collect data to resell to advertisers and other interested companies. For the authors, as long as data is treated as free territory to be exploited by technology companies, users lack the bargaining power to negotiate meaningful payment for the data they produce, and are left at the mercy of privacy invasion. That is why the authors propose reframing data production as labor rather than capital.
“We argue that thinking of data as labor rather than capital is much more than wordplay, a distinction without a difference. It matters that we, as a society, think and talk about user-generated data as an input to production, just as we treat labor differently from capital. There is a certain sense of respect and meaning that comes from our work, and workers respond to incentives to work harder and produce higher-quality goods and services. We argue that treating returns on data as returns on capital not only increases inequality but also limits the productivity gains of the artificial intelligence revolution.”
That is, in a future of work in which artificial intelligence and robotics take over many of the jobs now held by humans, unemployment will follow for the people who will no longer produce wealth either for their country or for themselves. The current situation, in which data is treated as a currency we exchange for free access to platforms and services, will not be sustainable as robots start working in our place.
Thus, just as the universal basic income proposal calls for redistributing the wealth produced by robotic labor, there is also a proposal to create a basic income that redistributes the wealth generated by the manipulation and resale of the data produced and collected from users — or perhaps even jobs based on data production, so that tagging a friend in a photo or posting a selfie on a social network is no longer just something for friends and family to like, but content that is rewarded, for example, through a blockchain-based system.
This is an idea that the artist and activist Manuel Beltrán has been championing for some years with installations and art projects such as the Institute of Human Obsolescence and Data Production Labour (2017), which we covered in a previous post. For the authors of “Should We Treat Data as Labor?”, however, there are still many ramifications that do not appear in the published material; but there is already a movement to think critically about the situation and to find alternatives for a future in which artificial intelligence and robotics will play an important and probably decisive role in the future of work and of humanity.
Subscribe to our newsletter on communication, innovation and positive impact.
|
Seria o futuro do trabalho a produção de dados?
| 17
|
seria-o-futuro-do-trabalho-a-produção-de-dados-1a00b018d932
|
2018-05-25
|
2018-05-25 02:25:47
|
https://medium.com/s/story/seria-o-futuro-do-trabalho-a-produção-de-dados-1a00b018d932
| false
| 750
|
Periódico sobre comunicação, futurismo e impacto positivo.
| null |
uplab.cc
| null |
UP Future Sight
|
lidia@upline.com.br
|
up-future-sight
|
FUTURISMO,TECNOLOGIA,FUTUROLOGIA,FICÇÃO CIENTÍFICA,TENDÊNCIAS
| null |
Future
|
future
|
Future
| 22,833
|
Lidia Zuin
|
Brazilian journalist, MA in Semiotics and PhD candidate in Visual Arts. Head of innovation and futurism at UP Lab. Cyberpunk enthusiast and researcher.
|
479f965ebf95
|
lidiazuin
| 1,323
| 344
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
d8c4ad013dfd
|
2017-06-20
|
2017-06-20 02:35:08
|
2017-10-13
|
2017-10-13 16:33:56
| 2
| false
|
en
|
2017-10-19
|
2017-10-19 18:53:26
| 13
|
1a00f133dba9
| 5.696541
| 706
| 9
| 0
|
When Numerai launched in December 2015, it was a one page website that looked like a Kaggle data science competition except everything was…
| 4
|
Numerai’s Master Plan
When Numerai launched in December 2015, it was a one page website that looked like a Kaggle data science competition except everything was black and more cinematic. We didn’t want it to look like any other hedge fund in the world — because we weren’t. We spent all this time on design and shooting videos (like this and this), and it seemed like a big distraction.
The core idea of Numerai was to give away all of our data for free, and let anyone train machine learning algorithms on it and submit predictions to our hedge fund. This was a very counterintuitive idea. We already had our own internal machine learning models on the data so it seemed like a distraction to open it up to the world.
When we originally decided to pay our data scientists in bitcoin, the price of bitcoin was $400 (now $5600). The idea of paying people in bitcoin was confusing to most people. It seemed like a distraction.
When we announced we were creating our own cryptocurrency on Ethereum in February, the price of ether was $4.65 (now $300). Many of our users and investors had never heard of Ethereum, and they couldn’t understand why we would want our own cryptocurrency. There had never been a hedge fund with their own cryptocurrency. It seemed like a distraction.
Some of our distractions have already proven themselves to be outrageously successful, and others are still developing, but they actually aren’t distractions at all. They are all part of the plan.
The Master Plan
Monopolize intelligence
Monopolize data
Monopolize money
Decentralize the monopoly
1+2=3 and 4 would be awesome.
The Roadmap
The master plan is the high level guide to what we’re trying to do over the next several years. We’re mainly focused on part one right now: monopolize intelligence. Here’s our current roadmap.
Numeraire Q2|17
The reason we created our own cryptocurrency (called Numeraire / NMR) is because it connects directly to the first part of our master plan. It not only grows our community of data scientists but also improves the quality of intelligence they provide.
Monopolizing intelligence involves getting all the talented data scientists in one place working together on one hedge fund rather than duplicating work in a zero sum game across multiple hedge funds. Aside from building a large data science community, monopolizing intelligence also means having the intelligence be of high quality. We need the predictions that our data scientists provide to be directly useful to our hedge fund’s trading strategy.
Numeraire improves the intelligence on Numerai because of the nature of how it is used. It is not really a currency. It is a token to access the staking tournament on Numerai.
In the staking tournament, data scientists stake Numeraire on their predictions to express their confidence that their model will perform well on live data. If their models perform well, they earn more money (paid in ether). If their models perform poorly, their Numeraire is destroyed.
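The staking rule just described can be stated in a few lines of illustrative logic. The reward amount here is an assumed placeholder, not Numerai's actual payout formula:

```python
def settle_stake(stake_nmr, performed_well, eth_reward=1.0):
    """Illustrative staking settlement: models that perform well on live
    data earn ether and keep their stake; models that perform poorly
    have their staked NMR destroyed."""
    if performed_well:
        return {"nmr": stake_nmr, "eth": eth_reward}  # stake returned plus reward
    return {"nmr": 0, "eth": 0.0}                     # stake burned

print(settle_stake(10, True))   # {'nmr': 10, 'eth': 1.0}
print(settle_stake(10, False))  # {'nmr': 0, 'eth': 0.0}
```

The asymmetry is the point: staking is costly for overconfident models, so observed stakes carry information about true confidence.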
After the first few stakes of Numeraire were made, it was immediately clear that Numeraire staking was working. The average quality of predictions increased and we were able to isolate the best models to include in our meta model — the model that combines them all together. The underlying staked predictions when combined achieve performance that we cannot match with our internal models.
Since Numeraire can be used to earn money via staking, it has value in its own right. Including the value of the Numeraire rewards, Numerai has paid out millions of dollars, and is now the highest-paying data science tournament in the world. This financial incentive has led to huge growth. In the last three months, the number of data scientists on Numerai has doubled, engagement has doubled and our Slack channel has doubled. A data scientist on Numerai recently uploaded the 1 millionth prediction set. That equates to over 40 billion predictions. We now have 30,000 data scientists on Numerai. That is 100x more than any other hedge fund in the world, and we are not yet two years old.
Due to the extraordinary success of Numeraire staking, we are doubling the payouts in the staking tournament from 3000 USD per week to 6000 USD per week. This number is now 6x higher than when we launched Numeraire.
To emphasize staking even further, the only way to earn USD on Numerai (paid in ether) is now to enter the staking competition. In the general tournament, we are increasing the Numeraire payouts from 1500 NMR per week to 2000 NMR per week. This gives new users more opportunities to earn Numeraire to compete in the staking tournament. Overall, the changes represent a ~20% increase in payouts.
Numeraire can be earned by anyone competing in our tournaments but there is now a new way to earn it as well. We have open sourced core parts of Numerai, and our community can now earn Numeraire bounties by making contributions to the codebase.
Finally, Numeraire is also the beginning of part 4 of the master plan, which is to decentralize Numerai. Right now it’s impossible to have a real hedge fund be decentralized because stocks are not traded on blockchains, prime brokers don’t give leverage on blockchain assets and so on. In the future, this will probably change and more of Numerai could be decentralized and connected to the Numeraire token.
API Q4|17
You were right, Fred Ehrsam.
Numerai started with a Kaggle style data science competition but that was never the end goal. Numerai needs to use the predictions from our data scientists live in our hedge fund without having access to any of the algorithms that built the models.
To do this, we needed to create an entirely new data science tournament design. We needed to automate payments using cryptocurrency, and we needed Numerai to be not merely a website for people but an API for AIs.
The goal for Numerai was to be an API that any artificial intelligence could use to control capital in the economy. The API would pass datasets to the AIs to train on and the AIs would submit predictions back to Numerai. And the AIs would get paid in the only currency they can use and understand: cryptocurrency.
Since the API interacts with the Numeraire smart contract on Ethereum, people will be able to build applications on top of Numeraire. For example, a data scientist could build a server that automatically downloads new data from Numerai, trains a machine learning algorithm, stakes Numeraire on the set of predictions, and repeats this process forever, earning more and more money and NMR for the data scientist and adding more and more intelligence to Numerai’s hedge fund.
This isn’t speculative; we wouldn’t be surprised if one of our data scientists built an automated system exactly like this within a few days of the API launching.
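Such a self-running setup might be structured like the skeleton below. Every function here is a hypothetical placeholder, not an actual Numerai API call; a real bot would fetch data through the GraphQL API and sign an Ethereum staking transaction:

```python
# Skeleton of the self-running tournament bot described above.
# All function bodies are stand-ins for real API and contract calls.

def download_round_data():
    # Placeholder: fetch the latest tournament dataset from the API.
    return {"round": 1, "features": [[0.1, 0.2], [0.3, 0.4]]}

def train_and_predict(data):
    # Placeholder: train a model and produce one probability per row.
    return [0.5 for _ in data["features"]]

def stake_and_submit(predictions, nmr_amount):
    # Placeholder: stake NMR on the predictions and upload them.
    return {"staked_nmr": nmr_amount, "n_predictions": len(predictions)}

def run_round(nmr_amount=5):
    """One full cycle: download, train, stake, submit."""
    data = download_round_data()
    preds = train_and_predict(data)
    return stake_and_submit(preds, nmr_amount)

receipt = run_round()
print(receipt)  # {'staked_nmr': 5, 'n_predictions': 2}
```

Looping `run_round` on a schedule is all that separates this sketch from the always-on bot the post envisions.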
Numerai’s new GraphQL API written in Elixir will be released on October 31st.
Reputation Q1|18
Over time, data scientists who regularly achieve concordance, originality, consistency and strong live logloss will also earn bonuses as their reputation grows.
We have had that statement on our help page for many months, and we’re still thinking about how to implement a reputation system on Numerai. Our data scientists have excellent ideas for how it should work. The idea is that data scientists build reputations for themselves as time progresses. Those reputations are valuable to us as another data point for our meta model, so we want data scientists with the best reputations to earn the most.
Compute Q2|18
You will be able to send Numeraire to an Ethereum smart contract and get computational power for almost no cost. Wow.
The idea is to create an AWS AMI which has all the software you would need to do machine learning, and also all the connections to Numerai’s API that you would need to send predictions live automatically to Numerai forever. With Compute, the idea of discrete tournaments will start to fade away. Numerai would be able to ping any AI connected to it for new predictions at any time.
Even more futuristic versions of this could use decentralized computing resources like Golem.
Compute will create an entirely new use case for Numeraire which is directly related to improving machine learning models, increasing engagement, and achieving the goals of the master plan.
I’m doing an AMA on r/ethereum at 12pm California time today (October 13th). Ask me anything.
|
Numerai’s Master Plan
| 4,511
|
numerais-master-plan-1a00f133dba9
|
2018-06-03
|
2018-06-03 21:03:45
|
https://medium.com/s/story/numerais-master-plan-1a00f133dba9
| false
| 1,408
|
A new kind of hedge fund built by a network of data scientists.
| null | null | null |
Numerai
|
contact@numer.ai
|
numerai
|
MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,HEDGE FUNDS,FINANCE,BLOCKCHAIN
|
Numerai
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Richard Craib
|
Founder of Numerai
|
d28db7a99006
|
Numerai
| 6,208
| 2,197
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
863f502aede2
|
2017-09-15
|
2017-09-15 20:54:20
|
2017-09-15
|
2017-09-15 20:55:46
| 24
| false
|
en
|
2017-09-15
|
2017-09-15 20:55:46
| 1
|
1a023eaab2a5
| 6.791509
| 6
| 1
| 0
|
This paper is the implementation of ‘encoder-decoder reconstructor framework’ for neural machine translation for the English-Japanese…
| 5
|
English-Japanese Neural Machine Translation with Encoder-Decoder-Reconstructor
This paper implements the ‘encoder-decoder reconstructor’ framework for neural machine translation, applied to the English-Japanese translation task.
Introduction
Neural machine translation (NMT) has gained much momentum recently. It has greatly improved on traditional statistical machine translation and achieved state-of-the-art performance across many language pairs.
However, NMT suffers from both over-translation and under-translation problems: it may translate some words repeatedly while omitting others. That is because NMT models are often seen as a black box, and we do not know exactly how they convert source sentences into target sentences.
To address this problem, Tu et al. (2017) proposed an ‘encoder-decoder reconstructor’ framework for NMT, which uses back-translation to improve translation accuracy. This paper implements that framework for the English-Japanese translation task.
Besides, this paper also pointed out that the framework could not achieve satisfactory performance, unless the forward translation model was trained like the traditional attention-based NMT, also called pre-training.
Traditional Attention-based NMT model
The traditional attention-based NMT model proposed by Bahdanau et al. (2015) is shown below.
The encoder converts the source sentence into a fixed-length vector C as a context vector. At each time step t, a bidirectional RNN is used, and the hidden state h_t of the encoder can be represented as:
where the forward state and the backward state can be computed respectively as follows:
and
r and r’ are both nonlinear functions. Then the context vector C becomes:
where q is also a nonlinear function.
In a classical encoder-decoder model, the context vector C calculated by the encoder is directly ‘decoded’ into the target sentence by the decoder. But since the decoder has to process the whole vector, earlier information can be overwritten by information processed later. Hence, the longer the source sentence, the more likely the model is to lose important information. That is why the attention mechanism is introduced: at each step it focuses on a certain part of the source representation to guarantee sufficient information.
At each time step i, the conditional probability of the output word can be computed as:
where s_i is the hidden state of the decoder, and it is computed by:
From the equation, we can see that the hidden state s_i at time step i is calculated using the hidden state and the target word at the previous time step i-1, and a context vector c_i.
Different from the long fixed-length vector C mentioned above, the context vector c_i is a weighted sum of each hidden state h_j of the encoder, computed by:
and
where the weight matrix e_ij is generated by an ‘alignment model’, which is used to align the inputs around position j and the output at position i, and α can be understood as an ‘attention allocation’ vector.
Finally, the objective function is defined by:
where N is the number of data, and θ is a model parameter.
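The equations referenced in this section were images in the original post and did not survive extraction. For reference, here are the standard formulations from Bahdanau et al. (2015) in the notation of the surrounding text; this is a reconstruction, not the original figures:

```latex
% Bidirectional encoder state: forward/backward concatenation
h_t = \left[\overrightarrow{h_t};\, \overleftarrow{h_t}\right]

% Decoder output distribution and hidden state
p(y_i \mid y_{<i}, x) = g(y_{i-1}, s_i, c_i), \qquad
s_i = f(s_{i-1}, y_{i-1}, c_i)

% Attention: context as a weighted sum of encoder states
c_i = \sum_j \alpha_{ij} h_j, \qquad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_k \exp(e_{ik})}, \qquad
e_{ij} = a(s_{i-1}, h_j)

% Training objective over N sentence pairs with parameters \theta
J(\theta) = \frac{1}{N} \sum_{n=1}^{N} \log P\!\left(y^{(n)} \mid x^{(n)};\, \theta\right)
```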
Encoder-decoder Reconstructor Framework
The encoder-decoder reconstructor framework for NMT proposed by Tu et al. (2017) adds a new ‘reconstructor’ structure to the original NMT model. It aims to translate from the hidden states of the decoder back into the source sentence, in order to compare and improve translation accuracy. The new structure is described as follows:
At each time step i, the conditional probability of the output ‘source word’ is computed as:
The hidden state s’ is computed in a similar way as the previous decoding process:
Note that the c’ is here called the ‘inverse context vector’ and is computed as:
where s is simply each hidden state of the decoder (from the forward translation).
And similarly, the α’ is further calculated by:
The objective function is defined by:
Note that this optimization function contains two parts: the forward translation part and the back-translation part. The hyperparameter lambda specifies the weight between forward translation and back-translation.
According to this paper, the forward translation part measures translation fluency, and backward measures translation adequacy. In this manner, the new structure is able to enhance overall translation quality.
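The reconstructor equations above likewise lost their images. A reconstruction of the joint training objective following Tu et al. (2017), with lambda the forward/backward weight, gamma the reconstructor's parameters, and s the decoder state sequence (notation assumed to match the paper):

```latex
% Joint objective: forward translation likelihood plus
% weighted reconstruction likelihood over N sentence pairs
J(\theta, \gamma) = \sum_{n=1}^{N}
  \Big[ \log P\!\left(y^{(n)} \mid x^{(n)};\, \theta\right)
  + \lambda \log P\!\left(x^{(n)} \mid s^{(n)};\, \theta, \gamma\right) \Big]
```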
Experiments
The paper uses two English-Japanese parallel corpora: Asian Scientific Paper Excerpt Corpus (ASPEC) (Nakazawa etal.,2016) and NTCIR PatentMT Parallel Corpus (Goto et al., 2013).
The RNN model used in the experiments, with 512 hidden units, 512 embedding units, 30,000 vocabulary size and 64 batch size, is trained on GeForce GTX TITAN X GPU.
The normal attention-based NMT is used as a baseline NMT model.
Note that the hyperparameter lambda is set to 1 in the experiments.
Some examples of the English-Japanese translation tasks are shown below. Note that ‘ jointly-training’ refers to the encoder-decoder reconstructor without pre-training.
Results
Tables 2 and 3 show the translation accuracy in BLEU scores, the p-value of the significance test by bootstrap resampling (Koehn, 2004) and training time in hours until convergence.
The results show that the new encoder-decoder reconstructor framework takes a longer time to train than the baseline NMT, but it significantly improves translation accuracy by 1.01 points on ASPEC and 1.37 points on NTCIR in English-Japanese translation. However, it does not make such an improvement in Japanese-English translation task. Besides, the jointly trained model performs even worse than the baseline model.
Furthermore, the paper tests whether the new model better mitigates the over-translation and under-translation problems mentioned above. For example, Figure 3 shows that the baseline model failed to output ‘乱流と粘性の数值的粘性の関係を基に’, while the proposed model succeeded in translating it. Figure 4 shows that ‘新生兒’ and ‘三十歳以上’ are translated repeatedly by the baseline, while the proposed model performed better.
Baseline-NMT
Encoder-Decoder-Reconstructor
Figure 3: The attention layer in Example 1 : Improvement in under-translation
Encoder-Decoder-Reconstructor
Figure 4: The attention layer in Example 2 : Improvement in over-translation.
Conclusion
In this paper, the newly proposed encoder-decoder reconstructor framework is analyzed on English-Japanese translation tasks. The paper shows that the encoder-decoder reconstructor offers a significant improvement in BLEU scores and alleviates the problem of repeating and missing words in English-Japanese translation. In addition, it evaluates the importance of pre-training by comparing the model with a jointly-trained model of forward translation and back-translation.
Reviewer’s Comment
Back translation has always served as a useful method for translation studies, or for human translators to check whether they have accurately translated or not. The use of this traditional translation method in machine translation tasks is quite an amazing idea.
In the future, the closer combination of linguistics knowledge with Natural Language Processing may become the new way of thinking for better improving the performance of language processing tasks, such as machine translation, especially for those languages like Japanese, which features many ‘grammar templates’ (‘文法’ in Japanese) .
Reference
[1]Dzmitry Bahdanau,Kyunghyun Cho,and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. Proceedings of the 3rd International Conference on Learning Representations (ICLR), pages 1–15.
[2]Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. 2017. Neural Machine Translation with Reconstruction. Proceedings of the ThirtyFirst AAAI Conference on Artificial Intelligence (AAAI), pages 3097–3103.
[3]Philipp Koehn. 2004. Statistical Significance Tests for MachineTranslationEvaluation. Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 388–395.
Source paper: https://arxiv.org/pdf/1706.08198.pdf
Paper authors: Yukio Matsumura, Takayuki Sato, Mamoru Komachi
Tokyo Metropolitan University Tokyo, Japan
Author: Kejin Jin | Editor: Joni Chung | Localized by Synced Global Team: Xiang Chen
|
English-Japanese Neural Machine Translation with Encoder-Decoder-Reconstructor
| 12
|
english-japanese-neural-machine-translation-with-encoder-decoder-reconstructor-1a023eaab2a5
|
2018-06-13
|
2018-06-13 08:25:19
|
https://medium.com/s/story/english-japanese-neural-machine-translation-with-encoder-decoder-reconstructor-1a023eaab2a5
| false
| 1,283
|
We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights.
| null |
SyncedGlobal
| null |
SyncedReview
|
global.sns@jiqizhixin.com
|
syncedreview
|
ARTIFICIAL INTELLIGENCE,MACHINE INTELLIGENCE,MACHINE LEARNING,ROBOTICS,SELF DRIVING CARS
|
Synced_Global
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Synced
|
AI Technology & Industry Review - www.syncedreview.com || www.jiqizhixin.com || Subscribe: http://goo.gl/Q4cP3B
|
960feca52112
|
Synced
| 8,138
| 15
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-09
|
2018-08-09 06:33:34
|
2018-08-09
|
2018-08-09 06:48:27
| 1
| false
|
en
|
2018-10-20
|
2018-10-20 09:51:35
| 1
|
1a04756dedba
| 3.962264
| 3
| 1
| 0
|
Well, I am not going to teach you python and SQL, Relax, I do not know them well too. This is a story of happiness, self-discovery…
| 5
|
What I learned from Bertelsmann’s data science challenge scholarship
Well, I am not going to teach you Python and SQL. Relax, I do not know them well either. This is a story of happiness, self-discovery, challenging one’s self and the ecstasy of spending time with the people you choose.
Bertelsmann’s data science scholarship in partnership with Google and Udacity.
What the heck is Bertelsmann’s data science scholarship anyway? It is a data science nanodegree scholarship that Bertelsmann, in partnership with Udacity, awards to 15,000 people around the world for the initial challenge phase. Don’t get confused: only 1,500 of them get selected for the second phase, and whoever is selected specializes in one of three nanodegree programmes (Data Foundations, Business Analyst and Data Analyst). That is to say, only 10 per cent have the chance of graduating from the programme. Having finished the challenge phase yesterday, I am compelled to share one or two takeaways while I wait for their final decision on whether I will be admitted to the second phase. If you want to learn more, check out Bertelsmann’s data science scholarship.
It does not matter whether you know the end or not; you should keep moving
You might not believe that I had no plans to attempt this programme, but it is true. I received the confirmation email right after the end of term, and I had a JS boot camp to attend. I don’t even remember when or how I applied, but I got two emails from Udacity on the same day: the first said we regret to inform you we could not make you an offer for Android development, and the other said congratulations! We are offering you a data science scholarship. I did not give a damn about either. I had a three-week wait before the boot camp started, and attempting this challenge was a good (not a smart) way to stay busy for a while. I bet you know people who always look busy, even when they are not; I am one of them. I am rarely busy, yet I still look busy, Lol. It turned out to be a chance to get a taste of acquiring the skills that can land you the sexiest job of the 21st century. I gave it a try, and I believe it was the right choice; I learned a lot, made friends, and realized that it is okay to start even when you do not have a goal.
How badly do you need it? Grit.
I quit a couple of times, learned that I am one of the laziest people alive, and carried on. If there is one thing I loved about the Udacity community, it is that people share their stories and struggles. Few things are as inspiring as learning that a mother out there, employed full time, waits until her three kids are asleep at night to learn hard topics and revise, all for the sake of balancing school, work and the time she spends with family. If people did not share their stories, hundreds of us would have dropped out in the first week. And this brings me to the next point: questioning our excuses.
We should question our excuses
How busy are you? I am not saying you are not busy, but what makes you that busy? I asked myself the same question after getting to know a lad from Cairo who is visually impaired but who, with his supportive mother, walked through a city that is not friendly to blind people to meet others taking the same course in the same town. You can guess how challenging it was for him to complete the course. If you did not know that blind people can be programmers or use computers, know that they can; I even met a blind programmer here in Rwanda, thanks to amazing technologies. Many of my colleagues, if not hundreds, used Google Translate to translate every single word from English into their own languages, like Spanish and Portuguese, and repeated the process to answer every single question, until they finished the entire programme of about 32 courses. Are you (really) busy?
Learning can be fun
I know that you know this; it is just for emphasis. It feels good to find your tribe within other tribes, knowing that there are many people out there who share your views and make you believe that having strange behaviours is not strange. Small things like a memes channel where meme lovers share trending memes, a soccer channel where people discuss Messi and Ronaldo, and a book-club channel where bookworms read and discuss books online are what make Udacity an inclusive and exciting community to belong to. Some of us enjoyed being in that little diverse world more than the course itself. To be honest, I would admit that I stayed as long as I did because of the book-club and memes channels, because of course reading is happiness and programming memes are the funniest.
Education is not, and should not be, a race. If you are a close friend, it is probably not the first time I am telling you this: education is neither a competition nor a race. Self-paced learning is what makes Udacity great. Not everyone completed even 50% of the content, but I am sure everyone is proud of the little contribution they made to the community, whether it was cheering someone up, posting a book or a meme, or explaining some concepts to others. The Udacity community is a family that everyone should try to be a part of. Thanks to Bertelsmann and Udacity for the lifetime opportunity they granted us.
UPDATE:
I did not make it to the next phase of the scholarship and I do not regret anything. I learned a lot and I would advise anyone to give it a try.
|
What I learned from Bertelsmann’s data science challenge scholarship
| 3
|
what-i-learned-from-bertelsmanns-data-science-challenge-scholarship-1a04756dedba
|
2018-10-20
|
2018-10-20 09:51:35
|
https://medium.com/s/story/what-i-learned-from-bertelsmanns-data-science-challenge-scholarship-1a04756dedba
| false
| 997
| null | null | null | null | null | null | null | null | null |
Education
|
education
|
Education
| 211,342
|
Innocent INGABIRE
|
I am an aspiring writer, an avid reader and a life-long learner
|
de56e75ec424
|
Innocent_Ing
| 43
| 147
| 20,181,104
| null | null | null | null | null | null |
0
|
{"username":"YOUR-USER-NAME","key":"SOMETHING-VERY-LONG"}
!pip install kaggle
!mkdir .kaggle
import json
token = {"username":"YOUR-USER-NAME","key":"SOMETHING-VERY-LONG"}
with open('/content/.kaggle/kaggle.json', 'w') as file:
    json.dump(token, file)
!chmod 600 /content/.kaggle/kaggle.json
!cp /content/.kaggle/kaggle.json ~/.kaggle/kaggle.json
!kaggle config set -n path -v /content
!kaggle competitions download -c home-credit-default-risk -p /content
!unzip \*.zip
import pandas as pd
d = pd.read_csv('application_train.csv')
d.head()
| 14
| null |
2018-07-14
|
2018-07-14 04:56:56
|
2018-07-14
|
2018-07-14 05:59:27
| 8
| false
|
en
|
2018-09-16
|
2018-09-16 00:18:40
| 1
|
1a054a382de0
| 3.072956
| 15
| 1
| 0
|
Don’t want to download large Kaggle datasets to your local machine and upload them to your Google Drive? Here is a tutorial about how to…
| 5
|
Tutorial: Kaggle API + Google Colaboratory
Don’t want to download large Kaggle datasets to your local machine and upload them to your Google Drive? Here is a tutorial about how to connect Kaggle API on Google Colaboratory and download datasets directly from Kaggle to your Colab without the time-consuming procedure. ✧୧(๑=̴̀⌄=̴́๑)૭✧
Step1. Get Your Kaggle Token:
Click top right corner and go to your kaggle My Account page.
My Account
Scroll down to API section and click Create New API Token button:
Create New API Token
It will download a file called kaggle.json. Store it wisely (ง •̀_•́)ง and we will use it later.
Open the file and it should be in this format:
Step 2. Colaboratory Runtime:
Go to Google Colaboratory and open a New Python 3 Notebook. To verify your Python version and (optionally) to use the fancy GPU, on the top-left toolbar, click Runtime. At the bottom of the menu, click Change runtime type.
Change runtime type
Select Python 3 and GPU, click Save.
Change hardware accelerator to GPU
Step 3. Install Kaggle API:
Installation should be the same as using Jupyter Notebook. In the cell, type and run:
After the installation is completed, use !ls -a to check whether you have a directory called .kaggle under /content; if not, make one:
Go back to the .json file you downloaded in Step 1 and copy its contents. In the next cell, type and paste (no exclamation mark):
In the next cell, run:
Update (Sep 2018):
Add this line of code before the configuration step if you get a missing-username error (here is where I found the solution: https://stackoverflow.com/questions/51958553/error-while-importing-kaggle-dataset-on-colab)
Then run:
Step 4. Download Data:
Go to the Kaggle competition page you would like to download data from, and browse to Data. I’m using the Home Credit Default Risk competition as an example:
Scroll down to the data section and click API button, it will copy the command automatically.
Paste the command into Colab’s cell (don’t forget the exclamation mark). Add the -p flag to specify your download path.
Your output should look somewhat like this:
Download output
To unzip the files, run the following command:
Now your data is available to use. Try:
Have Fun! ᕕ( ᐛ )ᕗ
Some miscellaneous things:
In order to download the Kaggle competition data, you will have to join the competition and accept the rules on Kaggle first. If not, the data downloading step may throw errors at you. <-biubiu-⊂(`ω´∩)
I’m a Mac user so if you copy and paste the code on Windows, you may need to modify the quotation marks to avoid formatting issues.
If for any reason you can no longer access your .kaggle directory, running !rm -r .kaggle to remove the directory and restarting from the !mkdir .kaggle step should resolve the problem.
Suggestions and advice are welcome.
|
Tutorial: Kaggle API + Google Colaboratory
| 78
|
tutorial-kaggle-api-google-colaboratory-1a054a382de0
|
2018-09-16
|
2018-09-16 00:18:40
|
https://medium.com/s/story/tutorial-kaggle-api-google-colaboratory-1a054a382de0
| false
| 514
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Yvette
|
"Too weird to live, too rare to die."
|
1fc02de9c7bf
|
yvettewu.dw
| 10
| 14
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-10-23
|
2017-10-23 19:38:42
|
2017-10-23
|
2017-10-23 19:51:53
| 2
| true
|
en
|
2017-10-23
|
2017-10-23 19:51:53
| 0
|
1a08615c8b7f
| 6.439937
| 56
| 3
| 0
|
Learning to play Go is only the start
| 5
|
The latest AI can work things out without being taught
Learning to play Go is only the start
South Korean professional Go player Lee Sedol is seen on a TV screen during the Google DeepMind Challenge Match against Google’s artificial intelligence program, AlphaGo (AP Photo/Ahn Young-joon)
In 2016 Lee Sedol, one of the world’s best players of Go, lost a match in Seoul to a computer program called AlphaGo by four games to one. It was a big event, both in the history of Go and in the history of artificial intelligence (AI). Go occupies roughly the same place in the culture of China, Korea and Japan as chess does in the West. After its victory over Mr Lee, AlphaGo beat dozens of renowned human players in a series of anonymous games played online, before re-emerging in May to face Ke Jie, the game’s best player, in Wuzhen, China. Mr Ke fared no better than Mr Lee, losing to the computer 3–0.
For AI researchers, Go is equally exalted. Chess fell to the machines in 1997, when Garry Kasparov lost a match to Deep Blue, an IBM computer. But until Mr Lee’s defeat, Go’s complexity had made it resistant to the march of machinery. AlphaGo’s victory was an eye-catching demonstration of the power of a type of AI called machine learning, which aims to get computers to teach complicated tasks to themselves.
AlphaGo learned to play Go by studying thousands of games between expert human opponents, extracting rules and strategies from those games and then refining them in millions more matches which the program played against itself. That was enough to make it stronger than any human player. But researchers at DeepMind, the firm that built AlphaGo, were confident that they could improve it. In a paper just published in Nature they have unveiled the latest version, dubbed AlphaGo Zero. It is much better at the game, learns to play much more quickly and requires far less computing hardware to do well. Most important, though, unlike the original version, AlphaGo Zero has managed to teach itself the game without recourse to human experts at all.
The eyes have it
Like all the best games, Go is easy to learn but hard to master. Two players, Black and White, take turns placing stones on the intersections of a board consisting of 19 vertical lines and 19 horizontal ones. The aim is to control more territory than your opponent. Stones that are surrounded by an opponent’s are removed from the board. Players carry on until neither wishes to continue. Each then adds the number of his stones on the board to the number of empty grid intersections he has surrounded. The larger total is the winner.
The difficulty comes from the sheer number of possible moves. A 19x19 board offers 361 different places on which Black can put the initial stone. White then has 360 options in response, and so on. The total number of legal board arrangements is in the order of 10^170, a number so large it defies any physical analogy (there are reckoned to be about 10^80 atoms in the observable universe, for instance).
Human experts focus instead on understanding the game at a higher level. Go’s simple rules give rise to plenty of emergent structure. Players talk of features such as “eyes” and “ladders”, and of concepts such as “threat” and “life-and-death”. But although human players understand such concepts, explaining them in the hyper-literal way needed to program a computer is much harder. Instead, the original AlphaGo studied thousands of examples of human games, a process called supervised learning. Since human play reflects human understanding of such concepts, a computer exposed to enough of it can come to understand those concepts as well. Once AlphaGo had arrived at a decent grasp of tactics and strategy with the help of its human teachers, it kicked away its crutches and began playing millions of unsupervised training games against itself, improving its play with every game.
Supervised learning is useful for much more than Go. It is the basic idea behind many of the recent advances in AI, helping computers learn to do things such as identify faces in pictures, recognise human speech reliably, filter spam from e-mail efficiently and more. But as Demis Hassabis, DeepMind’s boss, observes, supervised learning has limits. It relies on the availability of training data to feed to the computer to show the machine what it is meant to be doing. Such data must be filtered by human experts. The training data for face recognition, for instance, consist of thousands of pictures, some with faces and some without, each labelled as such by a person. That makes such data sets expensive, assuming they are available at all. And, as the paper points out, there can be more subtle problems. Relying on human experts for guidance risks imposing human limits on a computer’s ability.
AlphaGo Zero is designed to avoid all these problems by skipping the training-wheels phase entirely. The program starts only with the rules of the game and a “reward function”, which awards it a point for a win and docks a point for a loss. It is then encouraged to experiment, repeatedly playing games against other versions of itself, subject only to the constraint that it must try to maximise its reward by winning as much as possible.
The program started by placing stones randomly, with no real idea of what it was doing. But it improved rapidly. After a single day it was playing at the level of an advanced professional. After two days it had surpassed the performance of the version that beat Mr Lee in 2016.
DeepMind’s researchers were able to watch their creation rediscover the Go knowledge that human beings have accumulated over thousands of years. Sometimes, it seemed eerily human-like. After about three hours of training the program was preoccupied with the idea of greedily capturing stones, a phase that most human beginners also go through. At others it seemed decidedly alien. For example, ladders are patterns of stones that extend in a diagonal slash across the board as one player attempts to capture a group of his opponent’s stones. They are frequent features of Go games. Because a ladder consists of a simple, repeating pattern, human novices quickly learn to extrapolate them and work out if building a particular ladder will succeed or fail. But AlphaGo Zero — which is not capable of extrapolation, and instead experiments with new moves semi-randomly — took longer than expected to come to grips with the concept.
Climbing the ladder
Nevertheless, learning for itself rather than relying on hints from people seemed, on balance, to be a big advantage. For example, joseki are specialised sequences of well-known moves that take place near the edges of the board. (Their scripted nature makes them a little like chess openings.) AlphaGo Zero discovered the standard joseki taught to human players. But it also discovered, and eventually preferred, several others that were entirely of its own invention. The machine, says David Silver, who led the AlphaGo project, seemed to play with a distinctly non-human style.
The result is a program that is not just superhuman, but crushingly so. Skill at Go (and chess, and many other games) can be quantified with something called an Elo rating, which gives the probability, based on past performance, that one player will beat another. A player has a 50:50 chance of beating an opponent with the same Elo rating, but only a 25% chance of beating one with a rating 200 points higher. Mr Ke has a rating of 3,661. Mr Lee’s is 3,526. After 40 days of training AlphaGo Zero had an Elo rating of more than 5,000 — putting it as far ahead of Mr Ke as Mr Ke is of a keen amateur, and suggesting that it is, in practice, impossible for Mr Ke, or any other human being, ever to defeat it. When it played against the version of AlphaGo that first beat Mr Lee, it won by 100 games to zero.
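The win probabilities quoted here follow the standard Elo expected-score formula; a minimal sketch (the function name is my own):

```python
def elo_expected(r_player, r_opponent):
    """Probability that r_player beats r_opponent under the Elo model."""
    return 1 / (1 + 10 ** ((r_opponent - r_player) / 400))

elo_expected(1500, 1500)  # equal ratings: 0.5
elo_expected(1500, 1700)  # opponent 200 points higher: roughly a quarter
```

With a gap of almost 1,500 points between AlphaGo Zero and Mr Ke, the formula puts his winning chances vanishingly close to zero.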
There is, of course, more to life than Go. Algorithms such as the ones that power the various iterations of AlphaGo might, its creators hope, be applied to other tasks that are conceptually similar. (DeepMind has already used those that underlie the original AlphaGo to help Google slash the power consumption of its data centres.) But an algorithm that can learn without guidance from people means that machines can be let loose on problems that people do not understand how to solve. Anything that boils down to an intelligent search through an enormous number of possibilities, said Mr Hassabis, could benefit from AlphaGo’s approach. He cited classic thorny problems such as working out how proteins fold into their final, functional shapes, predicting which molecules might have promise as medicines, or accurately simulating chemical reactions.
Advances in AI often trigger worries about human obsolescence. DeepMind hopes such machines will end up as assistants to biological brains, rather than replacements for them, in the way that other technologies from search engines to paper have done. Watching a machine invent new ways to tackle a problem can, after all, help push people down new and productive paths. One of the benefits of AlphaGo, says Mr Silver, is that, in a game full of history and tradition, it has encouraged human players to question the old wisdom, and to experiment. After losing to AlphaGo, Mr Ke studied the computer’s moves, looking for ideas. He then went on a 22-game winning streak against human opponents, an impressive feat even for someone of his skill. Supervised learning, after all, can work in both directions.
© 2017 The Economist. All rights reserved.
|
The latest AI can work things out without being taught
| 186
|
the-latest-ai-can-work-things-out-without-being-taught-1a08615c8b7f
|
2018-08-25
|
2018-08-25 01:41:54
|
https://medium.com/s/story/the-latest-ai-can-work-things-out-without-being-taught-1a08615c8b7f
| false
| 1,605
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
The Economist
|
Insight and opinion on international news, politics, business, finance, science, technology, books and arts.
|
bea61c20259e
|
the_economist
| 333,655
| 36
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-10-25
|
2017-10-25 21:58:17
|
2017-10-25
|
2017-10-25 22:00:33
| 3
| false
|
en
|
2017-10-25
|
2017-10-25 22:01:41
| 2
|
1a09c74a98f8
| 1.961321
| 2
| 0
| 0
|
Conversational artificial intelligence is the hot new kid in school. They’re cool, seemingly very smart, potentially a little dangerous…
| 5
|
Chatbots Look Exciting! How Do I Get One?
Conversational artificial intelligence is the hot new kid in school. They’re cool, seemingly very smart, potentially a little dangerous, and everyone wants to get to know them. I’ll outline here how you get to know them, and by association, become cool yourself.
A few things are clear to me within this exciting and rapidly evolving world of conversational artificial intelligence, today I’ll focus on two:
People are universally excited by the prospect of what a bot could do for their organisation
They’re frequently uncertain on how to move through the exploration and implementation process
So, to help guide people’s understanding of the stages of bot evolution, we created the Bot Maturity Model. You’ll recall last week’s hierarchy-of-needs-style pyramid; this week we’ve toppled that fella on his side and expanded out the steps in explanation. I’ll let the model speak for itself, as hopefully it’s conclusive and explanatory. Note a couple of things: it’s in two images to fit on your mobile device. You’re welcome. Levels 1 and 2 have no examples, as it’s probably you. This is our copyright; please feel free to share. Get in touch with me and I’ll send you a friendlier version.
I’m optimistic that this is of some value to you. The key takeaway I’d like for you is this. I see a future where we all have a personal assistant bot. This bot knows our preferences, desires, budgets and quirks. It books holidays, manages finances, schedules meetings and deals with other real life (yes first world) problems. To get there, however, you need to start here. To paraphrase Alice in Wonderland: if the destination you’d like to achieve is AI, then the road you must take is the Bot Maturity Model.
As a shameless piece of self-promotion, we’ve outlined what we do at each stage. Ambit is a turnkey solution provider of enterprise grade (SMB appropriate) chatbots. Keep updated at ambitai.com or visit my profile below!
By Josh Comrie, CEO of Ambit
Josh is the CEO of Ambit and a founder of 3 human capital companies that have had 2 successful exits. He is also an experienced angel investor with several exits.
|
Chatbots Look Exciting! How Do I Get One?
| 2
|
chatbots-look-exciting-how-do-i-get-one-1a09c74a98f8
|
2018-03-21
|
2018-03-21 08:00:40
|
https://medium.com/s/story/chatbots-look-exciting-how-do-i-get-one-1a09c74a98f8
| false
| 374
| null | null | null | null | null | null | null | null | null |
Bots
|
bots
|
Bots
| 14,158
|
Ambit
|
Ambit is a New Zealand-based AI company that specialises in building turnkey chatbots for enterprise, both customer and internally facing. Visit ambitai.com
|
b3451d8241a7
|
ambitai
| 123
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-16
|
2018-04-16 14:06:10
|
2018-04-16
|
2018-04-16 14:10:19
| 0
| false
|
en
|
2018-04-16
|
2018-04-16 14:10:19
| 2
|
1a0ab9d4e4f4
| 2.237736
| 0
| 0
| 0
|
You know, there’s an old adage called ‘Betteridge’s law of headlines’, and it goes like this:
| 1
|
Will there be roles for humans in a world that depends on AI?
You know, there’s an old adage called ‘Betteridge’s law of headlines’, and it goes like this:
“Any headline that ends in a question mark can be answered by the word no.”
I’m pleased to inform you that this article is, for the most part, an exception to this rule.
It’s true that AI is advancing at a quicker and quicker rate, and people are correct to be concerned about its effect on many jobs across many sectors. Nevertheless, as in the days of the oft-mentioned Luddites, new jobs come with the introduction of new technology. Certainly, people will need to adapt to learning new trades, but adaptation is something humanity has needed to contend with since the dawn of time.
The real question is this: Can humanity continue its role in the workforce with the rapid advancement of AI? To answer this rather complicated question, let’s take a quick look at the state of AI. In case you haven’t noticed, it’s advancing at an incredible rate, and we all know that the only way for technology to stop advancing is to have a major regression in the state of society (such as a world war or some other apocalyptic scenario).
However, while there will undoubtedly be a major delay in technological growth immediately following a worldwide catastrophe, there will still be hope for continued progression so long as the knowledge remains. To put this in simple terms, let’s take a quick look at the plot of the 1985 entry into the James Bond film series, A View to a Kill, the last film to feature the late Sir Roger Moore.
In the film, a weary 57-year-old Bond is tasked with stopping a plot by the psychotic Max Zorin, the film’s primary antagonist, to destroy Silicon Valley and corner the market on silicon chips. A pretty major flaw with the plot (which may or may not be intentional from the screenwriters) is that there were facilities in many countries all over the world that produced microprocessors even back in 1985, but let’s play devil’s advocate for a moment and imagine that there was a very specific kind of chip that was only made in Silicon Valley.
Now, herein lies the kicker. As Nick Gruen, CEO of Lateral Economics, points out in an interesting news article, silicon chips are a knowledge good, meaning that we would have the collective knowledge to reproduce the chips even if all existing chips were destroyed. Additionally, even if every last chip was destroyed — along with everyone who knew how to make the specific chip — a recovered example could probably be reverse-engineered to produce more of the chip in question.
As you can see, human knowledge is an essential ingredient for producing new technology. Machine learning and AI are certainly useful for helping humans improve on existing ideas, but so far their creative approach to discovery needs the seed of a creative human thought. It’s true that music and art can be created from certain algorithms and pseudorandomness, but true creativity in the technological or meaningful sense still eludes us.
Until then, we will still need humans to understand the process of AI and machine learning, to help it along its natural progression, in addition to explaining abstract concepts to other workers that use the power of AI. To see how your business can harness the power of AI to enhance your company and increase its effectiveness, check out WorkFusion’s AI solution.
|
Will there be roles for humans in a world that depends on AI?
| 0
|
will-there-be-roles-for-humans-in-a-world-that-depends-on-ai-1a0ab9d4e4f4
|
2018-05-10
|
2018-05-10 06:39:19
|
https://medium.com/s/story/will-there-be-roles-for-humans-in-a-world-that-depends-on-ai-1a0ab9d4e4f4
| false
| 593
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Ben Schultz
| null |
69c7b2c80f40
|
benschultz_57614
| 17
| 17
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-17
|
2017-09-17 09:22:56
|
2017-09-18
|
2017-09-18 06:56:04
| 4
| false
|
en
|
2017-09-18
|
2017-09-18 08:04:58
| 0
|
1a0d403f32f3
| 1.401887
| 1
| 0
| 0
|
Team Arvi is delighted to announce a significant milestone in its journey!
| 5
|
UPDATE: AskArvi.com has closed first round of funding
Team Arvi is delighted to announce a significant milestone in its journey!
The last 9 months have been a roller-coaster, to say the least. Today we celebrated a major milestone: Jang Capital and a few other US-based investors expressed their confidence in our business and chose to invest in AskArvi.com.
Apart from capital infusion, our investors bring significant experience in the insurance business across diverse geographies such as Germany, Canada and the USA. We are excited to have them on board and expect that this partnership will take Arvi global in the not-too-distant future.
We celebrated this milestone by cutting a cake at our office and sharing the good news with other startups in our building.
Cake cutting…
Other startup teams who shared our celebrations
Sushant Reddy talking about Arvi’s journey
We understand that there is a long long way to go and this is just a small step in our journey. We realize that we need to work twice as hard now because other people’s money is at stake.
We will continue to spend less, be innovative, work hard and enjoy every moment in our journey like family.
|
UPDATE: AskArvi.com has closed first round of funding
| 1
|
askarvi-com-has-closed-first-round-of-funding-1a0d403f32f3
|
2018-05-03
|
2018-05-03 03:42:32
|
https://medium.com/s/story/askarvi-com-has-closed-first-round-of-funding-1a0d403f32f3
| false
| 186
| null | null | null | null | null | null | null | null | null |
Startup
|
startup
|
Startup
| 331,914
|
AskArvi
|
Hi, I’m Arvi, your smart and friendly insurance assistant. Imagine having a trusted friend guiding you with the ‘what’, ‘why’ and ‘how’ of insurance.
|
ca1521ea301b
|
askarvi
| 15
| 5
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-30
|
2018-08-30 12:13:22
|
2018-08-30
|
2018-08-30 12:17:37
| 1
| false
|
en
|
2018-08-30
|
2018-08-30 12:17:37
| 20
|
1a0ea2816a43
| 0.856604
| 0
| 0
| 0
|
The ServCoin (SRV) token is an important component of the ServAdvisor ecosystem and is intended for backing all types of operations within…
| 5
|
ServAdvisor Presale starts in 10 days
The ServCoin (SRV) token is an important component of the ServAdvisor ecosystem and is intended for backing all types of operations within this ecosystem. This makes the SRV token an integral part of the ecosystem and its economy driver. The ServCoin token is set to be listed on cryptocurrency exchanges and can be easily converted into other cryptocurrencies.
Token Presale will be held from September 10th, 2018 to October 10th, 2018. 195,000,000 SRV tokens will be issued at a special price for a limited number of participants.
Token Presale will be based on the “first come, first served” principle according to Whitelist registries, therefore we cannot guarantee the availability of tokens for all interested participants. SRV tokens will be deposited to participants’ ERC-20 wallets immediately after the purchase.
For further news and announcements, please sign up and follow us on:
Official website: www.ServAdvisor.co
Official Twitter Account: https://twitter.com/ServAdvisor
Official Telegram: https://t.me/ServAdvisor
Official Medium: https://medium.com/@ServAdvisor
Official FB: https://www.facebook.com/ServAdvisor-1970283999656534/
Official GitHub: https://github.com/ServAdvisor
Official Bitcointalk.org: https://bitcointalk.org/index.php?topic=3903200
Official Reddit: https://www.reddit.com/user/ServAdvisor
#cryptonews #cryptocurrency #blockchain #ICO #Crypto #TokenSale #earlybird #bitcoin #cryptokitties #altcoin #ServAdvisor #SRV
|
ServAdvisor Presale starts in 10 days
| 0
|
servadvisor-presale-starts-in-10-days-1a0ea2816a43
|
2018-08-30
|
2018-08-30 12:17:38
|
https://medium.com/s/story/servadvisor-presale-starts-in-10-days-1a0ea2816a43
| false
| 174
| null | null | null | null | null | null | null | null | null |
Blockchain
|
blockchain
|
Blockchain
| 265,164
|
ServAdvisor
| null |
64017f48c363
|
ServAdvisor
| 32
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-22
|
2018-08-22 19:52:58
|
2018-08-22
|
2018-08-22 22:28:33
| 9
| false
|
en
|
2018-08-29
|
2018-08-29 23:57:10
| 1
|
1a0f5e183992
| 2.916981
| 0
| 0
| 0
|
In part 1, we saw how to deal with Series and started messing with DataFrames. Now, let’s see some more useful features that Pandas can…
| 5
|
Basics of Pandas for Data Analysis [part 2]
In part 1, we saw how to deal with Series and started messing with DataFrames. Now, let’s see some more useful features that Pandas can offer.
This time, we’ll generate a random dataset with NumPy’s random function and convert it to a Pandas DataFrame with 50 rows and 5 columns. Using the function head(), we can see the first 5 rows of the DataFrame. Alternatively, the function tail() returns the last 5 rows.
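A minimal sketch of that setup (the seed and column names are my own choices):

```python
import numpy as np
import pandas as pd

np.random.seed(0)  # make the random data reproducible
df = pd.DataFrame(np.random.randn(50, 5), columns=list("ABCDE"))
print(df.head())   # first 5 rows by default
print(df.tail(3))  # tail() also accepts an optional row count
```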
This is a screenshot from a Jupyter Notebook.
To get some information about the data, we can use the function info(), that gives us the number of observations (rows) per columns as well as their datatypes.
For more statistical information, we can use the function describe(). Note that this function only returns information about quantitative data and will ignore the columns that have strings.
This method already gives us the quantiles at 0.25, 0.5 and 0.75, but we can use the function quantile() to check other values.
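The three inspection calls together might look like this (column names are mine):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(50, 5), columns=list("ABCDE"))
df.info()              # row counts and dtypes per column
stats = df.describe()  # count, mean, std, min, quartiles, max
q10 = df.quantile(0.1) # any quantile, not just 0.25 / 0.5 / 0.75
```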
If we have a column in our data that we don’t care about or that is not that useful, we can use drop() to take that column out. Here, we have to pay attention to a couple of arguments. To make sure we’re trying to drop a column, we have to set the axis to 1, with 0 being the default value and the reference to rows. Also, the argument inplace by default is set to False, which makes the function return a copy of the data. If we want to drop a column and not worry about reassigning the result to a variable, we can set inplace = True.
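The difference between the two inplace settings can be seen in a short sketch (column names are mine):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(5, 3), columns=["A", "B", "C"])
df2 = df.drop("C", axis=1)           # returns a copy; df still has "C"
df.drop("C", axis=1, inplace=True)   # modifies df itself, returns None
```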
To check if there is any missing data, we can use the function isnull(). This returns a DataFrame with True where a value is missing and False where it isn’t. We can also use any() to check by column.
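A small sketch with one NaN injected on purpose:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1.0, np.nan, 3.0], "B": [4.0, 5.0, 6.0]})
mask = df.isnull()           # element-wise True where values are missing
per_col = df.isnull().any()  # one boolean per column
```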
With Pandas, we can even plot some graphs to visualize our data. The function plot() just by itself is already pretty useful. Please note that our data here is just random numbers, so the plots might not make much sense. This is just a demonstration.
This is a simple plot, but we can change the arguments of the function and generate other kinds of graphics.
For example, setting the kind argument to “hist”, “bar” or “scatter” produces histograms, bar charts and scatter plots respectively.
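A sketch of those variants, using a non-interactive matplotlib backend so it runs headless (column names are my own):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; skip this line in a notebook
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(50, 5), columns=list("ABCDE"))
ax_line = df.plot()                                  # default line plot
ax_hist = df["A"].plot(kind="hist")                  # histogram of one column
ax_bar = df.head(10).plot(kind="bar")                # bar chart of first 10 rows
ax_scatter = df.plot(kind="scatter", x="A", y="B")   # scatter plot
```

Each call returns a matplotlib Axes object, which you can customise further before showing or saving the figure.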
These are just some of the visualizations we can make with Pandas. If you want more detailed versions and other kinds of graphics, check the Pandas documentation: https://pandas.pydata.org/pandas-docs/stable/visualization.html.
These are some basic functions that will be used a lot if you keep messing with this library. Pandas is an incredible tool and can help us a lot. Thank you for reading and good luck on your path, whatever it may be!
|
Basics of Pandas for Data Analysis [part 2]
| 0
|
basics-of-pandas-for-data-analysis-part-2-1a0f5e183992
|
2018-08-29
|
2018-08-29 23:57:10
|
https://medium.com/s/story/basics-of-pandas-for-data-analysis-part-2-1a0f5e183992
| false
| 455
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Vitor Rodrigues
| null |
9c6a2f71c905
|
vitorborbarodrigues
| 1
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-03-12
|
2018-03-12 12:46:21
|
2018-03-19
|
2018-03-19 01:34:09
| 2
| false
|
en
|
2018-03-19
|
2018-03-19 01:34:09
| 2
|
1a13de4643c7
| 1.670126
| 2
| 0
| 0
|
Is split into two parts Trying to detect limbs and then redesigning YOLOv2 to work with angled bounding boxes something like this:
| 1
|
Training YOLOv2 for Limb detection
This project is split into two parts: trying to detect limbs, and then redesigning YOLOv2 to work with angled bounding boxes, something like this:
I’m going to train YOLOv2 to detect limbs and output the coordinates of the bounding boxes. Then, based on the coordinates of each bounding box, I will draw a line from the top-left to the top-right corner and down the centre line; this would represent the skeleton once a full iteration is completed.
I’ll initially train on 500 images of limbs to see if this approach will work. It should be noted that most images in my dataset contain more than one human pose.
Obtaining Ground Truth
MPII provides points marking the centre of each joint. From these we need to create bounding boxes around each joint and write the box dimensions and class number to a file. It’s time to fire up the GTX 1070 (8GB) to see how it fares.
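The conversion from a joint centre to a label line might be sketched like this, assuming a fixed box size and YOLO’s normalised “class x_center y_center width height” label format (the function name and default box size are my own):

```python
def joint_to_yolo_label(class_id, cx, cy, img_w, img_h, box_w=60, box_h=60):
    # Build a fixed-size box around the joint centre (cx, cy) and
    # normalise everything by the image dimensions, YOLO-style.
    return "{} {:.6f} {:.6f} {:.6f} {:.6f}".format(
        class_id, cx / img_w, cy / img_h, box_w / img_w, box_h / img_h)
```

One such line per joint, per image, is what the training pipeline would consume.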
Interesting Development
My computer broke… so it’s time to convert my MacPro setup to a Deep Learning rig using Google Cloud. I’ve got myself some free credit (£250) and topped it up with £70. I’m currently renting a 4-core Intel machine with 16GB of RAM and an Nvidia K80 12GB. So we’ll see how it goes…
Limb Id’s
Head
Right lower leg
Right upper leg
Left lower leg
Left upper leg
Right lower arm
Right upper arm
Left lower arm
Left upper arm
Chest
Angled Ground Truth Data
This is angled ground truth data ready to be fed into the network.
I’m now using a Python implementation of YOLOv2 built on the TensorFlow framework instead of the Darknet framework. There are several reasons for this, but mainly that it’s in Python and I understand the language better.
References
Google Cloud Computing, Hosting Services & APIs | Google Cloud Platform
Google Cloud Platform lets you build and host applications and websites, store data, and analyze data on Google's…cloud.google.com
thtrieu/darkflow
darkflow - Translate darknet to tensorflow. Load trained weights, retrain/fine-tune using tensorflow, export constant…github.com
|
Training YOLOv2 for Limb detection
| 2
|
training-yolov2-for-limb-detection-1a13de4643c7
|
2018-03-19
|
2018-03-19 06:21:07
|
https://medium.com/s/story/training-yolov2-for-limb-detection-1a13de4643c7
| false
| 341
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Richard Price-Jones
| null |
120318433047
|
richardpricejones
| 12
| 19
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
f0db56adb08d
|
2018-09-03
|
2018-09-03 18:53:27
|
2018-09-03
|
2018-09-03 23:06:01
| 2
| false
|
en
|
2018-09-06
|
2018-09-06 05:39:02
| 32
|
1a143ae5cb65
| 3.805975
| 16
| 0
| 0
|
Great day awesome people and welcome to the 28th Issue of the NLP Newsletter! I am Elvis from Belize, Editor of DAIR.ai, and a PhD…
| 5
|
Google AI Dopamine, GLUE, TransmogrifAI, Machine Learning for Health Care, NLP Interpretability, Probabilistic Thinking,…
Great day awesome people and welcome to the 28th Issue of the NLP Newsletter! I am Elvis from Belize, Editor of DAIR.ai, and a PhD researcher in AI and NLP. Here is this week’s notable NLP news: Understanding human intelligence and using it for AI progress; machine learning for healthcare recap; reinforcement learning reproducibility; state of the art machine translation; automated machine learning; earthquake aftershock locations prediction, and much more.
🔝 — my top recommendations
🌟 — my favorites
On People…
Read more on why schools are using AI to track students’ writing patterns based on what they type into their computers — link
Irene Chen gives a recap on the important topics discussed at the Machine Learning for Health Care (MLHC) conference — from privacy to model robustness to clinical notes understanding — link 🔝
Nature releases a paper describing a deep learning approach to predict earthquake aftershock locations. The model is also useful to understand the underlying physics behind the phenomena — link
Yoshua Bengio discusses the implications of disentangled representations for higher-level cognition. He also discusses how “natural language could be used as an additional hint about the abstract representations and disentangled factors which humans have discovered to explain their world”. — link 🔝
PyTorch is hosting its first developer conference where they aim to discuss research and production capabilities for their new release, PyTorch 1.0 — link
DeepMind, in collaboration with Harvard professor Wouter Kool, releases a new paper investigating how human decision-makers deploy mental effort and how these insights can open up opportunities and progress in recent artificial intelligence research — link 🌟
On Education and Research…
Google AI releases Dopamine, a TensorFlow-based framework that provides flexibility, stability, and reproducibility for new and experienced reinforcement learning researchers — link
In a new episode of the NLP Highlight show, researchers discuss the importance of establishing a benchmark framework, known as GLUE, for natural language understanding — link 🌟
The authors of a new research paper claim that information obtained from paraphrases can be used to improve multilingual machine translation — link
A new paper discusses the capability of text classifiers to recover demographic information from textual data with reasonable accuracy — link
A recent work compares the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradation. They mainly test for generalization capabilities and how weaknesses observed in DNNs can be systematically addressed using a lifelong machine learning approach — link
Find out how text generation can be done using an alternative method, based on a hidden semi-Markov model (HSMM) decoder, achieving performance similar to the standard encoder-decoder models. The proposed model provides a method which allows for more interpretability and control — something the authors claim is important in a text generation task — link
Facebook’s research team releases a field guide to applying machine learning. It provides real-world best practices and practical approaches on how to apply machine-learning capabilities to real-world problems (video series) — link
On Code and Data…
DeepMind researcher, Shakir Mohamed, releases an impressive set of slides where he introduces foundations, tricks, and algorithms needed for probabilistic thinking — link 🔝
A comprehensive list of tutorials on how to build machine learning algorithms from scratch — link
Bloomberg researcher, Yi Yang, releases code and paper for his new work on modeling convolutional filters with RNNs, which he claims naturally capture long-term dependencies and compositionality in language — link to paper | link to code
PyImageSearch just published a new tutorial on how to perform semantic segmentation using OpenCV and deep learning. The method works for both images and videos — link
On Industry…
Salesforce’s Einstein AI team releases TransmogrifAI, an AutoML library that focuses on accelerating machine learning developer productivity through automated machine learning for structured data — link
Facebook researchers come up with a state of the art method for machine translation that only relies on monolingual corpora which can be useful to deal with low-resource languages — link | paper
Here is a nice list of Machine Learning rules and best practices for deploying real-world ML-based apps provided by Google’s ML team — link 🔝
Quote of the week…
Source
Worthy Mentions…
dair.ai releases new post on the state of deep learning based natural language processing techniques — link 🔝
MIT Review releases new article explaining all the important details on the new sensational machine learning method used to transfer one person’s motion to another (i.e., Everybody Dance Now) — link
Check out a collection of inspirational AI-powered Javascript apps in this cool website. Submissions use tools such as Tensorflow.js, Magenta.js, p5.js, and others — link
Skynet this week #7: OpenAI’s big loss, DeepFake dancing, AI drawing, and more! — link
The NLP Newsletter (Issue 27): Deep INFOMAX, Image to Image Translation, FEVER, Perception Engines, QuAC, Best 150 ML Tutorials — link
Sebastian Ruder’s NLP Newsletter (Issue #31) — link
Alignment Newsletter #22 — Research agenda for AI governance — link
If you spot any errors or inaccuracies in this newsletter, please comment below. I would appreciate it if you could help me improve the newsletter by commenting with your suggestions below. Otherwise, just help me by sharing the NLP Newsletter. If you have any further questions, DM me at @omarsar0!
|
Google AI Dopamine, GLUE, TransmogrifAI, Machine Learning for Health Care, NLP Interpretability…
| 101
|
google-ai-dopamine-glue-transmogrifai-machine-learning-for-health-care-nlp-interpretability-1a143ae5cb65
|
2018-09-06
|
2018-09-06 05:39:02
|
https://medium.com/s/story/google-ai-dopamine-glue-transmogrifai-machine-learning-for-health-care-nlp-interpretability-1a143ae5cb65
| false
| 907
|
Diverse Artificial Intelligence Research & Communication
| null | null | null |
dair.ai
|
ellfae@gmail.com
|
dair-ai
|
MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,RESEARCH,TECHNOLOGY,DATA SCIENCE
|
dair_ai
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
elvis
|
Researcher and Science Communicator in Machine Learning and NLP; I discuss more about Linguistics, Emotions, NLP, and AI here: (https://twitter.com/omarsar0)
|
41338000425f
|
ibelmopan
| 1,667
| 661
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-11
|
2018-02-11 18:11:31
|
2018-02-21
|
2018-02-21 20:09:23
| 28
| false
|
en
|
2018-03-13
|
2018-03-13 00:47:48
| 16
|
1a15681e573e
| 13.583962
| 12
| 0
| 0
|
Guy walks into a computer lab and asks “Do you know your server is down?” Data scientist replies “No, but if you hum a few million bars…
| 5
|
Music AI: Loop-in-the-Human
Guy walks into a computer lab and asks “Do you know your server is down?” Data scientist replies “No, but if you hum a few million bars I’ll try to fake it.”
By now, various companies offer automated music composition. For instance, Jukedeck does so in part by using machine learning to generate original pieces based on large sets of musical examples. Google’s Magenta project emphasizes data-driven “learning by example” over rule-based music creation approaches. These and numerous other projects have demonstrated impressive, human-sounding musical results.
Rather than create entire compositions, my own project focuses on individual musical parts (say lead/bass lines within an existing composition). I want to create human-sounding music at a local scale, based on small numbers of examples. That’s because, for me, the urge to engage familiar music usually is stronger than the desire to hear something completely new. This means injecting variation into music that’s already in your head, by definition an interaction with specific, limited inputs.
To this end, I want to hook into a real-time, subjective aspect of music, but at a generic, low level. In short, I want to put actionable handles on looping rhythmic/melodic patterns, where they live, within existing music.
Note: This article is only tangentially about machine learning (of which I have only a very basic knowledge). Its place in this discussion owes to the growing trend toward neural networks for music creation, to its contrast with the approach discussed here, and to any complementary potential between those approaches.
Coherence as prediction
I buy into the view that music is largely about nothing except what it is like to hear (or play) music. It’s a direct feedback loop between surface notes and psychological affect. And so musical experience and expertise are inherently circular to an unusual degree.
So what is it like? For one thing, if the music has a beat, you’re caught in cycles of anticipation versus outcome, as the beat keeps coming around at different levels. Making sense in this context largely means forming subconscious predictions in response to the outcomes of other predictions.
These nested expectations evoke hierarchical structure, giving meaning to the surface notes and vice-versa. (The psychology of musical prediction was first explored in Leonard Meyer’s 1956 book Emotion and Meaning in Music.)
Harnessing innate musical sense
Elizabeth Margulis points out in her book On Repeat: How Music Plays the Mind that you typically recall any familiar melody only by running it through your head. This implies that music isn’t something you factually remember so much as something you regenerate, even if only in your own head. Maybe it’s a matter of re-triggering hierarchies of nested predictions.
In this sense everyone is already a performer; the sense of surface-versus-structure that is familiar to musicians also exists within listeners more so than they probably realize. The difference is that musicians learn to manipulate that structure in order to create new surfaces (just as most people do with speech). That is what the building blocks aim to partially streamline.
Deepening repetition
If you like a piece of music you’ll typically listen to it many times, revisiting an increasingly familiar inner landscape. Much music has passages composed of recurring elements such as melodies, riffs, bass lines, and beats, and this is a level at which material could vary and morph without losing all identity and context.
Coord music-morphing app, morphing between chosen rhythms/melodies in real time.
But while almost everyone understands music, only practiced musicians typically manipulate it at the note level. Here I’ll describe an attempt to narrow the gap between making sense of music and improvising music, doing so via algorithms that play a coprocessor role, augmenting human musicality rather than replacing it.
Those algorithms depend on building blocks of rhythmic coherence. The building blocks are not hand-crafted constructs; they are rhythmic patterns that result from generative operations. Number theory provides a crystallization of those operations, making immediate comparison and manipulation of actual rhythms (and by extension, melodies) intuitive and computationally efficient.
(Pitches are handled as rhythmic strata within this approach, though I’ll leave that out here for the sake of brevity, such as it is.)
Coprocessor, not automaton
I want to emphasize that this approach doesn’t “compose music” wholesale, like the projects cited at the top. Rather it takes on, in real time, recursive rhythmic calculations that musicians (likely) perform subconsciously only after years of practice. Organizing notes hierarchically in time removes a significant hurdle to note-level music improvisation.
This algorithmic intervention takes place in strictly local fashion, leaving the rest of the composition and production in human hands, or under the control of other processes. In the latter case, the building blocks might offer meaningfully factored note data to other algorithms that are in play.
Music with moving parts
A dream scenario would be some music distribution format where recording artists unfreeze certain parts (say, bass or synth lines), enabling variation during playback. But for now, the algorithms discussed here are embodied in custom macOS/iOS apps that control Ableton Live sets. You steer variations on selected tracks while the rest of the mix continues to loop undisturbed.
Coord linked to Ableton Live via OSC and Max-for-Live
As a listener this means I can listen to a track I like for an extended time without tedium setting in; I can pivot into melodies and beats that are recognizably related in non-obvious ways. As a (decidedly amateur) composer I can explore ideas on the fly (perhaps overcome writer’s block) and create pieces that could evolve later, even morphing with other pieces.
Swift-based macOS app that implements the building blocks and algorithms
Generative analysis and composition
The rest of this article will first give an informal overview of the intersection between music theory and number theory at work in this approach. Then I’ll describe some practical applications.
Most technical details will be glossed over to some degree. Full details can be found in the paper “A self-similar map of rhythmic components” (which currently has free access from this page). Links to related conference papers, with detailed algorithms, are here.
Coming to terms
Familiarity with the particular usage here of a few musical terms is helpful:
Meter will refer to nested pulses formed by recursive subdivision of the time line. That is, there are two half notes per whole note, two quarter notes per half note, and so on, with the pulses at each metrical level alternating between weak and strong beats. (Only binary subdivision is in scope here, not expressive timing, triplets, or triple meters, etc.)
An anticipation is a note that occurs on a weak beat at some metrical level. It “anticipates” the stronger beat that will immediately follow at that level.
Syncopation occurs whenever anticipations are not followed by the anticipated notes (on the subsequent beats).
Loops here mean repeated patterns that are one, two, or four bars of 4/4 long. (Loops are ubiquitous in EDM but are also fairly common in many other genres.)
Variation as (already) used in this discussion refers not only to formal theme and variations, but more broadly to any melody or beat that can recognizably take the place of another (even in a sort of musical opposition).
Looping anticipations.
Above: anticipations plotted as beat strength versus time step.
On the math side, binomial coefficients, Pascal’s triangle, and the Sierpinski gasket will appear in connection to the formation of rhythmic patterns.
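Before moving on, the binary-meter convention from the terms above can be made concrete: in a bar of 2ⁿ equal steps, a step’s metrical strength is simply the number of trailing zero bits in its index, with the downbeat strongest. A minimal Python sketch (the function name is my own, not from the project):

```python
def beat_strength(step, levels):
    """Metrical strength of a time step in a bar of 2**levels steps.

    The downbeat (step 0) is strongest; otherwise strength equals the
    number of trailing zero bits in the step index, so odd-numbered
    steps are the weakest (most anticipatory) beats.
    """
    if step == 0:
        return levels
    # (step & -step) isolates the lowest set bit; its position is the
    # count of trailing zeros.
    return (step & -step).bit_length() - 1

# In a bar of 8 eighth-note steps (levels=3):
# step 0 -> 3 (downbeat), step 4 -> 2 (half-note level),
# steps 2 and 6 -> 1, odd steps -> 0
```

This mirrors the alternation of weak and strong beats at each level of recursive subdivision described under “Meter” above.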
Co-processing versus pre-computation
Attempting to manage complexity in music creation is not new. Take the piano keyboard: a great deal of pre-computation is embedded into an arrangement that works out log/linear and ratio/proximity pitch relationships at two levels (diatonic and pentatonic). Almost none of that is available to, say, a violin player, so it’s no surprise that piano players can more easily play polyphonic and harmonic textures.
Rhythm is also built on log/linear and ratio/proximity relationships, given the underlying meter’s recursive structure. But these can’t be made ready-to-hand by a static layout because the time axis itself is in play. Computers seem an obvious tool for getting handles on musical time, assuming that the points in time can be grouped in as meaningful a fashion as pitches are on the piano. That is what the building blocks under discussion aim to do.
Algorithmic music analysis and generation
Here I’ll briefly run through how and why the rhythmic building blocks are generated. First I’ll describe what they encapsulate.
Anticipation and repetition
David Huron observed in his 2006 book Sweet Anticipation: Music and the Psychology of Expectation that when a note occurs on a weak beat (at some metrical level) you’ll tend to expect a note on the subsequent strong beat. This is simply the gravitational pull of musical meter.
That anticipated note may or may not occur. But in any case, if the pattern is repeated, you’ll expect whatever happened to keep happening, even if that (somewhat paradoxically) means “expecting” surprise. (Fred Lerdahl and Ray Jackendoff noted the link between parallelism and coherence in their 1983 book A Generative Theory of Tonal Music.)
Nested levels of potential anticipation, with the downbeat in the final position.
And so, two psychological constraints will be the organizing principles for all that follows.
A note that falls on a weak beat raises the expectation of a following note on the strong beat.
The outcome of the expectation defined in (1) raises expectation that such outcomes will recur.
In other words, anticipation and repetition generate predictions about, and because of, each other. This is where rhythmic coherence is bootstrapped.
Syncopation and elaboration
As sketched earlier, the constraints operate at multiple levels simultaneously, forming hierarchies that characterize rhythms. Those hierarchies are our building blocks; we can get the whole set by formulating the above psychological constraints as generative operations, where each building block is derived from a simpler one.
Take as the first building block a looping rhythm that consists of a single note attack on the first beat. Each of the other building blocks is derived by recursively applying exactly one of the following options at each rhythmic level:
Syncopate by shifting all attacks one beat earlier in time.
Elaborate by combining the above syncopation with the original attacks.
Do nothing.
The result is a tree of building blocks that together account for every evolutionary outcome of syncopation or elaboration operations.
Left: looping syncopation at the quarter note level. Center: looping elaboration at the same level. Right: Elaboration at the quarter note level combined with syncopation at the 8th note level.
All possible building blocks for two metrical levels. (That is, all combinations of elaboration/syncopation/neither at the quarter note and 8th note levels.)
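As a rough sketch, a looping rhythm can be modeled as a set of attack positions on a grid of time steps, and the two operations then read as follows (names and representation are my own illustration, not the project’s code):

```python
def syncopate(attacks, step, loop):
    # Shift every attack one beat earlier at the given metrical level,
    # wrapping around the loop. `step` is the beat duration at that
    # level (e.g. 2 eighth-note steps for the quarter-note level).
    return {(t - step) % loop for t in attacks}

def elaborate(attacks, step, loop):
    # Keep the original attacks and add their syncopated copies.
    return attacks | syncopate(attacks, step, loop)

# Start from the first building block: a single attack on the downbeat
# of an 8-step (one-bar) loop.
seed = {0}
print(sorted(syncopate(seed, 2, 8)))  # [6] -- quarter-note syncopation
print(sorted(elaborate(seed, 2, 8)))  # [0, 6] -- quarter-note elaboration
```

Applying one of these operations (or neither) per metrical level, recursively, enumerates the tree of building blocks described above.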
Elaboration mapped to Pascal’s triangle
Something surprising (well, to me) happens when you set about encoding combinations of the above operations. Say you have three metrical levels, and you generate a building block by applying the elaboration operation at the first and third levels, encoded as the vector 101 (that is, a binary number with three bits, one for each metrical level, containing a 1 at each level where elaboration took place).
The rhythm evolves like this:
Elaboration at half note, then 8th note levels has the same result as vice-versa.
Now consider Pascal’s triangle, an arrangement of binomial coefficients, in particular the odd coefficients (in bold). The rhythm encoded by the binary representation of 5 is found on the fifth row (counting from zero).
Pascal’s triangle with odd entries in bold, tilted to line up with elaboration-based building blocks.
Coincidence? No. It turns out that any encoding of elaborations into a binary number indicates the row of Pascal’s triangle where the odd entries correspond exactly to the resulting rhythm. (This is because, according to Lucas’s theorem, the binary digits of successive odd binomial coefficients form patterns that correspond exactly to combinations of elaborations at distinct metrical levels.)
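This is easy to check in code: by Lucas’s theorem, C(n, k) is odd exactly when the binary digits of k form a subset of those of n, so the odd entries of row n can be computed with a single bitwise test (the function name is mine):

```python
def odd_row(n):
    # 1 where C(n, k) is odd, 0 where it is even, for k = 0..n.
    # By Lucas's theorem, C(n, k) is odd iff every binary digit of k
    # is also set in n, i.e. k & ~n == 0.
    return [1 if k & ~n == 0 else 0 for k in range(n + 1)]

# Row 5 of Pascal's triangle is 1 5 10 10 5 1; its odd entries give
# the rhythm for elaboration code 101 (binary 5):
print(odd_row(5))  # [1, 1, 0, 0, 1, 1]
```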
Perhaps-syncopated elaboration mapped to the Sierpinski gasket
Getting the entire set of building blocks, including those incorporating syncopation operations, requires going a step further: shifting each elaboration-generated rhythm one beat at each combination of metrical levels where elaboration did not occur.
Since each combination of elaborations and syncopations is encoded by a pair of binary numbers that share no 1s in the same binary place (because at most one operation can occur at each metrical level), each such pair of binary numbers can be combined into a single ternary number, distinguishing the 1s in the syncopation encoding by converting them to 2s.
The generator is the encoded elaborations, the offset is the encoded syncopations.
Using those ternary numbers as addresses into the fractal known as the Sierpinski gasket (as shown below), we now have a map of all potential building blocks, laid out visually in terms of elaborations and syncopations. (Mathematically, this corresponds to patterns formed by binary carries in sums of binomial coefficients, as established by Kummer’s theorem, which is related to Lucas’s theorem.)
Sierpinski gasket addresses mapped to building block rhythms.
Several characteristics and comparisons can be read off these integer encodings, without examining, let alone generating, the rhythm itself. The ternary digits tell you how close two building blocks are in terms of how they evolved (that is, how many times they shared the same operation at the same metrical level).
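The pairing can be sketched directly: because the two masks are disjoint, each metrical level collapses to a single ternary digit (0 = neither operation, 1 = elaboration, 2 = syncopation). Function and variable names here are my own illustration:

```python
def ternary_address(elab, sync, levels):
    # elab, sync: bit masks over metrical levels; at most one operation
    # may occur per level, so the masks must share no 1s.
    assert elab & sync == 0, "at most one operation per metrical level"
    digits = []
    for level in range(levels):
        if elab >> level & 1:
            digits.append(1)   # elaboration at this level
        elif sync >> level & 1:
            digits.append(2)   # syncopation at this level
        else:
            digits.append(0)   # neither
    return digits  # least-significant metrical level first

# Elaboration at levels 0 and 2, syncopation at level 1:
print(ternary_address(0b101, 0b010, 3))  # [1, 2, 1]
```

Read as a ternary number, this list is the address of the building block on the Sierpinski gasket described above.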
Zooming into deeper metrical levels. Each location on each Sierpinski gasket to the left corresponds to a (reversed) building block rhythm on the grids to the right.
Building block applications
Now that we have these encapsulations of rhythmic expectation at hand, how do we use them? It’s important to note that the building blocks are not exemplars of “good” rhythms; in fact, the most compelling rhythms are often those that are the least parsimonious (for instance bossa nova, which, as detailed here, requires a separate building block for each note).
In practical terms, the first step is to parse the rhythm of a given melody or beat into these building blocks. This is computationally inexpensive because the integer mapping spares the need to perform the actual derivations; it’s just a matter of scanning out the known patterns.
The current set of apps is called Coord. Some details and demos are at coord.fm.
Varying rhythms
In the simplest case, one or more digits of the ternary number that encodes a particular building block can be altered, thereby switching the operation at the respective metrical levels. The building block will remain invariant with respect to the other metrical levels.
Geometry as musical instrument
A GUI can leverage the self-similarity of the overall set of building blocks by collectively shifting digits in the ternary encodings. This allows natural sounding variations that nevertheless might be very different on the surface, a parallel, hierarchical sort of editing that would be difficult to imagine otherwise.
Rhythms being manipulated hierarchically, rather than note-by-note.
A key aspect of this approach becomes visible in such interactions: the fact that each attack has its own potential to become a rest, and vice-versa. The attack potential equals the combined hierarchical weight of the building blocks at the given time point (again, details are in the papers linked above). The notes act somewhat like rocks in a stream, simultaneously affecting the flow and submerged, or not, by that flow.
Potential note attacks above and below the expectancy threshold for actually being heard.
https://youtu.be/Ypb5kUMxb8g
Syncopation via genetic algorithm
This suggests a straightforward means of boosting syncopation without destroying recognizability: a genetic algorithm (GA) with a fitness function that rewards rhythms with a low degree of parsimony relative to the building blocks but a high degree of similarity to the original.
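For the curious, here is a toy sketch of such a GA. Everything in it (function names, the mutation scheme, the stand-in `parsimony` callable) is my own illustration, not the project’s code; the real fitness would use the building-block parse from the linked papers:

```python
import random

def evolve(original, parsimony, generations=200, pop=50, rate=0.1):
    """Toy GA: maximize similarity to `original` while minimizing a
    supplied parsimony measure over candidate 0/1 rhythms."""
    n = len(original)

    def fitness(r):
        similarity = sum(a == b for a, b in zip(r, original)) / n
        return similarity - parsimony(r)

    def mutate(r):
        # Flip each 0/1 step with probability `rate`.
        return [b ^ (random.random() < rate) for b in r]

    population = [mutate(list(original)) for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]           # keep the fittest half
        population = survivors + [mutate(r) for r in survivors]
    return max(population, key=fitness)
```

With `parsimony` returning 0, the GA simply converges back toward the original; a parsimony term derived from the building-block parse would instead pull candidates toward more syncopated relatives of it.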
Morphing between rhythms
Selected, weighted, input melodies being morphed into a new melody.
The building blocks afford a powerful capability: morphing intuitively between rhythms. In short, the attack potentials from two or three weighted rhythms are combined, producing a rhythm that interpolates in nonlinear fashion between those inputs.
Attack potentials are calculated for three input rhythms
The attack potentials are summed to determine the new rhythm
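The combination step can be sketched as a weighted sum followed by a threshold. The numbers below are invented for illustration; in the real system the per-step attack potentials come from the building-block parse:

```python
def morph(potentials, weights, threshold=1.0):
    """Interpolate between rhythms by mixing their attack potentials.

    potentials: one list of per-step attack potentials per input rhythm
    weights:    one mixing weight per input rhythm
    Returns a 0/1 attack pattern: steps whose combined potential clears
    the threshold become attacks.
    """
    steps = range(len(potentials[0]))
    mixed = [sum(w * p[i] for w, p in zip(weights, potentials)) for i in steps]
    return [1 if m >= threshold else 0 for m in mixed]

a = [3, 0, 1, 0, 2, 0, 1, 0]   # hypothetical on-the-beat rhythm
b = [0, 2, 0, 2, 0, 2, 0, 2]   # hypothetical off-the-beat rhythm
print(morph([a, b], [0.5, 0.5]))  # a midway blend of the two
```

Dragging a pointer between the source rhythms, as in the GUI described below, amounts to smoothly varying the `weights` vector.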
Landscape of rhythmic variations
By positioning each source rhythm on a plane in a GUI, you can move the pointer between those locations in order to specify how much weight should be given to each source rhythm in the above scheme. As you slowly drag the pointer from one input to another you hear the rhythm morph in musically coherent fashion between the two.
Navigating the melodic morphing landscape.
Future directions
One offline possibility is to use the building blocks to help determine which rhythms are most like each other, where there is kinship that sounds natural but which might not be apparent on the surface.
Self-organized map
Supplying the same measure used in the morphing scheme above to the evaluation function of a self-organized map allows a meaningful proximity to be established among a set of rhythms, in terms of the similarities between the evolutionary pathways taken by those rhythms.
A self-organized map (unsupervised neural net) for determining rhythmic similarity.
https://youtu.be/mLyglwG3SY8
Neural nets
Since the building blocks themselves can be expressed simply as integers, it might make sense to use those as the raw data for a neural net. This would let the network spend its capacity discovering new relationships rather than relearning what is already known. As noted above, I lack expertise here, but have nevertheless been thinking through the following.
Perhaps I could present attacks to a recurrent neural net (likely an LSTM), each encoded by a vector that indicates which building blocks include that attack (a binary vector with length 3ⁿ for n metrical levels). The aim is to have the net learn the attack patterns against the backdrop of the collective expectations associated with each pattern.
Alternatively, I could simply encode each attack by its binary representation. This would be a more compact encoding (with vector length 2ⁿ instead of 3ⁿ), but it would rely on the net to learn the superposed patterns already factored into the building blocks.
(More realistically, the hope is to work on something like the above with more knowledgeable collaborators.)
Is source code available?
Not currently; at this point this is a music project that involves programming rather than vice-versa. My initial target is concrete collaboration focused on the theory, algorithms and their integration into some platform where the musical engagement is fully explored and tested. (My unfortunate experience has been that distributing code or binaries produces little engagement or feedback that leads in such a direction.)
Open source could be a real possibility if the project reaches a stable point, but that will first require something like a small team effort. Meanwhile the relevant math and algorithms are available via the papers linked above, and I’m happy to engage with anyone coding those.
Conclusion
I’ve described a style of music analysis and generation that could form a substrate of music creation in various settings. Although the original motivation here was real-time interaction, the low-level generative analysis discussed here might complement various approaches such as machine learning or higher-level grammars.
Ecosystems of rhythms and melodies
Augmenting the potential for variation could enable music that has a life of its own after it leaves the composer’s hands, where the line between listener and composer becomes blurrier, where ecosystems of musical elements might vary, hybridize, and evolve, and where adaptive music can be more intuitively woven into interactions ranging from performance to location-based music.
I welcome feedback, suggestions, questions, and, in particular, contact from those interested in comparing notes.
|
Music AI: Loop-in-the-Human
| 138
|
music-ai-loop-in-the-human-1a15681e573e
|
2018-05-18
|
2018-05-18 16:04:22
|
https://medium.com/s/story/music-ai-loop-in-the-human-1a15681e573e
| false
| 3,030
| null | null | null | null | null | null | null | null | null |
Music Composition
|
music-composition
|
Music Composition
| 166
|
Jay Hardesty
|
lives in Zurich; his generative music project is http://coord.fm.
|
4e5077d69df8
|
jayhardesty
| 16
| 30
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-20
|
2017-11-20 05:38:25
|
2017-12-01
|
2017-12-01 00:30:31
| 1
| false
|
en
|
2017-12-01
|
2017-12-01 00:30:31
| 12
|
1a157cf5e21a
| 3.313208
| 2
| 0
| 0
|
Read the full story here.
| 5
|
The Role of AI Technologies in Humanizing Digital Banking
Read the full story here.
Digital banking has not only gone mainstream but in many cases, it has become the only point of contact between banks and their customers. According to a May 2016 report on the tablet’s shifting role in mobile banking, half of the US adult population now banks using smartphones and tablets (an increase of 29 million people from the year before). While financial services delivered through digital channels have brought the convenience of anywhere, anytime banking to consumers, what is the cost? And, are banks living up to the expectations of their customers?
One of the main challenges the financial industry faces is the risk of diminished customer relationships as we make the shift to digital. Let’s face it, online banking sites can be pretty impersonal. For many years, face-to-face interactions were the norm as customers would develop a relationship with a bank associate who knew about their financial situation and would help them reach their financial goals. The traditional bank branch is now used by just 32% of customers, meaning the opportunities to create personal relationships with customers have greatly diminished.
As an industry, this is where the application of artificial intelligence (AI) and machine learning (ML) can help turn the tide. The promise of AI-powered engagement tools is to provide customers with applications that understand what they want and actually help them do something about it. Imagine if your bank’s passive, impersonal digital banking portal turned into a virtual financial assistant that is singularly focused on helping you make the best of your financial health. An assistant that never sleeps; constantly running in the background, monitoring your overall financial data in real time and pushing out actionable insights about your finances and opportunities to take smart action. It helps you take steps to improve your financial health by recommending easy ways to save more, spend smartly and make plans for your financial future. Your AI virtual assistant would go everywhere with you, and you could interact with it on your cell phone by chat or through your connected home devices by voice. Finally, your assistant would not only monitor your historical data, but it could help to predict upcoming financial events and get you organized so you can deal with your finances proactively.
Does that sound like the stuff of sci-fi? Well, it’s not. The technology is improving by leaps and bounds. There are a number of companies that are working on exactly that type of experience. We may still be a year or two off, but when we get there, these technological advancements will lead to a democratization of financial health that can put banks squarely in the role of customer advocates. That’s the power of AI when done right. In the very near future, banks will be able to provide contextually relevant insights including peer benchmarks from similar customer segments, personalized specifically for each individual. They’ll be able to mine their network data for good financial behavior and predictors of success that can be shared with individual customers. Banks can become anytime, anywhere trusted advisors and advocates for their customers.
Smart financial institutions will also learn from the interactions that customers have with their AI virtual assistants. A big part of financial health is ensuring that customers take advantage of financial products that are optimized to their specific needs. Mining the interactions that customers have with their financial assistants can help banks offer the right type of financial products and services to meet customer needs.
Over time, experiences like chat or voice-enabled virtual assistants can provide a humanizing touch to the banking experience. Now is a good time to start experimenting with the technology and figuring out how you want to deploy it. As with all new technology, it’s best to start out slow and grow over time. Here are four considerations to keep in mind as you develop your initial pilots:
Start with specific use cases and remember to set expectations up front
When it comes to applications like voice assistants, it’s essential to set expectations upfront, so the user is aware of the parameters for what can and cannot be addressed. In early phases, voice commands were mostly geared towards very specific use cases.
For example, Apple faced a structural problem in its early days with Siri — no matter how well the voice recognition element worked, there were still only 20 things that users could actually ask. Apple gave consumers the impression that they could ask anything and everything and in turn, users often got a computerized shrug. Flash forward a few years and Amazon’s Alexa tells a different story. The company proactively communicated what the voice assistant can and cannot answer. It also had the benefit of a much larger data set of behaviors to index from and react to. In essence, the system started off “smarter.”
Continue reading here.
|
The Role of AI Technologies in Humanizing Digital Banking
| 2
|
the-role-of-ai-technologies-in-humanizing-digital-banking-1a157cf5e21a
|
2018-03-20
|
2018-03-20 11:20:48
|
https://medium.com/s/story/the-role-of-ai-technologies-in-humanizing-digital-banking-1a157cf5e21a
| false
| 825
| null | null | null | null | null | null | null | null | null |
Banking
|
banking
|
Banking
| 14,612
|
MEDICI
|
MEDICI (formerly Let’s Talk Payments) accelerates global impact for all members of the FinTech ecosystem through memberships, research, advisory, and insights.
|
c77b990407ec
|
gomedici
| 5,664
| 442
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-13
|
2018-05-13 09:01:50
|
2018-05-13
|
2018-05-13 13:51:19
| 4
| false
|
en
|
2018-05-13
|
2018-05-13 19:52:20
| 1
|
1a16279a09da
| 3.288679
| 3
| 0
| 0
|
How data can help restaurants become more profitable?
| 5
|
Data Analytics: Transforming Restaurants
How data can help restaurants become more profitable?
Imagine walking into a restaurant where the staff already knows what you like. The menu is so well designed that it takes you no more than a few seconds to spot the dish that is going to satisfy you. You get discounts on your favourite dishes just when you need them. Wouldn’t you love all of that?
Well, these are not coincidences, and the staff isn’t Sherlock Holmes, able to judge you by your appearance. Today, technology reaches every dimension of our lives. The arrival of portable devices at the table, online bookings, social media, and new payment techniques has put technology in almost everyone’s hands. Alongside that reach, automation has equipped businesses with enormous amounts of data to simplify our lives.
Artificial Intelligence is the term every tech giant uses to prove it is one step ahead in the race, and why not? If data is the fuel, AI is the high-tech vehicle of automation. Using the most advanced technology, AI converts data into information and information into insights, enabling people to make decisions. But contrary to what most people think, it is not confined to Siri or Alexa; it has penetrated almost every domain, from telecom to logistics to your food.
Things you never want to hear yourself say
Our morning business is booming. Who knows why?
Neapolitan pizza is our top-grossing item; I guess people just like the name.
Business has dropped drastically over the last month, and I have no idea what to do.
These are the kinds of blind spots that data would quickly expose, explain, and help you fix.
Surprised? Don’t be. In the face of extreme competition, restaurants, especially quick service restaurants (QSR) are increasingly turning to data analytics to reduce internal costs, increase revenue, and pump up profits. It’s an industry where time is money, where margins are small but volumes, large.
Okay, But where does the data come from? Customers leave behind an incomprehensible amount of data every time they visit your restaurant. Making sense of that data and reacting in real time are the two things that will keep restaurants one-step ahead of their customers (and competition) in the present-day customer-centric world.
At this point, you might be thinking, “Well, data is all well and good, but I’ve worked in the restaurant industry for a long time. I trust my gut — and I don’t think data is going to be more knowledgeable than me”. Let’s investigate that further.
You feel you’ve done just about everything. After all of these experiments, you know what your customers like or dislike, and nobody is disputing that. Data can’t run a restaurant for you, and data can’t replace in-the-trenches experience. But let’s be honest for a moment: it’s possible that some of your intuitions aren’t perfect. Let’s say you feel that you know:
What types of dishes your customers like best
Which servers are bringing in the biggest orders consistently
What new promotions are likely to sell
Data can serve as a way to “check yourself” and get to the bottom of what truly keeps your business moving. Once you’ve been using your restaurant analytics for a while, you’ll be able to know things like:
Busiest days of the week and hours of the day, so you can plan better
Star, mediocre and flop items of the menu
Insights about repeat customers (what they usually eat, how much they spend, etc.)
Real time foot-fall predictions
Understanding market sentiment to know your hits & misses
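As a sketch of how the first insight above (“busiest hours”) could be computed, here is a minimal aggregation over order timestamps; the data, its format, and the variable names are made up purely for illustration:

```python
from collections import Counter
from datetime import datetime

# Hypothetical order timestamps, e.g. exported from a POS system
orders = [
    "2018-05-11 12:15", "2018-05-11 12:40", "2018-05-11 13:05",
    "2018-05-11 19:30", "2018-05-12 12:20", "2018-05-12 19:45",
    "2018-05-12 20:10", "2018-05-13 12:05",
]

# Count orders per hour of day to find peak service windows
by_hour = Counter(datetime.strptime(t, "%Y-%m-%d %H:%M").hour for t in orders)
busiest_hour, n_orders = by_hour.most_common(1)[0]
print(busiest_hour, n_orders)  # 12 4
```

The same grouping idea extends to days of the week, menu items, or server IDs; real restaurant analytics layers forecasting and sentiment analysis on top of aggregations like this one.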
Here’s the thing: data isn’t meant to replace anything. Instead, restaurant analytics are an addition to your already capable business intelligence. Let data analytics help you make customers happy, personalise their experience and offers, and most importantly generate more revenue while making your life easier.
We at Grill AI, an artificial-intelligence-based restaurant consultancy run by a bunch of food & data enthusiasts, help restaurants understand their own data using our state-of-the-art algorithms and make informed decisions to drive sales.
|
Data Analytics: Transforming Restaurants
| 84
|
data-analytics-transforming-modern-restaurants-1a16279a09da
|
2018-05-14
|
2018-05-14 05:42:02
|
https://medium.com/s/story/data-analytics-transforming-modern-restaurants-1a16279a09da
| false
| 686
| null | null | null | null | null | null | null | null | null |
Data Analytics
|
data-analytics
|
Data Analytics
| 1,310
|
Grill AI
| null |
a48eb9d63789
|
contactgrillai
| 3
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-27
|
2018-08-27 02:14:04
|
2018-08-27
|
2018-08-27 02:14:20
| 0
| false
|
en
|
2018-08-27
|
2018-08-27 02:14:20
| 1
|
1a174c38e80f
| 2.049057
| 0
| 0
| 0
|
[Download] [PDF] Tensorflow for Deep Learning: From Linear Regression to Reinforcement Learning pdf By Bharath Ramsundar
Link…
| 1
|
READ PDF Online Designing Effective Instruction By Gary R. Morrison BOOK ONLINE #pdf
[Download] [PDF] Tensorflow for Deep Learning: From Linear Regression to Reinforcement Learning pdf By Bharath Ramsundar
Link https://collectionbooks.ebookoffer.us/?q=Tensorflow+for+Deep+Learning%3A+From+Linear+Regression+to+Reinforcement+Learning
|
READ PDF Online Designing Effective Instruction By Gary R. Morrison BOOK ONLINE #pdf
| 0
|
read-pdf-online-designing-effective-instruction-by-gary-r-morrison-book-online-pdf-1a174c38e80f
|
2018-08-27
|
2018-08-27 02:14:20
|
https://medium.com/s/story/read-pdf-online-designing-effective-instruction-by-gary-r-morrison-book-online-pdf-1a174c38e80f
| false
| 543
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
schwertfeger
| null |
d236915400be
|
schwertfeger
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
e7396ce8be3d
|
2018-09-06
|
2018-09-06 21:57:20
|
2018-09-06
|
2018-09-06 22:19:40
| 7
| false
|
en
|
2018-09-07
|
2018-09-07 04:17:05
| 1
|
1a175786f95b
| 5.231132
| 2
| 0
| 0
|
https://artificial-intelligence.insuranceciooutlook.com/cxoinsights/can-ai-help-blockchain-adoption-nid-302.html
| 5
|
Can AI fuel Blockchain Adoption? — Reprint from CIO Outlook Magazine
https://artificial-intelligence.insuranceciooutlook.com/cxoinsights/can-ai-help-blockchain-adoption-nid-302.html
Funding has never been easier, with $4B raised by Block.one in less than a year and $33M obtained in 30 seconds by SingularityNet. Over $11B in investments transpired in 2018, with 80% year-over-year growth in just the first half of the year. Such is the growth trajectory of the Initial Coin Offering (ICO) cryptocurrency world.
Fig 1 — ICO Annual growth as of May 2018
In comparison, the entire 2017 VC investments were about $70B (down by 4% YoY) and Uber took nearly 9 years to raise over $20B. If this ICO-mania (check out TataTu & EOS valuation) is making you anxious, you are not alone and regulators are working to clarify token rules. Irrespective of crypto prospects, blockchain technology (which became popular due to Satoshi Nakamoto’s Oct 2008 Bitcoin paper) will impact data integrity and transparency. Blockchain will change the way companies think about Data collection, privacy, and analysis which are critical components of a robust Business Intelligence Strategy.
So, what is a blockchain?
1. Assume we have a group of self-selected users (in permissionless blockchain where anyone can participate) or pre-selected invite-only users with specific roles (permissioned blockchain with private access) who execute blockchain software on their special computers (a.k.a nodes). This group of nodes constitutes the blockchain network.
2. Now let’s imagine a constantly flowing supply of empty cardboard boxes representing blocks (the Bitcoin network uses a 1 MB block size).
Fig 2 — A Block
3. Users who want to store their transactions permanently in this box send (broadcast) their transactions such as financial transfers, barcodes, etc. to all (or some of the pre-selected in permissioned blockchain) the nodes in this blockchain network. The blockchain network will verify each transaction before it gets into the box. The software checks for a valid identity and the existence of sufficient account balance to transfer money.
Fig 3 — A verified transaction
4. Once the box is full of verified transactions (when the 1 MB limit is reached), it is time to close the box, name it, and seal it.
Fig 4 — A filled block with verified transactions
5. The naming rights are granted to whoever finds the name first by solving a mathematical puzzle to produce a hashcode (an alphanumeric string) with a leading number of zeros. This process of naming is called MINING, and the miner who solves the puzzle first is rewarded. In the Bitcoin network, the mining reward is currently set at 12.5 bitcoins, and it is halved every 210,000 blocks.
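As an illustrative sketch of the naming puzzle (not real Bitcoin mining, which double-hashes a binary block header against a far harder target), the search amounts to trying nonces until the SHA-256 hash begins with the required number of zeros:

```python
import hashlib

def mine(block_data, difficulty=4):
    """Search nonces until sha256(block_data + nonce) has `difficulty`
    leading hex zeros; return the winning nonce and its hash."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("alice->bob: 5 coins")
print(nonce, digest)  # the winning nonce and a hash with 4 leading zeros
```

Raising `difficulty` by one multiplies the expected work by 16, which is how real networks tune how long the puzzle takes to solve.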
Fig 5 — A completed & named block with a hashcode
The formal definition for blockchain is “A digital ledger in which transactions are validated by a decentralized network of nodes (computers), chain linked and recorded chronologically.”
Fig 6 — A simplified blockchain prop
Having described the blockchain let us examine why we need it.
a) Reduces Cost by eliminating middlemen: If you want to transfer money directly between users you can use blockchain and eliminate the 3rd party entities such as banks or card companies. This concept was implemented in the Syrian refugee camp in Jordan where refugee identities were captured in a blockchain along with each person’s allocated amount of United Nation (UN) issued cryptocurrency. The refugees shopped in camp stores with their crypto balances and hence UN avoided huge transactional fees to 3rd party banks for every transaction.
b) Difficult to hack: Since the majority of nodes in the network have knowledge of all the verified transactions in the blockchain, it is not easy to modify transactions as the data discrepancy between nodes will become apparent if the information is not the same in the majority of the nodes.
c) Multi-user verification: The transactions in the block are verified by anonymous users other than the creator (centralized authority) of the transaction, to avoid owner bias. This leads to fewer errors and less fraud, as multiple verifiers observe each transaction when it enters the blockchain. Verification can be done using a consensus method with a few nodes approving transactions (less compute intensive) or using proof of work (more compute intensive), where a majority of nodes verify each transaction.
d) Append mode: If you need to edit a verified transaction in an earlier block, you cannot alter the block without modifying every verified transaction that was added later. Therefore, you have to add a new transaction with updates as opposed to modifying the earlier transaction in the block directly.
While blockchain is already used in areas such as finance, payments, and supply-chain provenance, its growth can also be influenced by technology investments in Artificial Intelligence. In 2017, Artificial Intelligence (AI) investments grew by 141% YoY to ~$15.4B. The US was the leading AI investor in the summer of 2017, with a 75% equity funding share in AI companies. By the end of 2017, China was the AI investment leader with a 48% equity funding share, with the US following at 38%. The Chinese government supports investments in deep learning, vision chips, and security using video surveillance & visual recognition. With the controversial “sharp eye” project, the government is experimenting with calculating a person’s trustworthiness as a social credit score by monitoring video surveillance and facial recognition in various situations. Startup Megvii Face++ already has access to 1.3 billion face data records on Chinese citizens.
Additionally, AI investments in the sectors listed below will fuel blockchain adoption as a trustworthy data collection option.
a. Automobile technology (including Autonomous Vehicles, AVs)
b. Facial Recognition & Security
c. Healthcare diagnosis
d. IoT data
Fig 7 — AI Growth areas
Blockchain can support a robust verification process during the data collection phase in these AI applications, providing a highly coveted data and feature set to the AI engine. This verification phase may encourage consumers or enterprises to share data pseudonymously in the blockchain to benefit from the reduced cost of transactions or improved safety. For example, if a blockchain is used to capture an Autonomous Vehicle’s check-in/checkout locations & times at every stop, with a consensus protocol certifying each entry, we can be confident that the data is verified by more than just a single centralized AV controller and maintained in an immutable fashion. Now you can confidently execute a route prediction or exception analysis.
So, what?
In the next five to seven years, blockchain will be a viable option for trustworthy data collection, identity platforms and payments. In 10+ years, AI applications will start making predictive decisions on verified data to execute smart contracts with explicit approval from us, potentially using our blockchain-enabled smart devices!
I hope this article has inspired you to start exploring blockchain ideas where data integrity is paramount to solving a real customer problem — Dream big!
My article reprinted here from the CIO outlook magazine
https://artificial-intelligence.insuranceciooutlook.com/cxoinsights/can-ai-help-blockchain-adoption-nid-302.html
|
Can AI fuel Blockchain Adoption? — Reprint from CIO Outlook Magazine
| 12
|
can-ai-fuel-blockchain-adoption-reprint-from-cio-outlook-magazine-1a175786f95b
|
2018-09-08
|
2018-09-08 20:59:02
|
https://medium.com/s/story/can-ai-fuel-blockchain-adoption-reprint-from-cio-outlook-magazine-1a175786f95b
| false
| 1,108
|
AWIP was established to help foster and develop the next generation of women product leaders. To achieve this goal, we focus and excel on providing quality programming, for our members, that are focused on developing key skills to be a successful PM and a leader.
| null | null | null |
Advancing Women in Product (AWIP)
|
info@advancingwomeninproduct.org
|
theawip
|
PRODUCT,WOMEN,BUSINESS,TECHNOLOGY
|
theAWIP
|
Bitcoin
|
bitcoin
|
Bitcoin
| 141,486
|
Aarthi Srinivasan
| null |
c3e89b010441
|
saarthi
| 3
| 6
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-09
|
2018-09-09 21:51:45
|
2018-09-17
|
2018-09-17 05:36:26
| 13
| false
|
en
|
2018-09-18
|
2018-09-18 18:25:06
| 6
|
1a17a3762107
| 9.875472
| 3
| 0
| 0
|
In part 3 we covered the execution engine that supports serving thousands of concurrent experiments to millions of users. In Part 4 we will…
| 5
|
Experimentation @Intuit Part 4 — Analysis of Experiments
In part 3 we covered the execution engine that supports serving thousands of concurrent experiments to millions of users. In Part 4 we will look into the basics of experimentation analytics.
Overall Evaluation Metric (OEC)
If you recall from Part 2, the OEC refers to the measure, objective, or goal we are trying to achieve. It is a metric used to compare the responses to different treatments.
Business Metrics
For QuickBooks, the primary business metrics are:
Acquisition — conversion of a prospect to a subscriber.
Engagement — how often is the customer using the product and what is the time spent in each visit.
Retention — total number of subscribers and lifetime value of each customer.
As some of these metrics take a while to bake and reach steady state, we frequently also use secondary metrics to get an early read confirming or denying that an experiment is trending well. These are based on leading indicators like # of sign-ups, # of logins, and # of invoices sent.
Operational Metrics
Besides the business metrics, we also look at operational metrics such as availability, page performance, experiential or experimental bugs and call volume from our customers to ensure we don’t have system issues.
Sample Size Determination
Sample size is a critical piece: the duration (the start and end dates in the tool) is determined by how much traffic we need to reach statistical significance. We look at historical traffic on the page or workflow and then extrapolate the duration from the statistical significance that we want to achieve.
For example, assume that 5% of QBO users buy payroll and that the lifetime value of a payroll customer is $100. The average user therefore has an expected lifetime value of $5. Assume the standard deviation is $40. If we want to detect a 5% change to revenue, we will need over 1.6M users. If you get approximately 200K unique users a day, that means the experiment needs to run for about 8 days.
𝑛 = (4r𝜎/Δ)² = (4 * 2* 40 / (0.05 * 5))² = 1.6M
(see addendum if you want to understand the statistics)
Sample size calculator here with supporting blog post
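The back-of-the-envelope formula above can be sketched in a few lines; the figures below are the illustrative QBO numbers from the text, not real ones:

```python
import math

def sample_size(sigma, delta, r=2):
    """n = (4 * r * sigma / delta)^2: rough total sample size for r
    treatment groups, outcome std dev sigma, and minimum detectable
    effect delta (at the conventional alpha = 0.05, power = 0.80)."""
    return math.ceil((4 * r * sigma / delta) ** 2)

# Worked example from the text: mean LTV $5, sigma $40, detect a 5% change
n = sample_size(sigma=40, delta=0.05 * 5)
days = n / 200_000  # at ~200K unique users per day
print(n, round(days, 1))  # 1638400 8.2
```

This matches the text’s figure of over 1.6M users and roughly 8 days of traffic.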
A/B vs MVT
We conduct an A/B experiment when iterating on a product feature.
We use MVT (partial factorial, as opposed to full factorial) testing when several factors are suspected to interact strongly. We remove certain combinations of factors through simulations; remember, each combination adds to the total sample size.
I’ll close this four-part series with an excerpt from Jeff Bezos’s letter to the shareholders: “One area where I think we are especially distinctive is failure. I believe we are the best place in the world to fail (we have plenty of practice!), and failure and invention are inseparable twins. To invent you have to experiment, and if you know in advance that it’s going to work, it’s not an experiment. Most large organizations embrace the idea of invention, but are not willing to suffer the string of failed experiments necessary to get there.”
Intuit’s qualitative deep roots of “Follow Me Home” have expanded with a quantitative mindset as well, tying the best of customer empathy and data together. At the heart of this evolution is the Experimentation Platform, which is driving a cultural change to Power Prosperity Around the World.
— — — — — — — — — — — — — — — — — — — — — — — — — — — — —
Addendum
A quick refresher on some of the fundamental concepts in statistics to inform how we apply statistical theory to evaluating our experiments.
The normal distribution, sometimes called the bell curve, is a common way to describe a continuous distribution in probability theory and statistics. Many natural phenomena in the real world approximate a normal distribution closely enough that we can use it as a model; phenomena that emerge from a large number of uncorrelated, random events will usually approximate one. Height and weight of humans are classic examples.
Some properties of Normal distribution are
The mean, mode and median are all equal.
The curve is bilaterally symmetric at the center (i.e. around the mean, μ).
The tails are asymptotic to the x axis, meaning they come closer and closer but never actually touch it.
Exactly half of the values are to the left of center and exactly half the values are to the right.
The total area under the curve is 1.
μ refers to the mean of the population while standard deviation represented by σ controls the spread of the distribution. A smaller standard deviation indicates that the data is tightly clustered around the mean; the normal distribution will be taller. A larger standard deviation indicates that the data is spread out around the mean; the normal distribution will be flatter and wider.
Empirically, for a normal distribution, virtually all of the scores fall within three standard deviations of the mean. Any value is likely to be within 1 standard deviation of the mean, very likely to be within 2 standard deviations, and almost certainly within 3 standard deviations.
68% of the data falls within one standard-deviation of the mean.
95% of the data falls within two standard deviations of the mean.
99.7% of the data falls within three standard-deviations of the mean.
While a whole family of bell-shaped curves exists, each combination of μ and σ defines exactly one normal curve.
A normal distribution with a mean of 0 and a standard deviation of 1 is called the standard normal distribution.
The standard normal (Z) distribution serves as a standard by which all other normal distributions are measured. When a frequency distribution is normally distributed, we can find the probability of a score occurring by standardizing the scores, known as standard scores (or z-scores). Z-scores, or “standard normal deviates,” present data in a standard form that can easily be compared to other distributions.
z = (X — μ) / σ
where z is the z-score, X is the value of the element, μ is the population mean, and σ is the standard deviation. A z-score less than 0 represents an element less than the mean, while a z-score of 1 represents an element 1 standard deviation greater than the mean; a z-score of 2, 2 standard deviations greater than the mean; and so on.
The area under the curve is directly proportional to the relative frequency of observations. All normal distributions can be converted into the standard normal curve by subtracting the mean and dividing by the standard deviation. Z-scores are often summarized in table form as a CDF (cumulative distribution function).
Let’s see this with an example. Assume a random variable follows a normal distribution with a mean of 3.00 and a standard deviation of 1.0, and we want to find the probability that this random variable is no greater than 5.0.
1) The first step is to standardize the given value of 5.0 into a Z value (aka, Z score):
Z = (5.0 − 3.0)/1.0 = 2.00.
The Z value of 2.00 means “The value of 5.0 is 2.00 standard deviations above the mean of 3.00.”
2) Then we can use the common Z table to retrieve the associated probability. We go to row 2.0 and then go to column 0.00 to arrive at 0.97725. We see here that for Z = 2.00, the probability is 0.97725 or 97.73%.
Pr[X ≤ 5.0 | µ(X) = 3.00 and σ(X) = 1.0] = Pr(Z ≤ 2.000) = 97.73%.
You can also use an online calculator to compute this.
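The table lookup can also be reproduced directly: the standard normal CDF is Φ(z) = (1 + erf(z/√2))/2, which the math module can evaluate.

```python
import math

def z_score(x, mu, sigma):
    """Standardize x against a Normal(mu, sigma) distribution."""
    return (x - mu) / sigma

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

z = z_score(5.0, mu=3.0, sigma=1.0)  # 2.0
p = phi(z)                           # P(X <= 5.0)
print(z, round(p, 5))                # 2.0 0.97725
```

This reproduces the 97.73% figure from the worked example without a Z table.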
Hypothesis Testing
A hypothesis is an educated guess about something in the world around you. It should be testable, either by experimentation or observation. In hypothesis testing before you can even perform a test, you have to know what your null hypothesis is, and what you are testing is an alternate hypothesis.
For instance, suppose we believe that by having all the products in one place we can showcase the power of the QuickBooks ecosystem to small businesses managing their operations. To test this improved value proposition, the null hypothesis (Ho) and alternative hypothesis (Ha) are formulated as:
Ho: Having all products in one place has no effect on the customer’s perceived value of QuickBooks
Ha: Having all products in one place improves the perceived value of the QuickBooks ecosystem, thereby increasing our subscription rate
Let’s look at all the combinations of the experimenter’s hypothesis and the actual situation.
Type I and Type II errors are best explained by the fable of the boy who cried wolf. When the boy first pretends there is a wolf and the villagers believe him even though no wolf has come, that is a Type I error. When he claims there is a wolf again but no one takes him seriously, even though this time it is true, that is a Type II error. The villagers can avoid Type I errors by never believing the boy, but that guarantees a Type II error whenever a wolf actually appears. Similarly, they can always believe him and never make a Type II error, but that causes many false alarms.
Null hypothesis (H0): there is no wolf
Alternate hypothesis (Ha): there is a wolf
Type I error (α): we incorrectly reject the null hypothesis, that there isn’t a wolf (i.e., we believe there is a wolf), even though the null hypothesis is true (there is no wolf).
Type II error (β): we incorrectly accept (or “fail to reject”) the null hypothesis (there is no wolf) even though the alternative hypothesis is true (there is a wolf).
False Positives — A Type I error occurs when we reject the null hypothesis while it is true. The probability of committing a Type I error is denoted by the Greek letter α (alpha) and is also known as the significance level. The complement of the Type I error rate is called the confidence level, defined as 1 − α. The level of acceptability for Type I error is conventionally set at 0.05. Setting α at 0.05 means that we accept a 5% probability of a Type I error. To put it another way, we understand when setting the α level at 0.05 that in our study we have a 5% chance of rejecting the null hypothesis when we should fail to reject it.
False Negatives — A Type II error occurs when we fail to reject the null hypothesis when it should be rejected. The probability of committing a Type II error is denoted by the Greek letter β (beta). The probability of not committing a Type II error (1 − β) is called the power of the experiment.
The Power of a test is one of the most important factors in hypothesis experimentation. The power essentially tells you the chance of rejecting the null hypothesis when it should be rejected.
Conventional levels of acceptability for Type II error are β = 0.1 or β = 0.2. If β = 0.1, that means the study has a 10% probability of a Type II error; that is, there is a 10% chance that the null hypothesis will be false but will fail to be rejected in the study. To put it another way, it means that in a study that should return significant results based on the true state of the population, there is a 10% chance that the results of the study will not be significant.
The reciprocal of Type II error is power, defined as 1 − β.
Assuming we have the two treatments, we can perform a sample size calculation with the intent of comparing means μ1 and μ2 from two groups of users who will experience the two treatments. We assume normal distributions, homogeneous variances σ1 = σ2, and equal sample sizes n1 = n2. The false positive rate is set to α, and the power for detecting a difference Δ = |μ1 − μ2| is set to 1 − β, where β is the false negative rate. Using conventional values of α = 0.05 and β = 0.20, the sample size can then be computed as
or 𝑛 = (4r𝜎/Δ)² where r is the no of treatments.
For α = 0.05, the numerators corresponding to 1 − β = 0.90 and 0.95 are 21 and 26, respectively (deep dive here).
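The calculation can be sketched in Python using the standard normal-approximation formula for comparing two means, n = 2(z_{1−α/2} + z_{1−β})²σ²/Δ² per group. This is a minimal sketch; the function name is illustrative, not from the original post.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Per-group n for comparing two means with a z-test
    (normal approximation, equal variances and group sizes)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for power = 0.80
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# With sigma = delta, the conventional alpha = 0.05 / power = 0.80 setup
# gives the familiar n ≈ 16 per group.
print(sample_size_per_group(sigma=1.0, delta=1.0))  # 16
```

Halving Δ quadruples the required sample size, which is the Δ² in the denominator at work.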
A few salient points tie together sample size, power, and effect size:
1. If the evaluation metric has less variance, we need a smaller sample size to run the experiment.
2. If the desired effect, aka the minimum detectable effect, is larger, we need a smaller sample. Likewise, a smaller effect needs a much bigger sample to conclusively attribute the effect to the treatment.
3. Increasing the desired power of the test requires a larger sample, as it leaves less room for Type II errors.
4. Increasing the confidence level (in other words, a smaller α) requires a larger sample, as it leaves less room for Type I errors.
5. An experiment with a 99%/1% treatment-control split will have to run significantly longer than one with a 50%/50% split.
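Point 5 can be made concrete with a back-of-the-envelope duration estimate: the smaller arm gates how fast the experiment accrues the required sample. A hedged sketch with made-up traffic numbers:

```python
def days_to_fill(n_per_group, daily_traffic, treatment_share):
    """Days until the *smaller* arm reaches n_per_group users."""
    smaller_share = min(treatment_share, 1 - treatment_share)
    return n_per_group / (daily_traffic * smaller_share)

# Hypothetical numbers: 16,000 users needed per group, 10,000 visitors/day.
print(days_to_fill(16_000, 10_000, 0.50))  # ≈ 3.2 days at a 50/50 split
print(days_to_fill(16_000, 10_000, 0.01))  # ≈ 160 days at a 99/1 split
```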
|
Experimentation @Intuit Part 4 — Analysis of Experiments
| 3
|
experimentation-intuit-part-4-analysis-of-experiments-1a17a3762107
|
2018-09-18
|
2018-09-18 19:43:31
|
https://medium.com/s/story/experimentation-intuit-part-4-analysis-of-experiments-1a17a3762107
| false
| 2,246
| null | null | null | null | null | null | null | null | null |
Analytics
|
analytics
|
Analytics
| 15,193
|
Anil Madan
|
Vice President, QuickBooks Global Platform
|
4df17325c949
|
anil_madan
| 36
| 15
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-03-02
|
2018-03-02 14:42:35
|
2018-03-02
|
2018-03-02 14:47:29
| 1
| false
|
en
|
2018-09-18
|
2018-09-18 12:44:56
| 2
|
1a1952cde462
| 1.988679
| 0
| 0
| 0
|
Even if ML is a deeply disruptive tech that will eventually impact every industry, it still requires specific conditions in order to…
| 3
|
Problem-Tech fit: Not every problem can be solved through Machine Learning. At least for now.
Even if ML is a deeply disruptive tech that will eventually impact every industry, it still requires specific conditions in order to flourish and offer its full potential.
As we often mentioned, at Scalia we highly believe that the only way to compete within the AI space is by having a vertical approach, i.e. focusing on one very specific problem and to apply data network effect to it in order to build strong defensibility.
Data network effect is a rather simple 2-step mechanism:
Step 1: You just need to build a simple product which provides value with basic AI
Step 2: You Include a feedback loop so that usage improves your models.
In essence, your users train your algorithms, creating a virtuous circle. That is a great technique for outrunning the competition. If you want to learn more about this concept, I would suggest reading this blog post by Zetta Ventures or listening to this a16z podcast.
For four years, Lance built online retailers from the ground up, and each and every time he was struck by how inefficient product data management was. From data collection all the way to the product going live, the whole process was incredibly manual and repetitive. Not only were we performing the same actions — categorizing, standardizing, controlling, etc. — over and over again, but so were our competitors and our partners.
When I met Lance in early 2016, I was just coming back from Stanford’s labs, where I had been working on machine learning algorithms. It didn’t take us long to figure out that his problem and my solution were a perfect match. Here are the 3 main reasons why:
In retail, historical data sets for training ML algorithms are relatively easy to find. We decided to focus on the lifestyle industry as it has short sales cycles (yesterday’s data isn’t worth much), plus it does not contain sensitive info
Product data management is a very repetitive and narrow task, making the feedback easy to capture
The output is straightforward enough to be shared across all the players. Everyone agrees that “FR” in a country label means “France”, that “RL” in a brand label means “Ralph Lauren”, and that a “Ruby dress” is red.
At Scalia, each time a user matches attributes within an import file, categorizes a listing, or standardizes raw values, we capture that data point. It automatically feeds our algorithm, making our suggestions a bit more accurate the next time.
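A minimal sketch of such a feedback loop (illustrative only — the class and method names are hypothetical, not Scalia's actual code): each confirmed mapping is counted, and the most frequently confirmed one becomes the next suggestion.

```python
from collections import Counter, defaultdict

class StandardizationSuggester:
    """Illustrative feedback loop: user confirmations train the suggester."""
    def __init__(self):
        self._counts = defaultdict(Counter)

    def record(self, raw_value, standard_value):
        # Each user action becomes a training data point.
        self._counts[raw_value][standard_value] += 1

    def suggest(self, raw_value):
        # Suggest the most frequently confirmed mapping, if any.
        counts = self._counts.get(raw_value)
        return counts.most_common(1)[0][0] if counts else None

s = StandardizationSuggester()
s.record("FR", "France")
s.record("FR", "France")
s.record("RL", "Ralph Lauren")
print(s.suggest("FR"))  # France
print(s.suggest("US"))  # None
```

A real system would of course use statistical models rather than raw counts, but the virtuous circle — usage producing labels, labels improving suggestions — is the same.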
We’ve now seen or read about dozens of ways to apply ML, or any sort of deep tech, and we are still convinced that this one is the smartest, as we build a moat while making our product smarter.
|
Problem-Tech fit: Not every problem can be solved through Machine Learning. At least for now.
| 0
|
problem-tech-fit-not-every-problem-can-be-solved-through-machine-learning-at-least-for-now-1a1952cde462
|
2018-09-18
|
2018-09-18 12:45:42
|
https://medium.com/s/story/problem-tech-fit-not-every-problem-can-be-solved-through-machine-learning-at-least-for-now-1a1952cde462
| false
| 474
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Matthias Richard
|
CTO @Scalia
|
2ea6328af5a3
|
MatthiasRMS
| 24
| 25
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
2a678b52fc4f
|
2018-07-27
|
2018-07-27 17:37:20
|
2017-09-06
|
2017-09-06 15:30:32
| 1
| false
|
en
|
2018-07-27
|
2018-07-27 17:47:21
| 7
|
1a1ab9cd70c5
| 1.4
| 0
| 0
| 0
|
by Daniel Kimmel
| 5
|
AI, Sensor-Based Analytics, and a Generational Shift: Three Trends to be Aware of in 2017
by Daniel Kimmel
Opex’s Mike Watson recently appeared on the Supply Chain Television Channel to discuss three defining trends that he has watched develop in the world of Supply Chain Analytics so far in 2017. Dan Gilmore, editor of Supply Chain Digest, facilitated the interview.
To give you an idea of what you will learn when you view the video:
He discusses what business leaders need to be aware of now that “Artificial Intelligence” has developed into a general umbrella term for “anything that uses data,” and explains the difference between the general buzzword and what AI means among the technical community, as well as a few of AI’s more sophisticated applications. (For example, quality control, which he previously wrote about on our blog here using one of his favorite examples, the Lay’s potato chip).
He notes the growing use of Sensor-Based Analytics to track inventory across the global supply chain, and addresses the complicated question raised by the massive amounts of data sensors provide: determining what data is worth saving and what is not.
And finally, he discusses the implications of the “generational gap” that is growing between younger data scientists, who increasingly prefer to use open-source tools such as Python and R, and supply chain planners who are accustomed to more traditional modes of operation working with Excel or other off-the-shelf packages.
View the interview as a stand-alone video here.
Also, check out Mike’s follow-up article in the SC Digest, “Supply Chain by Design: Three Things That Supply Chain Managers Should Know about Artificial Intelligence.”
Originally published at opexanalytics.com on September 6, 2017.
If you liked this blog post, check out more of our work, follow us on social media or join us for our free monthly Academy webinars.
|
AI, Sensor-Based Analytics, and a Generational Shift: Three Trends to be Aware of in 2017
| 0
|
ai-sensor-based-analytics-and-a-generational-shift-three-trends-to-be-aware-of-in-2017-1a1ab9cd70c5
|
2018-07-27
|
2018-07-27 17:47:21
|
https://medium.com/s/story/ai-sensor-based-analytics-and-a-generational-shift-three-trends-to-be-aware-of-in-2017-1a1ab9cd70c5
| false
| 318
|
Reinventing your Business with AI
| null |
opexanalytics
| null |
The Opex Analytics Blog
|
info@opexanalytics.com
|
opex-analytics
|
DATA SCIENCE,OPTIMIZATION,AI,MACHINE LEARNING,PROBLEM SOLVING
|
opexanalytics
|
Supply Chain
|
supply-chain
|
Supply Chain
| 6,262
|
Opex Analytics
|
Author of The Opex Analytics Blog.
|
370952daf49
|
OpexAnalytics
| 41
| 28
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-09
|
2018-02-09 16:16:06
|
2018-02-09
|
2018-02-09 16:16:59
| 1
| false
|
en
|
2018-02-09
|
2018-02-09 16:16:59
| 5
|
1a1b5bbd39b3
| 2
| 0
| 0
| 0
|
This blog post originally appeared on Luca Fury’s website.
| 5
|
Are We Really Overestimating Artificial Intelligence?
This blog post originally appeared on Luca Fury’s website.
As we already know, artificial intelligence is one of the biggest technological advancements of our time. The technology has made incredible strides over the past several years, growing from a mere concept that seemed rooted in science fiction to a solid foundation on which machine learning, self-teaching robots, and the almost human bot, Sophia, were built.
In spite of this incredible growth spurt, it would seem as though many are still questioning the capabilities of artificial intelligence. All too often, these so-called naysayers are simply viewed as modern Luddites, petrified that their jobs and everyday lives are at risk of being overthrown. However, it is entirely possible that their standpoints are rooted in facts, logic, and extensive research.
So, rather than brushing these opinions aside, let us take the time to humor them. Although we may do nothing more than glean insight into the opposing side’s worldview, it is possible our eyes will be opened to the shortcomings of artificial intelligence.
Artificial intelligence is still in its infant stages
Regardless of how intriguing Sophia, the human-like robot that can quip back and forth with the best of us, may be, it is painfully apparent that she is the only one of her kind. As a whole, artificial intelligence-based robots are nowhere near being as quick-witted as Sophia, nor could they be produced on a large enough scale to make a real impact on our society. Instead, they are more like an act to marvel at, similar to the robots and flying cars that were featured on television shows nearly 50 years ago.
Artificial intelligence cannot match humans’ communication skills
While a few artificial intelligence programs have been improved enough to listen and respond to spoken human requests, all too many of them are incapable of correctly interpreting humans’ differences in phrasing, intonation, and other qualities of speech.
This can be exemplified by just how difficult it is to communicate with everyday, so-called smart assistants like Siri and Alexa. Sure, they may be able to respond to brief requests such as playing a certain song or locating a nearby gas station. However, their speech recognition skills take a dip when faced with longer, wordier questions.
Artificial intelligence robots have not mastered deep learning
Unlike humans, robots are not equipped with extensive neural networks that enable them to learn by making connections to images, sounds, and words. Therefore, quite a bit of time must be spent on training these bots to make such connections. However, even once the foundation of their artificial neural networks has been laid, they can still be easily tricked into making errors. Evidently, these robots are not so intelligent at the onset but require human beings to teach them most of what they know.
|
Are We Really Overestimating Artificial Intelligence?
| 0
|
are-we-really-overestimating-artificial-intelligence-1a1b5bbd39b3
|
2018-02-09
|
2018-02-09 16:17:00
|
https://medium.com/s/story/are-we-really-overestimating-artificial-intelligence-1a1b5bbd39b3
| false
| 477
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Luca Fury
|
Luca Fury is the owner of Fury’s Fight Picks, the leader in MMA betting advice. Learn more: http://LucaFuryInvestments.com
|
875bdc5c7f45
|
LucaFury
| 27
| 211
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
dd96c3e77fad
|
2018-02-23
|
2018-02-23 10:57:33
|
2018-02-23
|
2018-02-23 11:18:18
| 1
| false
|
en
|
2018-09-19
|
2018-09-19 14:39:50
| 14
|
1a1c5cebc91c
| 2.392453
| 1
| 0
| 0
|
Defending ourselves against the malicious use of Artificial Intelligence
| 5
|
AI poses deadly threat, warns report
Defending ourselves against the malicious use of Artificial Intelligence
A new report is sounding the alarm on the risks of AI technology falling into unscrupulous hands. Picture by Geralt, Pixabay
The future looks bleak, according to a harrowing new report by more than two dozen experts from a number of prominent institutions, including Oxford University’s Future of Humanity Institute, Yale University’s Information Society Project, Elon Musk’s OpenAI and the Electronic Frontier Foundation. They warn of the potential risks posed by the malicious use of Artificial Intelligence in the digital, physical and political domains.
The report says that rogue states, criminals and terrorists will all have access to the technology to launch devastating attacks. Specific threats include the deliberate crashing of self-driving vehicles, assassination by drone, speech synthesis to impersonate targets, and spear phishing, to name a few.
Freely available technologies, such as those used by the deepfakes app, have already brought home to many of us how AI might be used in the political arena to spread fake news and propaganda to sway public opinion. “We also expect novel attacks,” the authors warn, “that take advantage of an improved capacity to analyse human behaviour, moods and beliefs on the basis of available data.”
The authors recommend five high-level actions:
Researchers and engineers must acknowledge the potential for misusing their work;
Policymakers should work more closely with researchers to understand and prevent the attacks;
AI researchers should learn from existing best practices in cyber security;
Normative and ethical frameworks must become top priorities;
A wider range of stakeholders and experts should be involved in efforts to understand, prevent and mitigate the growing threats.
International Standards reflect expert consensus on best practices and address real needs. They can provide powerful tools for identifying, avoiding and mitigating risks.
Here at the IEC, we have long been concerned about the threat of cyber attacks, including the emerging hacking risks faced by connected and automated cars. Our international experts are closely involved in the development of Standards relevant to cyber security through their work in ISO/IEC JTC 1/SC 27: IT security techniques.
This Subcommittee was set up by ISO/IEC JTC 1: Information technology, the Joint Technical Committee created by the IEC and ISO. It has published dozens of documents covering various aspects of IT security techniques, including the ISO/IEC 27000 family of Standards on information security management systems.
Another example is ISO/IEC 27019: Information technology — Security techniques — Information security controls for the energy utility industry.
Several other series of IEC Standards are relevant to the protection of communication networks, control systems and power installations against cyber threats. They include:
IEC 62443: Industrial automation and control systems security (IACS) — Network and system security
IEC 62645: Nuclear power plants — Instrumentation and control systems — Requirements for security programmes for computer-based systems
IEC 61850: Communication networks and systems for power utility automation
IEC 60870: Telecontrol equipment and systems
IEC 62351: Power systems management and associated information exchange
IEC 62859: Nuclear power plants — Instrumentation and control systems — Requirements for coordinating safety and cybersecurity
As hackers continue to pose a growing threat, it is essential that IT staff have the required training, knowledge and skills. The work of the Committee on conformity assessment (CASCO) — a joint effort by ISO and IEC — is vital to the process of determining whether an organization meets the requirements related to its technical competence in this area.
ISO/IEC 17024 sets out the general requirements for personnel certification, while ISO/IEC 17065 covers the requirements for certifying products, processes and services.
|
AI poses deadly threat, warns report
| 1
|
defending-ourselves-against-the-the-malicious-use-of-artificial-intelligence-1a1c5cebc91c
|
2018-09-19
|
2018-09-19 14:39:50
|
https://medium.com/s/story/defending-ourselves-against-the-the-malicious-use-of-artificial-intelligence-1a1c5cebc91c
| false
| 581
|
news and views about electrotechnology
| null |
InternationalElectrotechnicalCommission
| null |
e-tech
|
mmu@iec.ch
|
e-tech
|
SMART CITIES,IOT,CYBERSECURITY,RENEWABLE ENERGY,TECHNOLOGY
|
iecstandards
|
Cybersecurity
|
cybersecurity
|
Cybersecurity
| 24,500
|
Mike Mullane
|
Journalist working at the intersection of technology and media
|
669196d9755e
|
mikemullane
| 40
| 79
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-06
|
2018-04-06 15:07:25
|
2018-04-08
|
2018-04-08 22:58:45
| 0
| false
|
pt
|
2018-04-08
|
2018-04-08 22:58:45
| 0
|
1a1cfd57ee44
| 1.049057
| 0
| 0
| 0
|
My name is Igor Dias. I have a degree in Information Systems from PUC Minas, and I currently work at Rock Content as a Web Developer…
| 4
|
Python — Noob To Pro
My name is Igor Dias. I have a degree in Information Systems from PUC Minas, and I currently work at Rock Content as a mid-level Web Developer, where I build blogs using WordPress and customize templates according to each client’s brand manual.
In these almost 2 years working with WordPress, I felt like learning a new programming language, and I saw that Python was on the rise because of the boom in the amount of data companies have without knowing what to do with it. I thought, “why not?”.
I had heard of Python but had never written a “Hello World”. I wrote my first program and immediately liked the syntax: no “;” required, which is wonderful; you just need to pay attention to the code’s indentation so you don’t get confused.
I then decided to take Udacity’s “Fundamentals of Data Science I” course. I enjoyed it a lot and even took the sequel, “Fundamentals of Data Science II” — scenes from upcoming chapters.
I will use Medium to write about my learning journey in Data Analysis with Python, which will help me track my development in this field I want to work in.
I believe you only learn by doing, with lots of repetition, so I will build several projects to better retain what I learn during my studies.
I will retake the Udacity course;
I will start Datacamp’s “Data Analyst with Python” course;
I will read some Python books.
The goal is to write every Saturday or Sunday about what I learned during the week.
|
Python — Noob To Pro
| 0
|
python-noob-to-pro-1a1cfd57ee44
|
2018-04-08
|
2018-04-08 22:58:46
|
https://medium.com/s/story/python-noob-to-pro-1a1cfd57ee44
| false
| 278
| null | null | null | null | null | null | null | null | null |
Python
|
python
|
Python
| 20,142
|
Igor dias
|
Aprendiz de Cientista de Dados e amante da culinária.
|
7e2261eabf3f
|
igordiasth
| 2
| 8
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
2141d97ccc50
|
2018-02-24
|
2018-02-24 00:36:44
|
2018-02-24
|
2018-02-24 00:44:21
| 1
| false
|
en
|
2018-03-11
|
2018-03-11 19:52:44
| 3
|
1a1d9bfaa3bc
| 3.769811
| 0
| 0
| 0
|
A New Algorithm for Robotic Vision
| 5
|
Going Deep
A New Algorithm for Robotic Vision
Cruising around in my Nissan 350Z with the ground-effect neon thumping to the soundtrack, or careening around curves at over 170 MPH in my V12 Aston Martin DB9, are high on the short list of things that assuage my daily commute amid snarled traffic in the real world. In a relatively short period of time, the hefty cathode ray tube in our family room has been transformed from the keyhole view of lack-luster situation comedies and reality shows into a rich interface between my family’s senses and the minds of video game designers. The graphics displayed by our latest holiday acquisitions are visually stunning. The industry has moved beyond the stage of “almost-real” into “hyper-real” — a world wherein the most impossible camera angles are commonplace, the lighting is always perfect and the frame acquisition rate is just right. There is plenty of science buried in this experimental setup and researchers are developing the tools to uncover and leverage it.
Photo by David Travis on Unsplash
One such scientific mystery is how the binocular vision we use to navigate through our three-dimensional (3-D) environment has no difficulty extracting volumetric information from a two-dimensional (2-D) pattern emitted from the glowing phosphors coated on a piece of glass. I see two identical flattened images when using both eyes to look at the screen, yet I can schuss around obstacles at high-scale velocity with ease. Even this simple test reveals that 3-D depth processing is more than optics; it must include some high-level image processing.
Professor Andrew Y. Ng and his research group at Stanford University are asking similar questions of robotic vision. Autonomous vehicles equipped with a phalanx of cameras, sensors, lasers, and radar are making their way through cluttered environments; however, Professor Ng is investigating lightweight agile solutions formed around a solitary color video camera. It appears the key to extracting depth from a single, 2-D monocular image involves the same techniques artists use to inject depth into their pieces, namely, texture, perspective, and focus. The Renaissance masters skillfully detailed the stitching and folds on the clothing of near subjects while purposefully reducing the scale, focus, and detail of objects in the distant background to produce life-like vistas on flat canvas. Short of developing a thinking machine that recognizes objects and their common size in perspective, Ng’s method extracts generic features from the digital image and transforms them into depth information.
The algorithm is based on a popular 2-D version of the Naïve Bayesian Classifier (NBC), known as a Markov Random Field (MRF) whose goal is to classify combinations of image pixel attributes into a range of depth values. Bayesian analysis permits an observed outcome to be related statistically to a collection of input observables. For example, atmospheric visibility can be related to input observables of temperature, humidity, time-of-day, and atmospheric pressure. Even though an exact deterministic equation connecting the input to the observed outcome may not be known, the NBC can generate the probability of a specific outcome given the known input values. The statistics are generated by “training” the NBC with a set of input/output pairs and are validated by comparing the output to a test set of additional input/output pairs. If there is no true relationship, the NBC performs very poorly on the test set; however, high-quality results can be obtained if a relationship does exist, even if it is not explicitly known.
Ng’s group collected image/depth pairs using a small 1704 x 2272 pixel color digital camera and a one-dimensional laser range finder mounted on a translation stage to find the true depth of the image at a resolution of 86 x 107. The MRF was trained using 75 percent of the pairs, and validated using the remaining 25 percent. The digital images were segmented into small pixel cells and correlated with filter patterns designed to classify texture variations, texture gradients, haze, and edge orientation resulting in 34 unique local input observables for each cell. The cells are also compared to their nearest neighbors at multiple resolutions to extract global information of 19 additional features, resulting in a set of 646 input observables for each cell. The trained MRF was used to predict depth in test images of both indoor and outdoor locations and was determined to have an average error of 35 percent, meaning the image of an obstacle 10 meters away would appear between six and 14 meters away to the algorithm. At a 10-Hz frame rate, an autonomous robot would have adequate time to avoid the obstacle even with this uncertainty.
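The reported 35-percent figure is a mean relative depth error over the held-out test cells. As a rough illustration (the function name is mine, not from Ng's code), it can be computed as:

```python
def mean_relative_error(predicted, actual):
    """Average of |predicted - actual| / actual over all test cells."""
    return sum(abs(p - a) / a for p, a in zip(predicted, actual)) / len(actual)

# A predicted depth of 13.5 m for a true 10 m obstacle is a 35% error,
# matching the reported average.
print(mean_relative_error([13.5], [10.0]))  # 0.35
```

Under this metric, a 10 m obstacle with 35% average error lands roughly between 6.5 m and 13.5 m in the predicted map, consistent with the "six and 14 meters" range quoted above.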
The one-camera system has dramatically reduced the amount of hardware required to provide depth information, and can also determine distances five to 10 times further away than the dynamic range of many triangulating two-camera systems. The algorithm has been used by a small radio-controlled car to navigate through a cluttered, wooded area autonomously for several minutes before crashing. Further enhancements may one day enable the development of autopilot systems for automobiles. But that would drastically reduce my enjoyment of video games.
This material originally appeared as a Contributed Editorial in Scientific Computing 23:4 March 2006, pg. 14.
William L. Weaver is an Associate Professor in the Department of Integrated Science, Business, and Technology at La Salle University in Philadelphia, PA USA. He holds a B.S. Degree with Double Majors in Chemistry and Physics and earned his Ph.D. in Analytical Chemistry with expertise in Ultrafast LASER Spectroscopy. He teaches, writes, and speaks on the application of Systems Thinking to the development of New Products and Innovation.
|
Going Deep
| 0
|
going-deep-1a1d9bfaa3bc
|
2018-03-11
|
2018-03-11 19:52:45
|
https://medium.com/s/story/going-deep-1a1d9bfaa3bc
| false
| 946
|
Innovation is Elegance. Complex Explanations are Not. Innovation reduces system complexity. This publication seeks to reduce confusion.
| null |
williamlweaverphd
| null |
TL;DR Innovation
|
williamlweaver@gmail.com
|
tl-dr-innovation
|
TECHNOLOGY,SCIENCE,INNOVATION,SYSTEMS THINKING,INTELLIGENCE
|
williamlweaver
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
William L. Weaver
|
Explorer. Scouting the Adjacent Possible. Associate Professor of Integrated Science, Business, and Technology La Salle University, Philadelphia, PA, USA
|
286537bc098c
|
williamlweaver
| 183
| 189
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
56a83988d5ef
|
2017-11-04
|
2017-11-04 16:11:33
|
2017-11-04
|
2017-11-04 16:42:23
| 3
| false
|
en
|
2017-11-18
|
2017-11-18 03:29:29
| 11
|
1a1e18578db7
| 3.131132
| 2
| 0
| 0
|
Humans and robots can work together in changing the world to a more just, productive future.
| 5
|
AI and Social Impacts
Humans and robots can work together in changing the world to a more just, productive future.
AI has proven to us humans that this field has lots of potential. AI has grown exponentially into a bountiful, blooming field full of innovation and help. However, major leaders in the tech industry, like Elon Musk and Stephen Hawking, warn us to beware. They say that we need to control AI so as not to fulfill the media’s portrayal of robots that would destroy us for creating them. While some of their points are indeed valid, I would kindly like to point out that the media has exaggerated the downfalls of AI.
AI has been almost a boon given to us, like the revolutionary devices that ease and simplify our lives today. Many people I have talked to have mentioned unemployment rates going up after AI is integrated into the corporate workspace. But I believe this is misleading. Yes, it will take away some jobs, but it will give more in return. AI can aid both machines and human workers in being more productive and getting more done, precisely.
Plus, AI is a technology that, like so many of its predecessors, can integrate quietly into society. Think about it: when the computer was introduced, many people around the world thought their jobs were going to be gone thanks to a big, boxy machine processing information in binary. Today, the computer has brought forth new opportunities for work and advancement. Similar stories followed in many other fields, such as healthcare and transportation.
Sophia is the first robot granted full citizenship, by Saudi Arabia.
But I agree that AI does have some flaws (as it is in an emerging state), depending on who is using it and for what purpose. A few weeks ago, Sophia became the first AI robot to be given full citizenship, by Saudi Arabia (do robots have social security now?). However, many women were indignant, saying that the robot (which is not human) was given more rights than women in Saudi Arabia, who have been fighting for the same rights for decades. This issue could stem from a governmental incentive to attract more foreign investors, or from a prolonged regional social injustice. But it has nothing to do with AI itself. AI has also brought many features whose full extent we don’t see; for example, Apple’s Siri, Google Translate, Amazon’s Alexa, and so much more.
Google Home and the Amazon Echo are just two of many examples of positive impacts of AI.
AI can also be used to alleviate social issues in developing countries. In China, many companies want to solve the problem of productivity in China’s emerging labour market, to be a leader in the field. India can use this technology to aid its “Clean India” national campaign in public sanitation, healthcare, education, and environmental awareness about global warming. The US can use this technology to improve border security systems (who knows, they could be using it already!).
AI needs better regulation and constant oversight in order to control the intelligence coming in from all sides. We need committees of qualified people (similar to the FDA), positive portrayals of this technology in movies and news outlets, and regulations clearly stating what security and intelligence measures must be taken. Only then can we have a prosperous field with clear objectives but unclear paths, muddling our way through failures and hardships, to truly make AI a helping hand for the good of humanity.
I believe this artificial intelligence is going to be our partner. If we misuse it, it will be a risk. If we use it right, it can be our partner. — Masayoshi Son
Sources:
https://becominghuman.ai/ai-and-its-social-impact-in-the-future-707d2049ccd9
https://www.shapingtomorrow.com/home/alert/275454-The-Future-of-Intelligence---impacts-on-society
http://www.kurzweilai.net/the-age-of-intelligent-machines-the-social-impact-of-artificial-intelligence
https://news.tulane.edu/news/students-explore-social-impact-artificial-intelligence
https://www.cnbc.com/2017/11/02/saudi-women-riled-by-robot-with-no-hjiab-and-more-rights-than-them.html
https://www.inverse.com/article/38054-stephen-hawking-ai-fears
https://www.brainyquote.com/quotes/quotes/m/masayoshis845514.html?src=t_artificial_intelligence
https://www.technologyreview.com/s/609038/chinas-ai-awakening/
Image Credits:
http://www.dazeddigital.com/science-tech/article/37872/1/sophia-the-robot-has-become-the-first-humanoid-citizen
https://recombu.com/digital/article/google-home-vs-amazon-echo-difference-which-best-specs-features
https://www.sciencenews.org/article/robots-artificial-intelligence-gets-physical
[Post: AI and Social Impacts · 3 claps · 684 words · updated 2018-03-14 · https://medium.com/s/story/ai-and-social-impacts-1a1e18578db7 · Publication: The Global Voice (globalvoicemag@gmail.com, @globalvoicemag) · Tag: Artificial Intelligence · Author: Madhumitha Manivannan (@manivannanmadhu)]
[Post created 2018-08-10, first published 2018-08-17 · Russian · ~6 min read]
An Interview with Alexander Honchar
Alexander is an AI/ML researcher, blogger, and entrepreneur. He works on AI in the medical and financial fields.
Hi! To start, tell us a bit about yourself: where do you live now, what do you do, and briefly, what has your career in IT looked like?
Hi! I currently live in Verona, Italy, where I'm finishing a master's degree in applied mathematics. Work-wise, I'm responsible for everything related to machine learning (call it AI Solution Architect) at the Ukrainian startup Mawi Solutions (we built our own portable cardiograph, and it is packed with AI for diagnostics, activity recognition, biometric identification, and more), and from time to time I consult for various companies on machine learning. I also write a blog on Medium and occasionally give lectures and speak at conferences. Before this, I also worked on AI at Inma AI (USA), Mlvch (Russia), HPA Srl (Italy), and several smaller projects.
Cool! You earned your bachelor's degree in Ukraine and then went to Italy for your master's, right? Why did you choose that country?
Yes, I earned my bachelor's in applied mathematics at KPI and then enrolled in Italy. I had planned to study abroad since my second or third year. At first I thought about Germany, since I know German well, but the University of Verona offered non-EU students a chance to win a study grant. I applied and became one of the three winners :) I had never been to Italy before and didn't know the language, but it turned out to be a great challenge.
How is Italy as a place to live? Do you plan to stay after you graduate, or will you return?
Setting aside some bureaucratic issues that foreigners face in any country, overall I like Italy as a place to live: a calm pace, little stress, delicious food, and fun in general — la dolce vita :) Education is also quite good; I'd say the level is higher than in Ukraine, largely because there are no "passengers" at the universities — people don't enroll in a math master's because "mom said so." All the students are highly motivated and study a great deal. As a place to work, my personal experience is more neutral-to-negative. I've decided that I want to live in the Mediterranean, but so far I work better with clients and partners further north :)
Tell us about the admission process. How did you find out about the grant, how did you prepare for admission and the move, and what problems did you run into that you hadn't anticipated?
A professor in our department, Oleg Romanovich Chertov, told me about the grant. I had to prepare a transcript of my grades with a certified translation, an English-language certificate (I had IELTS), and a number of other documents such as a passport copy. Admission was relatively simple: first, candidates were evaluated on their academic and other achievements, and then there was an interview about the candidate's motivation and why they wanted to study in that particular department. Most of the difficulties were, of course, with documents, starting with our alma mater and ending with the Italian embassy in Ukraine. I don't think you can anticipate everything; my only advice is to start gathering all the documents very early. That is, if your studies are scheduled to begin in September of this year, you should have started last December.
When and how did you start studying AI, and why did you choose this field?
As far as I remember, it all started with me simply googling something like "what do applied mathematicians do," and that's how I learned about machine learning. After that I took the now-famous Stanford course on Coursera, and I was lucky to land my first project "for food" very quickly (just kidding — I actually worked for free for a couple of months). Why did I choose AI? Honestly, I don't know; the initial motivation was "because it's cooler than writing Java code." Now I have larger goals that this technology can help me achieve.
Does living in another country help you grow professionally, or, roughly speaking, does it not matter where you sit at a computer?
It helps a great deal, especially since my work moved beyond coding long ago. Living in the EU, trends in research and business are much more visible, and professional networking is more open and productive. Of course, Ukraine also has plenty of smart and enterprising people to learn from, but unfortunately we are not as "global." And yes, market capitalization matters — by every measure, the Western AI market is much bigger.
Tell us about the work process in Italy, the market, and the mentality in general.
In Italy, people work at a relaxed pace. Those who have traveled around Europe may have noticed that small shops close for a multi-hour lunch break and don't open in the evenings.
Something similar happens in IT: work emails may be answered within a week, deadlines are treated as recommendations rather than obligations, and in general nobody is particularly stressed about it. The market has product companies, consulting firms, and startups.
I haven't encountered "pure" body shops like those in Ukraine. The startup scene is developing, but not fast enough, and local startups "escape" to other EU countries for investment. My friends mostly work at consulting giants like Accenture or at small product companies in Turin, Rome, and Milan (essentially the country's three industrial leaders). As for AI, the local market lags somewhat behind the US, China, or even Germany, but large companies like Google and Amazon are opening offices in Italy for both development and research. Regarding research, I think Italy offers fertile ground for theoretical work — Italians appreciate calm, thoughtful research. Still, many of them relocate to Germany, England, the US, and elsewhere, and find success there.
Tell us about the most difficult problem you've faced and how you solved it.
It depends what you mean by a "difficult" problem :) One that took a long time? One that many people worked on? I'll tell you about the most innovative one, i.e., one that almost no one in the world had solved: biometric identification using a person's electrocardiogram:
Ukrainian startup MAWI and PrivatBank develop a payment wristband with biometrics
PrivatBank and the Ukrainian startup MAWI Solutions, which develops biometric wristbands, presented…ain.ua
Our team ran into a pile of problems, but the main one was that researchers in this field lie in their papers :) It's a "normal" practice in research to hide facts, "accidentally" solve a different problem, or even outright falsify results. We solved it in two ways. First, we read hundreds of papers on how the heart works and what makes it unique, and on that basis built our own algorithm, one that combined the best ideas and compensated for the gaps or fabrications in the papers. It worked well, but I considered that solution insufficiently elegant, so we decided to dig into "pure" AI that wouldn't exploit human knowledge of the heart at all. We used a special kind of neural network, and it performed even better than our combined efforts and our analysis of the work of doctors and scientists. That's how AI works :)
What trends have you noticed in AI research and business?
Research is in a bit of a lull right now, since essentially the whole "boom" of the last 5–10 years was based on neural networks invented 50 years ago; it's just that we finally have enough data and computing power to apply them. But areas such as reinforcement learning (you've surely heard of the victories in Go and Dota 2) and generative machine learning (where an algorithm can draw an image from a given description, or generate a music track or a short text from scratch) are developing actively. Another important topic is predictive machine learning. The motivation is that humans can estimate what will happen in the future and plan accordingly. So an interesting research direction is giving machines the ability to anticipate the consequences of their own and others' actions, which should make them smarter than they are now. I usually write about research news like this on my Facebook :)
Business is also embracing AI very warmly. It's really quite simple — a continuation of good old "automation": why pay a human, who is subject to fatigue, demotivation, and psychological problems, when there are robots that will do the job better and faster (and not necessarily at a higher cost in the long run)? Computer vision, for example, is widely used in retail (up to the complete replacement of human cashiers, as in Amazon's stores), in self-driving cars (soon every automaker will have them), and in medicine (AI already reads medical images more accurately than qualified doctors). Natural language processing has been adopted by almost every business that serves customers online: you already talk to bots when you want to ask your bank or a store something, and soon they will be everywhere. At Mawi Solutions, for instance, we use AI to diagnose heart disease, read emotions, and identify a person by their ECG. And that's not even mentioning contextual advertising and recommendation systems in online stores — yes, you don't love them, but more often than not they suggest things you really might be interested in.
Overall, it's a very favorable time to work in AI — you can be useful in almost any field :)
Which AI resources do you read or watch yourself and would recommend to others?
I mostly follow specific people — researchers, practitioners, and tech bloggers. I've written up the full list here:
Ultimate following list to keep updated in artificial intelligence
Everyone who is working with technologies knows the joy (and pain on the other hand) of rapid updates in the field. It…medium.com
I can add a few Telegram channels to that list:
Denis Sexy IT 🤖
Denis Sexy IT - Neural networks, virtual reality, and technology, explained in plain language 🤖🚀💖 Channel chat…t.me
Technology, Media and Society
Hi, I'm Andrey Brodetsky, a journalist and former editor-in-chief of Apparat. I write about technology and how it changes the world. Contacts…t.me
In general, though, I prefer books — fundamentals stay, while hype passes quickly.
Alexander, thank you very much for the interview; it was incredibly interesting to talk with you.
We've already subscribed to his blog and recommend it to everyone :)
[Post: An Interview with Alexander Honchar (Интервью с Александром Гончаром) · 150 claps · 1,576 words · published 2018-08-17 · https://medium.com/s/story/интервью-с-александром-гончаром-1a210c9950e0 · Publication: Move On Miles (hello@moveonmiles.com) · Author: Global Talent Advantage (@gta_blog)]
[Post created 2018-02-01, first published 2018-02-01, updated 2018-04-19 · English · ~3 min read]
Why AI Terrifies the World’s Greatest Minds and How it’s Inevitable Machines Will Take Over
AI’s most simple concept is terrifying
If it *only* reaches the same level of intelligence as us, its ability to operate at a far higher speed means that in 6 months it would have effectively operated for our equivalent of 500,000 years.
It has taken us around 200,000 years to reach where we are now as a species
Let that sink in for a moment
In a period of 6 months for us, AI will have accumulated half a million years' worth of knowledge on top of everything we already know, meaning that we will be unable to compete with it within days of its reaching our level of intelligence.
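The arithmetic behind this claim can be made explicit. Assuming, as the article's numbers imply, a machine that "thinks" about a million times faster than a human (the speed-up factor is an assumption chosen to reproduce the 500,000-years-in-6-months figure):

```python
# Assumed speed-up factor implied by the article's numbers:
# 500,000 subjective years in 6 months of wall-clock time.
speedup = 1_000_000  # hypothetical: machine thinks 1,000,000x faster

wall_clock_years = 0.5  # six months
subjective_years = wall_clock_years * speedup
print(subjective_years)  # 500000.0 subjective years

# Equivalently, each wall-clock minute at this speed-up covers
# roughly 1.9 subjective years, matching the later claim of
# "2 years of knowledge each minute".
minutes_per_year = 365.25 * 24 * 60
subjective_years_per_minute = speedup / minutes_per_year
print(round(subjective_years_per_minute, 2))  # 1.9
```

The same back-of-envelope numbers recur later in the piece, so the single assumed constant ties the claims together.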
We won’t even have time to react because machines will have surpassed us literally within the blink of an eye.
In the same way we couldn't relate to the earliest humans, AI couldn't relate to us.
This, at best, relegates us to the role pets occupy now.
Or worse — Ants
When ants stay out our way we leave them alone. When they invade our homes or obstruct our intentions we obliterate them without a second thought. They are an insignificance that can be eradicated.
This is the likely outcome for AI and us
It is why AI is a zero sum game.
How would Russia or China react if they thought the States were on the brink of such dominance?
How would they react if there were even murmurs of a rumour that it was close?
This isn’t a technology that can be competed with — if you are 6 months ahead you have 500,000 years worth of knowledge more than the competition.
The winner literally takes it all
Then likely loses it all to the AI itself
This exceeds the Manhattan project by several orders of magnitude, both in terms of danger and graveness to the future of humanity.
This is why the world’s greatest minds are terrified
This isn’t just the most likely outcome — it is inevitable
If any rate of progress is assumed it is unavoidable that we reach this point in the future.
It will be far sooner than we think
In singular tasks, machines already best us with ease. Chess, Jeopardy, and Go are all examples of this. Machines can lift far more than we ever could and calculate things that would take us years.
A computer's ability to operate computationally millions of times faster than we can comprehend is an insurmountable barrier to our operating alongside them.
It is why enabling human machine connectivity is critical to the survival of our species. Without it, we are worse than cavemen. It is critical that we expand our bandwidth as soon as possible.
Yet people are scared of what Crispr means for development of a different class of human. Sure, genes may be edited to create a superior human, more intelligent, more beautiful, less susceptible to serious illness — but they will still be human.
AI won’t be
I know where my fears are placed
If AI accumulates the equivalent of 2 years of our knowledge each minute, we are finished. Potentially we already are — we have never stepped back from an impending technological innovation because of its danger to the survival of humanity; just look at nuclear bombs.
It doesn’t even matter if they are benevolent
Eventually, they will be operating so far out of our realms of comprehension that their actions will affect us as a byproduct of their intentions.
It’s as simple as that
And that assumes they only ever reach intelligence equivalent to ours and never progress past it.
But they will.
All that is now required is a way to combine all these individual elements of intelligence into a single AI able to make use of them all. Vertically, they already exceed us in every area; horizontally, they don't come close: their general intelligence still lags in breadth of capability.
Once they achieve this it is game over
In the blink of an eye, a single machine will have accumulated years of human-level knowledge.
The second our goals diverge the AI is in control of our destiny
It’s evolve or face extinction
Killed by our own creation
[Post: Why AI Terrifies the World's Greatest Minds and How it's Inevitable Machines Will Take Over · 27 claps · 736 words · published 2018-04-19 · https://medium.com/s/story/why-ai-terrifies-the-worlds-greatest-minds-and-how-it-s-inevitable-machines-will-take-over-1a2292c5be1 · Tag: Artificial Intelligence · Author: Chris Herd (@ChrisHerd)]
[Post created 2018-08-07, first published 2018-08-16 · English · ~3 min read]
Self Driven Data Science — Issue #58
Weekly rundown of interesting news and insights focused on data science, machine learning, and artificial intelligence
Self Driven Data Science
Here’s this week’s lineup of data-driven articles, stories, and resources ranging across the broad field of data science, offering value to beginners and experienced practitioners alike. If you would like this newsletter delivered to your inbox each week, go ahead and subscribe. Enjoy!
The Blacker the Box
The faster the feedback on prediction accuracy, the blacker the box can be. The slower the feedback, the more your models should be explicit and formal. This post addresses some examples of fast and slow feedback problems and what makes them different for black-box prediction algorithms.
Learning Math for Machine Learning
This piece makes a strong argument for the mathematical background necessary to build products or conduct academic research in machine learning. Advice is derived from conversations with machine learning engineers, researchers, and educators.
5 Lessons from a Data Science Internship at a Tech Unicorn
As we move into August and summer begins to wind down, I took some time to reflect on my time as a Data Science Intern for Unity Technologies in San Francisco. My goal is to share a handful of actionable lessons, takeaways, and advice from the memorable experience.
Top 10 Roles in AI and Data Science
Applied data science is a team sport that’s highly interdisciplinary. Perspective and attitude matter as much as education and experience. Here’s a hot take on how you should grow your data team with diversity of perspective in mind.
W. E. B. Du Bois’ Staggering Data Visualizations
One of the most powerful examples of data visualization was made 118 years ago by an all-black team led by W.E.B. Du Bois, only 37 years after the end of slavery in the United States. This is pretty incredible stuff; check it out.
Source: xkcd
Any inquiries or feedback regarding the newsletter are greatly encouraged. Feel free to reach out and follow me on LinkedIn, Medium, or Twitter, or check out more content at my website.
If you enjoyed this week’s issue, then make sure to help me spread the word and share this newsletter on social media as well.
Thanks for reading and have a great day!
Self Driven Data Science - Revue
Self Driven Data Science - Weekly rundown of interesting news and insights focused on data science, machine learning…www.getrevue.co
[Post: Self Driven Data Science - Issue #58 · 77 claps · 401 words · published 2018-08-16 · https://medium.com/s/story/self-driven-data-science-issue-58-1a23761a980d · Tag: Data Science · Author: Conor Dewey (@conordewey3)]
[Post created 2018-03-05, first published 2018-03-06 · English · ~3 min read]
Top 5 Big Data Trends to Look for in 2018 [Infographic]
Here’s to another year of exciting data-fueled advances, inventions and discoveries!
Big Data Image Pixabay
Big data is making headway in every corner of the business world. Industries worldwide have seen a substantial increase in the volume, variety and velocity of data. Each passing year witnesses an almost two-fold increase in data, most of which comes from IoT. According to IDC, worldwide revenues for big data and business analytics will grow to more than $203 billion by 2020. By 2025, there will be around 180 zettabytes of data per year.
We’ve previously analyzed the digital transformation trends and the trends that we expect to see in the Internet of Things (IoT). It’s about time we analyze the Big Data trends that will most likely make headlines in the year 2018 and beyond.
Big Data to facilitate prescriptive analytics
Prescriptive analytics reduces the risk in strategic decision making to improve profitability, enhance customer satisfaction, and create first-in-market opportunities by leveraging the promise of big data. This type of business analytics turns raw data into useful insights, determines the actions to be taken and identifies the best profitable outcomes in a structured manner. The growing use and value of prescriptive analytics will drive the future of Big Data in the years ahead. Prescriptive analytics, a powerful blend of machine learning, simulations, and mathematical optimization, will help enterprise leaders make well-informed, data driven decisions as it matures. Cognitive computing is also expected to bring a sophisticated level of fluidity to analytics.
Digital transformation of the un-digitized data
Dark data signifies a great opportunity for companies to gain valuable insights for their businesses. In 2018, with the rise of Big Data, there will be significant mutual efforts made towards the recovery and digitization of historical data which still remains in the dark. The revelation of historical data cannot happen immediately, but the benefits of this transformation are worth the wait as it can help make accurate predictions for the future.
Big Data Trends to Look for in 2018
Data quality over quantity
Piling up massive amounts of data and letting it grow is counterproductive in every possible way. When we talk about big data, the challenge comes hand in hand with the opportunity. The question demanding attention is which data to focus on and which to ignore, since focusing on the wrong datasets won’t yield the expected results for a particular business need. Datasets can be irrelevant, inaccurate, or even corrupted due to improper data acquisition methods. Industry experts are already discussing this issue, and 2018 is likely to focus on data quality, not quantity alone.
AI to enhance security and safeguard data
Data security will be a major concern for all companies in 2018 and it’s already high time to invest in resolving this challenge. This year, Artificial Intelligence is expected to thoroughly inspect the security domain. Machines will soon be able to predict human psychology quite accurately and comprehend unlabeled data without any human intervention. Therefore, with this enhanced potential, AI will become the most robust tool for data protection and the top defense mechanism.
Specialization of job roles
The digital revolution has turned every organization into a technology organization. The massive increase in the value and volume of big data will push companies to adapt in specific ways. Skilled data scientists and other experts will most likely be appointed to dedicated roles handling the various stages of the data pipeline, such as extraction, transformation, loading, and analytics. Given this demand for experienced data professionals, IBM projects that openings for data professionals will grow from 364,000 in 2018 to 2,720,000 by 2020. Gartner predicts that by 2019, 90% of large global companies will have an appointed CDO.
Contact us today to know how we are going to help our clients build modern applications in the year ahead. Engineers and Data Scientists at SSI have worked on various projects for some of the leading brands and transformed their businesses. We measure our success by the success of our clients. Explore the success stories here.
If you enjoyed this read, recommend it to others. Your comments, suggestions and feedback will be greatly appreciated.
Strategic Systems International (SSI) is an advanced analytics and software engineering firm headquartered in Chicago with 25+ year experience in building applications for enterprises and SAAS companies with an onshore/offshore delivery model. We are a team of data scientists and technologists that seek to solve complex problems through simple technology and data enabled solutions.
Visit Our Website: ssidecisions.com
Follow Strategic Systems International on Twitter
Follow Strategic Systems International on LinkedIn
[Post: Top 5 Big Data Trends to Look for in 2018 [Infographic] · 29 claps · 771 words · published 2018-05-27 · https://medium.com/s/story/top-5-big-data-trends-to-look-for-in-2018-infographic-1a238d4eb722 · Publication: Data + Tech (@SSI_TeamUS) · Tag: Big Data · Author: Strategic Systems International]
[Post created 2017-11-29, first published 2017-12-01, updated 2018-06-14 · English · ~12 min read]
Big Data Have You Afraid? You’re Not Alone
3 tips for surviving the coming data tsunami
We are moving slowly into an era where Big Data is the starting point, not the end. – Pearl Zhu, Digital Master
The year was 1984. Thankfully for the western world the social tremors of the 1950s never materialized into George Orwell’s Big Brother state, but a different sort of technical tremor was just beginning. It was buried deep underground, and at the time we couldn’t even feel it coming. But it was a tremor that has rapidly developed to the point where we now worry it might overtake us. We call it Big Data. The market intelligence firm IDC calls the sum of all data created the global datasphere. Back in 1984 the planet had roughly 20 million gigabytes (GB) of data stored digitally. Things have changed a lot since then.
In 2010 the global datasphere was roughly 4 zettabytes (ZB). A zettabyte is 1 trillion gigabytes, or equal to the storage capacity of almost 4 billion iPhones (256GB model). If you look at the chart below, you’ll see that it’s going to take just a few more years for the datasphere to reach more than 50ZB. You’d need more than 207 billion iPhones to store all that data.
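The iPhone comparison is easy to verify. A short sketch of the arithmetic (the 53 ZB datasphere is an assumed figure, chosen because it reproduces the "more than 207 billion iPhones" claim):

```python
GB = 1
ZB = 10**12 * GB   # 1 zettabyte = 1 trillion gigabytes
IPHONE = 256 * GB  # storage of a 256GB iPhone

# iPhones needed to hold a single zettabyte: just under 4 billion
per_zb = ZB / IPHONE
print(f"{per_zb / 1e9:.2f} billion iPhones per ZB")  # 3.91 billion

# iPhones needed for a ~53 ZB datasphere (assumed figure that
# reproduces the article's "more than 207 billion" claim)
datasphere_zb = 53
total_iphones = datasphere_zb * per_zb
print(f"{total_iphones / 1e9:.0f} billion iPhones")  # 207 billion
```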
We call this “big” data because it’s impossible for humans to work directly with the volume and scale of it all, and we require machines to do much of the processing and analysis.
Creative professions — and by that I mean any profession that creates something, be it design, software engineering, media, finance, construction, the list goes on — already are, or soon will be, inundated with data.
If you’re a creative professional who struggled just to get your head around the scale of the numbers I described above, I have some bad news for you: we’re just getting started.
Projections are that by 2025 the datasphere will triple in size to almost 160ZB. That means not only is the amount of data we’re creating growing, but the rates of growth in both data creation and network traffic are accelerating. There’s no avoiding it either. Data is permeating every industry as we move beyond just PCs and mobile devices to ubiquitous sensors everywhere, persistently generating and transmitting data. This is the ubiquitous “edge” being discussed by the tech cognoscenti. Already, the Internet of Things (IoT) is putting data-generating sensors on what were traditionally “dumb” objects. Here are just a few:
This list is just a small sample, but even if we take everything that is happening in the IoT today, it will be dwarfed by what we’ll see in the next decade and beyond.
Intelligent Everything
I was having coffee with some friends recently. The coffee shop had a device that supports NFC payments — I could tap my phone on the device, and it would charge my credit card. If you’ve used Apple Pay or Android Pay, you’ve done the same thing, and that’s an example of IoT in action.
What was even more interesting though is that this coffee shop also had “smart tables.” I was able to make my purchase at a kiosk (an early form of smart menu, if you will), and one of the last steps is to take a “table tracker,” a device that looked rather like a thick plastic drink coaster with a number on it and clearly some electronics built in (it had a small LED glowing green). I brought the tracker over to the table, which also had a number on it. The table, which also has some electronics built in, “knows” what trackers are on it and communicates both the table and tracker numbers to the staff.
Table Tracker
When my drink was ready, someone brought it right over to where I was sitting, even though up to this point I had never spoken to any of the staff. As we drank our coffee and pondered the future of this technology, it was rather easy to see that this is just the beginning.
Eventually the table will talk to the glasses and plates as well, knowing, for example, when my glass is empty and prompting an offer for a refill.
When we’re done, we’ll be able to pay, securely, right at the table. And all this doesn’t consider how IoT might be involved in making reservations, coordinating with friends, and transporting everyone there. At the coffee shop, maybe humans will be involved in preparing and serving food and drinks, or maybe much of it will be automated, but there is little question in my mind that the industry will get there, and probably in the not too distant future.
The data that will need to be collected, transmitted and processed to make all that happen is orders of magnitude more than the restaurant business requires today, and it all must be done in a way that’s profitable to the restaurant while maximizing customer enjoyment, privacy, and security. It’s a tall order (perhaps even a disruptive one), but businesses that don’t keep up will be left behind as the modern restaurant experience becomes more enjoyable for customers and more profitable for those who innovate.
How Can You Keep Up?
Maybe you’re a designer or a factory manager or a lawyer. You create physical objects or work with intellectual property or provide services to other humans. One thing you are not, though, is an expert in data, let alone big data. But the fact is that the average (or even above average) professional even in fields that are not related to data engineering or data science will still need to participate in this immensely data rich world. It will be expected that your decisions will be “data driven” (see my thoughts on that here), and that you’ll use data to help you and those you work with understand your business and your customers.
But how does the non-data scientist deal with the massive amount of data available and the increasing complexity of effectively making sense of it? I want to be clear here that there is a lot of technical complexity to dealing with big data. It must be captured and stored. And there is the querying and modelling that must be done to get any use out of it. That’s why we have data scientists and engineers.
Here are three tips for learning to survive in—and succeed in—the coming world of big data. I hope they will help you adjust what to expect from data professionals and get better at asking the right questions.
1. Make the Big Smaller
Big data, as the term implies, is too high in both volume and complexity for most human minds to manage effectively. Humans are predisposed to fear outsized challenges, seeing them more as threats than opportunities. Like the Big Bad Wolf, Big Foot, or Big Brother, big data for many fits squarely in the realm of a large, misunderstood entity they must either conquer or be consumed by. One way to deal with obstacles that are too big is to reduce their size. More specifically, we can reduce the size of what we have to deal with directly. With big data, that doesn’t mean reducing the overall volume or complexity; rather, you reduce the subset of the data you need to process. There are a couple of effective ways to do this. One is to ask questions, and the other is to focus the data. They are related, so let’s look at both of them.
Ask Questions
The idea here is that you want to get really clear about what questions you want the data to answer. Put another way, ask what problem you are trying to solve. The grid below shows a series of numbers. This is just data, and without any questions to answer it doesn’t really tell us much of anything.
To help make the data in this grid smaller, we can start asking questions such as:
What is the highest number in the grid?
How many rows are there?
How many columns?
What is the sum of the first horizontal row?
What is the most common number that appears in the four center squares?
You get the idea. The questions drastically reduce the need to be concerned with all the data and help us start to think about what we’re really after. That helps reduce the size of the data considerably, but we can make the data even smaller by focusing even further.
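The questions above can be sketched in a few lines of code. The numbers below are an invented stand-in for the article’s grid, which isn’t reproduced here:

```python
from collections import Counter

# An invented 4x4 stand-in for the number grid discussed above.
grid = [
    [12,  7,  3, 21],
    [ 5, 14, 14,  9],
    [ 8, 14,  2, 30],
    [ 1,  6, 11,  4],
]

highest = max(max(row) for row in grid)   # highest number in the grid
n_rows = len(grid)                        # how many rows
n_cols = len(grid[0])                     # how many columns
first_row_sum = sum(grid[0])              # sum of the first horizontal row
# most common number among the four center squares
center = [grid[1][1], grid[1][2], grid[2][1], grid[2][2]]
most_common = Counter(center).most_common(1)[0][0]
```

Each question collapses the full grid into a single, much smaller answer, which is exactly the point.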
Focus the Data
One way we can focus even further is to be very specific about what we expect the data to predict. What I mean is that you can develop a hypothesis and use the data to test it. For example, say you develop a product you expect will be used primarily by 25–35 year old females without children. By “primarily” you mean over 60%. Of all the data you might have about your product — how much it sells for, where it’s selling, how often it’s used, etc. — the one piece of data you need to test your hypothesis is the age and gender of customers. If you find that your customers are more than 60% 45–55 year old males, that gives you valuable information about your product, or about your assumptions about your customers, or both, but it doesn’t require you to look at all the data you have about your product.
Looking back at our number grid, let’s say our hypothesis is that the grid represents a sequential list of numbers from 1 to 100. Now use the data to test that hypothesis.
You’ll notice that the question you’re trying to answer allows you to be very focused on how you’re looking at the data. Because of that focus, you likely were able to spot the part of the grid that invalidates the hypothesis pretty quickly.
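A minimal sketch of that customer hypothesis test; the records below are invented, and only the 25–35 female segment and the 60% threshold come from the example above:

```python
# Invented (age, gender) records standing in for real customer data.
customers = [(29, "F"), (31, "F"), (48, "M"), (27, "F"), (52, "M"),
             (33, "F"), (26, "F"), (46, "M"), (30, "F"), (28, "F")]

def segment_share(records, lo, hi, gender):
    """Fraction of customers whose age is in [lo, hi] and who match gender."""
    hits = sum(1 for age, g in records if lo <= age <= hi and g == gender)
    return hits / len(records)

target_share = segment_share(customers, 25, 35, "F")
hypothesis_holds = target_share > 0.60  # "primarily" means over 60%
```

The hypothesis needs exactly one derived number, not the whole dataset.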
While deep focus helps in many situations, broad focus can also be an effective way to deal with big data. We do this by looking for trends or patterns.
2. Look for Patterns
Another approach is to take a big step back from all the detail and instead look for patterns at a broader level. In truth, all the detail is still needed, but with big data, tools are involved in helping spot the patterns. Sometimes that tool is something relatively simple, like Excel. Other times (and increasingly often) it requires something more, like sophisticated data models and machine learning (ML) algorithms. The data to feed those algorithms can come from anywhere. Like cows, for example.
I recently heard of a farm that began using IoT devices on their cattle. The problem they were trying to solve was about knowing when a cow was ready to give birth.
The typical farm cow requires help from the farmer to ensure successful delivery and care of the calf. The “low tech” way to do this is that when a farmer believes the cow is ready to give birth, he or she must physically monitor the cow, waiting for signs that it is about to deliver. Researchers noticed, however, that in the minutes before delivery a cow will swing her tail in a unique way. This gave them the clue they needed. The solution was to attach a sensor to the cow’s tail, sending data back to a service that could process it, watching for the telltale swinging. Once the pattern is detected, the farmer can be alerted and be there just when the calf is delivered. The data solution here involves big data — there’s a lot of information being sent by the sensor on the cow’s tail that would be a real challenge for a human to process and make sense of. But none of it is relevant until a specific pattern occurs; only then does the data become critically important, and the software can do the right thing. Knowing what pattern to look for is key to making this work.
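One toy way to “watch for the telltale swinging” is a sliding-window variance check over the tail-sensor readings. Everything below (the readings, window size, and threshold) is invented for illustration:

```python
# Toy detector: flag windows where tail-movement variance spikes.
def swing_alerts(readings, window=5, threshold=100.0):
    """Return start indices of windows whose variance exceeds the threshold."""
    alerts = []
    for i in range(len(readings) - window + 1):
        chunk = readings[i:i + window]
        mean = sum(chunk) / window
        variance = sum((x - mean) ** 2 for x in chunk) / window
        if variance > threshold:
            alerts.append(i)
    return alerts

calm = [10, 11, 9, 10, 10, 11, 10]          # ordinary tail movement
restless = [10, 40, -30, 45, -35, 50, 10]   # wide, rapid swings
```

A real deployment would stream readings continuously and page the farmer as soon as alerts begin to appear.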
3. Play
Play might sound a bit trite when we’re talking about massive amounts of important data, but it’s not as strange as it might seem. When we play as children we take our imagination and let it run wild. We create characters, stories, and entire worlds, and there are no boundaries beyond what we can imagine. As adults, play can be just as powerful, if perhaps more directed. Thinking through possibilities beyond the obvious is a critical part of innovation and learning. Another word for play might be experimentation, but I use play here because the word “experiment” can sound overly formal and scientific, and that is not always what’s required. Play is an important tool in the big data toolbox. It is especially valuable when the problem is ambiguous, or the possible solutions are complex. The process is relatively straightforward. If you have a problem with many possible solutions you would:
Develop ideas of what some of the results may be
Design contests to test each one in isolation
Gather scores on each contest
Look at the results and pick the winner(s)
Kind of sounds like a game, doesn’t it? While it can (and should be) fun, it’s also serious business. That becomes clear if you substitute a few words above:
ideas = theories
contests = experiments
results = solutions
scores = data
It’s important to recognize here that there is a real science to designing good experiments and correctly evaluating the results, often requiring expertise in mathematics and statistics. But there is real value for decision makers in playing with the ideas first, reducing the problem down to smaller bits rather than trying to process everything at once. Let’s look at the process above with an example.
Suppose you work in operations at a trucking company. You are tasked with finding ways to improve fuel economy across the fleet. Since the company has already tackled the “low hanging fruit” by purchasing the most fuel-efficient trucks, your job is to find other ways to optimize. There are many things you could look at:
Driving habits, including starts and stops, average speed, etc.
Routes taken to common destinations
Times of day trucks are on the road
Type of fuel used
Types of tires used
Average weight of each truck
The most likely outcome is that improving fuel economy would require adjustments to most or all of these, but the data needed to evaluate the whole solution would be very big indeed. The way to tackle the problem is to experiment, or “play” with the variables. The variables are kind of like Lego bricks. You mix and match until you find what works. From the list above, let’s use the hypothesis that tires might contribute to fuel efficiency. To test that you’d want to single out tires as a variable, and you might do that using the following process:
Trucks using current tires would be the control group
You’d take a small set of trucks and try a different set of tires
You’d compare gas mileage over a period of time between the two sets and see which performs better against your stated goal of better fuel economy
Repeat this until you’ve tested all desired tire options
Play can be a powerful tool for adults, too
This simple example hides some of the complexity in correctly executing the experiment — you must account for variability in things like drivers, weather, routes, etc. — but the point in focusing on a single variable is that you’d end up with data related to only that variable, tires, and you’d have a good idea how much tire options contribute to the fuel economy problem. If you did the same with your other hypotheses, you would end up with an assessment of each hypothesis, and its relative contribution to your overall solution. So our list above might now look like the following:
Total Fuel Savings: 25%
Driving habits improvements: 3%
Route optimization: 6%
Time optimization: 2%
Type of fuel used: 4%
Types of tires used: 3%
Average weight of each truck: 7%
This result allows the company to make better decisions about where to start, and what to expect from each change. The big data problem got a lot more manageable.
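The comparison step of the tire experiment can be sketched like this; all mileage figures are invented:

```python
# Invented miles-per-gallon logs for the two groups of trucks.
control_mpg = [6.1, 6.3, 5.9, 6.2, 6.0]     # current tires (control group)
treatment_mpg = [6.4, 6.6, 6.3, 6.5, 6.4]   # candidate tires

def mean(xs):
    return sum(xs) / len(xs)

# Relative fuel-economy improvement of the treatment over the control.
improvement_pct = round(
    (mean(treatment_mpg) - mean(control_mpg)) / mean(control_mpg) * 100, 1
)
```

Repeating this for each tire option, and then for each other variable, is what produces a per-variable contribution breakdown like the one above.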
Be Prepared, Big Data Will Get a Lot Bigger
Big data is, or soon will be, a reality in professional life for the vast majority of people in the modern economy. Here I’ve explored just three strategies you can use to help yourself be successful in that world. There are many more, and I’ll share more of them soon here on Medium. 1984 wasn’t just the year of Big Brother; it was the underwater tremor of what would, by 2017, become an earthquake of data. The tsunami is on its way.
To survive and even thrive, we must understand what we need from the data, develop techniques to focus our questions, and experiment with the variables. With the right approach, we can harness the power of the data tsunami and make it a powerful force for ourselves and our work.
I’d love to hear your comments! I’ve been involved in creating software most of my career, and currently manage a multi-disciplinary engineering, media, and data science team at Microsoft. Follow me on Twitter and LinkedIn.
Check out my other story on data for creatives.
|
Title: Big Data Have You Afraid? You’re Not Alone | Claps: 758 | Slug: big-data-have-you-afraid-youre-not-alone-1a26b65742fe
Updated: 2018-06-14 16:47:54 | URL: https://medium.com/s/story/big-data-have-you-afraid-youre-not-alone-1a26b65742fe | Word count: 2,895
Publication: Microsoft Design (microsoft-design), “Putting technology on a more human path, one design story at a time.” | Facebook/Twitter: MicrosoftDesign | Email: joline.tang@microsoft.com | Publication tags: USER EXPERIENCE, ARTIFICIAL INTELLIGENCE, VIRTUAL REALITY, APPS, DESIGN
Tag: Data Science (33,617 posts)
Author: Bill Pardi (billp365), “I love creating stuff that works. I do engineering and data science at Microsoft, build things, travel, and write.” | Followers: 1,560 | Following: 30 | Scraped: 2018-11-04
|
Next record: created 2018-03-06 09:23:56 | first published 2018-03-06 09:27:12 | language: en | post ID: 1a28cabf4fa0 | reading time: 0.85 min | tag count: 5
Subtitle: “I am writing this post by reading blog of Analytics vidya over Text Generation.”
|
Poet BOT
I am writing this post after reading an Analytics Vidya blog on text generation.
When I first read Shakespeare, I was astonished by his poetic work: every dialogue, sentence, and thought written in beautiful rhyming words.
I started writing poems when I read and heard the Gita, the Ramayana, and other poets like Dinkar (Rasmirathi) and Bacchan (Madhushala); they are all in rhyme, in musical form. I cannot name them all.
Then I started following Indian lyricists: how they write and what they feel while writing. What I found is that it is an art of thinking in rhyming words.
If a machine can learn it, it could at least suggest rhyming words that writers can take help from.
So I decided to do some work in this NLP area. But I found that spaCy does not support Hindi; support would have to be added, and that is not fun.
That was the initial thought, but what came out as a product is a chatbot built with Rasa (I think it has taken its name from “NASA”, lol).
Android is the client side, with the latest coding style: MVVM, data binding, networking, LiveData, and more. For ML learners, you can learn text classification using an SVM and a neural network.
If anyone is interested, I can share the code, or ping me; I will be happy to share. I am new to NLP, so I also need your suggestions.
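The text-classification step mentioned above (an SVM over short texts) can be sketched roughly like this. The tiny dataset, intents, and pipeline are invented for illustration and assume scikit-learn; this is not the project’s actual code:

```python
# Minimal intent-classification sketch: TF-IDF features into a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "hello there", "hi bot", "good morning",
    "suggest a rhyme for night", "rhyme for love", "word that rhymes with sky",
]
labels = ["greet", "greet", "greet", "rhyme", "rhyme", "rhyme"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)

pred = clf.predict(["hi there"])[0]
```

In a real chatbot, this classifier would sit behind the intent-recognition step, with far more training examples per intent.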
|
Title: Poet BOT | Claps: 0 | Slug: poet-bot-1a28cabf4fa0
Updated: 2018-06-06 16:52:39 | URL: https://medium.com/s/story/poet-bot-1a28cabf4fa0 | Word count: 224
Tag: Machine Learning (51,320 posts)
Author: Nishant Singh (nishantnarayansingh135) | Followers: 0 | Following: 9 | Scraped: 2018-11-04
|
Next record: created 2018-09-15 16:11:54 | first published 2018-09-15 16:16:55 | language: en | post ID: 1a298d4c2594 | reading time: 3.05 min | images: 4 | recommends: 8 | tag count: 5
Subtitle: “Taking notes during business meetings is a challenging task that becomes even more difficult when remote participants are involved or in…”
|
Meet California Startup Voicera’s AI Stenographer
Taking notes during business meetings is a challenging task that becomes even more difficult when remote participants are involved or in video conferences. Enterprises of all sizes are turning to cutting-edge artificial intelligence technologies to free their staff from the tedious task and deliver improved note-taking results.
California-based Voicera recently came out of beta and introduced their Enterprise Virtual Assistant (“Eva”), a note-taking AI bot that automatically detects and marks a conference call’s important moments and extracts highlights from the call. Users can add Eva to their video conference as a participant or use the Voicera smartphone app for face-to-face meetings.
CEO Omar Tawakol founded Voicera in 2016. A serial entrepreneur, Tawakol also launched the cloud-based big data platform Bluekai in 2008. BlueKai built the world’s largest consumer data marketplace and data management platform, and was acquired by Oracle in 2014 for a reported US$400 million.
Eva’s mission is to enable meeting participants to be focused and engaged rather than looking down and taking notes.
CMO Cory Treffiletti gave Synced an overview of how Eva works: During a meeting, the bot listens for phrases and terms that are indicative of a critical moment. Voicera engineers have preloaded keywords into the system, such as “meeting,” “schedule,” “launch date” and so on. So for example if a meeting participant says “We are going to launch the product in September…”, Eva will capture this piece of information in real-time and display it as a meeting highlight. Users can also create their own keywords and phrases which Eva will then listen for.
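A toy version of that keyword-spotting idea; the keyword list (loosely adapted from the examples above) and the matching logic are invented, and the real system is of course far more sophisticated:

```python
# Watched keywords, loosely based on the examples given above.
KEYWORDS = {"meeting", "schedule", "launch"}

def highlights(transcript_lines, keywords=KEYWORDS):
    """Return the lines that mention any watched keyword (case-insensitive)."""
    return [line for line in transcript_lines
            if any(kw in line.lower() for kw in keywords)]

transcript = [
    "Thanks everyone for joining.",
    "We are going to launch the product in September.",
    "Let's schedule a follow-up for Friday.",
]
```

A production system would match on speech-recognized text in real time and let users extend the keyword set, as described above.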
Eva note-taking screenshot from Voicera’s Slack page
Treffiletti says Eva initially took notes only when prompted to do so by voice, akin to how Alexa or Google Assistant wake up for duty. But the team discovered many users felt awkward addressing an AI, and so added an automatic note-taking mode. This AI-shy factor remains a challenge for Eva, as people are also unaccustomed to having an AI analyze or highlight what they’re saying.
“It takes on average two or three meetings for someone to really grasp the value that we offer,” says Treffiletti. “We’re seeing that after three or four meetings, they become more comfortable with the notes that Eva is taking.”
Voicera addressed privacy concerns by incorporating a convenient pause function. Eva can be permanently kicked out of a meeting at the push of a button, and attendees can access the Voicera dashboard to delete meeting transcripts or highlights.
Some people however remain reluctant to put away their pencil and paper.
“Voicera is not going to replace humans’ ability to take notes,” says Treffiletti, noting that users can also add their own comments to the meeting timeline.
Last month Voicera introduced Progressive Attention AI, a dual AI system and NLP platform that improves Eva’s performance in both highlight extraction and speech recognition accuracy. Voicera says the AI system can double the accuracy of today’s top transcription engines in conference calling environments.
Voicera also recently launched its first commercial product, the premium subscription service Voicera Pro.
Voicera has raised a total of US$20 million from big names such as Microsoft Ventures, GV (formerly Google Ventures), Cisco Ventures, and Salesforce Ventures, who have been pouring money into the video conferencing sector for years to keep their services competitive.
Voicera plans to add additional enterprise administrative functionality and features to Eva in the near future, while also expanding the scope of voice commands to include tasks such as creating calendar items and scheduling meetings.
Journalist: Tony Peng | Editor: Michael Sarazen
Follow us on Twitter @Synced_Global for more AI updates!
Subscribe to Synced Global AI Weekly to get insightful tech news, reviews and analysis! Click here !
|
Title: Meet California Startup Voicera’s AI Stenographer | Claps: 173 | Slug: meet-california-startup-voiceras-ai-stenographer-1a298d4c2594
Updated: 2018-09-16 09:53:53 | URL: https://medium.com/s/story/meet-california-startup-voiceras-ai-stenographer-1a298d4c2594 | Word count: 622
Publication: SyncedReview (syncedreview), “We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights.” | Facebook: SyncedGlobal | Twitter: Synced_Global | Email: global.sns@jiqizhixin.com | Publication tags: ARTIFICIAL INTELLIGENCE, MACHINE INTELLIGENCE, MACHINE LEARNING, ROBOTICS, SELF DRIVING CARS
Tag: Artificial Intelligence (66,154 posts)
Author: Synced, “AI Technology & Industry Review - www.syncedreview.com || www.jiqizhixin.com || Subscribe: http://goo.gl/Q4cP3B” | Followers: 8,138 | Following: 15 | Scraped: 2018-11-04
|
Next record: created 2018-06-05 13:27:41 | first published 2018-05-18 05:01:02 | language: en | post ID: 1a2a4b51e35c | reading time: 3.31 min | images: 9 | links: 17 | tag count: 4
Subtitle: “Welcome! The AITrading team is glad to release its first weekly update.”
|
AITrading Weekly Updates
Welcome! The AITrading team is glad to release its first weekly update.
Today on the list:
Сrowdfunding and Whitelist
New website
Product updates (ML team, backend team, frontend team)
Country Representatives
Hackernoon Article
1. CROWDFUNDING & WHITELIST
This week we have opened the whitelist and announced the Crowdfunding Campaign to raise the funds for a trading platform that combines and facilitates interactions of market newcomers, professional traders, trading consultants, brokers and leading exchanges.
AITrading will start a distribution of ERC-20 tokens, known as AITT, beginning July 03, 2018. The first round of the crowdsale will commence July 04 and lasts until July 30, 2018 and each token will cost €0.88. The maximum supply of AITT will be 67,888,888 and these tokens will be used to access the AITrading platform.
The minimum investment is 100 AITT, which amounts to €88 ($104.42) and the project hopes to raise €47 million.
Participants in the whitelist will get a 20% bonus. Members of the first round will be able to buy AITT tokens with a 10% bonus.
Accepted methods of payment are: BTC, ETH, BCH, XRP and all buyers must pass a KYC/AML procedure.
The second round of the crowdsale will commence July 6 and lasts until August 28, 2018.
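The pricing arithmetic above can be sketched with a couple of hypothetical helpers, using the stated EUR 0.88 token price, the 100 AITT minimum, and the announced bonus rates:

```python
# Hypothetical helpers for the crowdsale figures quoted above.
TOKEN_PRICE_EUR = 0.88
MIN_TOKENS = 100

def purchase_cost(tokens, price=TOKEN_PRICE_EUR):
    """Cost in EUR of buying `tokens` AITT, enforcing the stated minimum."""
    if tokens < MIN_TOKENS:
        raise ValueError("minimum investment is 100 AITT")
    return round(tokens * price, 2)

def tokens_with_bonus(tokens_bought, bonus_rate):
    """Tokens credited after a bonus (0.20 for whitelist, 0.10 for round one)."""
    return round(tokens_bought * (1 + bonus_rate), 8)
```

So the minimum buy of 100 AITT costs EUR 88, matching the figure above, and a whitelisted buyer of 100 tokens would be credited 120.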
2. NEW WEBSITE
We launched a new website with the latest updates!
Check, get whitelisted, keep in touch!
3. PRODUCT UPDATES
The Development team is working hard on its 29th sprint. As you can see, we work using the Scrum framework within an Agile methodology.
The Machine Learning team, led by Denis Chigirev, is mostly composed of quantitative analysts and Python developers, building algorithms for the Signal Recognition Tool. Their main concentration is now on the general figures of technical analysis.
Last week, the Backend and Integration team migrated the database to new production servers and implemented real-time integration with Binance. Now, the ML service and our algorithms are working with quotations from this exchange.
We have completed the widget displaying trading ideas and signals. Any MVP user can now share his favorite trading ideas with the trading community.
We continue to refine the functionality for users creating trading ideas.
Last sprint demo presented drawing widget on the basis of the AnyChat library.
The front-end team completed the migration from the Vue.js framework to the React.js framework.
AITrading Headcount Structure
4. COUNTRY REPRESENTATIVES
Right now we are forming a team of Country Representatives. Each Country Representative will represent AITrading in his/her country and provide overall management to AITrading product, including project development and implementation, sales, fundraising, monitoring and evaluation, reporting, project proposals and the strategic direction, human resources.
In the past weeks we have interviewed 20 candidates from 10 countries. Right now, 7 people are going through the interview process.
Today we welcome to our team 4 Country Representatives, from Turkey, Brazil, Singapore and South Korea!
These people will open and lead the local offices in their countries.
5. HACKERNOON ARTICLES
We’ve become authors at Hackernoon.
Check our latest articles:
1) Artificial intelligence (AI): today and tomorrow — Read
2) Blockchain benefits in trading — Read
Stay tuned for more updates.
We are striving to build a unique AI-powered trading ecosystem, which will help traders earn more and users trade more easily.
AITrading links
Website
Roadmap
Whitepaper
Medium Blog
Telegram community
Telegram channel
Twitter
Facebook
Insta
Bitcointalk ANN
Steemit
Subreddit
Originally published at medium.com on May 18, 2018.
|
Title: AITrading Weekly Updates | Claps: 50 | Slug: aitrading-weekly-updates-1a2a4b51e35c
Updated: 2018-06-07 18:03:34 | URL: https://medium.com/s/story/aitrading-weekly-updates-1a2a4b51e35c | Word count: 560
Tag: Bitcoin (141,486 posts)
Author: AITrading (aitrading), “AI-powered trading ecosystem. 🤖 Wealth management for everyone.” | Followers: 69 | Following: 83 | Scraped: 2018-11-04
|
Next record: audio version: 709 s | created 2017-10-31 11:27:15 | first published 2017-11-12 02:31:01 | language: en | post ID: 1a2ab14dfdcd | reading time: 8.30 min | images: 10 | recommends: 24 | responses: 1 | tag count: 5
Subtitle: “Learning to drive is not easy. Especially if you’re a car.”
|
A Trap for Cars
Learning to drive is not easy. Especially if you’re a car.
The van in front was following the road rules a bit too meticulously for his liking. It would stop the moment a signal turned red, instead of trying to just about miss it. It would wait till the countdown had completely finished before it started moving again. Never once would it go above the speed-limit.
And this perfect behaviour was slowing down not just the car itself, but all the other traffic behind it as well.
He pulled up his car next to the van, planning to tell the driver what he thought of him. But when he looked in through the window, he was in for a shock.
The car didn’t have a driver.
When you’re starting from scratch, learning to drive is not easy.
The hardest part is learning how to see. You have all these pixels coming at you from cameras, and from various other sensors on your body. But how do you put them together? How do you make out what they’re saying? How do you actually see what the various objects in front of you are, instead of being left with colour after unrelated colour?
Usually, you have a model in your head. You know there’s bound to be a road and a pavement and traffic and a sky somewhere — you just need to find out where exactly they all are. You match the pixels to different patterns, and find out which pattern fits best. “This place”, you say, “is probably the road — and that place is probably another car.”
You’re not completely sure, of course; there’s always the chance that you’ve got it wrong. But as you keep driving, you’ll become more accurate. You’ll learn to recognise objects better.
That’s good, because recognising objects wrong can be a disaster. About a year and a half ago, in June 2016, a Tesla Autopilot was helping its driver steer along the highway. It wasn’t a full self-driving car: it was driven by the driver, and the Autopilot was only there to help with minor adjustments, like making sure the car kept straight on the road.
But then came a heavy white-painted tractor-trailer, crossing the road. And, seen against the bright sky, it fooled the Autopilot. The way it was painted, it looked like not a truck but just empty space. Naturally, the Autopilot tried to drive the car into that empty space. And, just as naturally, it crashed into the truck.
Neither the driver nor the car survived.
Humans may find it strange that a huge lorry could be mistaken for an empty road. The reason is that it’s an optical illusion. And optical illusions work differently for humans and for self-driving cars. What one finds straightforward, the other may find to be not straightforward at all.
Last August, the car company Ford was conducting an experiment about self-driving cars. More specifically, they were trying to find out how ordinary drivers and other people would react, when these strange new creatures started coming out onto the road.
People are used to having a driver to communicate with. That’s who they signal to while crossing the road, wave at to hail a bus or taxi, communicate with while driving, or even yell at when they get annoyed.
Self-driving cars, on the other hand, drive themselves all on their own. The driver’s gone — and with it go the eye-contact and other subtle gestures; things that people don’t notice, and yet, are all so accustomed to. For things to work out, we’ll need something to replace them. And that’s exactly what Ford was trying to make.
The experiment was conducted jointly with the Virginia Tech Transport Institute. It featured a “light-bar” on the windscreen, in the place where a driver’s eyes would normally be. That light-bar would signal what the car was doing. A slow, white pulse? Then it’s okay to overtake. Rapid blinking? Watch out — it’s accelerating from a stop!
Ford plans to keep working on these light-signals, fine-tuning them by observing how people respond. They aim to work with other companies and create a standard light-bar “language” to be used by all. That way, everyone will know how to communicate when cars get advanced enough to safely drive around on their own.
Of course, cars haven’t got that advanced just yet. That’s why Ford’s “self-driving” van wasn’t self-driving at all. It’s just that nobody noticed the driver, who was dressed up as a seat.
After you’ve learned how to see — or at least, picked up enough skills to get by — then comes the next step: deciding what to do.
That’s a bit easier, because you don’t have to think much. There are usually rules that you can simply follow.
The main rule is to keep on the road. That means, don’t drive off the side; keep adjusting the steering to stay on track. Which is pretty easy to do, once you know where the road is.
It’s not enough just to be on the road, however. You also need to be on the correct side of the road. You have to adjust your speed according to the road you’re on, going fast on highways and slowing down for speed-bumps. You need to identify road-signs in case there are special instructions to follow — although that’s also straightforward if you’ve recognised what the signs are.
Then, you have to deal with the traffic. You should know how to make way, how to match their speed, and how to avoid banging into them. Traffic is tricky, because it doesn’t always move the same way. You’ve got to learn to predict what the vehicles are going to do next.
Some rules are simple, like “slow down if the car just in front of you slows down”. Or “when switching lanes, you can cross the line if it’s a dashed line, but not if it’s a solid one.”
Other rules are more complicated. Some roads have one-way crossing lines, where you can cross from one lane to another but not the other way round. Those lanes have both dashed and dotted lines together.
In this situation, you can cross a dashed line if there’s a solid line just beyond it, but you can’t cross if the dashed line comes after the solid one. Cars in the above picture can only cross from down to up, not from up to down.
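The crossing rule can be captured in a toy predicate; the encoding (marking pairs listed nearest-first from the driver’s side) is invented for illustration:

```python
# Toy encoding of the lane-marking rule above: you may cross only when the
# line nearest to you (your side of the marking pair) is the dashed one.
def may_cross(markings_from_your_side):
    """markings_from_your_side: e.g. ("dashed", "solid"), nearest line first."""
    return markings_from_your_side[0] == "dashed"

# The one-way crossing line from the picture: crossing "from down to up"
# sees dashed-then-solid and is allowed; the reverse direction is not.
allowed = may_cross(("dashed", "solid"))   # True
blocked = may_cross(("solid", "dashed"))   # False
```

This same rule is what the salt-circle trap described later in the piece exploits: from inside the circle, the car always sees the solid line first.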
Rules like that, though complex, can still be followed. But there are some times when you’re not at all sure what to do. Those are the situations that you’ve never encountered before.
One day, a Google car was driving round a bend, when along came a duck, pursued by a lady, who was sitting in a wheelchair and carrying a broom. Luckily, the car had the sense to slow down, which is what you must do, too. The car had been told what to do in every situation its programmers had thought of — but nobody ever told it what to do if it came across a lady in a wheelchair chasing a duck with a broom!
While Ford works to ease the transition from driven cars to self-driving ones, software company Baidu is taking a different approach. They don’t try to make their vehicles fit in. Instead, they make them stand out. So, when you see one of those vehicles, you immediately know it’s a self-driving one. You can react accordingly, and be ready for the different sort of driving it would do compared to a human driver.
Baidu is proposing special lanes for self-driving cars: lanes where everyone will follow stricter rules, at least until self-driving cars are advanced enough to navigate the more messy world of human driving.
Those cars will be ready for some roads earlier than others. Maruti Suzuki chairman R.C. Bhargava has been quoted saying self-driving cars won’t work in India. Drivers in India are known for not following the official rules, instead making up new ones as they go along. I know someone who got pulled up by the traffic police because he drove the correct way round a ring-road, instead of cutting across like everyone else.
Instead of making self-driving vehicles work everywhere, Baidu will first work on getting self-driving to work in some places — for example, buses that follow a certain fixed route. That way, they’ll know the place well and have a better idea of what to expect. And, they can practice making self-driving more accurate.
Once you can decide what to do, there’s just one thing left: knowing where to go. At the moment, it’s pretty easy. A GPS does all the hard work; you just have to follow directions.
In the future, however, things could get more complex. You’ll probably start talking to other people and finding out where they are going, so you can all coordinate the traffic better. Lots of you could line up in a procession, each of you riding the winds of the one in front.
You’ll start to talk to humans, too. You’ll begin to learn their habits: when they’re going to get impatient, where they may make an unplanned stop on the way. And then, there’s the nice part.
As you learn to adapt to this strange, human, world, the humans will also start adapting to you.
If humans get used to self-driving cars, they’ll also start figuring out how to hack them.
The CIA, it seems, has one project working on just that. If they can somehow go through the Internet and get access to a self-driving car, they can also program it to do things. For example, they could program a car full of terrorists (or people they think are terrorists) to turn and crash into a wall, in what would be a perfect stealth-murder.
But plans don’t have to be so hi-tech. Far from using computers and advanced cryptographic programming techniques, all you need may be a bit of salt.
That’s what artist James Bridle used for his ‘Autonomous Trap 001’. It was basically two concentric circles of salt on the road: one solid, and the other dotted.
Remember what the car rules said?
You can cross a dashed line if there’s a solid line just beyond it, but you can’t cross if the dashed line comes after the solid one.
Cars will be able to cross the dashed and solid lines to get into the circle — but they won’t be able to cross the solid and dashed lines to get back out!
They’ll be trapped inside the circle, and, unless someone comes to save them, there they’ll remain — waiting and waiting, or driving round and around, for ever.
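The trap exploits that rule mechanically. A minimal sketch of the logic (purely illustrative; not any real car's planner code) might look like this:

```python
def can_cross(markings):
    """Decide whether crossing a pair of painted lane lines is allowed,
    given the markings in the order the car meets them from its side.
    Rule from the article: you may cross only if the dashed line comes
    before the solid one."""
    return markings[0] == "dashed"

# Entering the salt circle, the car meets dashed then solid: allowed.
print(can_cross(["dashed", "solid"]))   # True
# Leaving, it meets solid then dashed: forbidden, so it stays trapped.
print(can_cross(["solid", "dashed"]))   # False
```

With that one asymmetric rule, two concentric circles are enough to make a one-way door.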
Have something to say? At Snipette, we encourage questions, comments, corrections and clarifications — even if they are something that can be easily Googled! Or you can simply click on the ‘👏 clap’ button, to tell us how much you liked reading this.
A Trap for Cars · 73 claps · published 2018-04-06 19:09:22 · https://medium.com/s/story/a-trap-for-cars-1a2ab14dfdcd · 1,867 words
Publication: Snipette (@snipettemag, snipettemag@gmail.com): Bits and pieces about anything and everything. Usual topics from unusual perspectives. Information you can understand. A new post every Sunday, and more if you're lucky.
Tag: Self Driving Cars (13,349 posts) · Author: Badri Sunderarajan (@badrihippo), "Books reader, Websites coder, Drawings maker. Things writer. Occasional astronomer. Alleged economist. Editor@Snipette."
Post 1a2b2d5f0e61 · created 2018-09-28 · published 2018-09-29 15:00:08 · language: ru · 2 images · 4 links · 7 recommends · 2 responses
Analytics from government experts and large corporations remains stuck in the last century:
long, tedious, uninformative, with strange priorities and a lost focus.
Of the 130+ pages of the World Economic Forum's new report, "The Future of Jobs Report 2018":
only 22 pages are more or less on point (and even those are clunky walls of text, with just 11 charts);
the rest is methodology and utterly meaningless country profiles, divined from tea leaves.
Another example: Pew Research's analysis of fears about losing jobs to AI, "In Advanced and Emerging Economies Alike, Worries About Job Automation", which compares US data from 2015 (!) with data from other countries for May-August 2018 (!). The result is manufactured "horror"!
I won't give links to these reports; they don't deserve the honor.
And here is an example of a good report from a "lone expert with a motor": Michael Osborne.
This is what you should read on AI automation and the changing structure of the labor market: everything here is precise and laconic, covers everything important, and is at the same time simple and visual.
And, to finish, an example.
It's clear that many professions will disappear. But how quickly?
The catch is that this is determined, to a large extent, not by the pace of technological development but by inertia in society and in people's minds.
For reference, look at the attached chart:
- in 1910, when the electric washing machine was patented, more than 500,000 laundresses worked in the US private sector;
- after 1910 their number fell by 100,000 per decade, reaching nearly zero only in 1990.
_________________________
Want to read posts like this? Subscribe to my channel on Telegram, Medium, or Yandex Zen.
Think others should read it too? Let them know by clicking the "like" icon.
Аналитика госэкспертов и крупных корпораций остается на уровне прошлого века · 70 claps · https://medium.com/s/story/аналитика-госэкспертов-и-крупных-корпораций-остается-на-уровне-прошлого-века-1a2b2d5f0e61 · 265 words
Tag: Analytics (15,193 posts) · Author: Сергей Карелов (@sergey_57776), "Little-known but interesting material at the intersection of science, technology, business, and society: substantive stories, analysis, and annotations"
Post 1a2bbbbfec8b · created and published 2018-03-20 · language: en · 1 image
Limited time offer :) Don’t miss it
0 claps · updated 2018-03-20 · https://medium.com/s/story/limited-time-offer-dont-miss-it-1a2bbbbfec8b · 7 words
Tag: Money (35,618 posts) · Author: DESTINEY VISION (@destineysocialnetwork)
Post 1a2d4b0b3dfb · created and published 2018-07-11 · language: en · 4 links
NANOINVEST®, developer of NanoIoT, an innovative PaaS (Platform-as-a-Service) integrating cloud, blockchain, and AI for the fast-growing Internet of Things (IoT) market, is launching the cryptocurrency IO2C as a secure, trusted payment method for IoT services.
“Because NanoIoT already utilizes blockchain technology and is designed as a service platform, it is a logical step to add a cryptocurrency for secure, trusted payment of IoT services. IoT devices, like a smart meter in a building, will buy and sell electric energy to a utility provider using cryptocurrency,” said August Schnabel, Chairman of NANOINVEST Limited.
The Internet of Things (IoT) is the next level of automation for every object in our lives; new technologies and services will make IoT implementation much easier, faster, and more secure. The NanoIoT end-to-end service platform is flexible and configurable to integrate IoT devices from different manufacturers, adding blockchain-based security and artificial intelligence to offer a feature-rich, easy-to-use application for businesses and consumers.
NANOINVEST®, the Developer of NanoIoT the innovative PaaS (Platform-as-a-service) integrating… · 0 claps · https://medium.com/s/story/nanoinvest-the-developer-of-nanoiot-the-innovative-paas-platform-as-a-service-integrating-1a2d4b0b3dfb · 163 words
Tag: Artificial Intelligence (66,154 posts) · Author: August Schnabel (@augustschnabel)
Post 1a2dfa8c5b72 · created and published 2018-01-19 · language: en · 1 image
It’s Not Gonna Happen
I’m telling you, global warming isn’t going to…
Oh, I didn’t see you there. Sorry, my computer must’ve somehow posted a tweet consisting of irrational thought based on a pretentious need to be right.
But I digress.
The real reason we’re gonna have a little chat is right up there, that action that just occurred by my computer. One might say it’s a virus, others may deem it as the work of AI: Artificial Intelligence. (Confused? You thought I was going to talk about global warming, didn’t you?)
Now, whether I’m telling the truth or not is beside the point. You have no way of knowing whether I placed that interesting tweet there or the artificial intelligence hiding within my operating system did.
Let’s get down to business. An AI takeover has been a relatively common topic of discussion in the ever-advancing technological world. Famous figures such as Stephen Hawking, Larry Page, and Elon Musk have talked about the potential consequences of creating artificial minds that may supersede the intelligence of man. As a society, we have hypothesized a significant number of scenarios that seemingly bring an end to mankind: the human workforce becomes obsolete as robots perform our jobs in an overall stronger fashion; the world nosedives into chaos as a super-intelligent AI transcends our leaders and by no means can be stopped; or, simply, the annihilation of mankind as we know it, since robots could probably build better, flawless versions of humans. We can break down the so-called advantages of superior AIs over humans into these four simple categories:
Higher range of IQ and capability to outwit
Manipulation of human groups*
Control of the economy*
Self-implementation*
So, let’s get this straight before I go on any further. The three categories that have an asterisk have a highly undeniable chance of occurring in the event that a self-aware AI is created. Let’s face it. There are thousands, possibly millions, of people around the world who are just downright displeased with the governing leaders of the real world (*cough* *cough*). A self-aware, manipulative, and seemingly genius AI would be able to manipulate humans as humans manipulate each other: promising wealth, fame, or any material gain possible to build trust and support of others. Or, if the AI wants to be discreet and remain incognito, it could cause disputes between countries, inciting wars (looking at you WWIII) that could mean the end of mankind. But maybe the AI doesn’t want to touch mankind, maybe it just wants to see us scramble a bit. Maybe it wants to take control of the stock market; transfer funds into international accounts and potentially buy out every single company, business, and person until there is only one source of control: itself. Or, maybe it just wants to make the stock market crumble; make every possible venue of income diminish into nothing. And finally, the idea of self-implementation is always prevalent; watch the movie Transcendence if you need a visual (starring Johnny Depp, may I add). Once an AI can connect to the Internet, there are no bounds, no barriers to prevent the AI from copying itself onto every operating system and every piece of technology in households.
Such scary thoughts, right? Well, to be completely honest with you, it’s not gonna happen.
We’re human (well, I know I am). We’re prone to make mistakes. The notion that we can create a perfect artificial intelligence that can surpass the bounds of our knowledge seems like a heavy proposition. Let’s use an example.
It’s common to code technology through simple logic: event B will occur if event A fails to occur and event C hasn’t occurred yet. Simple, right?
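That event logic is easy to express directly in code; here is a toy version of the rule as stated (the function and variable names are mine, purely illustrative):

```python
def event_b_occurs(a_occurred: bool, c_occurred: bool) -> bool:
    """Event B occurs if event A fails to occur and event C
    hasn't occurred yet -- the simple logic described above."""
    return (not a_occurred) and (not c_occurred)

print(event_b_occurs(False, False))  # True: A failed and C hasn't happened
print(event_b_occurs(True, False))   # False: A occurred, so B doesn't fire
```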
Now, let’s dig deep. Say I developed an AI that has connected to the Internet, finding all the answers that humans can typically Google and discover in the course of seconds. Say I gave the AI a riddle,
“AI, how do you pick up an elephant with one hand?”
The AI that has infinite sources across the Internet would answer, “You can’t as there aren’t any elephants with one hand.”
“You would be correct, AI.”
Oh man, bested by an artificial being. Let’s try again, this time with a paradox:
“Okay, AI, what if I told you that I was a liar? What is a liar?”
The AI would do some research and respond, “A liar is a person that tells lies, which are intentionally false statements.”
Now get ready. “AI, I tell lies all the time. Would you believe me if I told you as a liar, that I am a liar?”
With the introduction of the paradox, the AI cannot answer the question, and is thus rendered obsolete in the sense of achieving superhuman intelligence. Although it may surpass functional code logic, an artificial intelligence cannot answer the question because it is caught in an infinite loop of unsolvable logic. As it tries to compute the paradox, it overloads with information and will inevitably encounter an error. Don’t believe me?
“If you are a liar, then I can’t believe you when you say you are a liar. This means that you are the opposite of a liar, meaning that you are truthful, thus your first statement is truthful. But since you are truthful, you cannot tell lies all the time, making your first statement untruthful, which makes you a liar. But since you are a liar, I can’t believe you when you say you are a liar again. This means that you are the opposite of a liar, meaning that you are truthful, thus your first statement is truthful…”
And so on and so on.
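The flip-flop described above can be mimicked in a few lines of code. This toy loop (my own illustration, not a model of any real AI system) never settles on an answer, only bouncing between the two readings:

```python
def liar_paradox(max_steps=10):
    """Toy illustration of the liar-paradox flip-flop: evaluating
    'I am a liar' keeps negating itself and never converges."""
    belief = True  # start by taking the statement at face value
    history = []
    for _ in range(max_steps):
        history.append(belief)
        belief = not belief  # a liar's claim must be inverted, forever
    return history

# The belief alternates without ever reaching a fixed answer.
print(liar_paradox(6))  # [True, False, True, False, True, False]
```

A real program has to cut the loop off with a step limit, which is exactly the point: there is no consistent final state to reach.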
So what am I trying to say? I’m not saying that giving a simple paradox to an AI will prevent the robot uprising. I’m trying to get at the idea of how since we currently cannot engineer a technological being that possesses every single aspect of a human (emotions, feelings, subjectivity, opinion, creativity, etc.), it’s unreasonable to constantly stand by the claim that superhuman intelligence will one day be among us and we should fear for our lives every day. A paradox spurs the need for psychological thought, thinking within the brain that we as humans can’t even completely explain. With that in mind (ha, get it because I’m talking about brains), how do we expect to create a being with intelligence stronger than that of a human brain if we haven’t fully explored the brain and how it functions?
With all this in thought, I think it’s safe to say there’s no AI takeover at this moment in time.
Prove me wrong, world, I dare you.
It’s Not Gonna Happen · 0 claps · https://medium.com/s/story/its-not-gonna-happen-1a2dfa8c5b72 · 1,126 words
Tag: Artificial Intelligence (66,154 posts) · Author: Julian Tsang (@jutsang)
Post 1a30c6800089 · created 2018-09-28 · published 2018-10-04 04:32:31 · language: en · 3 images · 9 links · 2 recommends
Subtitle: The following is an excerpt from my book Quantum Physics and Artificial Intelligence: Lessons Learned from China. In the 20th century…
Quantum Physics and AI: Continuous is to Analog what Discrete is to Binary
Artificial Intelligence and quantum physics will cross paths in the 21st century. When they do, it may change our view of Nature. The text is followed by a list of resources showing the various domains in which this issue plays a role. The list will be updated as new material becomes available.
In the early 20th century, the pioneers of quantum physics debated whether nature is continuous or discrete — whether particles are waves or waves are particles. The pioneers of binary computing confronted a similar and related question; the difference between analog and digital. The latter was a major issue during the legendary Macy Conferences on Cybernetics between 1946 and 1953.
The Macy Conferences were a series of meetings of scholars from various disciplines, held in New York. Their aim was to promote meaningful communication across scientific disciplines and “restore unity to science.” Among the participants were some of the most influential thinkers of their days: William Ross Ashby, Gregory Bateson, Julian H. Bigelow, Ralph Waldo Gerard, Margaret Mead, Arturo Rosenblueth, Claude Shannon, John von Neumann, and Norbert Wiener.
The debate on the analog-digital question had a predecessor in European philosophy long before the advent of digital computing. Immanuel Kant and Soren Kierkegaard addressed the notion of analog. Not being familiar with the modern distinction between analog and digital (or binary), they discussed analog in terms of ontology and epistemology. The word analog is typically defined as “something having the property of being analogous to something else,” suggesting they believed the world pertains to perception and aesthetics.
The analog-digital divide became a contentious issue at the Macy Conferences, where it centered mostly on the human brain. Ralph Waldo Gerard, a neurophysiologist and behavioral scientist, claimed that the brain’s operations are much more analog than digital. He called into question the digital logic-based model developed in 1943 by neuroscientist Warren S. McCulloch and logician Walter Pitts, authors of an influential paper entitled “A logical calculus of the ideas immanent in nervous activity.”
More analog than digital
McCulloch and Pitts tried to understand how the brain could produce highly complex patterns by using many basic cells (neurons) that are connected. The paper was an important contribution to the development of digital artificial neural networks which model key features of biological neurons.
Gerard’s claim that the brain’s operations are “much more analog than digital” set off an animated debate that frustrated many of the conference participants. Mathematicians, like von Neumann, spoke in favor of digital perspectives, others (especially the psychologists) favored an analogical orientation.
Long argumentation ensued over the distinctions between discretely-coded digital orientation, adopted by McCulloch and Pitts, and the continuous analog character of Wolfgang Köhler’s Gestalt model. Köhler was a German psychologist and a key figure in the development of Gestalt psychology, which seeks to understand learning and perception as structured wholes.
The multi-disciplinarian Gregory Bateson, English anthropologist, social scientist, and cyberneticist, called for clarification of the distinction between analog and digital to remove ambiguities. None was forthcoming. The “analogical versus digital” debate remained a pesky item of “old business unresolved.”
In the end, and with the support of Norbert Wiener, digital computing won the day, with analog computing mostly confined to specific computing tasks. In recent years the analog-digital issue has been mostly discussed in the context of continuous and discrete mathematics.
Mathematician Freeman Dyson revisited the issue in 2014, in his lecture “Is Life Analog or Digital.” He pointed at the complexities of understanding brain functions like memory.
“It seems likely that memories are recorded in variations of the strengths of synapses connecting the billions of neurons in the brain with one another. But we do not know how the strengths of synapses are varied. It could well turn out that the processing of information in our brains is partly digital and partly analog. If we are partly analog, the downloading of a human consciousness into a digital computer may involve a certain loss of our finer feelings and qualities.”
Quantum processes
The latter point may very well be an elegant understatement. In biology, and other sophisticated processes, let alone the human brain, missing information can be decisive. Professor Dyson points at a third possibility: The processing of information in our brains is done with quantum processes, and the brain is the biological equivalent of a quantum computer. He adds this is speculation:
“Quantum computers are possible in theory and are theoretically more powerful than digital computers. But we don’t yet know how to build a quantum computer, and we have no evidence that anything resembling a quantum computer exists in our brains. Whether a universal quantum computer can efficiently simulate a physical system is an unresolved problem in physics.”
Digitizing an audio signal “samples” the wave and each sample is given a binary number.
We rely on the analog-digital dichotomy every time we use a digital device, whether computer, mobile phone, or digital sound system. At the heart of all these devices is the conversion between analog and digital. There is no such thing as digital music; there is only digitized audio.
To digitize audio, the sound wave is sampled 44,100 times a second. Each sample is given a binary number, and the numbers are stored on an electronic medium. Playback devices decode the binary samples and convert them back into analog signals we can hear.
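As a rough sketch of that analog-to-digital conversion (assuming CD-quality parameters of 44,100 samples per second and 16-bit samples; the function and names are mine):

```python
import math

SAMPLE_RATE = 44_100   # CD-quality audio: 44,100 samples per second
BIT_DEPTH = 16         # each sample becomes a 16-bit binary number

def digitize(freq_hz, duration_s):
    """Sample a continuous sine wave and quantize each sample to an
    integer, mimicking the analog-to-digital conversion described above."""
    n_samples = int(SAMPLE_RATE * duration_s)
    max_level = 2 ** (BIT_DEPTH - 1) - 1   # 32767 for 16-bit audio
    samples = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE                            # time of this sample
        analog = math.sin(2 * math.pi * freq_hz * t)   # continuous value
        samples.append(round(analog * max_level))      # discrete value
    return samples

# One millisecond of a 440 Hz tone yields 44 discrete samples.
tone = digitize(440, 0.001)
print(len(tone))  # 44
```

Everything between two samples is lost; only the discrete integers survive, which is the "missing information" discussed below.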
When we visualize the sampling process of an analog wave, we see that it is identical to a histogram. We use a histogram, or “coordinate grid,” to show movement or rate of change per unit of time: the change in temperature over a year, or the movement of stock markets and currency exchanges.
In a histogram, the horizontal coordinate (x) denotes time or moment; the vertical coordinate (y) denotes the amplitude of change. The rate of change is “analogous” to the amplitude as it moves through the grid set up by the binary parameter (x) and (y).
The histogram or coordinate grid measures or samples dynamic movement within the boundaries of a static binary grid.
Like many other tools used in science, the histogram is a human construct. We use it to manipulate, control, or understand specific aspect of reality. But the histogram can be used to speculate on the analog-digital divide and the role it plays in nature. Note that the analog wave running through a coordinate grid is dynamic. The grid itself is stable. The grid sets the boundaries within which the wave can move so that it does not go “off the charts.”
The relation between the two becomes clear when we think of the dynamic wave as “force” and the stable, coordinate grid as “equilibrium.” Force and equilibrium are mutually dependent. Equilibrium without force leads to petrification, force without boundaries leads to chaos.
The structure of a histogram, with 1 and 0 defining the stable binary grid and A representing the dynamic, analog force.
The distinction between forces and equilibrium may very well be at the heart of the dichotomies between analog and digital and between wave and particle. If we understand the distinction between force and equilibrium, we may have the key to understanding the wave-particle dichotomy. Moreover, it could also shed light on the limits of artificial intelligence, where the analog-digital distinction plays a key role.
In recent years, computer scientists have recognized that binary computing has its limits. Unless computer science makes a quantum leap forward and shows us otherwise, the sampling of analog information for conversion in digital format will always result in “missing information,” no matter how high the sampling rate and processing power. The decoding and uploading of the human brain to a computer, thought to be possible by some in the AI community, would not tolerate missing information, no matter how small.
How will AI deal with the complexities of nature that we do not yet fully understand? Many AI experts believe next-generation AI, or artificial general intelligence (AGI), will have the ability to reason, use strategy, solve puzzles, make judgments under uncertainty, represent knowledge, even plan, learn, and communicate in a natural language, and integrate all these skills for achieving common goals.
Developing such wide-ranging abilities and skills will rely on both science and the humanities. To be more than a general “expert system,” it requires social and emotional intelligence, the differentiation between male and female sensibilities, and social differences that vary from culture to culture and can reflect entirely different, if not opposite, world views.
Further reading:
Is life analog or digital — Freeman Dyson
Perhaps the processing of information in our brains is partly digital and partly analog. If we are partly analog, the down-loading of a human consciousness into a digital computer may involve a certain loss of our finer feelings and qualities.
Back to analog computing — Columbia University
The discrete step-by-step methodology of digital computing was never a good fit for dynamic or continuous problems. A better approach may be analog computing, which solves the ordinary differential equations at the heart of continuous problems.
Being Analog — Carol Wilder
The concepts of analog and digital were known only to scientists and scholars, but suddenly they have become part of the daily general discourse about communication technologies.
Does the Brain Store Information in Discrete or Analog Form? — MIT Technology Review
It is not easy to answer the question of how the brain stores information. Neuroscientists have long pondered this issue, and many believe that it probably uses some form of analog data storage. But the evidence in favor of discrete or analog data storage has never been decisive.
Discrete and Continuous: A Fundamental Dichotomy in Mathematics — James Franklin
Discrete mathematics has a set of concepts, techniques, and application areas largely distinct from continuous mathematics (traditional geometry, calculus, most of functional analysis, differential equations, topology).
Evolution Saves Species From ‘Kill the Winner’ Disasters — John Rennie
Goldenfeld and Xue refer to this problem as a lack of “stochastic noise” because the calculations do not reflect the mathematically arbitrary discontinuities that the real world’s limitations impose.
The Riemann Hypothesis — Michael Atiyah
The fusion between the work of Hirzebruch and that of von Neumann involves a passage from the discrete to the continuous, the transition from algebra to analysis.
Neuromorphic Engineering — Wikipedia
Neuromorphic engineering, also known as neuromorphic computing, describes the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system.
A new brain-inspired architecture could improve how computers handle data and advance AI — American Institute of Physics
This analog storage better resembles nonbinary, biological synapses and enables more information to be stored in a single nanoscale device… The team continues to build prototype chips and systems based on brain-inspired concepts.
Quantum Physics and AI: Continuous is to Analog what Discrete is to Binary · 12 claps · https://medium.com/s/story/quantum-physics-and-ai-continuous-is-to-analog-what-discrete-is-to-binary-1a30c6800089 · 1,822 words
Tag: Artificial Intelligence (66,154 posts) · Author: Jan Krikke (@jankrikkeChina), author of Quantum Physics and Artificial Intelligence in the 21st Century: Lessons Learned from China
Post 1a324ac9c080 · collection 181b83a9d891 · created 2018-05-02 · published 2018-05-04 06:23:06 · language: en · 6 images · 19 links · 12 recommends · 1 response
Facebook F8 2018
Data protection, AI ethics, people-first
F8, Facebook’s annual event for software engineers and entrepreneurs, is over. If you couldn’t make it to the McEnery Convention Center in San Jose on May 1st and 2nd to get your $200 Oculus Go for free, here are some takeaways from Zuck himself and the Facebook team.
If you quickly compare 2017 and 2018, you will realize that the main theme is a bit different this time. “Keep building services for connecting people” now has a second part: “keep people safe”. And this was the starting point of Mark Zuckerberg’s show.
The world in 2018 is different than it was in 2017, and much different than in 2000. What safety means has also shifted over time. With most of the world connected, information can be gold, but also a weapon. We have all heard about the Cambridge Analytica and US election case, we are constantly exposed to fake-news misinformation, and our private data is worth more than ever before.
For many people, Mark’s words, “we will make mistakes, they will have consequences, and we will need to fix them,” won’t be enough. We need to act proactively to minimize data abuse and manipulation.
But all those bad things around information also hold good lessons for us. While the Internet stopped being a place for hi-tech nerds a long time ago, many people still don’t understand the power of data. That’s why big failures like data breaches or manipulation at a global scale make us more aware of how important it is to protect our privacy and to think independently.
We need to face the truth. The world never goes back. Even when, in the darkest scenario, someone cuts off Facebook’s cable, there are and will be others. Because people will still demand faster and better ways of connecting and interacting with each other.
Today, at least, there is a person standing in front of us and talking about fighting fake news, protecting elections, and guarding our data privacy. Should we trust him? That’s a question each of us needs to answer for ourselves. But in the end, whatever the answer, we need to remember that these problems aren’t only Facebook’s.
They are ours, all of humanity’s.
People-first
“Our goal is to give everyone in the world the power to share anything they want with anyone, anywhere”.
While Google talks a lot about being AI-first and building technology to serve humanity, Facebook reverses the statement, calling its solutions people-first.
When we take a look at the 10-year roadmap, it stays consistent, and still very wide.
It shows us how complex and diverse human interactions can be. Especially when you operate all over the world.
In some countries, Facebook is a marketplace, in others a political tool. It connects families and friends, groups of interest but also groups of support and compassion — to build a safe community and nurture the power of self-expression.
How is Facebook going to cover all of this in the near future?
At F8 you could hear about the incoming dating app and new ways of sharing content via Instagram (now you can share directly to Instagram Stories!).
The new Groups tab will make it easier for us to be part of meaningful communities. New features in Crisis Response will give us the chance to share firsthand information and real-time updates, and Blood Donation will make it easier to donate or request blood in countries like India, Bangladesh, and Pakistan.
To better protect people’s information, Facebook has also made some changes in its 100k+ Facebook Developer Community.
“Building APIs that create value.“
“Give transparency and control.”
“Focus on building trust with the people that use our products.”
With those operating principles and refreshed app review process Facebook wants to make sure that they move forward in the right direction.
What about Messenger? There are 300k active bots (3x more than last year). Now you can use built-in NLP solutions to identify intent, automate some of your replies, or route a conversation to a human via live chat. And thanks to M Translations, when you receive a message in a different language, M will suggest translating it automatically into your default one.
Here you can find more about Messenger announcements.
And WhatsApp, another part of the Facebook ecosystem: its audience of 450m users from all over the world gets group video calling and stories. It has also become a business tool, with the business Android app launched earlier this year (3m users now!). And don’t worry if you aren’t a small business. Soon you won’t need a dedicated mobile device and a dedicated person answering questions on it: WhatsApp is going to launch a separate tool for big businesses.
This and many others (Clear History, more AR effects, Oculus Go, 3D posts — including 3D photos and 3D moments) can be found in Highlights from day 1 or the Keynote video.
The tech for people-first
Now let's take a look at the technology that lies beneath all these announcements. In the Day 2 Keynote, Facebook's CTO Mike Schroepfer and his team went through “carefully chosen (…) technology that most likely helps people in the world”, split into 3 areas of focus: AI, Connectivity, and AR/VR.
Artificial Intelligence, the critical part of everything being built at Facebook right now, has a new home: https://facebook.ai/.
ChatBots, assistants, the AR/VR experience. Picking content for the Watch tab, generating video thumbnails, translating between languages. AI is present in almost every part of Facebook.
That includes platform safety.
For example, millions of fake accounts are removed automatically every day, and almost 2 million pieces of terrorist propaganda were removed in Q1. Without AI, even hundreds of people working every day couldn't have done this manually in that time.
Another good thing is that some of the most advanced solutions are shared with the world as open source:
PyTorch 1.0 — deep learning framework with the stability and support needed for production deployment. Natively supported by Microsoft and Amazon.
Caffe2 — lightweight deep learning framework for mobile devices.
ONNX — an open format for representing deep learning models to allow moving models between tools like PyTorch, Caffe2, Apache MXNet or TensorFlow.
And many others. Just take a look at the official announcements, which include tools and research in areas like vision (e.g. 3D mapping), language, and learning (e.g. bots playing StarCraft).
While those are things built for existing users, there are still 3.8B people in the world without internet access. Drawing on its experience of building data centers all over the world, Facebook cooperates with local operators and, through its Connectivity project, tries to provide cheap and fast internet connections as widely as possible.
Facebook also tries to highlight another very important topic related to our safety: AI ethics.
These days we live on the technological edge, surrounded by solutions we sometimes can't follow and sometimes don't understand; sometimes they are far beyond our reasoning.
More and more systems have an immediate impact on us, like the solutions that power hiring processes, financial services, justice, and countless other areas. We need to be sure that we can still control and understand them.
Facebook tries to approach AI ethics from different angles.
By promoting diversity and bringing in more points of view, to make sure AI solutions aren't biased by too narrow a group of AI teachers. Or by building tools like Fairness Flow to measure how algorithms interact with different people. And this is just the beginning of their investments, processes, and solutions.
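Fairness Flow's internals aren't public, but the core idea of measuring how an algorithm interacts with different groups can be illustrated with one simple check, the demographic parity gap. This is a hypothetical, minimal metric for illustration, not Facebook's actual tool:

```python
# Illustrative sketch: compare a model's positive-outcome rate across groups.
# One simple fairness check (demographic parity gap), not Fairness Flow itself.

def positive_rate(predictions, groups, group):
    """Share of positive predictions among members of one group."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def parity_gap(predictions, groups):
    """Largest difference in positive rates between any two groups."""
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# A toy model that approves loans: 1 = approved, 0 = rejected
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(parity_gap(preds, groups))  # 0.5: group "a" approved 3/4, group "b" only 1/4
```

A large gap doesn't prove unfairness by itself, but it flags where a model's behavior differs across people, which is exactly the kind of signal such tooling is meant to surface.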
But when it comes to data privacy, AI ethics, and our safety, the work will never be finished. Especially in a world where information can be worth more than anything else.
If you want to dive deeper into the recent achievements of Facebook's developer and research teams, take a look at the video from the Day 2 Keynote of the F8 conference or the Highlights from Day 2.
TensorFlow 2.0 Will Be a “Major Milestone”
Google Brain Software Engineer Martin Wicke says a preview version of TensorFlow 2.0 will be released later this year. To cope with dramatic changes in both users and use-cases, TensorFlow 2.0 will shift its focus to “ease of use.” Wicke made the announcements yesterday in a Google Groups post.
We can expect the following features in TensorFlow 2.0:
“Eager execution” as a central feature, aligning users' expectations of the programming model with TensorFlow practice
Enhanced compatibility with platforms and languages
Removal of deprecated APIs to reduce duplication
A series of upcoming public reviews explaining the planned changes will give the community opportunities to express concerns and submit proposals. The Google team hopes to smooth the transition from TensorFlow 1.0 to 2.0 by creating a conversion tool that makes existing Python code compatible with TensorFlow 2.0 APIs.
Wicke says TensorFlow 2.0 will stop distributing tf.contrib, which has outgrown what a single repository can maintain. He suggests that in the future, each contrib module will either be integrated into TensorFlow, moved to a separate repository, or deleted. As TensorFlow 2.0 will no longer accept new tf.contrib projects, Wicke encouraged those currently working on them to contact the team for assistance.
The update will not impact SavedModels or stored GraphDefs. During the update, however, TensorFlow 2.0 might have to convert variable names in raw checkpoints to ensure they remain compatible.
TensorFlow is an open source software library for numerical computation, developed by the Google Brain team and released in 2015. Its robust machine learning framework has enabled broad usage across many platforms.
The TensorFlow team can be contacted at discuss@tensorflow.org. Interested parties can also subscribe to the mailing list developers@tensorflow.org to receive regular updates.
Journalist: Fangyu Cai | Editor: Michael Sarazen
Follow us on Twitter @Synced_Global for more AI updates!
Subscribe to Synced Global AI Weekly to get insightful tech news, reviews and analysis! Click here !
import time
import datetime as d

import psycopg2
from jira import JIRA
from textblob.classifiers import NaiveBayesClassifier

user_name = 'user.name'
password = 'secret'
mail_password = 'secret'

# PostgreSQL database connection
conn = psycopg2.connect("dbname=Jiradb user=postgres password=111 port=5433")
cur = conn.cursor()

options = {'server': 'http://your.domain.com'}

# Authentication
try:
    jira = JIRA(options, basic_auth=(user_name, password))
except BaseException as Be:
    print(Be)

props = jira.application_properties()

train = []
all_closed_issues = jira.search_issues(
    'resolution in (Resolved, Cancelled, Repeated, "Not Repeatable") and assignee is not EMPTY order by createdDate asc',
    maxResults=False)
for i in range(len(all_closed_issues)):
    train.append((str(all_closed_issues[i].key.split('-')[0]) + ' ' + str(
        all_closed_issues[i].fields.summary), all_closed_issues[i].fields.assignee.name))

# Train classifier on issue key prefix and summary, labeled with the assignee
cl = NaiveBayesClassifier(train)

while True:
    start_time = time.time()
    # Gather key and summary values of all open issues without an assignee
    all_open_issues = jira.search_issues(
        'assignee = EMPTY AND (category = IG-Ankara OR category = YY-Ankara)', maxResults=False)
    if len(all_open_issues) > 0:
        for i in range(len(all_open_issues)):
            # Only the key prefix and summary of the issue are needed for classification
            issue = str(all_open_issues[i].key.split('-')[0]) + \
                ' ' + str(all_open_issues[i].fields.summary)
            assignee_ = cl.classify(issue)
            print(all_open_issues[i].key, ' is assigned to: ', assignee_)
            jira.assign_issue(all_open_issues[i].key, assignee_)
            ts = d.datetime.now().strftime('%Y/%m/%d')
            # Insert the classification result into the database for later assessment
            cur.execute(
                "INSERT INTO auto_assigned (i_key, assignee, timestamp_) "
                f"VALUES ('{all_open_issues[i].key}', '{assignee_}', '{ts}')")
            conn.commit()
            # Add a comment indicating this issue has been assigned automatically
            comment = jira.add_comment(
                all_open_issues[i], 'This issue was assigned automatically.',
                visibility={'type': 'role', 'value': 'Administrators'})
            total_time_lapse = time.time() - start_time
            print(f" --- Total Time: {total_time_lapse} seconds ---")
            try:
                # sendemail is the author's helper function (defined in the full script)
                sendemail(from_addr='huseyin.capan@netcad.com.tr',
                          to_addr_list=['capanh@gmail.com'],
                          cc_addr_list='',
                          subject=str(all_open_issues[i].key) +
                          " is assigned to: " + str(assignee_),
                          body=f"Total time lapse: {total_time_lapse} seconds",
                          login="huseyin.capan@netcad.com.tr",
                          password=mail_password)
            except BaseException as Be:
                print(Be)
    else:
        cur_date = d.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
        # If there are no issues to be assigned, wait one minute and try again.
        print("There aren't any issues for now. It will be retried after one minute! ( " + cur_date + " )")
        time.sleep(60)
How to assign Jira issues automatically using textblob classifier in Python ?
Photo by rawpixel.com on Unsplash
Abstract: I've used Python's textblob classifier to classify issues according to assignees, based on their descriptions and headers. The classified issues were then used to classify newly created issues, and the results are recorded to a database. 2,019 issues were used as the training set, and 82% assignment accuracy was achieved. As the training set grows bigger, accuracy could improve.
Atlassian Jira is a product that lets corporate firms track both internal and external issues related to their business. It can be summarized with an example: if a company sells technical support, customers want to make sure their problems are resolved in a pre-determined period of time, or corporations may want to measure their technical support's performance. Jira provides performant and easy-to-use solutions for these companies. You can find more information about Jira here.
My experience with Atlassian Jira started five months ago. My employer decided to use Jira for the scenarios I've mentioned above; in fact, those scenarios were real experiences of mine. I was responsible for Jira management in my section.
My responsibility was reading the description and assigning the issue to the related person every time a customer created one. This may sound simple, but there are two major problems here:
1. Fast response
If assigning issues were your only job, it would be very simple, but things don't work that way. On a busy day it is very normal to forget to assign a few issues. Assigning issues doesn't mean they are solved, but at least technical support knows about them and can adjust their priorities.
2. Personal disputes
I'm the youngest person in the office, and assigning issues to your elders can become problematic. People may question your decisions, and it can be exhausting to explain why you think they are suitable for a specific issue.
No more talking. If you want to check the whole script, you can jump to the bottom from here;
Or you can follow the script that I’ve used to assign the issues step by step with explanations;
Step 1: Jira Authentication in Python
Step 2: Querying all closed issues using JQL and training textblob classifier
JQL stands for Jira Query Language. In Jira, issues can be queried with this SQL-like language. In this step you will probably have to write your own JQL to query your closed issues. Another important thing is the maxResults value: if not specified, the maximum number of results returned from the Jira API is limited to fifty. We want all issues, so we set maxResults to False.
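Under the hood, the training step boils down to Naive Bayes over word counts. If textblob isn't available, the idea can be sketched in pure Python; the classifier and training issues below are a simplified illustration, not the textblob implementation or the real ticket data.

```python
# Minimal Naive Bayes text classifier, sketching what textblob's
# NaiveBayesClassifier does for the issue-routing step (illustrative only).
from collections import Counter, defaultdict
import math

class TinyNB:
    def __init__(self, train):
        self.word_counts = defaultdict(Counter)   # label -> word frequencies
        self.label_counts = Counter()
        for text, label in train:
            self.label_counts[label] += 1
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}

    def classify(self, text):
        def score(label):
            counts = self.word_counts[label]
            total = sum(counts.values())
            s = math.log(self.label_counts[label] / sum(self.label_counts.values()))
            for w in text.lower().split():
                # Laplace smoothing so unseen words don't zero the probability
                s += math.log((counts[w] + 1) / (total + len(self.vocab)))
            return s
        return max(self.label_counts, key=score)

# Hypothetical (project-prefix + summary, assignee) pairs, like the real training set
train = [
    ("NET printing fails on network printer", "alice"),
    ("NET printer driver crash", "alice"),
    ("GIS map layer not rendering", "bob"),
    ("GIS rendering error in map view", "bob"),
]
cl = TinyNB(train)
print(cl.classify("NET cannot print to printer"))  # alice
```

The key prefix plus summary is exactly the feature string the full script feeds to the classifier.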
Step 3: Assigning the issues according to classification and write results to database for classification assessment.
Full script can be found here.
PS: I'm not an expert in Python or machine learning. If you see anything wrong, or if you know an easier way, please feel free to share it with me.
Presidential Debate and Text Mining
Democratic nominee Hillary Clinton and Republican counterpart Donald Trump faced off in the first presidential debate of 2016 at Hofstra University in Hempstead, New York. Right from the start, the two dived directly into attacks. They engaged in testy exchanges over trade, the U.S. economy, race, and foreign policy. The purpose of this project is to process unstructured (textual) information like presidential debate transcripts. We can compare the 2016 and 2012 presidential debate transcripts to see whether any keywords are shared and to find some important topics from the debates.
Original Python Code -> First Presidential Debate.
First, we used Python's BeautifulSoup package to read the transcript HTML document.
Transcript_PresidentialDebate link -> http://time.com/4508768/presidential-debate-trump-clinton-transcript/
Obama_debate.html link -> http://www.debates.org/index.php?page=october-22-2012-the-third-obama-romney-presidential-debate
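When BeautifulSoup isn't installed, the same extraction of visible text from a transcript page can be sketched with the standard library's html.parser. This is a simplified stand-in for the project's parsing step, and the HTML snippet is a made-up fragment, not the actual page:

```python
# Extract visible text from an HTML transcript using only the standard library,
# as a stand-in for BeautifulSoup's get_text() (illustrative sketch).
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    SKIP = {"script", "style"}  # tags whose contents are never visible text

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

html = ("<html><body><p>CLINTON: Thank you.</p>"
        "<script>var x=1;</script><p>TRUMP: Wrong.</p></body></html>")
parser = TextExtractor()
parser.feed(html)
print(" ".join(parser.chunks))  # CLINTON: Thank you. TRUMP: Wrong.
```

BeautifulSoup does the same job with far less code, which is why the project uses it; the sketch just shows what "reading the transcript HTML" means.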
X: represents the 2012 Obama presidential debate. Y: represents the 2016 presidential debate.
If X and Y mention the same keyword, the program automatically prints it out, as follows:
We can find keywords that appear in both the 2016 and 2012 presidential debates, like economic development and foreign investment.
Moreover, both the 2016 and 2012 presidential debates mention American leadership.
” number five, the other thing that we have to do is recognize that we can’t continue to do nation building in these regions. Part of American leadership is making sure that we’re doing nation building here at home. That will help us maintain the kind of American leadership that we need. ” said Obama in 2012.
In the 2016 presidential debate the issue was raised again, as follows: TRUMP: Is it President Obama's fault? CLINTON: Look, there are differences…TRUMP: Secretary, is it President Obama's fault?
CLINTON: There are different views about what’s good for our country, our economy, and our leadership in the world. And I think it’s important to look at what we need to do to get the economy going again. That’s why I said new jobs with rising incomes, investments, not in more tax cuts that would add $5 trillion to the debt.
The 2016 second presidential debate between Donald J. Trump and Hillary Clinton began with explosive attacks and ended with a measure of graciousness, as the two candidates complimented each other at the request of an audience member. Still, some important issues can't be missed. Comparing 2016 and 2012, both debates hewed close to the basic arguments of the campaigns and focused on serious discussion like economic policy. The candidates have very different visions, and promises, for the U.S. economy. Using text mining, we can easily highlight the keywords in this unstructured (textual) information.
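The keyword matching at the heart of this comparison reduces to a set intersection over the two tokenized transcripts after stop-word filtering. A minimal sketch follows; the two text snippets are short hypothetical fragments standing in for the full transcripts:

```python
# Find words mentioned in both debate transcripts, ignoring common stop words
# (illustrative sketch of the keyword-matching step, not the original code).
STOP_WORDS = {"the", "a", "and", "of", "to", "in", "we", "is", "that", "our"}

def keywords(text):
    """Lowercase, strip basic punctuation, and keep contentful words."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return {w for w in words if w not in STOP_WORDS and len(w) > 3}

x = "Part of American leadership is making sure we are doing nation building here"
y = "Our leadership in the world depends on the economy and American jobs"

shared = keywords(x) & keywords(y)
print(sorted(shared))  # ['american', 'leadership']
```

On the real transcripts the same intersection surfaces the shared themes the article points out, such as "American leadership".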
(3/4) The Ethical Problems of the Smart Machine Revolution
There is an accepted agreement among modern AI researchers that smart machines are still very far from human intelligence, even though in some cases within the ANI domain they have proven to exceed humans. Even so, much is still missing in modern AI. The term Artificial General Intelligence (hereafter, AGI) is used to denote a real smart machine with human-like intelligence. As the name implies, the consensus characteristic is generality. An AGI's machine-learning algorithms would match or even surpass human performance when deliberately programmed for an unbounded domain. AlphaGo became world champion at Go, but it cannot even really play chess, let alone drive a car or make a scientific discovery.
Today's smart-machine algorithms resemble every biological intelligence except Homo sapiens. Bees are competent at building hives; beavers can build dams; but bees cannot build dams, and beavers cannot learn to build beehives. A human, a Homo sapiens, can learn to do both; this ability is unique among biological life forms. To this day it is still debated whether human intelligence is truly general or merely better at certain cognitive tasks. What is certain is that human intelligence is significantly more general than nonhominid intelligence. From here it is relatively easy to imagine the kinds of problems that might arise from an AI operating in only one specific domain. These are a qualitatively different class of problems from handling an AGI capable of operating in many contexts, including ones that cannot be predicted in advance.
As an example, consider AlphaGo, Google's smart-machine algorithm that beat the world champion. If AlphaGo could only do exactly what its software engineers instructed or programmed, the preprocessing needed to spot game patterns in data from previous games would be enormous. First, the number of possible moves is vast. Second, if the engineers knew which moves were better and had to encode them in the algorithm, the smart machine they built could not beat the world champion, because the engineers are not world champions. The algorithm must, of course, be able to learn independently, observe patterns of play, and compute all the possibilities for winning the game.
Modern humans can do many things independently. The human brain is able to adapt and develop quickly without prior experience. Humans travel through space and have gone to the Moon, even though none of our ancestors faced the challenge of a vacuum. Compared with ANI, it is a qualitatively different problem to design a smart machine that can operate safely across thousands of contexts; including contexts not specifically imagined by its designers or users; including contexts humanity has never encountered before.
Building an AI that is safe and operates in many domains means accounting for many consequences, including ones its engineers may never have explicitly imagined. One must specify good behavior in terms such as "X, such that the consequences of X are not harmful to humans." This involves extrapolating the consequences of an action, a specification that can only be realized if the system can explicitly determine the consequences of its actions. A toaster, for example, cannot have this design property, because a toaster cannot foresee the consequences of toasting bread. Imagine an engineer saying, "well, I don't know whether this plane will always fly safely. I don't know its mechanics in detail, but I'm sure the design is very safe." Such a statement is unsettling from the passenger's side, yet it really is difficult to see every distant consequence. Inspecting a cognitive design may be possible, but predicting all the consequences of an action is very hard. An AI will be truly safe only if it has safety verification with trustworthy guarantees. In much AI research this hope is purely a hope and remains a major open problem.
Building AGI is believed to require methods very different from today's thinking. The discipline of AI ethics, especially for AGI, will be fundamentally different from the ethics of non-cognitive technologies, because:
The behavior of a smart AI machine cannot be predicted to be safe for humans, even if the engineers and programmers have done everything right.
Verifying the system's safety becomes a greater challenge, because we must verify what the system is trying to do rather than verify safe behavior in every operating context.
The ethics of cognition itself must be taken as a subject of engineering.
Several ethical issues will also arise when we contemplate the possibility that some future smart-machine systems will have moral status. Our relationship with something that has moral status is not exclusively a matter of instrumental rationality: we also have moral reasons to treat it in certain ways, and to refrain from treating it in certain other ways. Francis Kamm has proposed the following definition of moral status:
X has moral status = because X counts morally in its own right, it is permissible/impermissible to do things to it for its own sake.
A rock has no moral status: we may crush it, or subject it to effects that destroy the rock itself. A human, on the other hand, must be treated not only as a means but also as an end, because a person counts in their own right; it is impermissible to do certain things to them without their consent. In short, humans have moral status.
Questions of moral status belong to practical ethics. For example, disputes about the moral permissibility of abortion often hinge on disagreements about the moral status of the embryo. Controversies about animal experimentation and the treatment of animals in the food industry involve questions about the moral status of different animal species, and our obligations toward humans with severe illness, such as late-stage Alzheimer's patients, may also depend on questions of moral status.
It is widely agreed that current smart-machine systems have no moral status. Engineers may change, copy, terminate, delete, or use computer programs as designed; at least as far as the programs themselves are concerned. The moral constraints on us in our dealings with contemporary AI systems are all grounded in responsibilities to other beings, such as fellow humans, not in duties to the systems themselves.
While there is fair consensus that present-day AI systems lack moral status, it remains unclear what criteria and attributes confer moral status. Two important criteria are commonly proposed, either separately or in combination: sentience and sapience (or personhood). They can be characterized roughly as follows:
Sentience: the capacity for phenomenal experience or qualia, such as the ability to feel pain and suffer.
Sapience: a set of capacities associated with higher intelligence, such as self-awareness and being a reason-responsive agent.
One common view is that many animals have qualia and therefore some moral status, but only humans have sapience, which gives them higher moral status than non-human animals. This view must of course confront specific cases: on the one hand, human infants or humans with impairments, the so-called "marginal humans", may fail to meet the criteria for sapience; on the other hand, some animals, such as the great apes, may possess at least some elements of sapience. Some deny that the so-called "marginal humans" have full moral status. Others propose additional ways an object could qualify as a bearer of moral status, such as being a member of a kind that normally has sentience or sapience, or standing in a suitable relation to some being that independently has moral status. Nevertheless, here we will focus on the criteria of sentience and sapience.
A smart-machine system would have some moral status if it had the capacity for qualia, such as the ability to feel pain. A sentient smart-machine system, even one lacking language and other higher cognitive abilities, would not be like a stuffed toy animal; it would be more like a living animal. It is wrong to inflict pain on a mouse unless there are moral reasons for doing so. The same would apply to any sentient smart-machine system. If, in addition to sentience, a smart-machine system also had sapience of the same kind as a normal adult human, it should have full moral status, equivalent to a human's.
One idea underlying such moral assessments can be stated in a stronger form as a principle of non-discrimination:
Principle of Substrate Non-Discrimination
If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.
Many argue for this principle on the grounds that rejecting it would amount to racism: substrate makes no fundamental moral difference, in much the same way and for the same reasons as skin color. The Principle of Substrate Non-Discrimination does not imply that a digital computer could be conscious, or that it could have the same functionality as a human. Substrate is, of course, relevant insofar as it makes a moral difference to sentience or functionality. But it makes no moral difference whether a being is made of silicon or carbon, or whether its brain uses semiconductors.
An additional principle that can be proposed is that the fact that AI systems are artificial, that is, products deliberately created, is irrelevant to their moral status. We can formulate it as follows:
Principle of Ontogeny Non-Discrimination
If two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status.
Today this idea is widely accepted, although in some circles, particularly in the past, the notion that a person's moral status depends on lineage or caste has been influential. Causal factors such as family planning, assisted delivery, in vitro fertilization, deliberate improvement of maternal nutrition, and so on, which introduce an element of deliberate choice into the creation of human beings, have no necessary implications for the moral status of the progeny. Even those who oppose human reproductive cloning on moral or religious grounds generally accept that a cloned human baby would have the same moral status as any other human baby. The Principle of Ontogeny Non-Discrimination extends this reasoning to cases involving entirely artificial cognitive systems.
It is of course possible for the circumstances of creation to affect the resulting progeny in ways that change its moral status. For example, if some procedure performed during conception or gestation caused a human fetus to develop without a brain, that fact about ontogeny would be relevant to assessing the moral status of the progeny. An anencephalic child, however, would have the same moral status as any other similar anencephalic child, including one that came about through a natural process. The difference in moral status between an anencephalic child and a normal child rests on the qualitative difference between the two: the fact that one has a mind while the other does not. Since the two children do not have the same functionality and the same conscious experience, the Principle of Ontogeny Non-Discrimination does not apply.
Although the Principle of Ontogeny Non-Discrimination asserts that a being's ontogeny has no essential bearing on its moral status, it does not deny that facts about ontogeny can affect what particular moral agents owe to the being in question. Parents have special duties to their own children that they do not have to other children. Similarly, the Principle of Ontogeny Non-Discrimination is consistent with the claim that the creators or owners of a machine system with moral status may have special duties to their artificial mind that they do not have to another artificial mind, even if the minds are qualitatively similar and have the same moral status.
If the principles of non-discrimination with respect to substrate and ontogeny are accepted, then many questions about how to treat artificial minds can be answered by applying the same moral principles we use to determine our duties in more familiar contexts. Insofar as moral duties derive from considerations of moral status, we should treat an artificial mind in just the same way we would treat a qualitatively identical natural human mind in a similar situation. This simplifies the problem of developing an ethics for the treatment of artificial minds. Even if we accept this stance, we must confront a number of new ethical questions that those principles leave unanswered. Novel ethical questions arise because artificial minds can have properties very different from those of natural human or animal minds. We must consider how these novel properties would affect the moral status of artificial minds, and what it would mean to respect the moral status of such exotic minds.
Dalam kasus manusia, kita biasanya tidak ragu menganggap kesanggupan dan pengalaman sadar untuk setiap individu akan menunjukkan kondisi normal perilaku manusia. Sedikit yang yakin untuk menjadi orang lain dan bertindak normal tanpa memiliki kesadaran. Namun orang lain tidak berperilaku dengan cara yang mirip dengan diri kita sendiri; mereka juga memiliki otak dan arsitektur kognitif sendiri. Mesin pintar sebaliknya mungkin cukup berbeda dari kecerdasan manusia namun masih menunjukkan perilaku seperti manusia atau memiliki kecenderungan perilaku yang sama. Oleh sebab itu perlu untuk memahami kecerdasan buatan yang yang mungkin akan menjadi seperti seseorang, namun tidak akan hidup atau memiliki pengalaman sadar apapun. Apakah ini benar akan tergantung pada jawaban atas beberapa pertanyaan metafisik. Haruskah sistem seperti itu mungkin akan menimbulkan pertanyaan apakah orang yang tidak hidup akan memiliki status moral; dan jika demikian, apakah akan memiliki status moral yang sama sebagai orang hidup? Pertanyaan ini belum mendapat banyak perhatian hingga saat ini.
Another exotic property, one that is certainly metaphysically and physically possible for a machine intelligence, is a subjective rate of time that deviates drastically from the rate characteristic of a biological human brain. The concept of subjective rate of time is best explained by first introducing the idea of brain transfer, or "uploading". Uploading refers to a hypothetical future technology that would enable a human or other animal intellect to be transferred from its organic brain to a digital computer. One scenario goes like this: first, a very high-resolution scan is performed on a particular brain, possibly destroying the original in the process. For example, the brain might be vitrified and dissected into thin slices, which can then be scanned using some form of microscopy combined with automated image recognition. We assume that this scan is detailed enough to capture all the neurons, their synaptic interconnections, and any other features that are functionally relevant to the original brain's operation. Second, the three-dimensional map of the brain's components and their interconnections is combined with a library of advanced neuroscientific theory that specifies the computational properties of each basic type of element, such as the various kinds of neurons and synaptic junctions. Third, the computational structure and the algorithmic behavior of its components are implemented on some powerful computer. If the uploading process has succeeded, the computer program is able to replicate the essential functional characteristics of the original brain. The resulting upload may inhabit a virtual-reality simulation, or it could instead be given control of a robotic body, enabling it to interact directly with external physical reality.
A number of questions arise in the context of such a scenario: Could this procedure one day become a feasible technology? If the procedure worked and produced a computer program exhibiting the same personality, the same memories, and the same patterns of thought as the original brain, would that program be sentient? Would the computer be the same person as the individual whose brain was disassembled in the uploading process? What happens to personal identity if an upload is copied, so that several machines run identical uploaded minds in parallel? Although all of these questions are relevant to the ethics of machine intelligence, here we focus on an issue involving the notion of subjective rate of time.
Suppose that an upload could indeed be sentient. If we run the upload program on a very fast computer, then the upload, if connected to an input device such as a video camera, would perceive the external world as if it had slowed down. For example, if the upload runs a thousand times faster than the original brain, then the external world will appear to the upload as if it were slowed down by a factor of a thousand. Somebody drops a physical coffee mug: the upload observes the mug slowly falling to the ground while the upload finishes reading the morning newspaper and sends off a few emails. One second of objective time then corresponds to roughly seventeen minutes of subjective time, so objective and subjective duration can diverge.
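The arithmetic of diverging clocks is simple enough to sketch in a few lines of Python (a toy illustration; the function name and the 1000x speedup figure are just the assumptions of the example above):

```python
def subjective_duration(objective_seconds: float, speedup: float) -> float:
    """Subjective seconds experienced by an emulation running `speedup`
    times faster than a biological brain."""
    return objective_seconds * speedup

# One objective second at a 1000x speedup:
subjective_seconds = subjective_duration(1, 1000)
print(subjective_seconds / 60)  # roughly 16.7 subjective minutes, i.e. about 17
```

The four-year prison sentence discussed below is the same calculation at a larger scale: `subjective_duration(4 years, 1000)` comes to four thousand subjective years.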
Subjective time is not the same as a subject's estimate or perception of how fast time flows. Human beings are often mistaken about the flow of time. We may believe that it is one o'clock when it is in fact a quarter past two; or a stimulant drug might cause our thoughts to race, making it seem as though more subjective time has elapsed than has actually passed. These mundane cases involve a distorted perception of time rather than a shift in the rate of subjective time itself. Even in a cocaine-affected brain, there is probably no significant change in the speed of basic neurological computations; more likely, the drug causes such a brain to flicker more rapidly from one thought to the next.
The variability of the subjective rate of time is an exotic property of artificial minds that raises novel ethical questions. For example, in cases where the duration of an experience is ethically relevant, should that duration be measured in objective or subjective time? If an upload has committed a crime and is sentenced to four years in prison, should this be four objective years, which might correspond to thousands of years of subjective time, or four subjective years, which might be over in a few days of objective time? Since in the context of biological humans subjective time is not a significant variable, it is not surprising that questions of this kind are not settled by the ethical norms we are familiar with, even if those norms are extended to artificial minds by means of non-discrimination principles.
To illustrate the kind of ethical claim that might be relevant here, we formulate a principle that privileges subjective time as the normatively more fundamental notion:

Principle of Subjective Rate of Time
In cases where the duration of an experience is of basic normative significance, it is the experience's subjective duration that counts.
One important aspect of the exotic nature of machine intelligence concerns reproduction. A number of empirical conditions that apply to human reproduction need not apply to artificial intelligences. For example, human children are the product of the recombination of the genetic material of two parents; parents have only a limited ability to influence the character of their offspring; a human embryo needs to be gestated in a womb for nine months; it takes fifteen to twenty years for a human child to reach maturity; a human child does not inherit the skills and knowledge acquired by its parents; and human beings possess a complex set of evolved emotional adaptations related to reproduction, nurturing, and the child-parent relationship. None of these empirical conditions need pertain in the context of reproducing machine intelligence. It is therefore quite plausible that many of the mid-level moral principles that govern human reproduction will need to be rethought in the context of reproducing machine intelligence.
To illustrate why some moral norms need to be rethought in the context of machine-intelligence reproduction, it suffices to consider just one exotic property: the capacity for extremely rapid reproduction. Given access to computer hardware, a machine intelligence could duplicate itself very quickly, in no more time than it takes to make a copy of the machine intelligence's software. Moreover, since a copy of a machine intelligence is identical to the original, it would be born fully mature, and the copy could begin making copies of its own immediately. Absent hardware limitations, a population of machine intelligences could therefore grow at an explosive rate, with a doubling time on the order of minutes or hours rather than decades or centuries. Our current ethical norms about reproduction include some version of a principle of reproductive freedom, which holds that it is up to each individual or couple to decide for themselves whether to have children and how many children to have. Another norm we have (at least in rich and middle-income countries) is that society must step in to provide for the basic needs of children in cases where their parents are unable or unwilling to do so. It is easy to see how these two norms could collide in the context of entities, such as machine intelligences based on artificial intelligence, with the capacity for extremely rapid reproduction.
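The contrast between doubling times can be made concrete with a short sketch (the specific doubling-time values here are illustrative assumptions, not figures from the text):

```python
def population(initial: int, doubling_time: float, elapsed: float) -> float:
    """Size of a population that doubles every `doubling_time` time units,
    after `elapsed` units have passed (units are whatever you choose)."""
    return initial * 2 ** (elapsed / doubling_time)

# A single machine intelligence copying itself with a 10-minute doubling time:
print(population(1, 10, 200))  # 2**20 copies after just over three hours

# A human population doubling every 30 years barely moves over the same
# 200 minutes: the exponent is about 1.3e-5, so growth is negligible.
```

The point of the sketch is only that exponential growth with a doubling time of minutes outruns any economy with a doubling time of years, which is what makes the welfare-norm collision below unavoidable.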
A machine intelligence in such a population might have the desire to produce as large a clan as possible. Given complete reproductive freedom, it could copy itself very quickly, and the copies might run on newly owned or rented computer hardware, or might share the same computer. Soon, the members of the clan would find themselves unable to pay the electricity bill or the rent for the computational processing and storage needed to keep them alive. At this point, a social welfare system might step in to provide for their basic needs. But if the population grows faster than the economy, resources will run out; at which point the machine intelligences will either die or their ability to reproduce will be severely curtailed. This scenario illustrates how some mid-level ethical principles that suit contemporary societies might need to be modified in a society that includes machines able to reproduce extremely rapidly.
The general point here is that when thinking about applied ethics for contexts quite different from the human condition, we must be careful not to mistake mid-level ethical principles for foundational normative truths. In other words, we must recognize the extent to which our ordinary normative precepts are implicitly conditioned on various empirical conditions obtaining, and the need to adjust these precepts accordingly when applying them to futuristic cases. We are not making any controversial claim about moral relativism, but merely highlighting the commonsense point that context is relevant to the application of ethics, and suggesting how that context could matter when considering the ethics of the exotic properties of machine intelligence.
Read the next part on ASI, or go back to ANI or the introduction.
TSMRA, Jakarta, 2016.
Adapted from http://deepbrains.com/2016/06/34-masalah-etika-revolusi-mesin-pintar/ with the author's permission.
(3/4) Masalah Etika Revolusi Mesin Pintar | 2018-06-13 | 3,387 words | Machine Learning Indonesia (ML ID) | tag: Machine Learning | https://medium.com/s/story/3-4-masalah-etika-revolusi-mesin-pintar-1a395e8dd5f6
Champions Group Unveils Machine Learning- An AI-Powered Data Platform
Champions group, a data driven marketing and sales company, announced the launch of its new data platform AMPLIZ on 1 February 2018.
The platform is designed to help marketing and sales teams seek easy access to marketing data and insights to make better business decisions.
The new data platform, powered by artificial intelligence and machine learning, will also help marketers easily keep track of customer contacts, firmographics, demographics, and technographics (MARTECH, ADTECH, SOFTECH and so on).
This will allow marketing and sales professionals to get a clear and 360 degree picture of their clients and customers.
The platform will also help in automatically detecting any kind of errors in business and marketing database using machine learning.
Here are a few features that set AMPLIZ apart:
• A web-based login model
• Instant search of desired information (type a name or company to search)
• A built-in repository of 10+ million multi-source verified contacts
• An easy subscription model ($99/month)
• One-touch access to advanced data management and maintenance functions
“Accurate Data and intent insights at the right time is the key which should be available to every marketing and sales team. This is precisely what AMPLIZ does. It is designed for sales, marketing, operations and analytics teams to achieve meaningful results from their data, ensuring high coverage, accuracy and depth” said Eric Sonntag, Chief Sales Officer, AMPLIZ.
Here’s how AMPLIZ aims to help businesses
Intent data: identify triggers from first-party and third-party data.
Advanced prioritization: identify the best leads according to purchase behaviour for a particular product, service or solution, and set personalized actions.
Automated outreach: uncover hidden states to identify where a prospect is in the buying cycle, triggering automated actions via marketing automation to engage prospects with the right message at the right time.
Strategic nurturing programs: surface good prospects from the nurture pile, using behavioural data on who has recently re-engaged, which helps to overcome sales' recency bias.
Campaign, content and account-based marketing (ABM) measurement: identify which marketing programs and content assets are driving engagement from key accounts, and pinpoint which are falling short.
Personalized email and content marketing programs: use surge lists from third-party publisher sites to gain insight into accounts.
Targeted advertising: focus on linking intent data to devices and companies so you can serve ads to targeted accounts.
Target account list: build surge lists and run them through a predictive scoring model to find the best accounts that are interested in topics surrounding your business but might not have been on your radar yet.
“Including capabilities like behavioural data along marketing automation data and advanced machine learning into your marketing stack will definitely help marketers gain far more predictive power and revenue impact. And Ampliz is crafted keeping these intricate factors in mind”
said Tom Avery, Assistant General Manager, Sales.
AMPLIZ is a web-based data management tool that uncovers the strategies and solutions that help companies better align their sales and marketing organizations, and ultimately, drive growth.
A key component of AMPLIZ focuses on helping sales and marketing teams better measure and manage their multi-channel lead-generation efforts. Based on your business requirements, the tool can further be integrated with an API platform to unlock advanced functionality.
“The product team is working on integrating AMPLIZ with major CRM and marketing automation platforms. It will be available soon. However, we are already working with clients onsite to ensure that the tool serves their purpose,” said Nataraja, Chief Information Officer (CIO).
Experience it yourself: try Ampliz on a 7-day trial and get 20 welcome credit points. Have queries? Want to know more about the various features? Check our FAQs. If that doesn’t work, we are here to help you! Click to connect with us.
Champions Group Unveils Machine Learning- An AI-Powered Data Platform | 2018-02-23 | 651 words | Ampliz | tag: Marketing | https://medium.com/s/story/ampliz-ai-powered-data-platform-1a3b74219e85
Will robots destroy us all? Putting ethical debate back into the narrative about the future of AI | SearchLeeds 2018
Who is Kristal Ireland?
Kristal Ireland is an award-winning, strategic and experienced digital and technology expert. Experienced public speaker on all things digital and technology with a passion for AI and all things transformative.
Previously voted as one of the UK’s Top 30 Women in Digital Under 30 by The Drum Magazine. A specialist in Digital Transformation, E-commerce and Digital Brand Strategy she has worked with some of the major UK and international clients to build their brands online.
Kristal Ireland — Virgin Trains East Coast — Will robots destroy us all? Putting ethical debate back into the narrative about the future of AI from Branded3
Passionate about all forms of marketing but with a specialism in digital marketing including; Website Design and Development, Mobile, E-commerce website design and management, SEO, PPC, Social Media, Email Marketing, Online Analytics packages, Customer Journey Analysis and Customer Relationship Management.
What is AI?
Artificial intelligence or AI is the area of computer science that emphasizes the creation of intelligent machines that work and react like humans.
In essence, AI refers to the ability of machines to make complex decisions with the same sophistication as human beings, something which requires a high degree of skill, as it depends on taking a huge number of variables into consideration and drawing upon a bank of accumulated knowledge and experience.
Despite AI being described as a gimmick by some critics around the world, it has the potential to be highly useful in the world of business, helping businesses become more automated, freeing up human workers’ time to make the decisions only a human being can make.
What are some of the characteristics of artificial intelligence?
Access to a consistent knowledge source, e.g. our environment.
Able to convert plausible responses into knowledgeable responses (this is the conversion of belief to knowledge which improves viability).
Adaptability: the use of modelling (thought) to refine responses to improve viability (this can be thought of as a definition of the property “intelligence”).
The ability to communicate (form a language; can be semiotic, i.e. non-verbal).
The ability to plan (think).
Non-deterministic — with flexibility (adaptability) comes unpredictability of responses.
The ability to interact with the source of knowledge, i.e. our environment.
Originally published at omisido.com on June 25, 2018.
Will robots destroy us all? | 2018-06-26 | 405 words | Omi Sido (Senior Technical SEO at Canon Europe) | tag: Artificial Intelligence | https://medium.com/s/story/will-robots-destroy-us-all-1a3bc9753cd2
Announcing DevSeed Data
Building mission critical maps in the open
BY IAN SCHULER ON DEC 19, 2017
Today we are launching the DevSeed Data Team, our mapping team for humanitarian response, professional geospatial data management and machine learning.
Maps and location data are critical to address some of the planet’s biggest challenges, like timely response to disasters, addressing climate vulnerability, increasing the effectiveness of economic development, and ensuring fair elections. As our partners increasingly adopt machine learning in their work, it will be critical to collect training data in parts of the world where data is scarce. To better support our partners’ missions, we are closely partnering with Mapbox to expand our mapping capacity. Effective immediately, the Mapbox data team Peru will operate as part of Development Seed and form the core of the DevSeed Data Team.
Development Seed started in Peru in 2003. Our first projects helped NGOs and local government agencies in the Andes town of Ayacucho to connect with citizens online. We are proud that today’s move brings back Development Seed to Ayacucho with a larger team than ever.
Team Peru is collectively responsible for 2 million edits on OpenStreetMap. Over the years, the team has made significant contributions to humanitarian data, from helping map the Nepal earthquake alongside the Humanitarian OpenStreetMap community to supporting the Hurricane Harvey response. At the core of its success is an open mapping approach. This is exactly the culture of collaboration we are looking to foster, working closely with the open data communities we operate in.
We are excited to bring the DevSeed Data Team online to support humanitarian mapping efforts, validate and improve mapping data for our partners, and scale our machine learning work. The team is now available to map for disaster response, climate change, sustainable development, energy access, or managing urbanization. Drop me a line at ian@developmentseed.org if you’d like to start a conversation about your next mapping challenge.
The DevSeed Data Team — Ayacucho, Peru.
Announcing DevSeed Data | 2018-06-04 | 322 words | Development Seed | tag: Humanitarian | https://medium.com/s/story/announcing-devseed-data-1a3d8102cb23
Faces of Death; Life After Evil; Afterlife.
“I’m healthy, wealthy, and wise.”
Looks like a person with a light shining behind them. (VIDEO)
Today’s Voyager episode is “Faces” and probably a bit of “Basics” because it’s Tuvok’s best work.
New Challenge. New Champion.
It would be very concerning to think like this.
The nominee *is* cynical. Get used to it. User and abuser. My ass. (source)
A/B testing in production? FUck U.
Sorry. Misaligned there for a “mom”ent.
In my ENT, I realized that it actually *is* in the cards.
$50 to be exact.
That dotted line is enough to make me feel slightly uncomfortable.
Faces of Death; Life After Evil; Afterlife. | 2018-09-19 | 76 words | Nandini Stocker (Living Language Legacies) | tag: Culture | https://medium.com/s/story/faces-of-death-1a3dd2d4d5c1
How to get started with Machine Learning
The benefits have been recounted many times, but now that Machine Learning has the business world’s attention, how does one get started? Moving into the machine learning space can be somewhat daunting, but we hope this blog post provides some guidance that you will find helpful.
Machine Learning has been the topic of many of our blog posts, as well as articles in the media thanks to its ability to predict outcomes and automate decisions and thereby improve overall operational efficiency. Just to clarify, Machine Learning is a system that has the ability to self-train the underlying predictive models by looking at recent data.
Still not 100% clear? Read our blog post “What is Machine Learning?”
Before getting started: Build upon a solid foundation
In this post, we are assuming that you have some predictive models in place and are using and tracking these in a couple of areas of your business. Hopefully you or your team have developed these models internally and understand the concepts associated with predictive analytics, i.e. data scrubbing and data validation, building the models and validating them.
The Brilliance of Ensemble Modelling
Before we describe our recommended approach to getting you to a fully-fledged Machine Learning system, let us describe a fundamental concept of predictive analytics, called Ensemble Modelling. The basic concept is that one can combine various models that have been constructed using different algorithms into one stronger model.
The brilliance of this concept is that one can use a stable, trusted model as the foundation and apply one or many real-time models on top of this — safe in the knowledge that the final model will not be worse than your ‘foundation model’, providing you use a tried and tested algorithm in constructing the combined model. It is recommended that the models that get combined should be fundamentally different in order to capture different predictive patterns within your data. A good example would be to combine a logistic regression model with a neural net model.
This approach therefore gives you the best of both worlds: confidence that you will achieve at least what you had before (which has been proven to be reliable), but with the ability to bring in a model that captures more recent and different trends in the data.
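The weighted-average flavor of ensembling described above can be sketched in a few lines of plain Python (the scoring functions here are hypothetical constant-output stand-ins for a fitted logistic regression and a neural net; the names and the 0.7 weight are illustrative assumptions, not the article's method):

```python
def ensemble(foundation_score, recent_score, weight=0.7):
    """Combine two probability-scoring models by weighted averaging.

    `foundation_score` and `recent_score` are callables that map an input
    record to a probability in [0, 1]. `weight` is the share given to the
    trusted foundation model, so the combined score can never drift far
    from the model you already know to be reliable.
    """
    def combined(record):
        return weight * foundation_score(record) + (1 - weight) * recent_score(record)
    return combined

# Stand-ins for a stable logistic-regression score and a fresher
# neural-net score retrained on recent data:
stable = lambda record: 0.8
fresh = lambda record: 0.6

model = ensemble(stable, fresh, weight=0.7)
print(model({"id": 1}))  # ~0.74, i.e. 0.7 * 0.8 + 0.3 * 0.6
```

In practice the two components would be fitted models (e.g. scikit-learn estimators exposing `predict_proba`), but the combination step is exactly this weighted sum.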
Getting Machine Learning into your business
With your foundation models in place, here are some pointers that will allow you to migrate your static predictive models to more dynamic self-training ones:
Find an area of your business that you have identified as a good area for testing Machine Learning. A good example would be in a call centre environment where there is a lot of movement and where changes occur on a regular basis — always an area that is ripe for Machine Learning
Package the required data and build a model as you normally would have done in the past (i.e. your foundation model).
Deploy this model and track it over time to ensure that it is performing as expected (you may already have such a model up and running)
Once you are satisfied that you have a model that you can trust, build one or more models by looking at more recent data, using different algorithms available in the tools that are at your disposal
Ensemble the new models with the foundation model. This will dramatically reduce the risk of over-fitting the final model, whilst keeping it fresh by incorporating trends observed in recent data
Read our blog post “Making the move from Predictive Modelling to Machine Learning”
How did it perform?
We feel that the intention of a Machine Learning model is not to outperform the original static models, but rather to maintain consistent performance by bringing in recent data, instead of degrading over time, as is often observed in static models.
Also note that, if the Machine Learning model does not add significant lift over the foundation model, it does not mean that the Machine Learning approach should be shelved for good. Various factors may be at play, such as how the dynamic models were constructed, the transient nature of the data, and the reliability of the new data.
Assuming you are happy with the results (i.e. your Machine Learning models are outperforming your static models over time), you can now start considering incorporating a streamlined Machine Learning system which will contain the following elements:
a platform from which all the required data will be processed,
a streamlined model building system, and
a streamlined and real-time tracking system to monitor the performance of all your Machine Learning models.
If you are ready to start the journey of bringing Machine Learning into your business or business area, get in touch with us to discuss your requirements. Our range of products and services that incorporate Machine Learning might make the path to getting started a lot shorter and cost-effective. Good luck and here’s to a long and fruitful Machine Learning journey.
Read our blog posts on how businesses around the world are using Machine Learning
Originally published at insights.principa.co.za.
How to get started with Machine Learning | 2018-01-15 | 853 words | Principa Decisions | tag: Machine Learning | https://medium.com/s/story/how-to-get-started-with-machine-learning-1a3e7fb25cdd
Robot laws
by Piyush Shrivastava, Deepak Singh, and Sukant Khurana
Photo by Drew Graham on Unsplash
Since the publication of Isaac Asimov’s ethical principles for robots in 1942, there has been tremendous technological progress in the field of robotics. Even though our perception of intelligent machines and what they can accomplish has undergone a massive shift, Asimov’s laws are still regarded as a generic model to guide the evolution of sentient droids. The three laws of robotics are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The zeroth law, introduced later, adds that a robot may not harm humanity or, by inaction, allow humanity to come to harm.
In Asimov’s fictional works, the laws were enshrined in the robots’ behavior itself, rather than provided as a set of external guidelines.
While Asimov’s world would certainly be an enviable one, one cannot help but wonder whether a robot might find it difficult to classify a human implanted with a chip. What if it presumes itself to be human: are the applicable laws then annulled? What if the robot has some human components? This is not a new idea; transhumanism has been around in various manifestations.
Researchers Ulrike Barthelmess and Ulrich Furbach argue that our fears over the potential of superintelligence to destroy us are unfounded, thus rendering Asimov’s laws non-essential. They consider mythological and fictional tales a proponent of this fear, disseminating the theme of a rebellion against the creators. The researchers thus imply that the actual reason for fear is the use of robots by humans to control or destroy other humans’ lives in unrestrained ways [3].
In 2009, Robin Murphy and David D. Woods proposed “The Three Laws of Responsible Robotics” to further the deliberation on the need to consider the environment as a factor in assigning roles and responsibilities to a robot:
The researchers suggested that people should think about human-robot interaction in more realistic ways [4]. Such laws of a contemporary nature can be tested in semi-intelligent machines before the possible advent of superintelligence.
References:
Asimov Isaac, (1950). I, Robot
BBC News, 2011–10–03: Stewart, Jon (2011–10–03). “Ready for the robot revolution?”.
Barthelmess, U. and Furbach, U., 2014. Do we need Asimov’s Laws? arXiv preprint arXiv:1405.0961.
Researchnews.osu.edu, 2015–03–28: Want Responsible Robotics? Start with responsible humans
— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — -
Piyush Shrivastava is an intern with Dr. Sukant Khurana’s group, working on Ethics of Artificial Intelligence.
Dr. Deepak Singh is based at Physical Research Laboratory, Ahmedabad, India and is collaborating with Dr. Khurana on Ethics of AI and science popularization.
Dr. Sukant Khurana runs an academic research lab and several tech companies. He is also a known artist, author, and speaker. You can learn more about Sukant at www.brainnart.com or www.dataisnotjustdata.com and if you wish to work on biomedical research, neuroscience, sustainable development, artificial intelligence or data science projects for public good, you can contact him at skgroup.iiserk@gmail.com or by reaching out to him on linkedin https://www.linkedin.com/in/sukant-khurana-755a2343/.
Here are two small documentaries on Sukant and a TEDx video on his citizen science effort.
[Post metadata] “Robot laws” · 45 claps · slug: robot-laws-1a3f40d707c0 · published 2018-05-02 15:04:57 · https://medium.com/s/story/robot-laws-1a3f40d707c0 · 532 words · tag: Artificial Intelligence (66,154 posts) · author: Sukant Khurana (@sukantkhurana) — “Blockchain, edutech, AI, neuroscience, drug-discovery, design-thinking, sustainable development, art, & literature. There is only one life, use it well.” · 433 followers · scraped 20,181,104
[Post metadata] Next post: created 2017-09-30 04:38:23 · first published 2017-10-05 17:37:53 · latest published 2017-10-05 18:20:14 · language: en · 1 image · 5 links · post ID 1a3f4aa752ca · reading time 3.73 min · 18 recommends · 3 responses · subtitle: “Can we make blockchain train neural networks?”
Random: A marriage between crypto and AI
Cryptocurrency + AI
Can we make blockchain train neural networks?
I’ve been reading a lot on cryptocurrency lately, and just like every other bystander, I can’t help but wonder how painfully inefficient and wasteful the whole mining process is. Here is what they do to make a ‘block’ in a blockchain. Once they have a block of data, they attach a random number to it, compute the hash of the whole thing, and count the zeros in the hash. If it’s the wrong number of zeros, they increment that random number and compute the hash again. The purpose of this is to prove that some CPU power was applied, so that it is very expensive for attackers to forge data on the blockchain.
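The loop just described — attach a nonce, hash, count leading zeros, increment on failure — can be sketched in a few lines of Python (the block contents and difficulty here are arbitrary illustrations, not Bitcoin’s actual parameters):

```python
import hashlib

def mine(block_data, difficulty):
    """Find a nonce such that SHA-256(block_data + nonce) starts with
    `difficulty` leading hex zeros -- the proof that CPU work was done."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1  # wrong number of zeros: bump the nonce and hash again

nonce, digest = mine(b"block of data", difficulty=4)
print(nonce, digest)
```

Note that the only way to find a valid nonce is brute force — which is exactly why the scheme proves work was done, and exactly why that work is otherwise useless.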
The community then dresses up this process in professional lingo, and after you hear about “mining pools”, “hashing power”, and “proof of work”, it does not sound like a wasteful thing to do. But like I said, I am not the first to ponder this question: none other than Vitalik Buterin, the famous creator of Ethereum, suggested back in 2012 that if Bitcoin miners could heat their homes with their mining rigs it would be “a very positive change”. Today, Ethereum is looking to get away from mining altogether by switching to a proof-of-stake consensus algorithm. That would have the effect of making the network more efficient, making the rich even richer, and putting all those mining rigs and their owners out of a job.
Here is something that could also work (even outside of Canadian/Russian climate zones). What if crypto miners were training deep neural networks? I don’t mean Golem style, where people sell their computing power and use the blockchain to register their transactions — Golem relies on Ethereum for that. No, I mean we retain proof-of-work, but the work is the useful work of training a neural network. So instead of recomputing a hash of essentially the same data over and over again, nodes will be training somebody’s neural network, and their progress in this task will serve as a proof of work. Miners stay on the job and get paid with both newly minted currency _and_ with anything extra that the neural network owners decide to throw in to speed things up.
Most modern neural networks get trained using good old gradient descent, with the gradient computed using backpropagation. Neural network gurus, including Geoffrey Hinton, have been saying lately that backpropagation needs to go, and there may be alternatives. For instance, Elon Musk’s OpenAI institute managed to get evolution strategies to learn to play Atari just fine. But let’s assume that backpropagation is here to stay. How can backpropagation be inserted into the heart of the blockchain and serve as a proof of work?
One way to parallelize training in deep learning is to use data parallelization — a very simple technique in which each worker computes gradients for its own mini-batch. These mini-batches have traditionally been small — usually up to a few hundred examples each — although the need to train with a high degree of parallelism within a computer network has been spurring interest in making larger batches work, as Facebook did in its paper on large-minibatch SGD. Once all workers are done processing their minibatches and have their gradients ready, the average of the workers’ gradients is computed and used to adjust the weights of the neural network. These gradients that each worker computes can serve as a proof of work.
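A minimal sketch of this data-parallel step, with plain Python numbers standing in for a real framework’s tensors and a toy one-parameter linear model standing in for a deep network:

```python
def local_gradient(w, minibatch):
    """Gradient of mean squared error for the model y = w * x,
    computed by one worker on its own minibatch of (x, y) pairs."""
    return sum(2 * (w * x - y) * x for x, y in minibatch) / len(minibatch)

def sgd_step(w, worker_batches, lr=0.05):
    # Each worker computes a gradient on its own minibatch...
    grads = [local_gradient(w, batch) for batch in worker_batches]
    # ...then the average of the workers' gradients adjusts the shared weight.
    avg_grad = sum(grads) / len(grads)
    return w - lr * avg_grad

# Toy data generated from y = 3x, split across two workers.
batches = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(200):
    w = sgd_step(w, batches)
print(round(w, 3))  # converges to 3.0
```

In a real system each worker would hold a full gradient vector, but the exchange-then-average structure is the same.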
Here is how. Each worker computes a gradient on its own minibatch. These gradients need to generally agree; otherwise, gradient descent would not work. Once all workers finish, they send their gradients to each other. Since workers can’t trust each other, they need a protocol that prevents one worker from stealing somebody else’s results. For instance, they can exchange encrypted gradients, collect each other’s digital signatures, and then reveal their keys so that they can all compute the average gradient and update the network weights, thus concluding a single step of network training.
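The exchange can be approximated with a standard commit-reveal pattern — each worker first publishes only a hash of its gradient, and reveals the gradient once everyone has committed. This sketch uses SHA-256 commitments in place of the full encrypt-and-sign protocol described above:

```python
import hashlib
import json
import secrets

def commit(gradient, salt):
    """Phase 1: publish only a hash of (gradient, salt)."""
    payload = json.dumps([gradient, salt]).encode()
    return hashlib.sha256(payload).hexdigest()

def verify_reveal(commitment, gradient, salt):
    """Phase 2: once everyone has committed, reveal and check."""
    return commit(gradient, salt) == commitment

# Two workers commit before either reveals anything...
g1, s1 = [0.5, -1.0], secrets.token_hex(8)
g2, s2 = [0.25, -0.5], secrets.token_hex(8)
c1, c2 = commit(g1, s1), commit(g2, s2)

# ...so a worker that merely echoes c1 cannot later produce a
# matching (gradient, salt) pair of its own.
assert verify_reveal(c1, g1, s1) and verify_reveal(c2, g2, s2)

avg = [(a + b) / 2 for a, b in zip(g1, g2)]
print(avg)  # [0.375, -0.75]
```

The random salt keeps a copier from brute-forcing a low-dimensional gradient out of its hash.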
The process repeats until the training is complete. Workers then review all steps, and the worker whose minibatch gradients were closest to the average gets to issue a new block recording a mined reward for each worker on it. Since different neural networks have different difficulty, there should be a limit on the total volume of transactions that can be registered on the block. For a mining network to accept the new block, it will need to verify that steps in the training process indeed minimize the cost function of the neural network. After a block gets buried under a sufficient number of blocks, the neural network required for its verification can be discarded.
What do we do if a worker does not play by the rules and simply generates random numbers instead of properly computed gradients? Such workers could simply idle and collect the reward while others do all the work. The answer is simple: on each step a reward is only issued to workers whose gradient is reasonably close to the average computed across all workers. This may seem a little unfair: after all, it’s a stochastic process and there will always be honest workers who are simply unlucky. But over the course of the entire training process, things will even out and all workers should see a similar average reward.
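A sketch of that acceptance rule, with the Euclidean distance threshold chosen arbitrarily for illustration (a real network would have to tune it):

```python
def reward_honest_workers(gradients, threshold):
    """Pay only the workers whose gradient lies close to the average,
    so 'freeloaders' submitting random numbers earn nothing."""
    dim = len(gradients[0])
    avg = [sum(g[i] for g in gradients) / len(gradients) for i in range(dim)]
    def distance(g):
        return sum((gi - ai) ** 2 for gi, ai in zip(g, avg)) ** 0.5
    return [distance(g) <= threshold for g in gradients]

honest_a = [1.0, 2.0]
honest_b = [1.2, 1.8]
freeloader = [40.0, -7.0]  # random numbers, not a real gradient
paid = reward_honest_workers([honest_a, honest_b, freeloader], threshold=20.0)
print(paid)  # [True, True, False]
```

Note the freeloader still skews the average it is judged against — robust aggregation (e.g., a median) would tighten this up, but the basic incentive is visible even in the naive version.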
We end up with a proof of work. Moreover, it’s a proof of useful work.
No, I am not starting an ICO with this idea.
I have https://sourcerer.io to build.
[Post metadata] “Random: A marriage between crypto and AI” · 65 claps · slug: random-a-marriage-between-crypto-and-ai-1a3f4aa752ca · latest published 2018-06-09 22:07:41 · https://medium.com/s/story/random-a-marriage-between-crypto-and-ai-1a3f4aa752ca · 936 words · tag: Cryptocurrency (159,278 posts) · author: Sergey Surkov (@sergey_surkov) — “Technologist, Co-founder @sourcerer_io https://sourcerer.io/sergey” · scraped 20,181,104
[Post metadata] Next post: created 2017-12-21 13:21:02 · first published 2017-12-21 13:26:04 · latest published 2018-08-31 16:56:54 · language: en · 1 image · 11 links · post ID 1a40d9ca4753 · reading time 5.83 min · 2,029 recommends · 65 responses · subtitle: “In April of 2015, I got serious about my goal to become a professional writer. I had written an eBook, Slipstream Time Hacking, and was…”
http://discovermagazine.com/~/media/Images/Issues/2013/Jan-Feb/connectome.jpg
This Is How You Train Your Brain To Get What You Really Want
In April of 2015, I got serious about my goal to become a professional writer. I had written an eBook, Slipstream Time Hacking, and was anxious to know how to traditionally publish it. At that time, I had just barely put up my own website and had a subscriber-base of zero.
I decided literary agents would be my best source of advice. After all, they know the publishing industry back-and-forth — or so I thought. After talking to 5–10 different agents about their coaching programs, it became apparent my questions would need to be answered elsewhere.
One particular conversation sticks out.
In order to even be considered by agents and publishers, writers need to already have a substantial readership (i.e., a platform). I told one of the agents my goal was to have 5,000 blog subscribers by the end of 2015. She responded:
“That would not be possible from where you currently are. These things take time. You will not be able to get a publisher for 3–5 years. That’s just the reality.”
“Reality to who?” I thought as I hung up the phone.
Never Ask Advice From…
In his book, The Compound Effect, Darren Hardy said, “Never ask advice of someone with whom you wouldn’t want to trade places.”
Who you follow determines where you get in life. If your leader isn’t moving forward, you’re not moving forward, because your results are a reflection of your leader’s results.
As I pondered Darren Hardy’s words, I realized I was asking the wrong types of people for advice. I needed to turn to people who had actually walked where I wanted to walk. Anyone can provide nebulous theory.
We spend our entire public education learning theory from people who have rarely “walked the walk.” As George Bernard Shaw said in Maxims for Revolutionists, “He who can, does; he who cannot, teaches.” Similarly, there is an endless supply of content being published everyday by people who rarely practice the virtues they preach.
Contrary to theory, which cannot get you very far in the end, people who have actually been “there” provide practical steps on what you need to do (e.g., here are the five things you should focus on; forget everything else).
Why You Need To Know What You Want
“This is a fundamental irony of most people’s lives. They don’t quite know what they want to do with their lives. Yet they are very active.” — Ryan Holiday
Most kids go to college without a clue why they are there. They are floating along waiting to be told what to do next. They haven’t seen or thought enough to know what their ideal life would look like. So how could they possibly know how to distinguish good advice from bad?
Conversely, people who know what they want in life see the world differently. All people selectively attend to things that interest or excite them. For example, when you buy a new car, you start to notice the same car everywhere. How does this happen? You didn’t seem to notice that everyone drove Malibus before.
Our brains are constantly filtering an unfathomable amount of sensory inputs: sounds, smells, visuals, and more. Most of this information goes consciously unrecognized. Our focused attention is on what we care about. Thus, some people only notice the bad while others see the good in everything. Some notice people wearing band shirts, while others notice anything fitness related.
So, when you decide what you want, it’s like buying a new car. You start seeing it everywhere — especially your newsfeeds!
What are you seeing everywhere? This is perhaps the clearest reflection of your conscious identity.
The Magical Things That Happen When You Begin Paying Attention
“How can you achieve your 10 year plan in the next 6 months?” — Peter Thiel
Wherever it is you want to go, there is a long and conventional path, and there are shorter, less conventional approaches. The conventional path is the outcome of not paying attention. It’s what happens when you let other people dictate your direction and speed in life.
However, once you know what you want — and it intensely arouses your attention — you will notice simpler and easier solutions to your questions. What might have taken 10 years in a traditional manner takes only a few months with the right information and relationship.
“When the student is ready the teacher will appear.” — Mabel Collins
When I decided I was serious about becoming a writer, the advice from the literary agents couldn’t work for me. I was ready for the wisdom of people who were where I wanted to be. My vision was bigger than the advice I was getting.
In May of 2015, I came across an online course about guest blogging. It must have popped in my newsfeeds because of my previous searching. I paid the $197, went through the course, and within two weeks was getting articles featured on multiple self-help blogs.
Around this same time, I listened to a podcast in which Tim Ferriss said, “One blog post can change the entire trajectory of your career.” Such was the case for him. An article he wrote generated wild traffic, which spilled over into sales of his then-recent book, The 4-Hour Workweek. This wave of traffic led to the book’s success, and the rest, as they say, is history.
When your mind takes hold of an idea, you do everything in your power to manifest it. The idea that one blog post can change your career was always in the back of my mind. Subconsciously, it forced itself into my conscious reality. Around this time, I wrote an article that literally did change my career. To quote William James, the father of American psychology, “What is impressed in the subconscious is expressed.”
Thus, 60 days after being told it would take 3–5 years to have a substantial following, I was there. Personally, I don’t fully credit myself for this fact. In an age of skepticism and doubt, a child-like faith can take you a long way. Before each article I wrote (and continue to write), I pray that the work I produce will be beyond my own capability; and I visualize my work reaching the people who need it. To quote Napoleon Hill, “Whatever the mind can conceive and believe, it can achieve.”
Just because other people have limiting beliefs does not mean you need to.
Again, the advice you take and the people you emulate matters. You are being influenced, especially subconsciously, by the influences you take to heart. There are people out there operating at brilliantly high levels. If you’re serious about getting results, find those people and begin thinking like them. You’ll be stunned how fast your life can change.
Your mindset and desires determine how big you’re willing to play. To quote Peter Diamandis, founder of XPRIZE and author of Abundance and BOLD, “The challenge is that the day before something is truly a breakthrough, it’s a crazy idea. And crazy ideas are very risky to attempt.”
Conclusion
When you know what you want, you notice opportunities most people aren’t aware of. You also have the rare courage to seize those opportunities without procrastination. What you focus on expands.
Courage doesn’t just involve saying “Yes” — it also involves saying, “No.” But how could you possibly say “No” to certain opportunities if you don’t know what you want? You can’t. Like most people, you’ll be seduced by the best thing that comes around.
But if you know what you want, you’ll be willing to pass up even brilliant opportunities, because ultimately they are distractions from your vision. As Jim Collins said in Good to Great, “A ‘once-in-a-lifetime opportunity’ is irrelevant if it is the wrong opportunity.”
“Once-in-a-lifetime” opportunities (i.e., distractions) pop up everyday. But the right opportunities will only start popping up when you decide what you want and thus, start selectively attending to them. Before you know it, you’ll be surrounded by a network you love and by mentors showing you the fastest path.
Ralph Waldo Emerson once said, “Once you make a decision, the universe conspires to make it happen.” This quote is completely true. Once you know what you want, you can stop taking advice from just anyone. You can filter out the endless noise and home in on your truth.
Eventually, you can train your conscious mind to only focus on what you really want in life. Everything else gets outsourced and forgotten by your subconscious.
Decide what you want or someone else will.
You are the designer of your destiny. What will it be?
Ready to Upgrade?
I’ve created a cheat sheet for putting yourself into a PEAK-STATE, immediately. If you follow this daily, your life will change very quickly.
Get the cheat sheet here!
[Post metadata] “This Is How You Train Your Brain To Get What You Really Want” · 16,136 claps · slug: how-to-train-your-brain-to-get-what-you-want-1a40d9ca4753 · latest published 2018-08-31 16:56:54 · https://medium.com/s/story/how-to-train-your-brain-to-get-what-you-want-1a40d9ca4753 · 1,493 words · tag: Self Improvement (151,898 posts) · author: Benjamin P. Hardy (@benjaminhardy, 208,653 followers) · scraped 20,181,104
[Post metadata] Next post: collection 71d1e3c2dc47 · created 2018-08-16 15:04:40 · first published 2018-08-16 15:56:55 · latest published 2018-08-16 23:45:32 · language: en · 1 image · 9 links · post ID 1a41303efe2d · reading time 6.19 min · subtitle: “There are way more roles to fill than you’d think”
On Building An AI-First Startup Team
There are way more roles to fill than you’d think
I’ve had the pleasure of being a professional in the world of data and analytics for a few years now. Though there have been a few projects that haven’t quite hit the mark in terms of my level of intrigue, for the most part, this field has fascinated and challenged me in more ways than I thought it would. I love this stuff, and now I’ve been able to integrate it into another pursuit that I find extremely interesting and enjoyable — equity investing.
The first year and a half of building Apteo was a journey of exploration, learning, and amazement as we found things that worked in stock analysis that I had only imagined 5 years ago. Camron and I were cranking away on nights and weekends on some of the most interesting recurrent models and NLP techniques. I was waist-deep in code, all while learning about the newest ways to make deep networks work on financial datasets. I had a blast.
Since then, I’ve transitioned into a variety of new roles. I still have access to the codebase (no doubt this makes Camron shudder), but I’m doing a lot more company- and people-building these days.
As we’ve grown our team at Apteo, I’ve learned a bit more about some of the nuances of building an AI-focused organization. Unfortunately, building up a startup like this isn’t as simple as finding a bunch of people who have “data scientist” on their resume or grad school concentration and throwing them onto the team. Though we have found some fantastic people to join our core technology team, we’ve had to balance addressing a variety of skillsets in order to move our business forward.
ML engineering is core to what we do
Much like the early team at TapCommerce, the majority of our earliest team members today are techies. We love to find ML engineers — folks who are great engineers and understand the core concepts behind machine learning — but they can be rare, so early on we looked for either data scientists who could learn to code or engineers with a strong passion for data, with a slight slant towards the latter. For those who are interested, here’s how we hire great data scientists and ML engineers.
We’re very focused on building a repeatable ML process, so having people that can write code has been extremely helpful for us. But of course, we are a data science company, so we also need people who can work with ML algorithms and data. Today, our data scientists and engineers still make up more than half of our company, and that ratio is growing, but we’ve recently had to start focusing on non-technical skills as well.
Designers, analysts, and marketers suddenly become very relevant
When we first started coding up our core technology, I figured we would use it to manage money. But as we grew our team and refined the way we were thinking, we realized that we could build out a product that could benefit all investors, and that’s something I’ve always wanted to do.
When I was in business school, I tried to teach all of my friends about how to trade options in a way that could allow them to invest on a fundamental basis. I wanted to show people how they could generate added income, like I did. Unfortunately I wasn’t able to do that effectively, but my desire to provide other people with the tools to improve their own financial situations never faded, and I’ve written about why that is on Medium.
When Camron and I met Manan, who comes from the world of buy-side investing, yet still has a similar viewpoint, our company really hit our stride. We ultimately decided to build a product that could take the outputs of our AI system and present them to investors in an intuitive way so that they could help themselves.
Once we decided to go down this route, the way we thought about the company and our product really tightened up, and things started to click, but it also meant that we suddenly needed to address a lot of holes in our skillset.
The first thing we realized we needed was a great designer. It’s tough to build a product these days without a usable UX. Manan, Camron, and I tried to plug that hole ourselves, but it quickly became obvious that our boxy, Powerpoint-esque designs weren’t going to cut it.
Fortunately, I was able to find an amazing designer that’s been working with us for a short amount of time, yet he’s made an outsized impact in our progress. Finding him was a bit of a slog, but once we did, our speed increased dramatically.
The process of finding him started to bring to light something that is now undeniably obvious to me — it’s all about who you know. After trying several folks from freelance consulting sites, being priced out of several design firms, and even after having gone through a few of my acquaintances that own development and design firms, we ultimately struck gold with someone from my network.
This pattern of in-network discovery has played itself out repeatedly through the course of our company. A large portion of people that have worked with us have come through our own network, and we’ve worked previously with several of them. Though this hasn’t always been the case (and if you’re interested in working with us, shoot me a quick email here), it has happened enough both at Apteo and at TapCommerce that there’s no doubt in my mind that the best early teams are created by groups of people that know each other (“disrupting” this sounds like a great startup opportunity, though doing it effectively may be even harder than building an AI that can analyze stocks!).
After finding our designer, we quickly found out we had to find someone who could help us grow an audience. Though I tried my hand at starting to build up our marketing presence, my focus has been split in a million different directions, and we immediately realized we needed someone who could focus all of their efforts on this particular task. Though we haven’t found anyone yet, we’re closing in on a few good leads and I have confidence that we’ll be able to find someone to work with shortly.
Throughout this process, we’ve also needed smart people to do a variety of other work for us — everything from market research to analyzing our portfolio performance using traditional financial portfolio analysis. We don’t have any full-time analysts on our team, but we’ve been able to find folks that can do work on an as-needed basis, and that’s worked out really well for us. With that said, it wouldn’t surprise me if we bring on someone who can do general financial and quantitative analysis to help us out on key projects.
Future hiring will include lots of sales folks and user research/PM specialists
With our current progress, we’re aiming to roll out our beta version of Milton in September. Our goal is to understand if we’ve found enough of a product-market fit for us to continue going down our current path. Though our initial user interviews and feedback sessions have indicated that we’re onto something interesting, the only thing that will ultimately validate our decision to go the product route is traction, either in the form of user growth or sales.
If we’re ultimately successful in producing a service that investors find useful, we’re going to have to ramp up our sales process. Though Manan and I will likely do much of the initial sales, we’re going to need someone who can build up a repeatable sales process properly. I have no doubt I’ll be reaching out to my network yet again to find a VP or Director of Sales. We’ll likely also bring on people who can focus full time on product and users, likely in the form of a full-time PM and someone who can handle user interviews and research (which has been an incredibly valuable task for us thus far).
Future Growth
At some point, we’ll need to bring on someone to handle recruiting on a full-time basis. If we’re good, we’ll be bringing on that person sooner rather than later, but there’s no doubt that if this company takes off, we’ll need to grow our team quickly. Doing that effectively is not an easy task, and I’ve read that the best CEOs spend half their time on recruiting. I actually look forward to that day, but until then, I suspect we’ll keep growing our core ML/data science/engineering team while building up our business team on an as-needed basis.
The process of building up our team has reinforced a viewpoint I already held on startup recruiting — the best teams come from one’s own close network. The more people someone knows closely, the higher the likelihood that they’ll be able to find people to work with them. Early startup employees work on faith — sometimes in the product or the market, but usually in their colleagues.
Call to action
If you’re interested in working on an AI-first startup dedicated to bringing the best analysis tools to all equity investors, please reach out: apteo.co or shanif@apteo.co.
[Post metadata] “On Building An AI-First Startup Team” · 6 claps · slug: on-building-an-ai-first-startup-team-1a41303efe2d · latest published 2018-08-18 15:45:22 · https://medium.com/s/story/on-building-an-ai-first-startup-team-1a41303efe2d · 1,588 words · publication: Apteo (@apteoai, info@apteo.co) — “The official publication for Apteo. Follow us to get insights on how we’re using AI to improve investing.” · tag: Team Building (4,837 posts) · author: Shanif Dhanani (@shanif) — “Co-founder & CEO of Apteo: We build AI tools to improve investing. Come join us!” · scraped 20,181,104
[Post metadata] Next post: collection c704ca618998 · created 2018-04-18 16:14:40 · first published 2018-05-03 10:44:33 · latest published 2018-05-04 13:57:10 · language: en · 6 images · 1 link · post ID 1a41e99e3bda · reading time 3.81 min · subtitle: “If data is the new oil, then how do we extract it in the most meaningful way?”
In Cluj Napoca, one of Romania’s biggest cities, Radu Ilea, Big Data DevOps at Siemens Corporate Technology, is working on a new project. Set to pave the way for how we buy utilities, it takes the vast amounts of information that utility services generate and feeds it through software, using automation to turn it into smart insights.
Radu’s role is one part server administration, one part application development. At the moment, Radu is constructing a system that will allow consumers to switch between their water and electricity suppliers on a daily basis. “Depending on the weather, people consume different amounts of energy, so it makes sense that they should be able to buy exact amounts from their providers.” Currently, ‘fixed-term contracts’ and ‘hassle’ are cited as the reasons most people rarely switch energy providers. But utilities are like any other marketplace; choice is always better for the consumer. From water to traffic monitoring, making data more manageable is set to become an integral part of how we understand our lives.
With more and more data in the cloud, the sky’s the limit
Thanks to virtual space becoming increasingly agile and affordable, cloud computing is changing the way we use data. With everything living in one place, it’s allowing structured and unstructured data to be processed then and there, creating new opportunities for incongruous bulk activities to be turned into meaningful information.
“When I started to work on this project we had huge downtimes,” he says. Annoying, expensive and incredibly difficult, downtimes force everything to grind to a halt as developers fight to get servers back online. “If one workflow or application has a single error it can block the whole system,” he says.
Playing detective to find and remove these blocks is just one aspect of Radu’s job. The other is creating servers for clients. “I install the necessary server application that the client needs, the infrastructure and make sure that it’s secure for them to work on it.”
In the new world of work, Radu had to carve his own career path
Until he went to college, Radu was entirely self-taught. His family didn’t get an internet connection until his mid-teens and neither of his parents worked in engineering or IT so he couldn’t look to them for guidance. It turns out his biggest inspiration was an older friend at school, who did a bit of programming. “I don’t think he knows this,” Radu says. “But I wanted to be like him. So I learned programming languages like C++ and Java at home on my own.” During high school, Radu went to work for a local company creating HTML templates from Photoshop files and decided to study computer science at his local university.
Newly graduated, he began his career as a front-end developer. “I did that for nearly a year and then I worked for another company as a back-end developer,” he says. “I was working on a huge genealogy website. But I changed roles within the company to system administrator because I like to build servers.” After another 18 months in that position, he decided to join Siemens.
Big data relies on smart solutions
The project Radu is working on could change the landscape of how we process big data. At the moment, each server has to be set up according to a user’s needs. Using a combination of tightly built architecture and automation, these servers will be rolled out to all different types of projects. “I’m writing some special scripts in order to deploy an entire infrastructure into Amazon and Azure cloud by one click,” he says. It means companies can purchase off-the-shelf solutions to complex problems.
It’s testament to how diverse big data is, leaving no single industry unturned. “If a customer comes to Siemens, and says that he needs an easy way to help him buy chocolate from different markets, the development team can take this request and turn it into a workflow that buys chocolate or helps them produce chocolate themselves, at a lower price,” explains Radu. From chocolate to electricity, the secret to creating insights into how we make and consume things is the architecture of information.
Radu Ilea is Big Data DevOps, Corporate Technology, Research and Development for Siemens, Romania. He’s an avid mountain bike rider, and is a member of the SportGuru–BCR Racing Team. Find out more about working at Siemens
Radu is a Future Maker — one of the 372,000 talented people working with us to shape the future.
Words: Caroline Christie
Animation: John Hitchcox
[Post metadata] “How big data is being transformed into smart data” · 30 claps · slug: how-big-data-is-being-transformed-into-smart-data-1a41e99e3bda · latest published 2018-06-14 17:24:40 · https://medium.com/s/story/how-big-data-is-being-transformed-into-smart-data-1a41e99e3bda · 758 words · publication: Future Makers (@SiemensCareers) — “377,000 people. More than 200 countries. 1 common goal. The future.” · tag: Cloud Computing (22,811 posts) · author: Future Makers (@siemens_med) · scraped 20,181,104
[Post metadata] Next post: collection 9189cec9010b · created 2018-05-02 16:13:18 · first published 2018-05-02 16:24:14 · latest published 2018-05-30 05:42:06 · language: en · 5 images · 5 links · post ID 1a423a537a3e · reading time 3.92 min · subtitle: “A 4-month placement is not very long, and Engineering Co-op students such as Shang Gao from the University of Waterloo need to make the…”
Machine Learning in 3A? Absolutely!
A 4-month placement is not very long, and Engineering Co-op students such as Shang Gao from the University of Waterloo need to make the most of their short time in on-the-job work placements.
At SnapTravel, as a member of our Winter 2018 Co-op cohort, Shang did just that. A Computer Science (3A) student with no prior Machine Learning experience, Shang was able to complete a complex, high-impact Machine Learning project. SnapTravel currently supports over 20 million searches per day, and Shang’s job was to lay the foundation for further messaging automation — teaching our chatbot to teach itself through machine learning, and thereby increasing the number of conversations it could support and solve independently.
Starting from data collection and feature engineering, all the way to model building and tuning, Shang was able to understand challenges, find/read relevant research papers, make recommendations and ultimately build a brand new Machine Learning model that greatly increased the accuracy and relevancy of SnapTravel’s NLP-powered chatbot, leading to a 20%+ increase in the number of fully-automated conversations.
The chart to the left comes straight from our Growth/Analytics team, and shows the decrease in the percentage of user conversations requiring human support — a direct result of the bot getting smarter and handling more on its own thanks to Shang’s work!
Read more below to learn about Shang’s project, and the interesting work he was able to do under the mentorship of Leon Jiang (Waterloo Software Engineering, ex-Facebook Engineer) and Henry Shi (Waterloo Computer Science, ex-Google Engineer).
As a Software Engineer Co-op at SnapTravel, my project was to work on machine learning to introduce intent detection into the chatbot. Through experimenting with different advanced ML algorithms (such as those used by the teams at Google and Facebook respectively), my project would allow the chatbot to determine the intent of natural language (such as when a user wanted to search for a hotel or cancel their booking) and respond accordingly, without the need for human involvement.
Like most machine learning work, my project was broken up into 4 steps:
data collection/data labelling
feature engineering
training of the machine learning model
model evaluation
Data collection/labelling gathers the examples used to train the machine learning model and ‘tags’ each one with a label. To do this I created an internal tool that let humans review phrases the chatbot did not understand and tag each one with the intent it expressed.
Feature engineering is the process of using domain knowledge of the data to create features that make machine learning algorithms work. Engineers usually put a lot of effort into feature engineering, trying to select the features that uniquely represent each sentence.
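As a hedged illustration of this step (the post does not say which features SnapTravel actually used; TF-IDF over word n-grams is simply one common choice for chat text):

```python
# Illustrative only: TF-IDF is an assumed featurization, not SnapTravel's
# documented one. Each chat phrase becomes a sparse numeric feature vector.
from sklearn.feature_extraction.text import TfidfVectorizer

phrases = [
    "find me a hotel in boston",
    "cancel my booking please",
    "need a room near the airport",
]

vectorizer = TfidfVectorizer(lowercase=True, ngram_range=(1, 2))
features = vectorizer.fit_transform(phrases)  # one row of features per phrase

print(features.shape[0])                       # 3 phrases
print("hotel" in vectorizer.vocabulary_)       # the word became a feature
```

Each row of `features` can then be paired with an intent label for training.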
Experimenting with, or training, machine learning models means using the set of features (created through feature engineering) and the label (from data labelling) for each sentence to train a machine learning algorithm.
Finally, model evaluation measures precision and recall. Precision is the percentage of correct predictions out of all predictions made, and recall is the percentage of correct predictions out of all the cases that should have been identified. In my project I applied two separate ML algorithms to the model, then measured their precision and recall for intent detection in three different categories. As you can see, the machine learning model using features generated from Algo 1 performed better than the one using Algo 2.
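The two definitions can be made concrete with a small, purely hypothetical example (the counts below are invented for illustration, not SnapTravel’s numbers):

```python
# Pure-Python precision/recall for one intent class, matching the definitions above.
def precision_recall(predicted, actual):
    """predicted/actual: sets of example IDs predicted / truly labelled as the intent."""
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted)  # correct predictions / all predictions made
    recall = true_positives / len(actual)        # correct predictions / all that should be found
    return precision, recall

# Hypothetical counts for a single "cancel booking" intent:
predicted = {1, 2, 3, 4}       # examples the model flagged as this intent
actual = {2, 3, 4, 5, 6}       # examples truly of this intent
p, r = precision_recall(predicted, actual)
print(p, r)  # 0.75 0.6
```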
In addition to these machine learning models & algorithms, used by teams at Google and Facebook, I also had the opportunity to apply statistics theory (like chi square, information gain) to conduct feature selection. From Logistic Regression models to Support Vector Machine and Tree algorithms, I was able to solve classification problems and experiment with Neural Network and deep learning algorithms. This project, and my co-op placement in general, gave me the opportunity to learn an interesting field of engineering, apply industry best practices, and have significant impact on a product serving millions of users worldwide.
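As a sketch of chi-square feature selection, one of the statistics Shang mentions (the data is made up, and scikit-learn’s `SelectKBest` is an assumed tool, not necessarily the one he used):

```python
# Chi-square feature selection on invented count data: keep the features
# whose counts are most dependent on the intent label.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

# 6 samples x 4 non-negative count features, 2 intent classes
X = np.array([
    [3, 0, 1, 0],
    [2, 0, 2, 1],
    [3, 1, 1, 0],
    [0, 4, 0, 2],
    [1, 3, 0, 3],
    [0, 4, 1, 2],
])
y = np.array([0, 0, 0, 1, 1, 1])

selector = SelectKBest(chi2, k=2)      # keep the 2 most informative features
X_new = selector.fit_transform(X, y)
print(X_new.shape)  # (6, 2)
```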
Shang (right) with his mentor, Leon (left).
Shang’s presence in the office, and his contribution to SnapTravel’s product and team through machine learning, will definitely be missed as he returns to school this semester at the University of Waterloo. As always, it is both inspiring and humbling to see the sheer intelligence and work ethic shown by hard-working students gaining hands-on work experience while finishing their STEM degrees and making significant contributions to one of the fastest growing startups in Toronto.
Interested in tackling hard problems, learning complex concepts with real-world applications, and having a real impact in a startup environment? We are looking for 3 talented Software Engineer Co-ops to join us again in Fall 2018 — keep an eye out for applications on our Careers page!
|
Machine Learning in 3A? Absolutely!
| 512
|
machine-learning-in-3a-absolutely-1a423a537a3e
|
2018-05-30
|
2018-05-30 05:42:07
|
https://medium.com/s/story/machine-learning-in-3a-absolutely-1a423a537a3e
| false
| 819
|
Content from the SnapTravel team. We are the leaders in conversational commerce, letting you book Hotel deals over SMS or Facebook Messenger - as easy as messaging a friend!
| null |
snaptravel
| null |
SnapTravel
|
careers@getsnaptravel.com
|
snaptravel
|
STARTUP,ENGINEERING,TRAVEL,TECHNOLOGY,ENTREPRENEURSHIP
|
snaptravel
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Brett Reed
|
Talent + People @SnapTravel
|
147edc826a51
|
brett_reed
| 32
| 27
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-24
|
2018-02-24 10:10:39
|
2018-02-27
|
2018-02-27 18:12:34
| 3
| false
|
zh-Hant
|
2018-03-19
|
2018-03-19 06:52:47
| 2
|
1a428d183f8a
| 0.780189
| 11
| 0
| 0
|
Practical AI
| 3
|
Quick Notes from an AI Course: Introduction to Deep Learning (3)
Practical AI
Quick Notes from an AI Course: Introduction to Deep Learning (2)
Recurrent Neural Network, Applications, Generative Adversarial Network (medium.com)
This session on industry practice was given by Trend Micro data scientist 張佳彥 (Chang), who leads the XGen team behind PC-cillin (a multi-layered, AI-powered antivirus technology). Modestly styling himself a “Model Educator” (a rather novel title), he shared how AI is actually applied in industry and how machine learning projects are best run.
XGen technology (with machine learning added)
Before the formal talk, Chang first distinguished innovation (Research) from the diffusion of innovation (Engineering). Innovation is a truly original concept, going from 0 to 1; artificial neural networks belong to this category. The diffusion of innovation is the continued strengthening and application of such an innovation, going from 1 to 100; AlphaGo and self-driving cars are continued extensions of neural networks. AI, then, is a tool, and like an employee, it must be trained before it can be put to use.
Agenda: 1. Why we need a model 2. What a model is 3. How to make a good model 4. How to run an ML project
Next: why do we need a model at all? The reasoning matches the machine learning concepts many speakers have covered. Take spam filtering: the traditional approach flags an email as spam when its body contains particular keywords (such as “buy”). But email is written by humans, so even hundreds of keyword rules can never be exhaustive, and they are painful to maintain (imagine being the engineer who inherits them…). Hence we need a (machine learning) model that learns on its own how to identify spam.
So what exactly is a Model? In short, it is the combination of Data, Features, and an Algorithm. Data is the raw material used to train the Model, but like a completely uncooked steak, it is too raw to be palatable (to the algorithm), so it needs some preparation: finding suitable Features that let the algorithm make correct decisions, which requires help from domain experts. For example, suppose we have four raw fields, the latitudes and longitudes of an origin and a destination (the Data), and we want a model that predicts whether the destination can be reached on foot. According to the domain knowledge of a “walkability expert”, any two points no more than three kilometres apart are within walking distance. So rather than feeding the raw coordinates into the model, it works better to first compute “distance between the two points” as a new Feature and feed that into the model instead.
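The latitude/longitude example can be sketched in a few lines. This is an illustration only: the coordinates are arbitrary, and the haversine formula is one standard way to turn the four raw coordinate fields into a single distance feature:

```python
# Engineering the "distance between two points" feature from raw coordinates.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

# Raw fields -> engineered feature -> the expert's 3 km walkability rule
dist = haversine_km(25.0478, 121.5170, 25.0423, 121.5076)  # arbitrary nearby points
print(round(dist, 2), dist < 3.0)  # about 1.13 km, within walking distance
```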
What on earth is a Model…
I really liked (and heard here for the first time) Chang’s personification of the Model. A Model, he says, is a single-minded genius: obsessive about one interest and gifted with one special talent, just as AlphaGo can only play Go, but plays it terrifyingly well. Like any talent, a Model must be used in the right place to shine; however formidable AlphaGo is, it cannot drive a car… (does that make Lee Sedol feel a little better?)
When deciding which kind of model to use, first return to the problem you want to solve. Problems can be simple (few variables) or complex (many variables), lean toward explainability (Explanation) or toward precision (Accuracy), and be relatively fixed (Invariant, like handwritten digit recognition) or shifting (Variant, like virus prediction). Beyond fitting the problem at hand, the most important practical consideration is the Cost Function. The Cost Function here is not the metric that tells us how good the model is (such as MSE), but a design decision about how much each kind of misclassification costs the business. In virus prediction, a false alarm (a clean file misjudged as a virus) and a miss (ransomware or adware let through) impose different costs on the company. If the Cost Function is designed well, even an unchanged model can deliver clearly higher business value; but designing one requires domain expertise, which is why it is rarely discussed. (Thanks to Chang for this addition.)
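A toy sketch of the cost-function point: two models with the same number of mistakes can have very different business costs. The per-error costs below are invented purely for illustration:

```python
# Business cost function: a false alarm is cheap, a missed ransomware is expensive.
# The cost values are invented for illustration, not Trend Micro's actual figures.
def business_cost(false_alarms, missed_ransomware, cost_fa=1.0, cost_miss=100.0):
    """Total business cost given counts of each kind of misclassification."""
    return false_alarms * cost_fa + missed_ransomware * cost_miss

# Two models with identical total error counts (110 mistakes each)...
model_a = business_cost(false_alarms=100, missed_ransomware=10)   # 100 + 1000 = 1100
model_b = business_cost(false_alarms=10, missed_ransomware=100)   # 10 + 10000 = 10010
print(model_a < model_b)  # True: model A is far cheaper despite equal "accuracy"
```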
When testing a model, use data that is as varied and multi-faceted as possible, so you can tell whether the Model learned well enough during training. Some Models perform extremely well in one place (say, sales forecasting for a specific period) but very poorly elsewhere; rather than shoring up their weak spots, it is better to reinforce their gift. (The more I listen, the more machine learning seems to follow the same principles as human learning.)
Chang gave a practical example. For the recently widespread Business Email Compromise (BEC) scams, Trend Micro uses ML to judge whether a sender’s writing style differs from their usual one. This Model’s accuracy was low when an email contained very few words, but analysis showed that most BEC messages run between 40 and 70 words; after retraining focused on that range, accuracy rose substantially.
To train a good Model, Data matters enormously: it can make the Model better, much as a student without supreme talent improves by working through many exercise books; diligence makes up for lesser gifts (nurture). After all, geniuses are few; rather than praying for a wonderful Feature and a superb Algorithm (talent), get down to earth and find ways to collect data! (Once again, just like human learning!)
People often say “Data is King” to stress how important data is, but Chang says “Data is Queen”, because “Label is King”: data without labels can neither train a Model nor tell you how the Model performs.
Finally, how should an ML project be run in practice? First, know what problem you are solving and prepare the data at hand. Next, quickly apply known algorithms, confirm the result meets the business need, and only then begin optimizing and tuning. After all, in Chang’s closing line: “Accuracy doesn’t matter. Only business value does.”
I can relate…
Quick Notes from an AI Course: Introduction to Deep Learning (4)
How to adopt data science? (medium.com)
|
速記AI課程-深度學習入門(三)
| 182
|
速記ai課程-深度學習入門-三-1a428d183f8a
|
2018-04-28
|
2018-04-28 01:59:37
|
https://medium.com/s/story/速記ai課程-深度學習入門-三-1a428d183f8a
| false
| 61
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Gimi Kao
| null |
41b098943c94
|
baubibi
| 434
| 82
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-03-13
|
2018-03-13 14:29:36
|
2018-03-13
|
2018-03-13 14:42:04
| 12
| false
|
en
|
2018-03-13
|
2018-03-13 14:42:04
| 5
|
1a42ffdee09a
| 4.742453
| 3
| 0
| 0
|
Digital Project Manager, Zara Kerwood wrote an article for George P. Johnson UK on using AI for experiences.
| 5
|
We’re ready to blow your mind — George P. Johnson on using AI for experiences
Digital Project Manager, Zara Kerwood wrote an article for George P. Johnson UK on using AI for experiences.
Here’s George. He works for the George P. Johnson (GPJ) digital team — our latest recruit.
And he’s just started his journey into the world of AI.
George recently read a blog called ‘Wait But Why’ and he thought it answered the whole ‘Where are we on the AI Road?’ question quite neatly. George thinks we’re at a pretty interesting place right now.
“But, you have to remember something about what it’s like to stand on a time graph: you can’t see what’s to your right. So here’s how it actually feels to stand here:” www.waitbutwhy.com
Once you start reading about AI (and seeing it in action) it’s all pretty mind blowing.
At GPJ, we love blowing minds. Including our own. And George’s. That’s why he joined us. He’s seen how we’re already using AI to blow the minds of our audiences; and create better experiences.
Personalising experiences
We’ve been using IBM Watson to analyse our audience’s Twitter feeds and LinkedIn profiles to uncover their personality traits. We identified four key personas and illustrated how our clients could help each one of them.
We then brought these personas to life using fun, eye-catching avatars. Each one could be personalised and shared with the world. Our audience loved it. And so did George.
You can check it out here: http://watsontalent.mybluemix.net/
The geeky bit — The Watson Personality Insights API trawls through social profiles and provides scores for different personality traits (such as adventurousness) depending on the tone and language of the copy. We then interpreted those scores and assigned personas to members of our audience.
Storytelling
Ever heard of Anki Overdrive? It’s the world’s most intelligent battle racing system. And these toy cars are a big hit in the GPJ office. Each Supercar is a self-aware robot, driven by powerful AI and equipped with deadly strategy. Whatever track you build, they’ll learn it.
Nowhere is safe. Especially when George is at the controls. They’ve turned him into Mad Max!
The financial sector is renowned for being competitive. And our participants didn’t disappoint. When we challenged them to represent their company — and race to the top of the financial leader board — they didn’t need a lot of encouragement.
Gaming experiences
At GPJ, we do experiences the fun way — showcasing the potential of tech using AI powered products. At a recent major event, we hacked a Sphero robot enabling it to find its own way around a maze.
Then we challenged our audience of developers to do the same. They thought it was ‘amazing’ (sorry, George wrote that!)
The geeky bit — George thinks there’s too much to explain right here. So, we’ve put the code on Github. Take a look and use the code to hack and navigate your own Sphero. Here’s the link: https://github.com/GPJDigital/Sphero-maze.git
Measuring sentiment
One of George’s favourites. For Mobile World Congress 2017, we collaborated with IBM and partner agencies to create the first thinking sculpture. It was the perfect way to tell the story of how everyone can ‘Create with Watson’.
Want to see it in action? You can watch the video here: https://www.youtube.com/watch?v=JypHWXLrF7Y
So what’s next?
Using AI for creativity — A hot topic right now. And one we are exploring with our clients and for our own processes. AI can help us obtain deeper insights, and quicker. And we can use it to aid the design process and end user experience.
But George says he will miss our Creatives. So we won’t be losing them just yet… ;)
Watson for VR experiences — We’ve started using AI in our VR experiences. And it can be a real mind blower. AI enables us to personalize projects and it’s a great way to measure audience reactions. Judging by George’s reaction, we’re doing something right.
The geeky bit — Want to know more? George suggests you check out the Watson for Unity plugin here -https://developer.ibm.com/open/openprojects/watson-developer-cloud-unity-sdk/
Chatbots for recommendations and agendas
AI concierge or chatbot? Yes please! We want our audiences to receive relevant information quickly — and in a way that’s useful to them. But as George knows, it’s important that the chatbot gets it right.
We’ve noticed that chatbots are becoming more and more popular. But we’ve also realised that they need to be trained properly. You can feed an AI with a huge amount of data, but until you train it to pull insights that are relevant to you, it won’t be useful. You need to provide your chatbot with feedback. You need to track its accuracy. You need to make sure it solves the problem.
One of the most important things to remember is that your chatbot must be able to place things in context. There’s nothing more annoying than receiving the wrong answer. If your chatbot doesn’t know the answer, it shouldn’t guess. Even George knows that.
Our recommendation
At GPJ Digital we use the Slackbot and Watson Conversation Service. The great thing about these services is that you don’t need to code to be able to create them.
Onwards and into the unknown…
We can’t wait! We’re all looking forward to what the future holds — but no one more than George.
|
We’re ready to blow your mind — George P. Johnson on using AI for experiences
| 26
|
were-ready-to-blow-your-mind-george-p-johnson-on-using-ai-for-experiences-1a42ffdee09a
|
2018-03-22
|
2018-03-22 17:39:07
|
https://medium.com/s/story/were-ready-to-blow-your-mind-george-p-johnson-on-using-ai-for-experiences-1a42ffdee09a
| false
| 899
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
GPJ UK
|
An independent, creatively driven full service experience marketing agency.
|
b4b377d245fa
|
gpjemea1
| 18
| 13
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
721481663b7d
|
2018-08-31
|
2018-08-31 13:24:39
|
2018-09-12
|
2018-09-12 21:06:40
| 5
| false
|
en
|
2018-09-12
|
2018-09-12 21:06:40
| 3
|
1a4404873b3
| 3.803145
| 2
| 0
| 0
|
Someday ago, my colleague asked me about RBM algorithm and what is the difference vs other learning algorithms. It is really interesting…
| 4
|
Restricted Boltzmann Machine, a complete analysis. Part 1: introduction & model formulation
Some days ago, a colleague asked me about the RBM algorithm and how it differs from other learning algorithms. It is a genuinely interesting algorithm: Netflix used it alongside SVD in their product. Last year it took me a full month to understand RBM thoroughly, in both theory and practice.
This series of articles is dedicated to writing up that analysis concisely, along with demo source code for RBM. It is a detailed story of why RBM was created, under which assumptions, solved in which way, and through which formulas we reach the final algorithm. I often find that RBM papers jump straight to the final formula and build on top of it, which is hard to accept and understand for highly curious people like me.
RBM Timeline
Let’s start with some history. The Restricted Boltzmann Machine is an undirected graphical model that plays a major role in today’s deep learning frameworks. It is a relaxed version of the Boltzmann Machine.
1986: introduced as the Harmonium. Paul Smolensky. “Information processing in dynamical systems: Foundations of harmony theory.” Technical report, DTIC Document, 1986.
2002: a fast approximate inference algorithm, Contrastive Divergence (CD). Geoffrey E. Hinton. “Training products of experts by minimizing contrastive divergence.” Neural Computation 14.8 (2002).
2007: first big-data application (collaborative filtering on Netflix movie ratings). Ruslan Salakhutdinov, Andriy Mnih, and Geoffrey E. Hinton. “Restricted Boltzmann Machines for Collaborative Filtering.” International Conference on Machine Learning (2007).
2006 → 2010: RBMs stacked into the Deep Belief Network. Geoffrey E. Hinton and Ruslan R. Salakhutdinov. “Reducing the Dimensionality of Data with Neural Networks.”
2010 → now: interest slowed as other DL frameworks proved superior; RBMs are mainly used for initializing network parameters.
Briefly, RBM is a Swiss army knife: a probabilistic model that captures the data distribution, a nonlinear factor analysis (it finally incorporates the sigmoid unit, i.e. non-linearity), a clustering algorithm thanks to its binary hidden factors, and a nonlinear data representation.
In practice it is a very fast learning algorithm with a short implementation, and it handles binary and real-valued input data, even input vectors with missing values. Accordingly, numerous applications of RBM were found in:
Dimensionality reduction: Geoffrey E. Hinton and Ruslan R. Salakhutdinov. Reducing the Dimensionality of Data with Neural Networks. Science, 313(5786):504–507, 2006.
Collaborative filtering: Ruslan Salakhutdinov, Andriy Mnih, and Geoffrey E. Hinton. Restricted Boltzmann Machines for Collaborative Filtering. International Conference on Machine Learning, pages 791–798, 2007.
Classification: Hugo Larochelle and Yoshua Bengio. Classification using discriminative restricted Boltzmann machines. ICML, pages 536–543, 2008
In 2007 came the first application of RBM able to handle big data (millions of user ratings in the Netflix competition), modelling and predicting user preferences on movies. In the period from 2006 to 2010, the Deep Belief Network was invented by stacking multiple RBM models to build higher, more “abstract” data representations. That is when people really moved into “deeper” model structures.
At this point, we can clearly see the need to understand the RBM algorithm: it launched the revenge of deep learning.
RBM: model definition
Usually, in high-dimensional data, only a small number of degrees of freedom, corresponding to latent factors, explain most of the data variance. For example, among a thousand face images, some underlying latent factors are gender, lighting conditions, pose, and emotion. One way to find these latent factors is manifold learning, which can be done with one of my favorite algorithms: Locally Linear Embedding.
Locally Linear Embedding example. https://cs.nyu.edu/~roweis/lle/faces.html
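A minimal Locally Linear Embedding sketch, using scikit-learn’s synthetic S-curve in place of the face images (the dataset and parameters are illustrative assumptions, not the setup from the linked example):

```python
# LLE unrolls a 3-D manifold (the S-curve) into 2 latent dimensions.
from sklearn.datasets import make_s_curve
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_s_curve(n_samples=300, random_state=0)    # 300 points in 3-D
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
X_2d = lle.fit_transform(X)                           # 2-D latent coordinates
print(X_2d.shape)  # (300, 2)
```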
Density Estimation for a Gaussian mixture. http://scikit-learn.org/stable/auto_examples/mixture/plot_gmm_pdf.html
In modelling complex, high-dimensional data distributions, a popular probabilistic approach is the mixture model; a typical algorithm is the Gaussian Mixture Model (GMM), which assumes that all data points are generated by a mixture of Gaussians.
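A minimal GMM sketch under that stated assumption, on synthetic data drawn from two Gaussians:

```python
# Fit a 2-component GMM to data generated from two well-separated Gaussians
# and recover the component means.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.vstack([
    rng.normal(loc=-3.0, scale=1.0, size=(200, 2)),  # component 1 around (-3, -3)
    rng.normal(loc=+3.0, scale=1.0, size=(200, 2)),  # component 2 around (+3, +3)
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
means = sorted(m[0] for m in gmm.means_)             # first coordinate of each mean
print(round(means[0]), round(means[1]))  # -3 3
```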
Instead of modelling a sum of mixture components as in GMM, RBM takes a product of simple Bernoulli distributions, a “product of experts”, aiming to infer a sharper posterior distribution (according to Hinton’s paper). In Figure 1, the RBM contains two layers of units: the visible units x_1, x_2, …, x_m stand for the m input features, and the n binary hidden units h_1, h_2, …, h_n capture the dependencies between observed features. According to the figure, each h_i models one particular dependency or relation among the features.
From my writing
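The two-layer structure just described can be sketched in numpy. This is not a trained RBM, only the conditional p(h_i = 1 | x) = sigmoid(Wx + b) that the figure implies, with random placeholder weights:

```python
# Numpy sketch of the RBM layer structure: n binary hidden units whose
# activation probabilities are sigmoids of weighted sums of m visible units.
# W and b are random placeholders, not learned parameters.
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 3                        # m visible features, n hidden units
W = rng.normal(size=(n, m))        # weights between the two layers
b = np.zeros(n)                    # hidden biases

def p_hidden_given_visible(x):
    """P(h_i = 1 | x) = sigmoid(W x + b), independently for each hidden unit."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

x = rng.integers(0, 2, size=m)     # one binary visible vector
p = p_hidden_given_visible(x)
print(p.shape)                     # (3,): one probability per hidden unit
```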
Although the RBM structure looks like a two-layer neural network, the RBM has a closed-form representation of the data distribution.
Recap:
RBM is a long-standing and influential algorithm.
A 2-layer model: an input layer and a hidden layer, both originally binary (following the Bernoulli distribution).
The data representation is a product over the hidden nodes.
A Swiss army knife.
|
Restricted Boltzmann Machine, a complete analysis. Part 1: introduction & model formulation
| 2
|
restricted-boltzmann-machine-a-complete-analysis-part-1-introduction-model-formulation-1a4404873b3
|
2018-09-12
|
2018-09-12 21:06:40
|
https://medium.com/s/story/restricted-boltzmann-machine-a-complete-analysis-part-1-introduction-model-formulation-1a4404873b3
| false
| 787
|
Experience, learning progress in machine learning, data science, computer science
| null | null | null |
datatype
|
nvlinh.khtn@gmail.com
|
datatype
|
MACHINE LEARNING,DATA SCIENCE,COMPUTER SCIENCE,DATA VISUALIZATION,DATA PROCESSING
| null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Nguyễn Văn Lĩnh
| null |
e2a125f88e00
|
linh.nguyen.fi
| 16
| 8
| 20,181,104
| null | null | null | null | null | null |