Dataset columns: audioVersionDurationSec (float64), codeBlock (string), codeBlockCount (float64), collectionId (string), createdDate (string), createdDatetime (string), firstPublishedDate (string), firstPublishedDatetime (string), imageCount (float64), isSubscriptionLocked (bool), language (string), latestPublishedDate (string), latestPublishedDatetime (string), linksCount (float64), postId (string), readingTime (float64), recommends (float64), responsesCreatedCount (float64), socialRecommendsCount (float64), subTitle (string), tagsCount (float64), text (string), title (string), totalClapCount (float64), uniqueSlug (string), updatedDate (string), updatedDatetime (string), url (string), vote (bool), wordCount (float64), publicationdescription (string), publicationdomain (string), publicationfacebookPageName (string), publicationfollowerCount (float64), publicationname (string), publicationpublicEmail (string), publicationslug (string), publicationtags (string), publicationtwitterUsername (string), tag_name (string), slug (string), name (string), postCount (float64), author (string), bio (string), userId (string), userName (string), usersFollowedByCount (float64), usersFollowedCount (float64), scrappedDate (float64), claps (string), reading_time (float64), link (string), authors (string), timestamp (string), tags (string)
Record 1 metadata: audioVersionDurationSec: 0 | codeBlockCount: 0 | createdDatetime: 2018-09-01 03:16:49 | firstPublishedDatetime: 2018-09-12 05:02:09 | imageCount: 9 | isSubscriptionLocked: false | language: en | latestPublishedDatetime: 2018-09-12 05:02:09 | linksCount: 7 | postId: 16dc0ae906c9 | readingTime: 3.89434 | recommends: 1 | responsesCreatedCount: 0 | socialRecommendsCount: 0 | subTitle: In this blogpost I’ll introduce concepts such as: | tagsCount: 4
Deep Reinforcement Learning Part 2: Markov Decision Process
In this blog post I’ll introduce concepts such as:
Markov Chain or Markov Process
What we observe are called states, and the system can switch between states according to some rules of dynamics.
All possible states of a system form a set called the state space. A sequence of observations over time forms a chain of states, such as [sunny, sunny, rainy, rainy, …], and is called the history (1).
Fig 1. Example of a state Markov chain
Markov Property and Transition Matrix
The Markov property (MP) means that the future dynamics from any state must depend on that state only. The MP requires the states of the system to be distinguishable from each other and unique.
If our model is more complex and we need to extend it, the extended state will capture more dependencies in the model, at the cost of a larger state space.
Fig 2. Markov Chain Model and Transition Matrix (2).
From Fig 2 we can observe that the edges in the Markov chain are probabilities, which are expressed in the transition matrix. A transition matrix is a square matrix of size N×N, where N is the number of states in our model. This matrix defines the system dynamics. If the probability of a transition is 0, we don’t draw an edge (there is no way to go from that state to the other) (1).
We can also define a Markov chain mathematically, based on its states and its transition matrix.
Fig 3. Mathematical definition of a Markov chain (3)
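The chain-plus-transition-matrix idea can be sketched in a few lines of Python; the sunny/rainy states and the probabilities below are invented for illustration:

```python
# A two-state weather Markov chain: each row of the transition matrix P
# gives P(next state | current state), so every row must sum to 1.
import random

states = ["sunny", "rainy"]
P = [
    [0.8, 0.2],  # from sunny: 80% stay sunny, 20% turn rainy
    [0.4, 0.6],  # from rainy: 40% turn sunny, 60% stay rainy
]

def simulate(start, n_steps, seed=42):
    """Sample a chain of states: each step depends only on the current state."""
    rng = random.Random(seed)
    s = states.index(start)
    history = [start]
    for _ in range(n_steps):
        s = rng.choices(range(len(states)), weights=P[s])[0]
        history.append(states[s])
    return history

print(simulate("sunny", 5))
```

Running the simulation yields a history such as [sunny, sunny, rainy, …], exactly the kind of chain described above.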
Reward
A reward signal defines the goal of a reinforcement learning problem. The agent’s objective is to maximize the total reward it receives over the long run. The reward defines what the good and bad events are for the agent (4).
The expected cumulative reward, or return G, is the sum of the reward signals, with each future reward discounted by the factor gamma (γ). Gamma is a hyperparameter to tune in order to get optimum results; typical values range between 0.9 and 0.99. A lower value encourages short-term thinking, while a higher value emphasizes long-term results (5).
Fig 4. The Return equation (3).
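As a quick sketch, the return G can be computed directly from its definition; the reward sequence below is made up for illustration:

```python
# G_t = R_{t+1} + γ·R_{t+2} + γ²·R_{t+3} + ...  (the Return equation in Fig 4)
def discounted_return(rewards, gamma):
    g = 0.0
    for k, r in enumerate(rewards):
        g += (gamma ** k) * r
    return g

rewards = [1.0, 0.0, 2.0]                        # illustrative reward sequence
print(round(discounted_return(rewards, 0.9), 2))  # 1 + 0.9*0 + 0.81*2 = 2.62
```

With γ closer to 0 the later rewards barely contribute (short-term thinking); with γ closer to 1 they keep nearly their full weight.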
If we add the reward and the discount factor to our definition of a Markov chain, we obtain a Markov reward process (MRP).
Fig 5. Mathematical definition of a Markov reward process (3)
Value Function
A value function (or function of states) estimates how good it is for the agent to be in a given state (or how good it is to perform a given action in a given state). Value functions are defined with respect to particular ways of acting, called policies (4).
Fig 6. (State) Value Function Definition (3)
Bellman’s Equation
Bellman’s equation (BE) decomposes the state value function into the sum of an immediate reward [Rt+1] and the discounted value of the successor state [γv(St+1)]. Because the equation is linear, it can also be expressed using matrix notation (3).
Fig 7. Bellman’s Equation is a linear function that can be expressed using matrices (3)
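Because Bellman’s equation is linear, the matrix form v = R + γPv can be solved in closed form as v = (I − γP)⁻¹R. A minimal sketch, using a made-up two-state MRP:

```python
# Solve v = R + γ·P·v  =>  v = (I - γP)^(-1) · R  for a tiny two-state MRP.
# The transition matrix P and the rewards R below are illustrative only.
import numpy as np

P = np.array([[0.8, 0.2],
              [0.4, 0.6]])   # transition matrix (each row sums to 1)
R = np.array([1.0, -1.0])    # expected immediate reward in each state
gamma = 0.9

v = np.linalg.solve(np.eye(2) - gamma * P, R)

# Sanity check: the solution satisfies Bellman's equation v = R + γ·P·v.
assert np.allclose(v, R + gamma * P @ v)
print(v)
```

This direct solve is only practical for small state spaces (it is O(N³) in the number of states), which is one reason iterative methods are used for larger problems.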
Markov Decision Process
Almost all reinforcement learning problems can be formalized as Markov decision processes (MDPs). Every state in an MDP is Markovian, i.e. it satisfies the Markov property, and the environment is fully observable (1)(3).
Fig 8. MDP definition (3)
The MDP framework is a considerable abstraction of the problem of goal-oriented learning from interaction. Every goal-directed behavior can be reduced to three signals passing between an agent and its environment: the actions, which represent the choices made by the agent; the states, which represent the basis on which those choices are made; and the rewards, which define the agent’s goal (4).
Policy
The policy is the set of rules that controls the agent’s behavior. It is defined as a probability distribution over actions for every possible state; defining it as a probability introduces randomness into the agent’s behavior. If our policy is fixed and not changing, then our MDP becomes an MRP (1). Concisely, we can say that a policy is a solution to the MDP problem (5).
Fig 9. Mathematical definition of a policy (3)
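A policy can be represented directly as a probability distribution over actions for each state; the states, actions, and probabilities below are invented for illustration:

```python
# π(a|s): for each state, a probability distribution over actions.
import random

policy = {
    "sunny": {"walk": 0.7, "stay_in": 0.3},
    "rainy": {"walk": 0.1, "stay_in": 0.9},
}

def sample_action(state, seed=0):
    """Draw an action according to π(a | state)."""
    dist = policy[state]
    rng = random.Random(seed)
    return rng.choices(list(dist), weights=list(dist.values()))[0]

# A deterministic (fixed) policy is the special case where one action has
# probability 1 in every state; the MDP then reduces to an MRP.
assert all(abs(sum(d.values()) - 1.0) < 1e-9 for d in policy.values())
print(sample_action("rainy"))
```

The optimal policy π* is then the particular choice of these distributions that maximizes the expected return.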
Finally, we can define the optimal policy π* as the policy that maximizes the expected reward (received, or expected to be received, over a lifetime).
For the first part of this series of posts, see Part 1.
References
(1) Lapan, Maxim. Deep Reinforcement Learning Hands-On (2018). http://bit.ly/2wosxGD
(2) Raval, Siraj. Introduction (Move 37), a free deep reinforcement learning course (2018). http://bit.ly/2CDyrJB
(3) Silver, David. Reinforcement Learning, DeepMind course (2015), Class 2. YouTube video: http://bit.ly/2CS4k1d and slides: http://bit.ly/2CDFcer
(4) Sutton, Richard & Barto, Andrew. Reinforcement Learning, 2nd Edition Draft (2018). http://bit.ly/2CLW5Dv
(5) Raval, Siraj. Move 37, a free deep reinforcement learning course. http://bit.ly/2x4kqhY
Record 1 metadata (cont.): title: Deep Reinforcement Learning Part 2: Markov Decision Process | totalClapCount: 50 | uniqueSlug: deep-reinforcement-learning-part-2-markov-decision-process-16dc0ae906c9 | updatedDatetime: 2018-09-12 05:02:09 | url: https://medium.com/s/story/deep-reinforcement-learning-part-2-markov-decision-process-16dc0ae906c9 | vote: false | wordCount: 714 | tag_name: Machine Learning | slug: machine-learning | name: Machine Learning | postCount: 51,320 | author: Learning R and Machine Learning | bio: Chemistry PhD living in a data-driven world. | userId: cbe34340e508 | userName: data_datum | usersFollowedByCount: 130 | usersFollowedCount: 322 | scrappedDate: 20,181,104
Record 2 metadata: audioVersionDurationSec: 0 | codeBlockCount: 0 | createdDatetime: 2018-04-24 01:53:02 | firstPublishedDatetime: 2018-04-24 02:01:24 | imageCount: 3 | isSubscriptionLocked: false | language: en | latestPublishedDatetime: 2018-04-24 02:24:15 | linksCount: 2 | postId: 16df094de31 | readingTime: 5.263208 | recommends: 78 | responsesCreatedCount: 2 | socialRecommendsCount: 0 | subTitle: Remember the outrage about the ABS 2016 Census retaining real names and addresses? Maybe you thought the ABS got the message that the… | tagsCount: 4
The Australian Bureau of Statistics Tracked People By Their Mobile Device Data.
Image credit: Andrew Howe, ABS Demographer: https://stokes2013.files.wordpress.com/2017/05/s5-howe-chester.pdf
Remember the outrage about the ABS 2016 Census retaining real names and addresses? Maybe you thought the ABS got the message that the public seems to give more of a damn about privacy than public servants assume, and perhaps yanked themselves back into line?
Wrong.
Image credit: Andrew Howe, ABS Demographer: https://stokes2013.files.wordpress.com/2017/05/s5-howe-chester.pdf
The ABS claims population estimates have a “major data gap” and so they’ve been a busy bee figuring out a way to track crowd movement. Their solution? Mobile device user data.
“…with its near-complete coverage of the population, mobile device data is now seen as a feasible way to estimate temporary populations,” states a 2017 conference extract for a talk by ABS Demographer Andrew Howe.
While the “Estimated Resident Population” (ERP) is Australia’s official population measure, the ABS felt the pre-existing data wasn’t ‘granular’ enough. What the ABS really wanted to know was where you’re moving, hour by hour, through the CBD, educational hubs, tourist areas.
Howe’s ABS pilot study of mobile device user data creates population estimates with the help of a trial engagement with an unnamed telco company. The data includes age and sex breakdowns. The study ran from 18 April to 1 May 2016.
Image credit: Andrew Howe, ABS Demographer: https://stokes2013.files.wordpress.com/2017/05/s5-howe-chester.pdf
And in what may seem like a rather glib ploy to gain the Coalition’s support for a contentious pilot study, Howe claims his research might also be useful in “mining areas.” Because who wouldn’t agree to be tracked constantly, just in case a pile of coal falls on your head?
Other reasons given for this need for constant tracking are “planning and service provision”, “funding models” and “disaster preparedness.” Although the pilot study also tracks crowd movement at sporting events, it’s not hard to imagine other spaces the government might be interested in tracking: places of worship, airports, ports, demonstrations and workplaces.
Following the rapidly increasing government tradition of disrespecting Australian citizens’ privacy rights, there’s no mention of any informed consent having been sought from customers before the unnamed telco handed data over to the ABS. Also missing are any details of a privacy impact assessment.
Considering the last attempt by a government department to roll-their-own-crypto resulted in the MBS/PBS data breach of 2.5 million Australians, it’d be nice to know exactly how the ABS or telco anonymised and aggregated the data — especially since the ABS on-sells micro-data, from time to time.
Unfortunately, the slides and conference abstracts for Howe’s 2017 conference talks don’t reveal exactly how the alleged de-identification of data occurred. And strangely enough, despite the publicly-funded pilot study having taken place in 2016, there’s still no mention on the ABS website of the contentious research.
Privacy experts were alarmed at the news:
“I find this tracking of people using their telephone location data without their knowledge and consent extremely concerning. The fact that the telecoms company allowed this data to be handed to a third party, and then for that third party to be a government agency compounds the breach of trust for the people whose data was involved,” said Angela Daly, Vice Chancellor’s Senior Research Fellow and Senior Lecturer in Queensland University of Technology’s Faculty of Law, research associate in the Tilburg Institute for Law, Technology and Society and Digital Rights Watch board member.
“After the Cambridge Analytica/Facebook scandal this is yet another example of why we need much tougher restrictions on what companies and the government can do with our data.”
Electronic Frontiers Australia board member Justin Warren also pointed out that while there are beneficial uses for this kind of information, “…the ABS should be treading much more carefully than it is. The ABS damaged its reputation with its bungled management of the 2016 Census, and with its failure to properly consult with civil society about its decision to retain names and addresses. Now we discover that the ABS is running secret tracking experiments on the population?”
“Even if the ABS’ motives are benign, this behaviour — making ethically dubious decisions without consulting the public it is experimenting on — continues to damage the once stellar reputation of the ABS.”
“This kind of population tracking has a dark history. During World War II, the US Census Bureau used this kind of tracking information to round up Japanese-Americans for internment. Census data was used extensively by Nazi Germany to target specific groups of people. The ABS should be acutely aware of these historical abuses, and the current tensions within society that mirror those earlier, dark days all too closely.”
“The ABS must work much harder to ensure that it is conducting itself with the broad support of the Australian populace. Sadly, it appears that the ABS increasingly considers itself above the mundane concerns of those outside its ivory tower. This arrogance must end.”
“For us to continue to trust the ABS with our most intimate details, the ABS must maintain society’s trust. Conducting experiments on citizens without seeming to care about our approval or consent undermines that trust.”
International privacy advocates also raised concerns about the study.
“Data the companies, like telcos, collect inevitably becomes very attractive to government agencies looking to track, monitor, and survey people. Like here, users are rarely informed, let alone consent to these uses. The impact on privacy rights is severe: location information (especially combined with other sensitive data) can reveal startlingly detailed information about your life (where you live, work), connections (who you talk to or visit), preferences (what you buy and when), and health (doctors and pharmacies frequented),” stated Amie Stepanovich, U.S. Policy Manager for digital rights organisation Access Now.
“These impacts — which don’t appear to be studied — all without any clear demonstration of efficacy or purpose. This is also a hugely discriminatory approach, which won’t measure areas where people are less likely to have the technologies being used, which will likely disproportionately disadvantage poor communities.”
“It is unclear why this invasive, opaque, harmful approach was chosen over others, potentially less invasive and more effective at accomplishing any goals they are looking to meet.”
Likewise, acclaimed digital rights activist and writer Cory Doctorow was unimpressed to learn of the study.
“Subjecting entire populations to experimental surveillance projects without their knowledge or consent is deeply unethical. Our location data can be used to infer sensitive personal, political, sexual, health, and ideological information. It should be treated as toxic waste, not innocuous study data. The ABS knows this, and that’s why they kept this study secret from its involuntary participants — it certainly wasn’t because they believed that Australians would be pleasantly surprised and didn’t want to ruin the big reveal.”
When asked for comment on the pilot study, the ABS responded: “Thanks for your queries. The ABS is finalising a detailed information paper on this topic, and we’ll be publishing it on our website as soon as it’s available. We’re happy to let you know once it’s online.”
Questions the ABS didn’t answer included:
Which telco was involved in the study?
Was a Privacy Impact Assessment undertaken?
Was informed consent sought from telco customers?
What method was used to aggregate and anonymise the data?
Is the project ongoing?
If the study is ongoing, which demographics are currently included?
Has any micro-data from the pilot study been on-sold by the ABS?
Why wasn’t information about the 2016 study made available on the ABS website?
Copies of the slides can be accessed at:
https://stokes2013.files.wordpress.com/2017/05/s5-howe-chester.pdf
https://spatialinformationday.org.au/download/2954223
Record 2 metadata (cont.): title: The Australian Bureau of Statistics Tracked People By Their Mobile Device Data. | totalClapCount: 797 | uniqueSlug: the-australian-bureau-of-statistics-tracked-people-by-their-mobile-device-data-and-didnt-tell-them-16df094de31 | updatedDatetime: 2018-06-05 14:04:36 | url: https://medium.com/s/story/the-australian-bureau-of-statistics-tracked-people-by-their-mobile-device-data-and-didnt-tell-them-16df094de31 | vote: false | wordCount: 1,249 | tag_name: Privacy | slug: privacy | name: Privacy | postCount: 23,226 | author: Asher Wolf | bio: Cryptoparty founder. Amnesty Australia 'Humanitarian Media Award' recipient 2014. | userId: c00a63088a01 | userName: Asher_Wolf | usersFollowedByCount: 7,954 | usersFollowedCount: 6,721 | scrappedDate: 20,181,104
Record 3 metadata: audioVersionDurationSec: 0 | codeBlockCount: 0 | createdDatetime: 2018-05-06 13:57:10 | firstPublishedDatetime: 2018-05-06 15:02:09 | imageCount: 5 | isSubscriptionLocked: false | language: en | latestPublishedDatetime: 2018-05-06 15:02:09 | linksCount: 3 | postId: 16df6b4a1f09 | readingTime: 2.044654 | recommends: 0 | responsesCreatedCount: 0 | socialRecommendsCount: 0 | subTitle: This was questions I was thinking for many days, I tried to solve this problem with few ideas. | tagsCount: 5
Can Blockchain and Artificial Intelligence Help to Stop Mass Shootings?
This is a question I had been thinking about for many days, and I tried to approach the problem with a few ideas.
“Mass shootings in the US: there have been 1,624 in 1,870 days.” First of all, this is a big problem.
No other developed nation comes close to the rate of US gun violence. Americans own an estimated 265m guns, more than one gun for every adult.
Data from the Gun Violence Archive reveals there is a mass shooting — defined as four or more people shot in one incident, not including the shooter.
This was my first reference: a smart gun that only its admin can shoot.
Can we use an AI camera with facial recognition technology?
Blockchain technology is also entering the market; how can we use it?
img: https://www.digitaltrends.com/home/horizon-ai-smart-camera/
Can we attach an AI camera with facial recognition to guns?
Is it possible to track a gun’s location?
Can AI understand that things are heading into a danger zone, for example a person taking a gun into sensitive areas like schools, hospitals, or any event where people gather in large numbers?
AI + smartphone + camera + gun: could it be possible?
http://www.guns.com/2011/07/05/contours-rifle-mounted-camera-smallest-lightest-brightest/
Gun-mounted cameras already exist. What if we could lock a gun before it is fired at innocent people?
AI face recognition can help identify the target, its age, and the location; based on that pattern, it analyzes whether shooting at this target would be a crime or not.
For example, if the location is near a school or a mall, the AI will lock the gun and require confirmation from the police or another emergency authority to unlock it; it can also analyze the shooting pattern and lock the gun.
Targets at locations near colleges or schools can’t be shot at.
Now comes the blockchain. As a gun manufacturer, you would need to register all information in a blockchain-based system, an IoT-based solution with AI. To unlock and shoot, you would need confirmation from the AI; in some cases, based on AI face recognition, the gun can’t shoot. For example, children can’t be a target, so the gun will be locked.
I hope you like the idea of the smart gun…
Record 3 metadata (cont.): title: Can Blockchain and Artificial Intelligence can help to Stop Mass shootings? | totalClapCount: 0 | uniqueSlug: can-blockchain-and-artificial-intelligence-can-help-to-stop-mass-shootings-16df6b4a1f09 | updatedDatetime: 2018-05-06 15:02:10 | url: https://medium.com/s/story/can-blockchain-and-artificial-intelligence-can-help-to-stop-mass-shootings-16df6b4a1f09 | vote: false | wordCount: 321 | tag_name: Guns | slug: guns | name: Guns | postCount: 10,942 | author: Virat Ahuja | bio: null | userId: 94e91875607d | userName: AhujaVirat | usersFollowedByCount: 23 | usersFollowedCount: 187 | scrappedDate: 20,181,104
Record 4 metadata: audioVersionDurationSec: 0 | codeBlockCount: 0 | collectionId: 70f217fc23a8 | createdDatetime: 2018-03-08 19:05:47 | firstPublishedDatetime: 2018-03-07 22:24:53 | imageCount: 3 | isSubscriptionLocked: true | language: en | latestPublishedDatetime: 2018-03-08 19:19:22 | linksCount: 11 | postId: 16e1df41e86f | readingTime: 8.640566 | recommends: 139 | responsesCreatedCount: 2 | socialRecommendsCount: 0 | subTitle: Alibaba is investing huge sums in AI research and resources — and it is building tools to challenge Google and Amazon | tagsCount: 5
Inside the Chinese Lab That Plans to Rewire the World with AI
Alibaba is investing huge sums in AI research and resources — and it is building tools to challenge Google and Amazon
Alibaba’s headquarters in Hangzhou, China. Photo courtesy of Alibaba Group
By Will Knight
The ticket kiosks at Shanghai’s frenetic subway station have a mind of their own.
Walk up to one and state your destination, and it’ll automatically recommend a route before issuing a ticket. It’ll even check your identification (a necessary step in China) by looking at your face. In the interest of reducing the rush-hour stampede, the system is set up to let you find information and buy tickets without pushing a button or talking to a person.
More impressive still, all this happens successfully in the middle of a crowded, noisy station. Each kiosk has to figure out who is speaking to it; zero in on that person’s voice within the crowd; transcribe the incoming speech; parse its meaning; and compare the person’s face against a massive database of photos — all within a few seconds.
To do it, the kiosks use several cutting-edge machine-learning algorithms. The really interesting thing, though, isn’t the algorithms themselves. It’s where they live. All that image processing and speech recognition is served up on demand by a cloud computing system owned by one of China’s most successful companies, the e-commerce giant Alibaba.
Alibaba is already using AI and machine learning to optimize its supply chain, personalize recommendations, and build products like Tmall Genie, a home device similar to the Amazon Echo. China’s two other tech supergiants, Tencent and Baidu, are likewise pouring money into AI research. The government plans to build an AI industry worth around $150 billion by 2030 and has called on the country’s researchers to dominate the field by then (see “China’s AI awakening”).
But Alibaba’s ambition is to be the leader in providing cloud-based AI. Like cloud storage (think Dropbox) or cloud computing (Amazon Web Services), cloud AI will make powerful resources cheaply and readily available to anyone with a computer and an internet connection, enabling new kinds of businesses to grow.
The real race in AI between China and the US, then, will be one between the two countries’ big cloud companies, which will vie to be the provider of choice for companies and cities that want to make use of AI. And if Alibaba is anything to go by, China’s tech giants are ready to compete with Google, Amazon, IBM, and Microsoft to serve up AI on tap. Which company dominates this industry will have a huge say in how AI evolves and how it is used.
Think bigger
Jack Ma created Alibaba Online, a simple e-commerce marketplace, in 1999, in his apartment in Hangzhou, on China’s east coast. Today the company’s headquarters, which I visited in January, consists of several large buildings housing tens of thousands of workers; the front entrance is guarded by a gigantic version of the company’s cartoonish orange mascot.
Alibaba’s core business remains selling goods and providing a platform for business-to-business trade. But this has spawned other lucrative operations, including a platform for logistics and shipments, an advertising network, and cloud computing and financial services. The company’s ubiquitous mobile payments app, Alipay, is run by a sister company, Ant Financial, which also offers loans, insurance, and investing via smartphone.
Photo: Wang HE/Getty Images
Last year on “Singles Day,” a shopping event on November 11 that Alibaba invented, the company sold more than $25 billion worth of merchandise. By contrast, on last year’s Cyber Monday (November 27), the biggest online shopping day in the US, all retailers combined brought in $6.59 billion.
The company’s success has also helped shape Hangzhou’s vibrant tech scene. The city is home to dozens of incubators, funded in part by government subsidies, that are filled with entrepreneurs who previously worked at Alibaba.
Alibaba’s colorful founder apparently doesn’t take any of this for granted. “Jack Ma believes we have been successful because of our business model, a hard-working team plus the operation,” says Xiangwen Liu, the company’s director of technology development. “In the next era of company competition, Jack’s belief is the business model cannot give success for a giant like Alibaba. His belief is in technology.”
Last October Ma announced that his company would spend $15 billion over the next three years on a research institute called the DAMO Academy (“discovery, adventure, momentum, and outlook”), dedicated to fundamental technologies. The Chinese name for the institute, 达摩, references Dharma, a legendary Indian monk said to have brought Buddhism to China in the fifth century.
China has long since shaken off its reputation for simply copying Western innovations. According to the Organization for Economic Cooperation and Development (OECD), R&D spending in China grew tenfold between 2000 and 2016, rising from $40.8 billion to $412 billion in today’s dollars. The US still spends more — $464 billion in 2016 — but its total has increased by only one-third since 2000.
Alibaba is already China’s biggest R&D spender, forking out $2.6 billion in 2017. DAMO will effectively triple its research budget, to more than $7 billion. That most likely means Alibaba will overtake IBM, Facebook, and Ford and will narrow the gap with the world’s leaders, Amazon and Alphabet, which spent $16.1 billion and $13.9 billion respectively on R&D in 2017.
DAMO will include a portfolio of research groups working on fundamental and emerging technologies including blockchain, computer security, fintech, and quantum computing. But AI is the biggest focus, and it seems like the one with the greatest potential.
DAMO clearly takes inspiration from the great commercial research labs of the 20th century. Liu mentions, for instance, AT&T’s Bell Labs, which conducted fundamental research on materials, electronics, and software, producing breakthroughs including the transistor, the laser, and the charge-coupled device for digital imaging, as well as the UNIX operating system and the programming languages C and C++. Liu says Alibaba is also inspired by the way the US’s Defense Advanced Research Projects Agency (DARPA) funds different teams competing on the same project.
Alibaba is clearly learning from the likes of Alphabet and Amazon, too. Like them, it has released a cloud machine-learning platform. The first from a Chinese company, it was launched in 2015 and upgraded significantly last year. The tools it offers are similar to those on Google Cloud and Amazon Web Services, including off-the-shelf solutions for things like voice recognition and image classification.
Developing these tools was a major technical undertaking for Alibaba. It signals both how ambitious the company is to shape the future of AI and how big a role cloud computing will play.
Another such signal is that Alibaba’s cloud supports several other companies’ deep-learning frameworks, including Google’s TensorFlow and Amazon’s MXNet. Deep learning — a technique for training machines to recognize things by feeding lots of data into a many-layered neural network — is the most important approach in AI right now, used for everything from controlling autonomous vehicles to transcribing speech. Tech companies build their own deep-learning frameworks in part to get users onto their cloud platforms, because those frameworks typically run best on their infrastructure. By supporting its competitors’ frameworks, Alibaba gives developers a reason to use its platform instead.
And that’s not all: Liu hints that Alibaba may be working on its own deep-learning framework, something that could help it get even more engineers hooked on its cloud. When asked if Alibaba might release some of the code it has developed, she answers: “When it’s mature.”
Smart answers
There have been other glimpses of Alibaba’s progress in AI lately. Last month a research team at the company released an AI program capable of reading a piece of text, and answering simple questions about that text, more accurately than anything ever built before.
The text was in English, not Chinese, because the program was trained on the Stanford Question Answering Dataset (SQuAD), a benchmark used to test computerized question-and-answer systems. Alibaba’s program uses several novel machine-learning techniques, and it notched a higher score than entries from Microsoft, Samsung, and others. Remarkably, it scored better than the average human being (although this is a bit deceptive; it doesn’t mean the program actually understands what it is reading).
More remarkable, though, is how fast Alibaba rose up the leaderboard. The company only submitted its first entry to SQuAD in September 2017. “Quite a few of the top 10 teams represent top Chinese institutions, reflecting the ongoing democratization of AI,” says Pranav Samir Rajpurkar, a PhD student at Stanford who runs the SQuAD contest.
Alibaba has already used the program to improve the automated customer support on its online marketplace, says Si Luo, a member of the team. And it hopes to deploy language understanding across its platforms and technologies.
Alibaba’s AI researchers are working on other cutting-edge projects, such as generative adversarial networks, or GANs. In this exciting new machine-learning approach, developed by a Google researcher, two neural networks are pitted against one another; one tries to generate data that seems as if it comes from a real data set, and the other tries to distinguish real examples from fake ones. The technique lets computers learn more efficiently from unlabeled data, and it can be used to create realistic-looking synthetic images and video (see “The GANfather: The man who’s given machines the gift of imagination”).
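The adversarial setup described above can be illustrated with a deliberately tiny sketch (this is a generic toy GAN, not Alibaba's or Google's code; all names and numbers here are invented): the generator is a single learnable offset on Gaussian noise, and the discriminator is a logistic regression, trained against each other with hand-derived gradients.

```python
# Toy GAN in plain NumPy. Generator: g(z) = theta + z, with z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(a*x + b). "Real" data is drawn from N(4, 1),
# so a successful generator should pull theta toward 4.
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN = 4.0
theta = 0.0              # generator parameter
a, b = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(3000):
    x_real = REAL_MEAN + rng.standard_normal(batch)
    x_fake = theta + rng.standard_normal(batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(a * x_real + b), sigmoid(a * x_fake + b)
    grad_a = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_b = np.mean(-(1 - d_real) + d_fake)
    a -= lr * grad_a
    b -= lr * grad_b

    # Generator step (non-saturating loss): push D(fake) toward 1.
    d_fake = sigmoid(a * x_fake + b)
    grad_theta = np.mean(-(1 - d_fake) * a)
    theta -= lr * grad_theta

print(round(theta, 2))  # ends up near the real mean of 4
```

Real GANs replace both one-parameter models with deep networks, but the alternating two-player structure is exactly this.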
Photo: Wang He/Getty Images
Gathering clouds
One advantage China’s tech companies have over their Western counterparts is the government’s commitment to AI. Smart cities that use the kind of technology found in Shanghai’s metro kiosks are likely to be in the country’s future. One of Alibaba’s cloud AI tools is a suite called City Brain, designed for tasks like managing traffic data and analyzing footage from city video cameras.
There are such experiments in the West too, such as Alphabet’s Sidewalk project, which plans to transform a suburb of Toronto with autonomous vehicles, delivery robots, and AI-based management systems. But China will most likely want to do things on a larger scale, which will give its companies an edge in the global marketplace for AI.
The Chinese authorities’ interest in using technology for social control also helps. There are plans for a “social credit system” that would track and score citizens’ everyday behavior with a view to perks or punishment. Face recognition software from Chinese companies like SenseTime is being used to find criminals in surveillance footage, and to track suspected dissidents.
Another advantage Chinese firms enjoy is access to vast amounts of data — because of China’s huge population — with relatively few restraints on how it can be used. Ant Financial’s Alipay, for instance, has more than 520 million users, and the company determines a person’s creditworthiness, in part, by examining his or her daily financial transactions and social connections. This wouldn’t fly in Europe or the US, where strict rules dictate what kinds of data can go into a credit score. But in regions like Africa, where China has a strong economic foothold, such technologies could become the norm.
Alibaba is already exporting AI technology. It is the world’s fifth-largest cloud-computing provider, behind Amazon, Google, Microsoft, and IBM, and its cloud machine-learning platform is available in several languages, including English. This week, Alibaba launched a version aimed at developers and companies in Europe; it also announced a new AI lab in collaboration with Singapore’s Nanyang Technological University.
In some places, Alibaba is arguably ahead of the competition. Last December, it announced a collaboration with the Malaysian government to provide smart city services, including a video platform that can automatically detect accidents and help optimize traffic flow.
AI with Chinese characteristics
So if the world’s AI is supplied by China, what sorts of values will it come with? In the West there is growing concern about issues such as biased algorithms and job losses to automation. That kind of debate is less often heard in China. Speaking at the World Economic Forum in Davos, Switzerland, recently, Jack Ma, Alibaba’s boss, acknowledged the risks that come with AI; but unlike its US counterparts, Alibaba isn’t involved with ethics groups like the Partnership on AI. And unlike, say, DeepMind, the AI-focused subsidiary of Alphabet, it doesn’t have an internal ethics division.
As China becomes more proficient in AI, it will help determine how the technology reshapes the world. And Alibaba will undoubtedly be an important part of this picture.
“Well before anybody used the term artificial intelligence in a business context, Alibaba was a major innovator,” says William Kirby, a China expert at Harvard Business School. “In my view, the company has done more to change the way business is done in China than anyone; they are ambitious on every front.”
Originally published at www.technologyreview.com on March 7, 2018.
Inside the Chinese Lab That Plans to Rewire the World with AI (MIT Technology Review)
Clustering Models in Overview (Unsupervised Learning)
Unsupervised machine learning is a set of algorithms that learn from unlabeled input data, meaning the data comes with no correct answer or degree of error that could serve as a reference for the algorithm. Common problems in unsupervised learning are Clustering and Association. This post is about clustering: it gives an overview of the models used in clustering and a basic understanding of how each algorithm works.
We talk about clustering when we want to find natural partitions of patterns in given data. It is about grouping a set of objects based on their characteristics and similarities. The following table represents the different ways of clustering.
Overview of Clustering models
Let us get a general understanding of how each of these models works.
Connectivity Models
The decision to merge clusters is based on their closeness, measured with a distance metric; this can be the Euclidean, Manhattan, or maximum distance. The model can use one of the two existing types of hierarchy.
Agglomerative clustering, where the algorithm starts with the individual objects, greedily merges the most similar objects into clusters, and continues until every object is assigned to one specific cluster.
Divisive clustering, where the algorithm starts with all objects in one big cluster, greedily splits the cluster in two, assigns objects to each group so as to maximize within-group similarity, and continues until every cluster contains only one object.
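As a rough illustration of the agglomerative (bottom-up) variant, here is a minimal pure-Python sketch using single-linkage Euclidean distance; the sample points and target cluster count are made up for the example, and a real implementation would use an optimized library:

```python
import math

def single_linkage(ca, cb):
    # Single-linkage: distance between two clusters is the distance
    # between their closest pair of points (Euclidean here; Manhattan
    # or maximum distance would work the same way).
    return min(math.dist(p, q) for p in ca for q in cb)

def agglomerative(points, k):
    # Start with every object in its own cluster (bottom-up).
    clusters = [[p] for p in points]
    while len(clusters) > k:
        # Greedily find and merge the two closest clusters.
        i, j = min(
            ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
            key=lambda ab: single_linkage(clusters[ab[0]], clusters[ab[1]]),
        )
        clusters[i].extend(clusters.pop(j))
    return clusters

# Two obvious groups of points; we stop merging at k = 2 clusters.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
clusters = agglomerative(pts, 2)
```

Stopping at a chosen k is one way to cut the hierarchy; running the loop down to a single cluster instead would produce the full merge tree (dendrogram).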
Centroid Models
Centroid models are probably the most used clustering models, thanks to the K-Means algorithm, which is simple to understand.
We decide how many clusters k we want.
2. The algorithm places k centroids randomly in the data.
3. The algorithm iteratively computes the distance between each data point and each centroid.
4. Each point is assigned to the cluster of the centroid with minimal distance.
5. Centroid positions are recomputed: the new position is the mean of all points assigned to the cluster in step 4.
6. Steps 3, 4, and 5 are repeated until no data point moves from one cluster to another.
After the clustering is finished, each data point belongs to exactly one cluster. That is what we call Hard Clustering.
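The steps above can be sketched in plain Python. This is a toy implementation with made-up data points, not production code (for real work you would use something like scikit-learn's KMeans):

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    rng = random.Random(seed)
    # Step 2: place k centroids randomly (here: at k random data points).
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Steps 3-4: assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Step 5: move each centroid to the mean of its assigned points.
        new_centroids = [
            tuple(sum(coord) / len(c) for coord in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
        # Step 6: stop once assignments (and hence centroids) no longer change.
        if new_centroids == centroids:
            break
        centroids = new_centroids
    return centroids, clusters

# Two well-separated groups; k = 2 recovers them as hard clusters.
pts = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
centroids, clusters = kmeans(pts, 2)
```

Each point ends up in exactly one of the returned clusters, which is the hard-clustering behavior described above.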
Distribution Models
One popular example of this model is the Expectation-Maximization algorithm.
We decide how many clusters k we want.
2. The algorithm places k Gaussian distributions randomly, with random means and variances. These k distributions are interpreted as clusters.
3. Expectation step: evaluate the conditional probability of each data point in order to find out how likely it is to belong to each cluster.
4. Maximization step: use the conditional probabilities of the points to recompute the mean and the variance of each Gaussian distribution.
5. Repeat steps 3 and 4 until convergence, which means the probabilities of each point belonging to the clusters no longer change.
After the algorithm is finished, we can assign each point to clusters probabilistically. A data point could, for example, belong to one cluster with 80% probability and to another with 20%. This is what we call Soft Clustering.
Density Models
The algorithm creates clusters based on dense areas in a d-dimensional space. Clusters are separated by areas of lower density. The following is the idea behind DBSCAN (Density-Based Spatial Clustering of Applications with Noise).
Choose the two important parameters: the maximum radius of the neighborhood and the minimum number of points in a neighborhood.
Randomly pick a data point that has not been visited, and determine whether it is a core point; that is, find out whether its maximum neighborhood contains at least the minimum number of points. If not, label it as an outlier.
Repeat step 2 until the picked point is a core point, then add all directly density-reachable points to its cluster. If a point previously labeled as an outlier is added to the reachable points, relabel it as a border point.
Repeat steps 2 and 3 until all points are assigned to a cluster or labeled as outliers.
PS: Point b is directly density-reachable from a if a is a core point and b lies within a’s maximum neighborhood radius.
Only after the algorithm has finished do we know how many clusters we have.
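The DBSCAN idea above can be sketched in plain Python. Here `eps` stands for the maximum neighborhood radius and `min_pts` for the minimum number of points; the sample data is made up, and a real implementation would use a spatial index instead of the brute-force neighborhood query:

```python
import math

def dbscan(points, eps, min_pts):
    # labels: None = unvisited, -1 = outlier, 0..n = cluster id
    labels = [None] * len(points)

    def neighbors(i):
        # All points within the maximum neighborhood radius (including i itself).
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1       # not a core point: provisionally an outlier
            continue
        labels[i] = cluster      # core point: start a new cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # reachable outlier becomes a border point
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs_j = neighbors(j)
            if len(nbrs_j) >= min_pts:   # j is itself a core point: keep expanding
                queue.extend(m for m in nbrs_j if labels[m] is None)
        cluster += 1
    return labels

# Two dense groups plus one isolated point, which stays labeled -1 (outlier).
pts = [(0, 0), (0.5, 0), (0, 0.5), (5, 5), (5.5, 5), (5, 5.5), (20, 20)]
labels = dbscan(pts, eps=1.0, min_pts=3)
```

Note that the number of clusters is not an input: it falls out of the density structure, which is why we only know it once the algorithm has finished.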
I hope this helps you get an overview of clustering. If you want to see all of this in practice, you can visit the following link.
2.3. Clustering - scikit-learn 0.19.2 documentation (scikit-learn.org)
By Salohy Miarisoa (Fun with data)
EMPEX Speaker: Jeff Smith
Jeff will be speaking at EMPEX on May 19th about “Neuroevolution in Elixir.” He is an AI developer, author, and manager. He coined the term reactive machine learning and wrote the definitive text on the topic.
Andy: Hi Jeff — you spoke at our Halloween event a couple years ago, welcome back! Tell us a bit about what you’ve been up to since then.
Jeff: Sure. I like to do a lot of different things and I think at the time I spoke at the Halloween event, I was managing a team building a conversational AI, that was working primarily in Scala, and I was doing explorations outside of work in Elixir to try to understand this new technology, how could it be applied to problems that I care about.
When I left x.ai, I left to start an Elixir-first conversational AI company called John Done, with one of my colleagues from x.ai. At that company we raised a bit of money to build this idea of building a vocal conversational intelligent agent that operated over the telephone. The most widely deployed vocal platform in existence, right? So the idea was very similar to x.ai, where we were trying to build a conversational intelligent agent that does real work for users, that takes on the responsibility to go into the world and accomplish things.
We started with Elixir first there really because of our experiences in building high availability systems and needing to be able to iterate rapidly while still adding fairly sophisticated features and integrate with a lot of web systems. Our two main concerns were that we were going to be operating with messaging systems (this was facebook messenger or slack type interaction or voice interaction) and the telephone system. Both of those were examples of areas where Erlang has a lot of history there and is very well designed to suit the challenges of messaging systems and telephony.
We built our POC using Elixir and it was straight forward from there, using a lot of integration with external systems, using standard web technologies that allowed us to get around some of the inherent limitations of Elixir not being the first language people write client libraries in to integrate with their systems. We built bespoke integrations with the AI system, with telephony systems but always integrating with them using http or websockets, building our own transports, channels and things like that using Phoenix, and that all worked pretty well.
One of the things that drove that startup is that we were able to get something working that we knew for a fact was incomplete, we were doing these rapid cycles of a small dev team, raising funding, acquiring users, building all this up from the ground. All sorts of use cases that didn’t work at a given time. We knew processes might fail, supervision mechanisms would automatically restart things and allow things to recover, allowing us to be able to resume conversations successfully. So that was…
Andy: No, certainly that’s quite a lot to do.
Jeff: I ended up leaving that company, and I’ve since moved on to managing a development team in a much larger organization building another conversational AI…
Andy: So you’re kind of a serial… [laughing]
Jeff: I’ve made a few AIs, a large proportion of them are conversational AIs. I have kind of a niche. Now I work down the street at a company called IPsoft that builds a conversational AI called Amelia. I’m working on a lot of the same sorts of problems, and in this case I’m managing a large development team. I’m not actually working as a developer myself.
Andy: A little bit different.
Jeff: Yeah, it’s similar to how I functioned at x.ai where my role was to find a way that we could organizationally succeed at building technology that has never really existed before. As part of that I’ve tried to carve out space to do open source work that allows me to explore what I think are important areas in AI development, and I still think that Elixir has a role to play within AI and ML [Machine Learning] that I don’t think is obvious to everyone.
In the world that I live in where I’m constantly thinking about highly concurrent conversations occurring across a large range of different modalities and on different platforms and languages, whether it’s voice or text, my world’s filled with challenges which I would like to have the ability to have the sort of tools I had when building on the BEAM.
I think that most ML developers are working without tools that have that ability. I’ve been pressing on that gap in open source, how can I open that crack a little bit wider. I feel like I’m a member of these two tribes that don’t actually talk very much: Elixir/Erlang, and this world of deep learning AI folks building things in Python and C++, that have to deal with problems of concurrency and high availability, distribution, and all those sorts of things.
Andy: When you frame it as a lot of concurrent conversations and processes then yes, it’s easy to see Elixir or Erlang playing a role there; which is absolutely different from typical image of deep learning guys digging into Python. It’s interesting to hear.
Jeff: Yeah, yeah, I gave a talk last night where I tried to talk to a community that has spanned Scala folks, Elixir folks, and ML folks, and I tried to focus on some of these toolings similarities, to bring people together a bit more.
I think a sort of mismatch in focus that the ML community often has, that kind of ignores this problem, is that we in ML have always focused on one part of our architecture above others, and this is something I talk about in my book as well as with Sean Owen who’s really a leading figure in ML architectures, founder of Myrrix, creator of Mahout and Oryx and long time head of data science at Cloudera. He and I had this experience, and I talk about it in my first book: ML development teams often don’t think about what to do with an ML model once it’s published to production. Instead we’ve historically spent so much time thinking about “how do we train it,” “how do we actually learn from data,” which is a really important, challenging problem, but it’s not the only one.
We actually want to serve these models, in production, to a large range of users, which leads us into those concerns around concurrency and availability. There’s this sort of misplaced emphasis on training over serving (or inference).
Issues of where in the system we should focus our development have been a big part of my focus, in my professional work as an ML engineer. This concern led to my work designing example reference architectures for my book, Reactive Machine Learning Systems. This is what I’m really striving to do now, to build things that I haven’t seen other people build, around really mating up the world of deep learning and bleeding edge ML techniques to those real world needs about what happens after you’ve learned the model. How do you work with AI and a large active userbase? How do you interact with valuable functionality and guarantee the absence of problems?
Andy: Today, people think AI systems are computerized assistants, digital assistants, like hey Google, and Siri and Alexa. I would guess those are serving pretty large user bases… I don’t know how much you know about their actual production systems and what they do for…
Jeff: Yeah, so I’d say that some things can definitely be taken as a given. Which is, whether we’re talking about stacks that include a vocal component like the smart speakers, or things that are more about pure messaging, that level of language understanding, we’re definitely using deep learning throughout the field. It’s been critical to getting commercial grade automatic speech recognition and speech synthesis, giving our speakers the ability to talk. But it’s also absolutely crucial to having any sort of conversational AI, even in text.
Deep learning is at the heart of that, even if it’s not strictly speaking the only technique that solves everything. There’s still more niche components, things like conditional random fields, something that will come up in some sub-problems within NLP [Natural Language Processing]. They’re using DL [deep learning] models across the industry that spans big tech and small tech the same.
Even if you’re a scrappy one man startup you’re still probably grabbing some models running them via tensorflow or grabbing something off of GitHub. If you put yourself in the position of someone who’s not Google, presumably you don’t have armies of engineers…
Andy: …most people don’t…
Jeff: To build these previously unheard of systems with incredible availability surrounding them to serve at scale. Some of this is actually really hard to do. You see a simple iPython notebook that trains one model and shows you how to use it. It’s just a terminal session where one user utterance, a string of a sentence, is sent into the model and it returns a sentence score which it extracts out the entities.
That’s not your real world where you’re trying to keep your customers happy. You need to be running a high availability service to do that, and the open source tooling for model serving is pretty weak.
There’s more progress from some of the cloud vendors who actually try to support this; there are model serving platforms available from Google, Microsoft and Amazon. And they’re useful, they’re a start down that road. Most ML systems I’ve worked with in the real world are more complex and more baroque than any of the building block uses you’re going to be able to grab from one of these cloud vendors. So even when they take some of that pain off your plate — like maybe they help make model training work a little bit better — there’s all sorts of real work you’re going to need to do that’s your engineering team’s problem. In which case, I feel like the current state of the art in working with deep learning frameworks doesn’t make anyone’s life easier.
The choice of Python as the lingua franca for user APIs has resulted in pretty poor discoverability of the proper use of parameters when interacting with DL frameworks. Without a static type system I can’t really ask, “What is an appropriate call to this method? Can I only pass in values between 0.0 and 1.0, or could I put in a 1.2?” In fact, in most Python DL APIs you discover that by trying it yourself, or maybe reading the docs, if there are docs.
Andy: [laughing] …if there are. Right, right…
Jeff: There’s a reason why the bleeding edge looks like this. This isn’t bad engineering; this is moving forward the capabilities of computer science. A human’s ability to work with technology that imitates human intelligence. The bleeding edge has these sort of static properties.
Andy: There’s a reason it’s the bleeding edge.
Jeff: Right, but what can we do? Right? A static type system isn’t the only way to solve that. Another way is to be able to respond to failure. To look at what would happen if we could let it crash. That’s the direction I’ve been trying to plug away at. And I’ve seen some ability to mate up those worlds of uncertainty around what my Python implementation might be able to handle, what it can do, what’s gonna happen half an hour into the training cycle when it hits a value it didn’t expect to see.
Those supervision mechanisms that descend from Erlang and OTP. They can solve some of those problems, they can give us the ability to continue to achieve the part of the mission which is still achievable. Keep learning, keep serving the users, which can pass us useful data, this is achievable, but I think this is a technique that is not being widely exploited. I think most folks who are in this situation are trying to hack it together with docker and kubernetes, which gives you very little ability to reason about these things at the level of application logic, because those are infrastructural tools.
The fact is some of these are ML problems, they’re maybe even specific to your domain, something about conversational interactions and you want to be able to do them in your code, and make use of your business rules, and decide how to respond to particular failures. I think Elixir has been a great tool for allowing me personally to explore it, and I would like to see if there’s ways to build more general reusable tools to apply the unique capabilities of that platform to the challenges of DL.
Andy: So in working with Elixir recently, are there any kind of new surprises or new elements that you’ve come across; maybe new things in recent releases of the language, or new bugs; surprises that you didn’t expect?
Jeff: On a day to day basis I love the formatter. [laughing] It really makes me happy. Because it’s such a simple little thing but I’m glad we could reach some sort of reasonable agreement on this. There’s a baseline and it keeps us happy. And it’s a somewhat different direction I think than Python and Go both took, and it’s workable for me. I’m glad I have it in my codebase and it’s enforced by CI builds that I set up for myself. That’s great.
At a kind of high level functionality I personally am still getting my head around the proper way to use tasks. I think they’re pretty core to some of the problems that I like to work on, and I have a lot of experience of working in the plague of different task implementations within the Scala community. This is something that, in Scala land, we worked on for a long time and there are a lot of different competing task implementations with different properties. That resulted in a combination of that community fragmentation which, in combination with a very typeful statically typed workflow, results in a lot of incidental complexity for developers simply trying to abstract computation over time in tasks.
I find the Elixir implementation so far is pretty productive, while at the same time I’m still personally working down my learning curve and how to employ it correctly with supervision mechanisms. I would expect that if someone took a look at, for example, the Galápagos Nǎo repo, I think someone could probably file a decent issue PR or something to improve the way that I work with tasks and supervision, but I think this is important stuff to do, and this is the harder stuff. I think this is something you see as a great reason to adopt toolchains that are working on these problems. Because a lot of real world problems that I’m familiar with have the shape of a task requiring supervision of some sort. This is a powerful technique, and it’s great to see what I would say is focussed development from the language community, on a given implementation, that we all agree we want to improve and invest in further. Not that tasks should be rigid, but that we should agree that tasks are tasks and lets not have 12 types of tasks that don’t interoperate. That’s made me happy.
Andy: Have you played at all with, or maybe it doesn’t fit into your use cases for it, some of the GenStage/Flow stuff?
Jeff: Yeah, that looks interesting to me. I think that’s closer to some of the more data engineering tasks in model learning pipelines and things like that…
Andy: That’s kind of what I was thinking of… if you’re feeding a lot of data into a model it potentially would fit in there but I’m not sure …
Jeff: Yeah, I would say that right now I think there are use cases within the sorts of ML problems that I work on and I think it’s something that would be worthwhile to explore. I haven’t gotten a chance to do as much with it as I would like to. I do like the richness of the range of different ways that we can think about our data flow within the Elixir toolchain. I think that makes a great argument for people who are trying to understand really how to mate those things which are offline and heavily compute intensive with the online and very concurrency and latency focussed.
Those are definitely some of the challenges that I encounter in trying to build ML systems with other toolchains like JVM and Python. And there can be pretty dramatic switches in how much the toolchain supports some of those workflows, when you’re using tools that are really focussed on one of those two modes, you know, it’s either batch mode offline vs realtime. But I feel like with Elixir, it’s starting to show a lot of those properties of what I would call good front-end engineering; modern JS toolchains are going in this direction as well and really thinking about abstractions for data flow that we can use consistently across contexts, and worry a little bit less about changing our programming model when we change the context in which we’re actually performing those data transformations.
Andy: You’re playing with Elixir and investigating Elixir; is your team actually using it now in any production?
Jeff: So at IPsoft right now that is not the toolset we’re using. This company’s quite old, the product is significantly younger, but the focus right now has been on serving very large enterprise customers, so the methodology that we’ve used begins with a lot of classical enterprise Java techniques, which has a lot of the expected limitations that you might imagine.
In particular, one thing that I’ve found to be true both at IPsoft and x.ai, and talking to other friends with different toolchains, like say full-stack JS, and talking to the folks at Hugging Face [they make a great conversational AI for teens and tweens; it’s a sort of AI friend]: almost everyone’s in this position where we have no alternative but to use the latest and greatest Python DL tools, and then deal with the consequences of trying to incorporate that into a live production application.
And so it doesn’t really matter where you start or what you’re building, this Python issue is becoming pervasive. Very few people are actually working on real solutions allowing us to work across language toolchains and to use tools which allow developers to use the right tool for the right job.
One of the things I hope to talk about at EMPEX is the importance of using open interchange formats that allow us to break down some of these language barriers, because I’m not really an Elixir zealot, or a Scala zealot, I’m a guy who likes to build things. I want to use my full range of capabilities and the entire range of capabilities that the tech community has created. So I’m just trying to find ways to break down walls and build better things that can be too hard, for reasons of incidental complexity.
There are folks who are moving in that direction. Two things I’m going to be talking about that open the door to that are 1) Apache MXNet, which is a fairly new DL framework. It’s being supported primarily by Amazon right now; and 2) I’m also excited by another Open Source project ONNX. Which is the Open Neural Network Exchange format. There’s pretty broad cooperation around the industry in trying to get DL frameworks and technologies to interact. So with ONNX, you see Amazon, Facebook, Microsoft, Baidu, Nvidia, all these other companies, collaborating on finding ways to use different DL toolchains and have them pass data back and forth using language-agnostic schemas. ONNX at its simplest level is just a proto-buf schema that you can build code against in different languages and different toolchains.
Getting back to the previous technology, Apache MXNet is a polyglot DL framework. It’s starting off trying to find a way to build DL interface technology in not just Python, but also in Scala, in Julia, in R, in Go, so that we can start to have this world where all developers can become ML developers. That’s the world that I think is definitely happening, though we’re still in the early days of people doing the hard work of opening those doors up. I’ve seen a small amount of activity of people opening up ML technology to JS, and that’s the future…
Andy: everything can be done by some node module, node can do anything right? (laughing)
Jeff: Yeah, it’s all so much harder once you get into a situation like, I want to implement that bleeding edge paper, it just achieved something no one else achieved. It was posted last month and there’s one reference implementation, it’s in Python, it uses tensorflow. We can interoperate across this, these are solvable problems, these are not easy but these are worthwhile because this broadens the community.
This makes it possible for ML and DL to not be this secret priesthood of folks only in one small collection of companies and academic institutions. These are capabilities anyone in the field of CS has the ability to use and this greater democratization through sharing data and tooling, through simple interop mechanisms, is going to have a major impact in the shape of products we’re going to be able to build in the future — increasingly with small scrappy teams of folks, who have bright ideas and just want to run with them.
Andy: Looking at the wide range of submissions we got for talks, there was a lot of interest in your topic, but one of our concerns was “yeah but is it going to be a lot of Python and here’s a line or two of Elixir to show you how to call all this Python”, this is an Elixir conference so… this idea of democratizing and opening up the access to DL tools and toolchains…
Jeff: Yeah, I think it’s an important topic and after this interview right now, I’m going to go talk to Amazon’s DL team, who are the primary financial sponsors of the Apache MXNet project.
They are the best example of anyone trying to actually do this, to say there’s a world full of developers using a whole bunch of tools for entirely different reasons, many of them good ones. How can we build scalable technology with the resources of Amazon, that embrace the world of developers, not just folks who decided that it’s ok to solve all of these problems in Python?
I think that this idea is not evenly distributed. Not everyone is focussed on this part of how we can move DL forward; but if you look at the things people say we want to build in the future, like intelligent devices, iOT sorts of things, smart cameras… to get to a world where we actually see robots doing real work on a daily basis, we’re gonna have to use a broad range of tools.
These are hard problems, if you talk to folks working on embedded systems, or applications like drones, which need to do things like absolutely stay in the air, you don’t use the same tool for that as you might use for a data collection form on a website. We need to be able to have development toolchains and workflows that allow us to approach that stuff. Eventually we’re going to get to that point where folks with the relevant skills to solve those domain problems are using tools appropriate to embrace the most powerful capabilities that come out of AI as a field. It’s not a niche, it’s not specialty, it’s the same as… you don’t encounter folks who treat databases as something specialist … “oh no that’s a different field, other people know stuff about databases, I’m doing mobile.” Everyone does something with databases, everyone can put their data somewhere and get it back… you have opinions about how to do it well.
Andy: And I think you see that, historically, through a lot of CS fields — at first there’s db specialists, other specialists, there’re ops teams; then devops, and everyone knows databases… and eventually ML is going to go that way at some point.
Jeff: Yeah. Maybe we won’t all be writing academic papers in our free time.
Andy: Probably not.
Jeff: But we should be able to autonomously learn from the data that comes into our system. How to use it to make decisions that we can encode. This stuff has been in the works for more than 50 years now. This is an important goal of our field and it’s finally coming to fruition. The maturity of that is something that’s going to benefit us all, in ways that are not all foreseeable now but are important that we fully embrace, as the concern of the whole technological community.
AI’s going to change a lot of things; it’s going to be the most effective solution to a lot of people’s problems, so as a software engineer, as someone who cares about technology, I want to work on that. I want to see how that can help me do things better. As a manager who works in a large organization, I want to understand how I can help others work down that learning curve, master technologies. I absolutely don’t want to feel like there’s a specialist group within a larger team who has this knowledge that you couldn’t possibly get anywhere else; that these are the only people who can solve this particular subproblem. Because this is all still software.
These are all part of our shared responsibilities. When developing a solution, we can all build ML systems. There’s no guardian at the gate. There’s nothing you have to do. You can go back and write a DL model to recognize handwritten digits. It’s possible, all the tools are out there, and I think we’re only going to make this easier for everyone.
Andy: We’ve covered this a bit, but obviously for EMPEX one of our goals is to enhance, build, and enrich the Elixir community. You’re already in the position of trying to bridge communities and grow them together. What would you like to see worked on, built, or focussed on in the Elixir community?
Jeff: Hmm… Yeah, it’s a good question. When I think about Elixir, I think the strengths are so strong… the things that Elixir does well have been pretty much amazing since the first time I saw it. The high productivity of working with mix as a build tool, Phoenix blew me away, and I was really impressed with all the things that were so easy to do and so productive, and on and on, you know, working with messaging systems, the high availability and supervision stuff is great.
I guess from where I sit, some of the things that I think are opportunities for us, or ways to improve, are those things that make particular toolchains popular for specific domain problems. I think about a lot of numerical computing use cases. This is an area where we continually develop new tools, new languages, new frameworks, and we will keep doing so forever. It’s not a great story for Erlang and Elixir right now.
Andy: No, it’s not.
Jeff: But I don’t think that it has to stay that way forever. Part of the way that Python took as much of the market share for numerical computing as it did is this thing called Cython, a dialect of Python that compiles down to C and produces extremely efficient code for numerical computations. Which is interesting, because it means the reason Python is so good for working with data is not really a feature of Python at all. Eventually more sophisticated tools get built on top of it like numpy, scipy, and pandas, then scikit-learn, and tensorflow and on and on. There’s this virtuous cycle that’s occurred there.
But numerical computing is a really important problem that occurs not just here in the financial district, where people want to crunch up stocks and bonds and make money — my previous field was in bioinformatics and we did a whole bunch of data crunching there to do things like figure out how cancer works and how humans are different and seeing if we can make life better. There are important numerical computing problems out there and many of them are still quite poorly served by the tools that we have today.
ML is a little bit spoilt for choice as long as you stick to the Python toolchain, but there’s a rich domain of numerical problems where I feel I would love to have a broader range of tools, particularly ones that have all the incredible properties of BEAM technologies, coupled with the high productivity and conceptual coherence of working with Elixir.
That’s something I think about a bit, and there are a few folks who’ve already tried to pave the way in this respect. There’s a bioinformatician somewhere in Europe whose name I forget; he built a language called Cuneiform, which is meant to be bioinformatics glue code built on top of Erlang. What it does is deal with the fact that all these bioinformatics workflows actually use a bunch of command line utilities: a bit of R here, a bit of perl there, a bit of bash there — and it provides a nice way to glue those together in a sort of DSL, built on top of Erlang. And this is still the reality for people working in biomedical fields. Their toolchains are so fragmented and they have huge, important datasets. They’re getting great genomic data…
Andy: …twined and duct taped together…
Jeff: Right, that situation makes how the internet works seem elegant by comparison. If you’ve ever looked at why every browser pretends to be Mozilla or something, it’s like that, just times 100, with the weird arcane formats, or some guy’s paper from 1997 — those things are pervasive within bioinformatics because so much of it occurs within academia, and the profit motives aren’t there in the same way that they are for something like serving ads.
There’s a bunch of exciting, truly unsolved, painful problems that exist, especially if you think about what we do with biomedical knowledge. Well, what we do with biomedical knowledge increasingly is try to build it into other sorts of solutions. I worked in diagnostic technologies for a long time; there are a lot of possibilities to build useful computing technologies which solve meaningful biological and medical problems and need to have really good properties — that they run forever, that they fail in knowable and consistent ways, that they do many things at once — and those opportunities are also under-served. I think that’s the kind of whitespace I’d like to see the Elixir community attack more, building on those very strong areas that everyone already knows about Elixir.
Andy: Good answer. Thank you so much for taking the time to talk to us today, Jeff, and we’re excited to see you speak at EMPEX NY 2018!
I really appreciate Jeff taking the time to talk with me. Please don’t forget to get your ticket to the EMPEX conference to be held on May 19th in Manhattan. Say hello if you see me!
|
EMPEX Speaker: Jeff Smith
| 19
|
empex-speaker-jeff-smith-16e4bf335b59
|
2018-06-17
|
2018-06-17 13:38:03
|
https://medium.com/s/story/empex-speaker-jeff-smith-16e4bf335b59
| false
| 5,314
|
Blog Posts from the Empire City Elixir Conference
| null | null | null |
empex
|
info@empex.co
|
empex
|
ELIXIR,WEB DEVELOPMENT,TECHNOLOGY
|
empexco
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Andy McCown
| null |
6ec33572041d
|
andy_mccown
| 3
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-29
|
2018-05-29 13:28:55
|
2018-06-01
|
2018-06-01 14:06:05
| 9
| false
|
en
|
2018-06-02
|
2018-06-02 02:58:14
| 4
|
16e4d5c8776a
| 4.569811
| 8
| 0
| 1
|
The intersection of probabilistic graphical models (PGMs) and deep learning is a very hot research topic in machine learning at the moment…
| 1
|
Probabilistic Graphical Models: Fundamentals
The intersection of probabilistic graphical models (PGMs) and deep learning is a very hot research topic in machine learning at the moment. I collected different sources for this post, but Daphne Koller’s Coursera course is an outstanding one. Everything you need as background is given by my first two posts (first, second). Please revisit them if you don’t understand a point, or just comment at the bottom of this page and I’ll answer.
A probabilistic graphical model is a representation of the conditional independence relationships between its nodes.
Nodes are random variables. When we shade them, it means they have been observed, i.e., we have data for them. If nodes are blank, they are unknown and we call them latent or hidden variables.
Basic PGM with three random variables
Conditional independence means, for the above graph, that observing a would not be influential at all for c if we have observed b. To express this more formally: c is conditionally independent of a given b.
We very often put a plate around some of the variables; a plate denotes, very generally, repetition. The N indicates how often we perform that repetition.
The most important characteristic of PGMs is that a graph can be translated into a joint distribution. Since any PGM is a representation of conditional independences, we can write the joint distribution for the above graph as follows:
The bold b and c indicate that we have vectors here instead of scalars. That’s because we repeat b and c N times, and then stack the scalars we get for each repetition into a vector.
Look into my Fundamentals 2 again, if you have difficulties understanding this formula. There is a section about independence and it explains everything you need to know here.
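As a concrete numerical sketch of this graph-to-distribution translation, assuming the chain a → b → c from the figure and entirely made-up probability values, the joint distribution can be assembled from its factors in a few lines of Python:

```python
# Chain a -> b -> c, so the joint factorizes as p(a, b, c) = p(a) p(b|a) p(c|b).
# All numbers are illustrative, not taken from the post.
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}  # p_b_given_a[a][b]
p_c_given_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}  # p_c_given_b[b][c]

def joint(a, b, c):
    """p(a, b, c) read off directly from the graph's factorization."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# Sanity check: summing the joint over all assignments gives 1.
total = sum(joint(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
```

Writing the joint as a product of one factor per node is exactly the translation from graph to distribution described above.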
Now, you may ask yourself why the joint distribution is of importance. Let us rewrite our beloved Bayes’ theorem and you’ll see it directly.
There is the joint distribution p(b, c) that you can calculate with the PGM.
Another important factor is p(b). We often speak of model evidence, or marginal likelihood, when we want to describe it. It can be calculated as follows:
So, why do we do all that?
Remember that we have some observed variables and some latent variables. In many scenarios, the distribution of the latent variables is of interest, so we want to find a method to calculate it. Bayes’ theorem describes exactly that procedure.
Let us look at the following graph and things might become clearer.
We observe b, but do not know a and c. Take the direction of the edges into account and you’ll notice that a does not matter at all for calculating c if we observe b. What is really of interest is c. So, how do we calculate the probability of c? Exactly: with p(c|b). Calculating this probability is called inference. There are multiple ways to calculate it, but we’ll cover these in the next post.
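A minimal sketch of that inference, reusing the chain a → b → c with invented numbers: compute p(b, c) by marginalizing out the latent a, then divide by the evidence p(b). For this particular graph the result collapses to the local factor p(c|b), which is precisely the "a does not matter" observation above.

```python
# Inference of p(c | b) in the chain a -> b -> c (illustrative numbers).
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}

def p_bc(b, c):
    # p(b, c) = sum_a p(a) p(b|a) p(c|b): marginalize out the latent a.
    return sum(p_a[a] * p_b_given_a[a][b] for a in (0, 1)) * p_c_given_b[b][c]

def p_b(b):
    # Evidence (marginal likelihood) of the observed variable.
    return sum(p_bc(b, c) for c in (0, 1))

def posterior_c(c, b):
    # Bayes' theorem: p(c | b) = p(b, c) / p(b).
    return p_bc(b, c) / p_b(b)
```

For larger graphs the marginalization no longer cancels so neatly, which is why dedicated inference algorithms exist.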
As a side note, a quick word on terminology: the aforesaid “a does not matter at all” is simply conditional independence along a chain. The related phenomenon called explaining away arises in the opposite configuration, where observing a common child of two parents makes those parents dependent.
Example
So far, we’ve only taken three variables into account, but you can draw PGMs for problems with many more variables. See, for example, the following.
x5, x6, x7 could be what you’ll eat for breakfast, lunch and dinner respectively, and all z’s are the variables influencing it. For example, z1 could be weekday or weekend, z2 meeting out of office or not, z3 coming home late or not, and so on.
Proving independence here might be a bit of a struggle, but it is actually not that difficult either. There is an algorithm called d-separation. Watch Daphne Koller’s lecture about it and you’ll understand it.
Discrete models
What I just described in the above example was a discrete model: our variables can take only integer values. We have also discussed that in Fundamentals 1, so you might already know it.
What we can do with a PGM that assumes only integer values is define a conditional probability table (CPT). For a simple PGM like the one you see below left, the CPT could look like the one below right. It gives us, for example, the probability that A = 0 given B = 0 and C = 1, which we write as p(A=0|B=0, C=1). It is 0.3.
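The article’s actual table isn’t reproduced here, but a CPT of this shape is easy to represent as a lookup keyed by the parent assignment. The one value quoted in the text, p(A=0|B=0, C=1) = 0.3, is kept; the other rows are invented:

```python
# CPT for p(A | B, C): one probability row per assignment of the parents (B, C).
cpt_a = {
    (0, 0): {0: 0.6, 1: 0.4},
    (0, 1): {0: 0.3, 1: 0.7},  # p(A=0 | B=0, C=1) = 0.3, as quoted in the text
    (1, 0): {0: 0.8, 1: 0.2},
    (1, 1): {0: 0.5, 1: 0.5},
}

def p_a_given_bc(a, b, c):
    return cpt_a[(b, c)][a]

# Every row of a CPT must be a valid distribution over A, i.e. sum to 1.
rows_ok = all(abs(sum(row.values()) - 1.0) < 1e-9 for row in cpt_a.values())
```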
Continuous models
As a counterpart to discrete models, there are continuous models. We have already spoken about them in our Fundamentals 1, so please go back, if you need a refresher.
What the CPT is for discrete models, the conditional probability function (CPF) is for continuous ones. Our PGMs then look slightly different, but we can still understand them quickly if we know what all the variables mean.
Let’s assume one of the variables in the above graph of the discrete model, say B, is continuous. What parameters do we need to define a continuous variable, which is nothing other than a distribution (for the sake of simplicity, let’s assume a normal distribution)? Exactly: mean μ and standard deviation σ. We simply take these two parameters as additional nodes and we’re set up.
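A sketch of such a conditional probability function, assuming (as in the text) that B is normal and that the parent’s value selects the parameter nodes μ and σ; all parameter values here are invented:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and std sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# The parent's value picks out which (mu, sigma) parameter nodes apply,
# so p(B | parent) is a different Gaussian per parent assignment.
params = {0: (0.0, 1.0), 1: (2.0, 0.5)}  # parent value -> (mu, sigma)

def p_b_given_parent(b, parent):
    mu, sigma = params[parent]
    return normal_pdf(b, mu, sigma)
```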
That was it. We have now understood the basics and can progress to the part that really matters: how can we calculate the latent variables when we play with huge amounts of data and parameters, as we typically do in deep learning?
|
Probabilistic Graphical Models: Fundamentals
| 68
|
probabilistic-graphical-models-fundamentals-16e4d5c8776a
|
2018-06-03
|
2018-06-03 08:46:24
|
https://medium.com/s/story/probabilistic-graphical-models-fundamentals-16e4d5c8776a
| false
| 893
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Felix Laumann
|
helping you with the first steps into probabilistic deep learning | Research Scientist at NeuralSpace
|
57f955c90c95
|
laumannfelix
| 96
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-30
|
2018-06-30 13:18:46
|
2018-07-05
|
2018-07-05 13:40:11
| 4
| false
|
en
|
2018-07-05
|
2018-07-05 13:40:11
| 0
|
16e4f57f0d0b
| 1.741509
| 0
| 0
| 0
|
This is our fifth Saturday out of 16. We are setting our stones in gold. Thanks to God.
Gen pix
We learnt about logistic regression with a…
| 5
|
.......and here we are again…… 😊😄😆
This is our fifth Saturday out of 16. We are setting our stones in gold. Thanks to God.
Gen pix
We learnt about logistic regression with a neural network mindset to hone our intuitions about deep-learning.
In actual fact, we built the general architecture of a learning algorithm. We also learnt how to initialize parameters, starting with zero initialization (however, this is not recommended), and how to calculate the loss function and cost function on the training set.
We performed gradient descent using an optimization algorithm that iteratively updated our weights (w) and bias (b). I almost lost it because of my bugs, 😅 they were seriously pesky; most of them came from matrix/vector dimensions that didn’t fit. Geez!!!!!😒.
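For readers following along, the update described here can be sketched in a few lines of NumPy. This is a minimal, made-up reconstruction (toy data, zero initialization, sigmoid output), not the course’s actual notebook; note how keeping the (features, examples) matrix layout straight avoids exactly the dimension bugs mentioned above:

```python
import numpy as np

# Logistic regression with a neural-network mindset: sigmoid output,
# cross-entropy cost, gradient descent on weights w and bias b.
rng = np.random.default_rng(0)
X = rng.normal(size=(2, 100))                       # (n_features, m) layout
Y = (X[0] + X[1] > 0).astype(float).reshape(1, -1)  # toy labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros((2, 1))  # zero initialization (fine for logistic regression)
b = 0.0
lr, m = 0.1, X.shape[1]
for _ in range(500):
    A = sigmoid(w.T @ X + b)            # forward pass, shape (1, m)
    dZ = A - Y                          # gradient of the cost w.r.t. z
    w -= lr * (X @ dZ.T) / m            # (2, m) @ (m, 1) keeps w at (2, 1)
    b -= lr * float(np.sum(dZ)) / m

train_accuracy = float(np.mean((sigmoid(w.T @ X + b) > 0.5) == Y))
```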
The ReLU activation function was used for all our hidden layers because it is more stable and the number-one go-to choice, unlike sigmoid, which we used at the output layer.
At the end,
Our training accuracy was close to 100% and test accuracy was 70%. That’s actually not bad for a simple model (like logistic regression), and this was a good check.
These are some things I got at my pen tip at the end of the class:
*Different learning rates give different costs and thus different predictions.
**If the learning rate is too large, the cost may oscillate up and down; it may even diverge.
***A lower cost doesn’t mean a better model, at least that was what I thought.
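The learning-rate behavior in the first two points can be seen even on a toy one-dimensional cost (here w², not the course’s model; the rates are arbitrary):

```python
# Gradient descent on cost(w) = w**2 with three different learning rates.
def run(lr, steps=50, w=5.0):
    costs = []
    for _ in range(steps):
        w -= lr * 2 * w        # gradient of w**2 is 2w
        costs.append(w ** 2)
    return costs

slow = run(0.01)   # too small: the cost decreases, but slowly
good = run(0.1)    # moderate: fast, stable decrease
bad = run(1.05)    # too large: each step overshoots and the cost blows up
```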
Oh not as such, we got to test our image and see the output of the model.
I have to confess, it was both a stressful and a beautiful experience. Well, it’s just part of the learning process.
|
.......and here we are again…… 😊😄😆
| 0
|
and-here-we-are-again-16e4f57f0d0b
|
2018-07-05
|
2018-07-05 13:40:11
|
https://medium.com/s/story/and-here-we-are-again-16e4f57f0d0b
| false
| 276
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Adegoke Toluwani
|
Writing, enjoy music, AI, Machine Learning, coding.....
|
1679f5daec6a
|
boluwatifelounsolae
| 1
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-13
|
2018-07-13 06:59:33
|
2018-07-13
|
2018-07-13 13:16:01
| 4
| false
|
en
|
2018-09-20
|
2018-09-20 03:07:16
| 11
|
16e647e49cf1
| 6.062264
| 899
| 11
| 0
|
Virtual Rehab’s evidence-based solution uses Virtual Reality, Artificial Intelligence, & Blockchain technology for Pain Management…
| 5
|
All About Virtual Rehab’s Token Sale
Virtual Rehab’s evidence-based solution uses Virtual Reality, Artificial Intelligence, & Blockchain technology for Pain Management, Prevention of Substance Use Disorders, and Rehabilitation of Repeat Offenders.
We’re Baaaaaaaaaack !!!
So, first things first — how are we doing on those articles? We sincerely hope that you enjoyed reading our first and second articles. If not, then we count on you to tell us what you didn’t like and how we could improve. Otherwise, we will keep boring you over and over and over and over again. Remember, a lot of us are techies, so we’ll do a pretty darn good job at it.
No seriously ! Let us know your feedback and we will always strive to make these articles better as we get the hang of it.
So, as we are approaching our Private Token Sale (starting August 1st), we thought of providing you with a quick snapshot of the key things you should know about our upcoming token sale.
Virtual Rehab’s Logo
First and foremost, we got a kick-bottom team — be it the founders, the advisory board, or even our contractors, we are top-notch with plenty of experience, global awards, and we never take no for an answer — heck, we have an advisor who was recently inducted into the CIO Hall of Fame (we didn’t even know that they had one), but yeah, this CIO Hall of Fame holds the top 100 CIOs from around the world. That gentleman’s name is Mr. Philip Fasano, who was the CIO for Kaiser Permanente. We also have the 2017 recipient of the global corrections research award. Told you we’re not joking here. And that gentleman’s name is Dr. Jeffrey Pfeifer. Please take the time and read more about our team in our White Paper.
Actually, this may very well be a good starting point. So, since we’re talking about awards, please allow us to share with you some of the notable accomplishments, recognition, and awards we have received since our inception back in 2017.
Here is a list:
Evidence-based solution with proven efficacy results approved by physicians, psychologists, and therapists
87% of participating patients have shown an overall improvement across various metrics
Described by US Digital Government Head as a “capability that is very very promising for public services”
Only VR/AI company included in the US Department of Justice, Institute of Corrections Environmental Scan report
Partnership agreements in place across North America, Europe, the Middle East, and APAC
Only company to represent Canada as part of the Canadian Delegation to Arab Health
Selected as one of Canada’s most promising high-growth life sciences companies (Dose of the Valley, CA)
Featured by Microsoft’s leadership team at the Microsoft Inspire Innovation Session
Nominated by The Wall Street Journal for the WSJ D.LIVE Startup Showcase (Laguna Beach, CA)
Ranked by Spanish media as the first option for training correctional officers and rehabilitation of offenders using virtual reality
Featured by the media across 28 countries worldwide
Founder awarded with the “Expert” status by the United Nations Global Sustainable Consumption & Production (SCP) Programme with focus on Sustainable Lifestyle and Education
Now, you tell me what do you think?
Of course, we would not have accomplished all of this without our great leadership, our seasoned advisory boards, and most definitely, the contractors who helped us throughout this journey.
So, let’s give a quick introduction (yes … yes … we know … introductions normally come first … but we got too excited with the awards topic) to those who have not read the first two articles:
Virtual Rehab’s evidence-based solution leverages the advancements in virtual reality, artificial intelligence, and blockchain technologies for pain management, prevention of substance use disorders, and rehabilitation of repeat offenders. Our all-encompassing solution includes services in a telemedicine context and can extend to individual users of the Virtual Rehab solution to serve the B2C market, in addition to hospitals, rehab centers, correctional facilities, and others to serve the B2B market. Furthermore, using blockchain technology, we can now reach out to those vulnerable populations directly, to offer help and reward, by empowering them with the use of Virtual Rehab’s ERC-20 $VRH Token within our network.
Now, that’s all nice and dandy, but you’re probably thinking — “That’s a lot of technology to use. How are they making this happen?” So, let us tell you more:
Virtual Rehab’s innovative and powerful solution (supported by existing research) is intended to psychologically rehabilitate those in most need for our service offering.
Although the scope of our existing solution includes pain management, psychological, and correctional rehabilitation, the Virtual Rehab team reserves the right to explore new industries to further expand our global operations.
Virtual Rehab’s all-encompassing solution covers the following pillars:
Virtual Reality — A virtual simulation of the real world using cognitive behavior and exposure therapy to trigger and to cope with temptations
Artificial Intelligence — A unique expert system to identify areas of risk, to make treatment recommendations, and to predict post-therapy behavior
Blockchain — A secure network to ensure privacy and decentralization of all data and all information relevant to vulnerable populations
$VRH Token — An ERC-20 utility token that empowers users to purchase services and to be rewarded for seeking help through Virtual Rehab’s online portal
Once again, please make sure that you take the time to read our White Paper. We have put so much time and effort to make it as detailed and as concise as possible. Yes. It does get too technical in some parts. However, in case we lose you at any point while reading the White Paper, please reach out to us on any of the social platforms below:
Website: https://www.virtualrehab.co
Bitcointalk: https://bitcointalk.org/index.php?topic=4657682.msg42059355#msg42059355
Facebook: https://www.facebook.com/ViRehab
Twitter: https://twitter.com/ViRehab
LinkedIn: https://www.linkedin.com/company/virtual-rehab/
Telegram Group: https://t.me/virtualrehab
Medium: https://medium.com/@VirtualRehab
YouTube Channel: https://www.youtube.com/c/virtualrehab
You folks have absolutely no excuse for not finding a way to track us down. Don’t worry. We’ll love it.
Now, let’s tell you more about our actual token sale (indeed … it’s about time).
So, we will be starting on October 1st with our Private Sale (Yes. It is called Private and we are still opening it up to everyone who is interested in contributing at least $15k to our token sale). Therefore, if you are interested, and willing to take an educated bet on a stellar team with a stellar technology, then please feel free to drop us a line at investors@virtualrehab.co and you bet we will get back to you at the earliest.
If the Private Sale is not your thing, then no worries, we do have a Pre-Sale and a Main Sale.
In the table below, you will have all the information that you need to know about both along with some additional information, which you will wish to keep in consideration as well:
Virtual Rehab Tokenomics
OK. This is all good. However, you’re probably thinking now — “How are they going to use all the money, they raise?”
Great question. Please see below the way we plan to use the funds:
Virtual Rehab’s Use of Funds
Let’s explain a bit more:
Future Development — further product development (VR programming, enhancement of AI expert system, blockchain integration, new features launch, telemedicine platform, and further enhancement of the online portal), research & development, hiring of additional staff, opening of Virtual Rehab Centers (to leverage our solution and serve users directly — will be the first centers of a kind to leverage VR, AI, and the blockchain technologies altogether)
Marketing — seminars, conferences, hosting, pilot programs, exchanges, sponsorship, etc.
Partnerships — sealing new strategic partnerships with leading universities and institutions from around the world
Yes. We got a lot of work ahead of us. But wait, let us share the high-level 2018/2019 roadmap to put things further in perspective:
Virtual Rehab Roadmap
Sorry for the long article, and we do hope that we gave you a decent overview of our Token Sale information. Having said that, again and again and again: please read the White Paper in full. Come back to us with any questions. We will be more than happy to answer all of them. Listen, we know that you work very hard for your money, and we want you to be 1000000% positive that this is the right investment.
Thank you once again for taking the time to read this article, and thank you a million times for your support of Virtual Rehab.
And always remember …
Be Safe and Make a Difference in this World !!!
Peace Out !
|
All About Virtual Rehab’s Token Sale
| 1,884
|
all-about-virtual-rehabs-token-sale-16e647e49cf1
|
2018-09-20
|
2018-09-20 03:07:16
|
https://medium.com/s/story/all-about-virtual-rehabs-token-sale-16e647e49cf1
| false
| 1,421
| null | null | null | null | null | null | null | null | null |
Blockchain
|
blockchain
|
Blockchain
| 265,164
|
Virtual Rehab
|
Virtual Rehab's evidence-based solution uses #VR, #AI, & #blockchain technology for Prevention of Substance Use Disorders & Rehabilitation of Repeat Offenders
|
f0264e3a3a70
|
VirtualRehab
| 2,168
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-06
|
2017-09-06 12:19:51
|
2017-09-06
|
2017-09-06 12:42:25
| 0
| false
|
en
|
2017-09-06
|
2017-09-06 12:42:25
| 6
|
16e6fe53ed02
| 1.467925
| 8
| 0
| 0
|
Hi, I’m Tim and I’m doing free open source courses on building things with javascript. So far I’ve done a course on building client-server…
| 5
|
“Building X with javascript” — help me pick next topic
Hi, I’m Tim and I’m doing free open source courses on building things with javascript. So far I’ve done a course on building client-server products and desktop applications.
Now is time to pick a topic for the next course — and I need your help!
As usual, the course will be published on GitHub and YouTube, with livestreams happening on Twitch.
If you prefer video format, you can watch the video below. Otherwise — read on!
I’ve prepared 5 topics for you to pick from.
Those are:
1. Data science/analytics project
This is what I do during my day-to-day work, so I can offer the most in-depth course on this topic.
I’m planning to cover data scraping, cleaning, processing, and visualization, along with microservice-based data processing pipelines.
Since this is an area I’m quite familiar with, I’d already come up with a good project — we’re going to process product reviews and present them in a more meaningful way than arbitrary stars or numbers.
2. Embedded programming with Raspberry-Pi
We’re going to build a RPi-based thing. My best idea was to build a small homebrew gaming console.
I’m planning to cover the basics of embedded programming, working in environments with limited resources and, specifically, working with the RPi.
3. Bots
We’re going to build a simple bot. No ideas here, so I’m open to suggestions :)
I’m planning to cover basic bot building and usage of third-party bot API (discord, twitch, etc) in a unified way.
4. Mobile app using React-Native
We’re going to build a mobile application using React-Native. Best idea so far was to build an Instagram clone — any better ideas are welcome!
I’m planning to cover basic mobile development (which is required to work with RN) and, of course, using React-Native itself.
5. Applied Machine Learning
We’re going to build a system that’d utilize machine learning in some way. The best idea I’ve had so far is actually to add that to topic (1), since it fits quite well.
I’m planning to cover primarily usage of ML as a developer.
I want to learn!
If you are interested in any of those topics and/or have a good idea for projects, please fill out this survey.
|
“Building X with javascript” — help me pick next topic
| 84
|
building-x-with-javascript-help-me-pick-next-topic-16e6fe53ed02
|
2018-06-20
|
2018-06-20 16:36:44
|
https://medium.com/s/story/building-x-with-javascript-help-me-pick-next-topic-16e6fe53ed02
| false
| 389
| null | null | null | null | null | null | null | null | null |
Teaching
|
teaching
|
Teaching
| 23,418
|
Tim Ermilov
|
Hi, I’m Tim! I talk about webdev, javascript and big data.
|
94ac54b64fc4
|
yamalight
| 1,185
| 136
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-24
|
2018-06-24 05:58:25
|
2018-06-24
|
2018-06-24 06:02:49
| 1
| false
|
en
|
2018-06-24
|
2018-06-24 06:02:49
| 1
|
16e9aea2b293
| 0.441509
| 3
| 0
| 0
|
Hello All, I just launched a new course on Udemy where you will learn all about Machine Learning, from data preparation to applying ML…
| 5
|
Machine Learning — Data Munging/Preparation 100% Free Udemy Course [Limited time only]
Hello All, I just launched a new course on Udemy where you will learn all about Machine Learning, from data preparation to applying ML algorithms. This is the first course in the series. I am providing the early release for free, so claim yours today at the link below:
https://www.udemy.com/pathway-to-machine-learning-part-1/?couponCode=MEDFREE18
|
Machine Learning — Data Munging/Preparation 100% Free Udemy Course [Limited time only]
| 150
|
machine-learning-data-munging-preparation-100-free-udemy-course-limited-time-only-16e9aea2b293
|
2018-06-24
|
2018-06-24 06:02:49
|
https://medium.com/s/story/machine-learning-data-munging-preparation-100-free-udemy-course-limited-time-only-16e9aea2b293
| false
| 64
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Nehaa Vishwakarma
| null |
c3ecb804fdf2
|
nehaavishwa
| 0
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
a49517e4c30b
|
2017-11-16
|
2017-11-16 10:19:12
|
2017-11-16
|
2017-11-16 10:20:26
| 1
| false
|
en
|
2017-11-16
|
2017-11-16 10:20:26
| 1
|
16e9c51c4979
| 1.120755
| 0
| 0
| 0
|
The healthcare industry has been around for millennia. Although much has changed in the way it currently functions, it needs to transform…
| 5
|
ARTIFICIAL INTELLIGENCE — THE NEW NERVOUS SYSTEM FOR THE HEALTHCARE INDUSTRY
The healthcare industry has been around for millennia. Although much has changed in the way it currently functions, it needs to transform into something better to serve the sick. To put it in simpler terms, the healthcare industry needs a new nervous system to function efficiently. This is where artificial intelligence comes into the picture.
In healthcare, artificial intelligence has seen a consistent rise in adoption. AI helps solve a variety of problems for hospitals, patients, and the industry as well.
Here are five ways AI is revolutionizing the healthcare industry.
1. AI Chatbots — Chatbots, also known as intelligent personal assistants, are expected to take over healthcare messaging apps. These chatbots will ease the burden on medical professionals for simple health concerns and will quickly solve minor sicknesses.
2. Apps — Medical apps help in interpreting and understanding lab test results.
3. Emotional Intelligence Indicators — AI-based virtual assistants can pick up cues in speech, and gestures to assess an individual’s feelings and mood.
4. Building Prosthetic Limbs — Using AI, building prosthetic limbs can be simplified. A few crucial mechanical parts can be replaced with AI-driven ones to ease movement.
5. Improving Clinical Documentation — Doctors and hospitals are using AI to document and bring up old data.
In this blog, we will discuss how these five elements of AI are radically changing the way the healthcare industry functions.
Ten Tips to Design Better Bots for Business
On Nov 30th, I was invited to CXxAI — an interactive roundtable for Customer Experience leaders in enterprise organizations. During the panel, we focused on the most widely used form of artificial intelligence in customer experience — chatbots and virtual assistants enhanced with natural language processing capabilities. We discussed how leading organizations are using this technology, what the challenges and opportunities are, and what future trends look like.
The questions at the panel, as well as subsequent conversations, research, and reflection, inspired me to write this blog to share my thoughts on the business opportunities of these technologies, as well as some dos and don’ts of bot design.
Panel on CXxAI — an Interactive Roundtable
Let us level set on some terms before we begin. Broadly defined, Artificial Intelligence (AI) is a computer program that attempts to mimic human intelligence, and that can sense, reason, act and adapt. It can respond to various inputs such as text, voice, computer vision, geo-location and sensors that detect physical characteristics such as temperature, weight, volume, humidity, motion, etc. In the future, even more inputs may be possible, such as digital smell and human emotions.
Inputs to AI and ML systems
“Bots are the new apps” declared Microsoft CEO Satya Nadella in 2016, and they are certainly on the rise. In fact, Gartner predicts that by 2019, 20% of brands will abandon their mobile apps for chatbots, which will power 85% of all customer service interactions.
The advantages of chatbots are that they are available 24/7 and customers today prefer to use messaging over other forms of communication. According to BI research, messaging apps have surpassed social media, and this trend will continue.
Good Bots, Bad Bots
While there is a lot of interest in bots, it is still early days in learning how to design them well for business impact.
“Bots have the illusion of simplicity on the front end but there are many hurdles to overcome to create a great experience….We have to unlearn everything we learned the past 20 years to create an amazing experience in this new browser.” — Shane Mac, CEO of Assist
Here are ten tips I gathered from the roundtable, as well as from independent research and reflection. I want to capture how it feels to use bots, and thereby develop some design principles. I have included personal bot-related anecdotes that friends and colleagues shared with me to illustrate these principles.
1. Start with high-frequency use cases
The most successful bots are those that serve high-frequency use cases well. Businesses can identify the most frequently asked questions through call center logs or web search analytics.
One example is Capital One’s Eno bot, which is designed for five use cases: Tracking account balance, checking recent transactions, viewing available credit, confirming payment due dates, and accepting payments. For most other tasks, the customer must interact with a human.
Capital One’s ENO addresses high frequency use cases
Bots that track packages and airline check-in bots are other good examples.
The advantage of starting with high-frequency use cases is that it frees the internal support team from answering the same questions over and over again: a low-value activity. So it is a win-win for both the customer and the business.
2. Avoid rigid bots
While it is great to start small, if the bot capabilities are too narrow, the customer experience will feel rigid and unnatural. Human beings will always behave in non-standard ways, so if the bot does not anticipate that or learn over time, the experience will feel less than delightful.
Humans will be humans
In general, chatbots based on rules tend to be restrictive, and users have to be very specific and precise to use them. However, chatbots built on an AI platform can understand natural language, so the user does not need to be ridiculously specific.
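The contrast can be illustrated with a toy sketch (the commands and replies below are hypothetical, not any vendor's API): an exact-match rule bot fails on natural phrasing, while even simple keyword normalization is more forgiving, and an NLP-backed bot generalises further still.

```python
def exact_rule_bot(message: str) -> str:
    """Rigid bot: only exact, pre-scripted commands are understood."""
    rules = {
        "check balance": "Your balance is $100.",
        "recent transactions": "Here are your last 5 transactions.",
    }
    return rules.get(message, "Sorry, I didn't understand that.")

def keyword_bot(message: str) -> str:
    """Slightly more flexible bot: match on normalized keywords."""
    text = message.lower()
    if "balance" in text:
        return "Your balance is $100."
    if "transaction" in text:
        return "Here are your last 5 transactions."
    return "Sorry, I didn't understand that."

# The rigid bot fails on natural phrasing; the keyword bot copes.
print(exact_rule_bot("Could you check my balance?"))  # fallback reply
print(keyword_bot("Could you check my balance?"))     # balance reply
```

A real NLP platform goes one step further than keyword matching, mapping free-form utterances to intents, but the basic point is the same: the narrower the matching, the more rigid the experience feels.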
The Duolingo Conversation Chatbot is a good example of a bot that does not come across as rigid. It allows the user to practice a language by responding to its questions. It can process what the user says and respond, but it can also suggest more effective ways of saying the same thing. For example, if the question is “What do you want to drink?”, someone might respond with “coffee”. The bot would then suggest a better answer: “one coffee please.”
Duolingo is delightful
Several users I spoke to loved the Duolingo app for its natural, conversational feel. They specifically mentioned how they appreciated the “better answer” feature.
3. Focus on efficiency
One of the key value propositions of chatbots is efficiency over other forms of interactions. So, make sure it is more efficient to use your bot for high-frequency use cases than using the app.
For example, Skyscanner has a bot and an app to book flights and manage itineraries. The bot interaction takes a number of clicks to see the trip, flight, fees etc., while the app displays all of the information on a single screen.
Skyscanner bot vs app
4. Remember, garbage in garbage out
It is important to consider the quality of the company’s knowledge base when creating a chatbot, as this is the information from which chatbots pull answers. If the information is not current and of high quality, it will affect customer satisfaction, and ultimately, the success of the chatbot project. This is crucial to address before chatbot implementation, especially as companies move beyond simple rule-based systems to AI systems, powered by machine learning.
Who can forget the lessons learned from Microsoft Tay, an AI-enabled chatbot that started posting racially insensitive messages because the quality of the training data was poor?
5. Consider the customer journey
Most organizations start small to gain experience and over time expand their scope. However, it is important to think about how the bot fits into various touchpoints along a customer’s journey.
A typical customer journey has the following stages:
· Awareness — How does the potential customer become aware of product?
· Engagement — How will you nurture their interest and engage them?
· Transaction — How can they buy the product? Can they customize the product? How will they pay for it?
· Service — How can they access customer support? How might they return the product if not satisfied?
· Advocacy — How can a satisfied customer refer other customers?
A bot could be deployed in any one of these stages. However, the customer may have a disjointed experience if their end-to-end journey is not taken into consideration.
For example, my friend used a chatbot to purchase sunglasses. The company handled all of the shipping and order confirmation communication via a Facebook Messenger bot. Overall, he liked the interaction, especially because it made shipping information easy to access without searching through all of his emails. However, he received the incorrect product and had to return it. He tried to communicate the issue via the chatbot, but it could not handle the query. He then had to figure out a way to connect with a customer service representative, and he ultimately found the experience disjointed and challenging.
6. Design for human and bot teamwork
At this point, most businesses do not have enough high-quality data for robust machine learning algorithms, so don’t try to pass the Turing test! Instead, design hybrid bots that work alongside human agents to automate routine queries. This is a great way to ensure a high-quality experience for the customer.
The bot could gracefully hand off to a live agent in either of these scenarios:
· The chatbot may offer an explicit option to transfer to a live agent
· Or it may implicitly transfer the customer if the query is too complex
Similarly, if the query is routine, an agent could hand it off to the chatbot, once the customer is notified and agrees to it.
It is important to anticipate and design these hand-offs, since done poorly they can degrade the quality of the customer experience. Sometimes customers want to know whether they are speaking to a bot or a human, and not knowing can make them feel uncomfortable.
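The hand-off logic can be sketched as a simple routing function; the trigger words, intent labels, and confidence threshold here are hypothetical, not from any specific bot platform:

```python
from typing import Optional

# Hypothetical phrases signalling an explicit request for a human
HANDOFF_WORDS = ("agent", "human", "representative")

def route(message: str, intent: Optional[str], confidence: float,
          threshold: float = 0.7) -> str:
    """Decide whether the bot or a live agent should handle a message.

    Explicit hand-off: the customer asks for a person.
    Implicit hand-off: no intent was recognised, or confidence is too low.
    """
    text = message.lower()
    if any(word in text for word in HANDOFF_WORDS):
        return "agent"   # explicit request for a human
    if intent is None or confidence < threshold:
        return "agent"   # query too complex for the bot
    return "bot"         # routine query, bot handles it

print(route("I want to talk to a human", "balance", 0.95))  # agent
print(route("check my balance", "balance", 0.95))           # bot
print(route("my package arrived broken and leaking", None, 0.0))  # agent
```

A production system would also notify the customer of the transfer and pass the conversation transcript along, so the agent does not restart from scratch.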
My friend is a fan of the outdoors, and he uses both the Backcountry website and their embedded app for specific product questions. He does not like that the bot pretends to be a live agent, however. He would prefer the bot to reveal that it is a bot, as he would feel more forgiving towards it.
7. Extend your brand experience
While most companies initially look to bots for cost savings in customer support, bots also offer the opportunity to extend a business’s brand. So it is important to speak in an authentic voice that resonates with your audience. Research shows that bot users skew younger, so bots are an opportunity to expand your target demographic, as long as the bot still represents your brand.
The American Eagle bot matches the brand well and makes shopping look like fun.
American Eagle extends its brand via the bot
Yet, there are plenty of examples of bots that don’t match the brand of the product. My friend tried to use MeditateBot on Facebook messenger. Instead of replying to the chats, the bot persistently marketed the app. It asked him to download the app after every reply. This experience made my friend turn away from the product altogether.
Meditate — Marketing?
8. Consider both voice-enabled Virtual Assistants and text-based Chatbots
Headless, voice-enabled interfaces like Amazon’s Alexa and Google Home are great companions to text-based chatbots. We need to better understand which use cases are the best match for which form of interaction and optimize for that. For example, hands-free interaction using Alexa is great at home. However, I cannot check flight status on the go. I am also not sure I want to check my bank balance using a voice interface, especially in a public place. I would prefer a chatbot for that. As designers, we need to refocus on human needs and be better matchmakers of technology to use cases.
9. Security and trust are paramount
Bots know a lot more about us than other forms of interactions. Businesses need to implement measures to safeguard customer data in order to earn and maintain our trust.
Here is a personal anecdote. I saw an ad for a product on Facebook and clicked on it. This took me to the product website. There, I watched a product video but did not proceed. A few days later, I decided to buy the product, and I went back to the website and paid for it. Immediately, I received a Facebook chatbot message that thanked me for the order. This caught me by surprise. I had forgotten that I had clicked on the Facebook ad, to begin with. I wasn’t sure if I wanted this vendor to have my Facebook data. What data did Facebook share with this vendor? How long will this information be stored on their server? Could the vendor repackage and sell my data to other vendors? I don’t know, and not knowing makes me uncomfortable.
Imagine if I bought the product in a store and, when I went to check out, the cashier immediately asked me several personal questions: What is my relationship status? Do I have children? etc. How would I feel? I am pretty sure I would leave the product on the counter and walk away from the store.
Interestingly, customers are also creating bots to interact with businesses. Let me tell you the story of “sneaker bots”. My friend is a sneaker enthusiast and part of a community that is into buying new and rare sneakers. A couple of weeks ago, Nike released a set of ten highly sought-after sneakers, and he tried to purchase a pair, but it was all but impossible due to these “sneaker bots”. Essentially, a group of people has figured out a way to automate the purchase process, so that sneakers are purchased much faster than a human being can click through several screens on a retailer’s website or app. As a result, the sneakers sell out in milliseconds. This example comes from the user side rather than the retailer side, but I am curious how businesses will respond to address this issue and make the purchase of coveted sneakers fairer for the average consumer who does not use a bot.
Sneakerbots buy rare sneakers faster than humans can
10. Beware of bot fatigue
Recently, the term ‘bots’ has started to acquire a negative perception among the general public. It does not help that a significant percentage of Twitter users are, allegedly, bots. The recent United States elections and the allegations of Russian bots have only exacerbated the situation.
While it is great that bots on the Facebook platform have grown to 100K in less than one year, this growth also brings discovery issues: how will the customer know that your business has a bot, and how will they find it?
In Conclusion
So, what does this all mean for the future of bots? While chatbots are a promising new technology, I feel there is a lot more work to be done to deliver on this promise.
Let us reset expectations and focus on productivity. There is tremendous opportunity to improve the efficiency of enterprise workers and help them access information quickly in service to the customer. Internal bots can help employees check the status of orders and inventory levels, get notified of potential shipment delays, find product experts, etc. Having this information at their fingertips can make employees even better at what they do and enhance the customer experience indirectly. SAP’s co-pilot is designed to go in this direction.
Do not underestimate the need for iteration in bot design. The software development process for chatbots differs from that for apps in that it requires much more user testing, data training and iteration.
“In Apps, you design, build, test, launch. In Bots, you design, build, test, launch, get feedback, train, test, update (the last 4–5 steps never end).” — Eswar Priyadarshan, CEO of BotCentral
Finally, we cannot overemphasize the importance of customer trust in determining the success or failure of any new technology.
To rephrase Maya Angelou, your customers will forget what your bot said, they will forget what the bot did, but they will not forget how it made them feel.
I hope this blog offered some useful tips on how to leverage bots for business impact. While there is a lot of chatter about chatbots, the conversation is just beginning. Let’s chat.
Special Thanks
Thanks to NICE Satmetrix for hosting the roundtable, which was moderated by Shane Oren from NICE Satmetrix. My co-panelists were Guneet Singh, Director of Customer Experience at DocuSign, and John Spencer, Senior Director at ServiceNow.
Thanks to Eswar Priyadarshan, Rachael Chung, Eliad Goldwasser, Brandon Hightower, Amol Deshpande, Meime Huang, David McKay, Jerry John and Pravin Kumar for sharing their bot related anecdotes with me.
Discussing Strategy and Capability requirements for companies utilising Artificial Intelligence.
Ahead of our final Create*AI workshop on Strategy and Capability I caught up with Ashnee and Matt from PwC to discuss the key topics around culture, pace of change, skill set development and business readiness.
At the workshop, presenters Matthew Whitaker and Ashnee Mavronicolas will take you on a journey from outcome-led technology strategy to capability to execution, and through the decision points in between.
In this workshop you’ll have the chance to wear the hat of a senior decision maker, set a strategy, and then be challenged to identify capability and execute, with the odd ‘grenade’ thrown in to keep things interesting … and real.
Speaking of real, joining them will be Scott Levens, Continuous Improvement Manager at Auckland Council to share with you the journey Auckland Council has been on in the intelligent automation space.
What are chatbots and what are they for?
The simplest chatbot for introducing a company
A chatbot is a program that simulates human conversation and can support a text or voice dialogue with the user in messengers such as Facebook Messenger, VKontakte, Slack, Telegram, Viber, Kik and other modern platforms.
Chatbots can be relatively simple programs based on predefined scripts, or they can use artificial intelligence, which makes bots suitable for a wide range of tasks.
The main advantage of a chatbot is that it does not need to be downloaded and installed separately; it already lives inside the messenger. Chatbots therefore give companies and brands an effective way to communicate with users in the messengers they already like and want to use.
People are using messenger apps more than they are using social networks.
By introducing a chatbot into its business, a company automates the process of communicating with customers and signals the brand’s innovativeness. This approach can increase the loyalty of the company’s existing customers and attract new ones.
Why are chatbots used in business?
To accept orders and sell goods
Ordering on the platform where the user already spends most of his time is very convenient for him. The bot already “lives” in the place where your customers communicate.
Such a bot can find out what the user wants to buy, clarify the details and suggest options.
To support and consult clients
People most often ask technical support standard questions. As a result, employees are overwhelmed with the same repetitive requests, which bots can respond to instead.
For content delivery
Bots deliver content to where it is convenient for people to receive it — in messengers. That is why the media use bots so often. Such bots can search for news by keyword, compile selections of popular and recent news, and subscribe the user to interesting topics.
For running promotional campaigns
Such bots explain the rules of a campaign to users and motivate them to participate. The well-known sticker giveaways on VKontakte also fall into this category.
To introduce a brand
Typically, these bots are closely related to bots for promotions, since their ultimate goal is the same — to increase the user’s loyalty to the brand.
For help at events
With the help of bots you can monitor timing, notify participants about changes in the schedule, collect feedback and conduct surveys. It is worth noting, though, that the benefits of such bots are felt mainly at large events with several parallel activity zones.
For marketing research
The era of promoters with paper questionnaires is long gone. To identify the preferences of your current and potential customers, as well as the bottlenecks in how staff work with customers, you can use a chatbot.
Summing up, we can list the advantages of chatbots:
response time to clients is reduced,
sales increase thanks to an additional sales channel,
payments can be accepted directly in the chat,
the work of employees is optimized,
contact with users can happen at any time.
BotCube can help you create a chatbot for your business :)
Xchangerate.io enters yet another strategic partnership! — INSPEM
In the past few weeks we have been forging partnerships to make us stronger.
It started with Pecunio; our first strategic investment partnership, and we haven’t looked back since then. We have continued to improve on the metrics that make our product unique (more on that in another post) while also forging more relationships with companies with viable projects and working prototypes. Since the first partnership, we have gone on to seal more strategic partnerships with Ligercoin, Root Blockchain, Plaak, Amon.Tech, Alttex Consortium, VISO and DICE Money.
Today we are happy to announce a new partnership with INSPEM.
INSPEM is a blockchain-based artificial intelligence that is able to recognise faces and objects using surveillance cameras.
Xchangerate.io strikes another strategic partnership with INSPEM
As we continue our journey, we will continue to leverage our collective strengths, resources and experiences to ensure that we keep our investors and customers happy.
In the meantime, you can find out more about INSPEM on their website and their Telegram.
Our crowd-sale is still ongoing. Take the opportunity to participate today by visiting tokensale.xchangerate.io. For technical assistance or answers to questions about us you can join our Telegram and speak to any of our admins.
What is Hypergeometric Distribution?
Suppose there are R Pepsi cans among a total of N cans (the other N − R are Cokes) and we are asked to identify them correctly. In our selection of R cans, we can make k = 0, 1, 2, …, R correct guesses. The distribution of the number of correct guesses, i.e., the probability of correctly selecting exactly k Pepsi cans, is the hypergeometric distribution.
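As a concrete sketch, the probability of exactly k correct picks can be computed with Python's `math.comb`; the numbers N = 10 and R = 4 below are illustrative, not from the lesson:

```python
from math import comb

def hypergeom_pmf(N: int, R: int, n: int, k: int) -> float:
    """P(exactly k Pepsis) when drawing n cans from N total, R of which are Pepsi."""
    return comb(R, k) * comb(N - R, n - k) / comb(N, n)

# The scenario above: draw exactly R cans and count correct guesses k
N, R = 10, 4        # 10 cans, 4 of them Pepsi (illustrative numbers)
probs = [hypergeom_pmf(N, R, R, k) for k in range(R + 1)]
print(probs)        # probability of k = 0..4 correct guesses
print(sum(probs))   # the pmf sums to 1
```

If SciPy is available, `scipy.stats.hypergeom` gives the same values and adds cdf, mean, and variance for free.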
Hypergeometric distribution is typically used in quality control analysis for estimating the probability of defective items out of a selected lot.
The Pepsi-Coke marketing analysis is another example application: companies can analyze the preference for one product over another among a subset of customers in their region.
Learn more about Hypergeometric distribution and how to derive the probability from the ground up in lesson 38 of our data analysis classroom.
Lesson 38 — Correct guesses: The language of Hypergeometric distribution (www.dataanalysisclassroom.com)
If you find this useful, please like, share and subscribe.
You can also follow me on Medium and Twitter @realDevineni for updates on new lessons.
Topic modelling for legal documents
Legal documents are known to be complex and written in legalese. They are often hard to understand and may come in the form of a contract, a piece of legislation or a full-text case. The question is: could we summarise these legal documents in a concise way using a topic modelling technique known as Latent Dirichlet Allocation (LDA)?
In this article, we will use this NLP method and apply it to a short piece of legislation. The goal is to summarise this piece of legal text by grouping the document into a number of distinct topics, in a purely statistical way — all without the help of a human to read it beforehand. The code for this project can be found here.
Latent Dirichlet Allocation
LDA is a type of topic modelling algorithm. It tries to uncover the topic distribution in a collection of documents. LDA represents each topic as a bag of words; we then look at each ‘bag’ and label it as we see fit. The algorithm has three steps, with the third step being iterative.
You tell the algorithm how many topics you think there are
The algorithm will assign every word to a temporary topic
The algorithm will check and update topic assignments, looping through each word in every document.
Topic assignment in step 3 is based on:
How prevalent is that word across topics?
How prevalent are topics in the document?
We then repeat step 3 until the topic assignments no longer change (i.e. we reach convergence).
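The three steps above can be sketched with a tiny collapsed Gibbs sampler. This is a minimal illustration on a hypothetical toy corpus of word ids; libraries such as scikit-learn use variational inference instead, but the update intuition is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy corpus: each document is a list of word ids
docs = [[0, 1, 2, 0], [2, 3, 3, 1], [0, 0, 1, 2]]
V, K = 4, 2              # vocabulary size, number of topics (step 1: we pick K)
alpha, beta = 0.1, 0.01  # Dirichlet priors

# Step 2: assign every word a temporary (random) topic
z = [[int(rng.integers(K)) for _ in doc] for doc in docs]
ndk = np.zeros((len(docs), K))  # topic counts per document
nkw = np.zeros((K, V))          # word counts per topic
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        ndk[d, z[d][i]] += 1
        nkw[z[d][i], w] += 1

# Step 3, iterated until assignments stabilise: resample each word's topic
# with probability proportional to (prevalence of the word in the topic) x
# (prevalence of the topic in the document)
for _ in range(100):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] -= 1
            nkw[k, w] -= 1
            p = ((nkw[:, w] + beta) / (nkw.sum(axis=1) + V * beta)) * (ndk[d] + alpha)
            p /= p.sum()
            k = int(rng.choice(K, p=p))
            z[d][i] = k
            ndk[d, k] += 1
            nkw[k, w] += 1

print("topic assignments:", z)
```

After enough sweeps, words that co-occur in the same documents tend to settle into the same topic.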
The Legislation
The legislation we’re using for this project is the Residential Tenancy Act, which governs the relationship between landlords and tenants. One quick look at the legislation and we can see there are a total of 14 parts. Each part deals with a particular topic such as termination of rental agreements, rental bonds or the powers of the tribunal. It would be interesting to see whether the LDA algorithm can separate/uncover some of these topics when looking at the document as a whole.
The code
The first step in our code is to simply read the PDF file and extract all of the text. We concatenate all 131 pages from the PDF and put them into a list, then use the scikit-learn implementation of LDA. Essentially we are treating every page as a separate document (i.e. splitting the document into 131 parts).
As a preprocessing step we need to vectorise the words in this collection of documents. We will do that with the CountVectorizer from scikit-learn. As parameters to the CountVectorizer we filter out English stopwords and also use a regular expression to include only words in our representation.
After that we apply the LDA algorithm and come up with a topic model.
To apply the LDA algorithm we need to pick the number of topics we want; this is somewhat similar to a K-means clustering algorithm, in that the algorithm won’t be able to figure out the number of topics for us. For experimentation we will first choose 12 topics and then 5 topics.
We will then visualise the results of our model using two methods. First by displaying the most frequently appearing words under a topic and then using pyLDAvis to come up with an interactive visualization where we can see inter-topic distance.
Evaluation
Here we have the visualisation for 12 topics.
From the inter-topic distance map we can see how the 12 topics are distributed. The size of each bubble represents the number of relevant terms that belong to that topic, and the distance between bubbles represents how similar the topics are to one another. From the visualisation above it appears that the LDA algorithm uncovered 5 major topics; the remaining 7 topics seem to be very small in size.
By examining the words/terms under each topic we can see that many of the same terms appear almost universally across most of the topics. These terms include ‘agreements’, ‘tenancy’, ‘landlord’, etc. They therefore don’t add much meaning at all, so we filter them out from the documents and re-apply the algorithm. Moreover, it probably makes more sense to pick 5 topics instead of 12 so that we can more readily identify the main topic distributions.
Now let’s take a look at the inter-topic distance map for 5 topics.
We can see more clearly the 5 major topics uncovered by LDA. All these topics are quite significant in size and they are also moderately spaced out with no overlaps. Now let’s take a look at the specific words found under each topic.
Here we have the most frequent terms for each of our major topics. It seems the first topic relates to court proceedings and tribunals, the second topic is a bag of miscellaneous recurring terms such as ‘date’ and ‘commencement’, and the third topic relates to information and databases. To me, however, the remaining topics are less clearly defined. Using a statistical approach we can broadly identify the topic distribution in this piece of legislation. However, it fails to cleanly classify the legal issues in this document, which can easily be discerned by looking at the heading of each section of the legislation.
Although the LDA technique is far from perfect, it offers a compelling way to automate document summarisation in a quick and dirty way. It may be useful when we have an overwhelming amount of text data and we want to index and tag each document according to the most prevalent topics found in it. In that situation we can sort and search a massive number of documents more efficiently.
|
Topic modelling for legal documents
| 0
|
topic-modelling-for-legal-documents-16f1be00433f
|
2018-09-21
|
2018-09-21 06:55:55
|
https://medium.com/s/story/topic-modelling-for-legal-documents-16f1be00433f
| false
| 961
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Schuman Zhang
|
Interested in techie things. All my side projects here -> computer vision, natural language processing, augmented reality and some opinion articles
|
aa08b2e57fb0
|
schuman.zhang
| 15
| 131
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-10-17
|
2017-10-17 04:49:53
|
2017-10-17
|
2017-10-17 05:23:18
| 0
| false
|
en
|
2017-10-17
|
2017-10-17 05:23:18
| 0
|
16f2f811a235
| 3.758491
| 0
| 0
| 0
|
In 2009 I walked up the steps of a church near my hometown. I was attending a the Church’s yearly summer camp with a friend of mine. My…
| 4
|
Finding Forever
In 2009 I walked up the steps of a church near my hometown. I was attending the church’s yearly summer camp with a friend of mine. My parents had just dropped me off, and I could tell they were nervous leaving me at this church. Now, having children, I understand a little more. We had been waiting a couple of hours for all the buses to line up and take us to this so-called magical place I’d been hearing everyone talk about. As I walked outside, standing next to my friend, I saw a girl. A blonde-haired girl who was maybe 16, wearing a grey t-shirt and blue jeans. I still to this day have no idea why she stuck out to me. It was like she was calling me to her without even looking at me. I couldn’t look away; then she must have felt me staring like a drooling idiot. You know, when a car passes you and you glance over and they are staring at you. Of all the cars that passed, you happened to look over at that car, and in that car the old hag of a lady is staring at you. Us humans are odd that way. Well, this time it was like a 100 ft python was inches from my face and I was horrified, yet almost controlled by her eyes.
Let’s fast forward seven years. I’m now a young adult working as a designer in New York; I fly from my home near Washington D.C. every other week. It’s exhausting but pays better than anything I’ve made in a while. My resume is pretty good: I started a retail chain when I was 21 and sold some of the brands I created during that time. I’m divorced, well, almost. Still technically married, but I have lived apart from my ex for three years. Lonely as can be, I live in my parents’ basement apartment. Not because I can’t afford to leave; I’m a single twenty-four-year-old guy making $75,000 a year. I don’t leave because I’m able to save money and invest. That’s what I tell everyone anyway. In reality I’m scared to be alone. After my ex-wife… or whatever she is, left me in 2014, I haven’t been able to make connections with new people, leaving me with pretty much my parents as my only friends, and one or two people from when I was 16 or 17, but I already knew them. Life was so good, yet so hard at the same time. At this point I’ve learned how to wear a mask like it’s my own face. Sometimes I look in the mirror and even believe it’s real.
Let’s fast forward ten years. I’m 34. At this point in my life I wanted to be a billionaire like every other thirty-something. Someone my childhood friends and family would look to and say wow, I’m related to him, or I know him. Yeah, I was a dick. I wasn’t that bad off though; I made a couple hundred thousand off a technology boom in my mid twenties. I also sold a mobile app I had made with one of those two friends I just talked about. Made a cool $8,000,000 off that after tax when I was 26. Bought a pretty nice car, a house and even got a girlfriend. She was sweet, in fact the sweetest girl I’d ever met. I didn’t feel the same connection that I did with the girl at the church camp, but over time it grew to surpass the love I had for my ex-wife. I called her Summer. Her name wasn’t Summer, but she reminded me of it, so that’s what I called her. She loved it, after a while. We had two children. It wasn’t as hard as the first time, as I had money and could afford help around the house. My life was perfect during this time, well, almost perfect.
Now let’s pass forward another twenty years. Being 55 was fascinating; my body felt so old, yet I felt so young inside. I never understood this before. I guess when I was a child I thought being old meant you felt old. You don’t. It’s like being trapped in an aging, dying body, yet you want to live and breathe and be free. It’s nearly hell, and I know a lot of others had it worse than I did. My first child is now 34; it’s amazing. My two other children were graduating college, my youngest daughter in a serious relationship. I like her partner, so I’m pretty lucky there. At this time in your life, you feel life is nearly over. I also felt this way in my twenties, so I should have known that it wasn’t.
It’s time to bring you to present day, twenty-nine years later. I’m 84. Amazingly I don’t have cancer and I’m not sick. I did break my leg at 71 and have minor pains from that; however, I still get around. In my mid forties I sold an internet company for $490,000,000 and have been using that capital to build androids ever since. Google contacted me at 59 and gave me access to a team who supports the A.I. side of it all. Here’s the exciting part: today is a special day. A day I’ve dreamed of since I was just 17 years old. I’ve made something that could, no, will change the human race. See, in my teen years I imagined a future where people didn’t have to die. I’ve always believed our memories, emotional experiences and reactions to those experiences are what make us, well, us. Human, if you will. I wasn’t able to prove this until today. When I woke up this morning, I didn’t know if I would come back to that bed. That wonderful bed, this wonderful life. But I did. I’m here. And yet, I died at 4:42PM.
|
Finding Forever
| 0
|
finding-forever-16f2f811a235
|
2018-02-08
|
2018-02-08 21:07:31
|
https://medium.com/s/story/finding-forever-16f2f811a235
| false
| 996
| null | null | null | null | null | null | null | null | null |
Life
|
life
|
Life
| 283,638
|
Manny Stul
|
Sci-fi writer and dreamer from washington D.C. Mannystul.93@gmail.com
|
53b55eb44753
|
mannystul
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-17
|
2018-09-17 13:31:57
|
2018-09-17
|
2018-09-17 13:33:14
| 1
| false
|
en
|
2018-09-17
|
2018-09-17 13:45:09
| 6
|
16f77e7c6808
| 1.05283
| 0
| 0
| 0
|
Google Spreadsheet is one application made by Google that has capabilities like Microsoft Excel. By using Google Spreadsheet we can edit…
| 5
|
Loading Data Into Google Spreadsheet using Advanced ETL Processor
Google Spreadsheet is an application made by Google with capabilities similar to Microsoft Excel. Using Google Spreadsheet we can edit and input data anywhere and, of course, share the data we have with anyone we want. To use Google Spreadsheet we only need a Gmail account.
But did you know that our ETL Tools software can connect directly to Google Spreadsheets? We are aware that Google Spreadsheet has become a daily necessity, so we created software that can connect to Google Spreadsheets to make your work easier.
Before we can use this feature, we need to do a little setup in our ETL Tools. To set up Google Spreadsheet access, follow these steps:
In the Name Text Box type in a new name for the Google Spreadsheet connection you are about to create
Type in username and password
Test the connection
Click OK to close the Google Spreadsheet connection properties window
You can try it for free right now. Please click on the following link to download it.
https://www.etl-tools.com/active-table-editor/overview.html
A PDF tutorial can be downloaded using the following link:
https://www.etl-tools.com/wiki/ate/start?do=export_pdf
If you prefer wiki format, follow this link:
http://www.etl-tools.com/wiki/
To ask further questions on how to use the ETL-Tools software visit our support forum.
https://www.etl-tools.com/forum/index.html
Facebook
https://www.facebook.com/etl.tools/
Twitter
https://twitter.com/etl_tools
|
Loading Data Into Google Spreadsheet using Advanced ETL Processor
| 0
|
loading-data-into-google-spreadsheet-using-advanced-etl-processor-16f77e7c6808
|
2018-09-17
|
2018-09-17 13:45:09
|
https://medium.com/s/story/loading-data-into-google-spreadsheet-using-advanced-etl-processor-16f77e7c6808
| false
| 226
| null | null | null | null | null | null | null | null | null |
Tech
|
tech
|
Tech
| 142,368
|
etltools
|
http://etl-tools.com/
|
de5171fb879
|
etltools.com
| 0
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-10
|
2018-08-10 01:43:14
|
2018-08-12
|
2018-08-12 23:41:30
| 7
| false
|
en
|
2018-08-22
|
2018-08-22 08:22:51
| 22
|
16f8d177a957
| 7.842453
| 211
| 5
| 1
|
How to teach a pigeon cool tricks
| 5
|
Understanding OpenAI Five
In this blog ( my first blog on medium o_o ), I will explain the challenges in making a Dota bot, appreciate how OpenAI addressed these challenges, and lay out its fundamental flaws. I hope that by explaining how it works in layman terms, we can watch its upcoming matches with the right mindset.
The OpenAI bot is trained like a pigeon: It is conditioned to associate short-term objectives such as last hits and surviving with positive reinforcements. By accomplishing a sufficiently large number of these short-term goals, the bot wins the game by coincidence, without planning its victory from the start.
The OpenAI Five Problem
Every project needs the right problem statement. I believe the OpenAI Five problem statement is as follows: beat a team of humans at Dota in any way possible with a program. This view is both powerful and liberating.
It is powerful as it creates a tangible spectacle and H Y P E. Like how AlphaGo pwnt Go, OpenAI would like to pwn Dota. Anyone would be very proud of an achievement of the form: we are the first ___ that beat humans in ___ .
It is liberating in that it frees OpenAI from any “moral principles” and lets it reach its goal by any means. Should the computer use a mouse and a keyboard? No, let’s give it game APIs. Is there a limit on the total number of games the bot can train on? Sure, how about 180 years of Dota per day.
The key is that we need to evaluate what problem OpenAI actually managed to solve, rather than be misled into thinking OpenAI solved a more challenging problem. From reading their blog, OpenAI is indeed very careful not to overstate their achievements. Yet, it has every reason to hope the public over-hypes those achievements out of proportion with sensationalism.
Learning Is Better Than Programming
Simply put, an AI agent is playing Dota well if it acts rationally for every game-state it encounters. For a human, we understand intuitively that taking last-hits is in general a good action, but taking last-hits when a fight is happening is generally a bad action. However, transferring this intuition to an AI agent has been notoriously challenging since the dawn of AI.
The forefathers of AI erroneously equated AI with programming. “We’ll just hard-code every single behaviour of the agent”, they had thought. The result was mammoth efforts trying to program every possible interaction the agent might encounter. However, there were always some interactions that went unanticipated, and these hard-coded agents failed in spectacular fashion.
Rather than explicitly programming the agent’s behaviours, Reinforcement Learning (RL), a sub-field of AI, opts for a different approach: let the AI interact with the game environment on its own and learn the best actions. In recent years, RL has shown indisputable results, such as defeating the best Go player and besting a wide range of Atari games. The roots of RL stem from behaviourist psychology, which states that all behaviours can be encouraged or discouraged with the proper stimulus (reward / punishment). Indeed, you can teach pigeons how to play ping pong using RL.
Challenges of RL
Applying RL to Dota, however, has some considerable challenges:
Long horizon — A key challenge in RL is that you often only obtain a reward signal after executing a long and complex sequence of actions. In Dota, you need to last hit, use the gold to buy items, pwn scrubs, and make a push before finally destroying the ancient, thereby obtaining a reward from winning. However, at the start the agent knows nothing about Dota and acts at random. The chance of it randomly winning the game is effectively zero. As a result, the agent never observes any positive reinforcement and learns nothing.
In Dota, the game horizon is long. The chance of winning by acting randomly is infinitesimally small
Credit assignment — Even when the ancient is destroyed, which actions are actually responsible for it? Was it hitting the tower, or using a truckload of mangoes at full mana? Judging which specific actions (out of a long sequence of executed actions) are responsible for your victory is the credit assignment problem. Without any prior knowledge, your best bet is a uniform assignment scheme: credit all actions that resulted in a victory, and discredit all actions that resulted in a defeat, hoping the right actions are credited more often on average. This approach works on short games like Pong and 1v1 Dota, and is the optimal approach if you can afford the computation and patience. Indeed, AlphaGo Zero was entirely trained in this fashion, with only a +1 and -1 reward signal for winning and losing the game. For Dota though, there are simply too many actions to account for, and OpenAI decided it is best to coach our pigeon more directly.
Which of the actions the agent took actually contributed to winning the game? Without any game knowledge, the best one can do is credit all the actions evenly. Here, the agent will erroneously associate both last-hitting and dying with winning the game.
The OpenAI Solution — Reward Shaping
One pragmatic way of addressing the challenges of long horizon and credit assignment is reward shaping, where one breaks down the eventual reward into small pieces to directly encourage the right behaviours at each step. The best way of explaining reward shaping is by watching this pigeon training video. The pigeon would never spontaneously spin around, but by rewarding each small step of a turn, the trainer slowly coaxes the pigeon into the correct behaviour. In OpenAI Five, rather than learning that a last hit is indirectly responsible for the ultimate victory, a last hit is directly rewarded with a score of 0.16, whereas dying is punished with a score of -1. The agent immediately learns that last-hitting is good while dying is bad, irrespective of the ultimate outcome of the game. Here is the full list of shaped reward values.
By associating short-term goals like last-hitting and dying with immediate reward and punishment our pigeon can be coaxed into the right behaviour
The challenge of reward shaping is that only certain behaviours can be effectively shaped. Killing and last-hitting have immediate benefits, so intuitive scores can be assigned when these events occur. However, compared to last-hitting, the scores for ward and smoke usage are very nebulous, something even the OpenAI researchers do not have a good answer for.
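As an illustration, a shaped reward signal of this kind boils down to a lookup table summed per game tick. The +0.16 last-hit and -1 death values are cited above; the ward value here is an arbitrary placeholder, precisely because no obviously right number exists for it:

```python
# Shaped reward values: last hit and death scores are from the blog;
# the ward value is a hypothetical placeholder illustrating how hard
# such behaviours are to score
SHAPED_REWARDS = {
    "last_hit": 0.16,
    "death": -1.0,
    "ward_placed": 0.05,  # assumption, not an OpenAI-published value
}

def shaped_reward(events):
    """Sum the immediate rewards for the events observed this game tick."""
    return sum(SHAPED_REWARDS.get(event, 0.0) for event in events)

# Two last hits and a death net out negative, regardless of whether
# the game is eventually won
print(shaped_reward(["last_hit", "last_hit", "death"]))
```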
The OpenAI Muscle — Self Play
Everyone has catch phrases. My adviser tends to say “sounds like a plan!” at the end of our meetings. OpenAI too has catch phrases, and one of them is this: “How can we frame a problem in such a way that, by simply throwing more and more computers at it, the solution gets better and better?” One of the answers they have settled on is self-play; you can watch an explanation by Ilya Sutskever here. The two take-aways from the talk are that self-play turns computes into data and that self-play induces the right curriculum.
By playing against itself in the task of maximising short-term rewards, the pigeon learns how to last hit and not die
Turning computes into data — With self-play, one can spawn thousands of copies of the game environment and dynamically generate training data by interacting with them. Self-play is purely bound by the amount of compute one can muscle. And if computes are muscles, OpenAI is on steroids.
Inducing the right curriculum — A baby in a college-level class will learn nothing. Training an agent is often no different: it is easier to train an agent by first allowing it to accomplish a set of simple tasks, gradually increasing the complexity (a curriculum) until it finally learns the set of complex tasks. Competitive self-play naturally induces a curriculum of increasing difficulty: in the beginning, the task of beating yourself is easy, as you are bad at the game. But as you get better, it gets harder and harder to beat yourself.
Since June 9th, at 180 years of Dota played per day, OpenAI has played 10,000 years of Dota using self-play, which is longer than the existence of human civilisation. Just let that sink in for a second.
The Deceit of Breadcrumbs
To recap, OpenAI carefully constructed a trail of breadcrumbs of short-term rewards that the pigeon, evolved through 10000 years of Dota playing, is an expert at obtaining: Pigeon sees creep, nom nom; Pigeon sees you, kills you, nom nom; Pigeons at your base after killing you, sees your buildings, nom nom and you lose the game. — TL;DR of OpenAI bot
Since the OpenAI agents are trained to maximise short-term rewards, the concept of winning is literally under a smoke, and won’t become visible until the AI is sufficiently close to it. This makes the agent oblivious to long term strategic maneuvers such as forming a push around an important item timing.
The pigeon, occupied with maximising short-term rewards, does not see the victory far in the distance
The Dilemma Of RL
We started by explaining the challenges of building a Dota AI using RL: long horizon and credit assignment. We explained that for a game like Dota, a uniform assignment scheme like that used for AlphaGo Zero and the previous 1v1 Dota bot would not be enough. We went through OpenAI’s decision to explicitly shape the rewards of the learning process, and the resulting danger of short-sightedness. It turns out that you can settle for a middle ground between uniform assignment and shaped rewards through the use of discount factors, which I can explain in detail in a future blog post. For now, we can think of the discount factor as a knob one can tune between 0 and 1 to interpolate between pure uniform assignment and heavily shaped rewards.
鱼与熊掌不可兼得 / You can’t have your cake and eat it too
This is a fundamental trade-off: the more you shape the rewards, the more near-sighted your bot. On the other hand, the less you shape the rewards, the more opportunity your agent has to explore and discover long-term strategies, but it is in danger of getting lost and confused. The current OpenAI bot is trained using a discount factor of 0.9997, which seems very close to 1, but even that only allows for learning strategies roughly 5 minutes long. If the bot loses a game against a late-game hero that managed to farm an expensive item for 20 minutes, the bot would have no idea why it lost.
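A back-of-the-envelope sketch of why a discount factor of 0.9997 translates into a horizon of minutes rather than a full game. The actions-per-second figure is an assumption (one action every few frames), not an OpenAI-confirmed number:

```python
gamma = 0.9997  # discount factor cited for the OpenAI Five bot

# Rewards further than roughly 1 / (1 - gamma) steps in the future are
# discounted into irrelevance
horizon_steps = 1.0 / (1.0 - gamma)

# Assumed action rate: ~7.5 actions per second (one action every 4 frames
# at 30 fps; this rate is an assumption for illustration)
actions_per_second = 7.5
horizon_minutes = horizon_steps / actions_per_second / 60.0

print(f"effective horizon: ~{horizon_steps:.0f} steps, ~{horizon_minutes:.1f} minutes")
```

That lands on the order of several minutes, consistent with the roughly-5-minute figure above.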
I’m pretty tired so I’ll stop now. To summarise, the OpenAI bot is a pigeon. Given enough time, it can discover optimal strategies and movements about 5 minutes in length, but it ultimately cannot formulate a winning strategy from the beginning of the game to the end. Its behaviours are strongly incentivised by the rewards OpenAI has crafted, and it may forsake winning the game in favour of obtaining more and more short-term benefits. For the most part, that is also how humans play Dota; we too must focus intensely on short-term benefits, but that’s not to say we cannot formulate long-term plans.
If you read all this far thank you so much, and give me a high-five! yeah!
— evan
Follow-ups: We had a nice discussion of this piece on Reddit in /r/Dota2. One neat idea is to use the shaped rewards as a “coach” to give the bot context, and, by altering this context, make the bot behave differently at test time. Another is on how one should judge whether the bots have truly surpassed humans.
|
Understanding OpenAI Five
| 1,545
|
understanding-openai-five-16f8d177a957
|
2018-08-22
|
2018-08-22 08:22:51
|
https://medium.com/s/story/understanding-openai-five-16f8d177a957
| false
| 1,800
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Evan Pu
|
PhD student at Massachusetts Institute of Technology in Program Synthesis and Machine Learning
|
3a47ad091676
|
evanthebouncy
| 100
| 1
| 20,181,104
| null | null | null | null | null | null |
0
|
toxic 15294
severe_toxic 1595
obscene 8449
threat 478
insult 7877
identity_hate 1405
| 1
|
29fe98de7b5
|
2018-05-23
|
2018-05-23 01:50:25
|
2018-05-23
|
2018-05-23 03:43:18
| 8
| false
|
en
|
2018-06-19
|
2018-06-19 02:44:58
| 3
|
16fa5c0638f8
| 3.88805
| 1
| 0
| 0
|
So previously we forayed into the world of Multi-Label Classification. If you want to check that out, click here.
| 3
|
Experimenting with Multi-Label Prediction
So previously we forayed into the world of Multi-Label Classification. If you want to check that out, click here.
Well, this time we are going to get down and dirty, because what better way to know what all these are than to actually try them out ourselves?
I was hunting around for a problem/dataset that could be our practice range. It turns out Kaggle recently had a competition that fits the bill exactly!
Toxic Comments Classification Challenge
Getting Started — Data Exploration
Well, the premise was simple enough: take a comment (plucked from Wikipedia’s talk page edits) and determine the different levels of cancer (types of toxicity, in proper terms) that the comment exhibits.
This is what the data set looks like. (159,571 rows, 8 columns)
First let’s try to understand what we have here a little better before diving in with the multi-label stuff.
Label Count
How many of the comments actually fall under each label
Multi-Label Count
How many comments have 2 or more labels associated?
As can be seen, a good chunk of the comments do not even have any labels associated with them.
(People are not as evil as you think!)
Getting Started — For Real
Since we are dealing with text processing, I have always read that you need to do some cleaning before training your model: removing stopwords and extra whitespace, lemmatizing, etc.
Well guess what, we are not going to do any of that! (you WHUT?)
My goal here was just to try out a few different classifier models and see which one worked better. If we feed them all the same baseline inputs, then it should still be a fair competition right?
Naive Bayes
This was the first one I tried.
The Pipeline function provided by sklearn makes the necessary transformation to your data so that it can be fed into the Classifier.
OneVsRestClassifier is a wrapper so that our Naive Bayes can be used in Binary Relevance style to predict the multi-labels.
Score for NaiveBayes
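A minimal sketch of the Naive Bayes pipeline, with a few hypothetical comments and two of the six labels standing in for the real Kaggle data:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import MultinomialNB

# Hypothetical stand-ins for the comment text and the label matrix
comments = [
    "you are awful and nasty",
    "have a nice day",
    "awful awful comment",
    "nice work, thanks",
]
labels = np.array([[1, 1], [0, 0], [1, 0], [0, 0]])  # columns: toxic, insult

# Pipeline: raw text -> token counts -> tf-idf -> one-vs-rest Naive Bayes,
# i.e. one independent binary classifier per label (Binary Relevance)
model = Pipeline([
    ("vect", CountVectorizer()),
    ("tfidf", TfidfTransformer()),
    ("clf", OneVsRestClassifier(MultinomialNB())),
])
model.fit(comments, labels)
print(model.predict(["what an awful thing to say"]))
```

Swapping `MultinomialNB()` for `LinearSVC()` or `LogisticRegression()` yields the next two models with no other code changes.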
LinearSVC
The beauty of using Pipeline and sklearn is that the code to run the different models is basically the same! Save for the classifier that you put into the pipeline.
Score for LinearSVC
Logistic Regression
Score for Logistic Regression
The run times for the above 3 models were extremely fast; nothing took more than a minute to run.
The following 2, however, are a different story.
GradientBoostingClassifier
If you are like me and have no idea what Gradient Boosting is, check out this Kaggle Master explaining it in (hopefully) simpler terms.
What I feel is the basic idea behind Gradient Boosting (taken from the Kaggle Master):
Fit a model to the data. — F1(x) = y
Fit a model to the residuals. — h1(x)=y-F1(x)
Create a new model. — F2(x) = F1(x)+h1(x)
So I guess it is a form of gradient descent, where the model tries to learn/fit the residuals in order to make the weak learner a better model.
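The three steps can be sketched by hand on a toy regression problem (hypothetical data; the real GradientBoostingClassifier also applies a learning rate and loss-specific gradients, but fitting residuals is the core idea):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hypothetical toy data
rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=200)

# Step 1: fit an initial model F1(x) to y (here, just the mean of y)
F = np.full_like(y, y.mean())
initial_mse = np.mean((y - F) ** 2)

# Steps 2 and 3, repeated: fit a small tree h to the residuals y - F(x),
# then form the new model F <- F + h(x)
for _ in range(50):
    residuals = y - F
    h = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    F = F + h.predict(X)

print(f"MSE: {initial_mse:.3f} -> {np.mean((y - F) ** 2):.3f}")
```

Each round, the ensemble corrects what the previous rounds got wrong, which is why the training error keeps shrinking.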
Creating a pipeline for the GradientBoostingClassifier
Results for GBC
This part took 1 hour 27 minutes to run, a very significant difference from the earlier models we tried. Worst part is, the results are not as good as those of simpler models like LinearSVC.
RandomForest
Random Forest is an ensemble of classification trees, where the prediction is derived via a voting mechanism from an array of decision trees. The forest classifier is a black box though; I have no idea what the classification trees look like behind the scenes.
This took 39 minutes to run, half the runtime of GBC, but still significantly longer than the simpler models.
The good news is, the accuracy of the Random Forest seems to beat the rest.
Score for RandomForest
Conclusion
Just a simple experimentation this time round.
What i learnt
Complex does not necessarily mean better. The GBC did not fare the best even though it took the longest to train, and I have heard gradient boosting is behind a hot method now (XGBoost).
Time vs accuracy. In some cases, the efficiency of training and running the model may be an important factor to consider. What is the time allowance in your use case? Practicality vs reliability.
Anyways, congratulations to RANDOM FOREST!
You are the winner! (for now… until I get the chance to try out something else that can beat your record)
|
Experimenting with Multi-Label Prediction
| 1
|
experimenting-with-multi-label-prediction-16fa5c0638f8
|
2018-06-19
|
2018-06-19 02:45:01
|
https://medium.com/s/story/experimenting-with-multi-label-prediction-16fa5c0638f8
| false
| 730
|
My musings and experiments. My notebook
| null | null | null |
KenLok
|
ken333136@gmail.com
|
kenlok
|
DEEP LEARNING,DATA SCIENCE,READING,KENLOK
| null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
ken lok
| null |
a46df4587645
|
ken333136
| 1
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-22
|
2018-05-22 07:17:56
|
2018-05-22
|
2018-05-22 07:22:27
| 3
| false
|
en
|
2018-05-22
|
2018-05-22 07:22:27
| 2
|
16fbf3acc15b
| 2.882075
| 0
| 0
| 0
|
The age of AI driven automation is well and truly upon us. Pizza Hut is experimenting with robots called Pepper and virtual assistants like…
| 5
|
Do marketers need to worry about job losses in AI dominated future ? (Part 1)
The age of AI driven automation is well and truly upon us. Pizza Hut is experimenting with robots called Pepper and virtual assistants like Siri and Alexa are a reality now.
According to Pizza Hut the experiment with Pepper will not only make the customer service faster and more efficient but will also make it easier for people to customize their orders. The company expects a reduction in customers’ wait time for orders and far superior and personalized user experience. This does indeed represent a threat to the first line of marketing personnel.
It is equally true, though, that Pepper has not been as successful outside Japan: it is sometimes available in hotel lobbies as a virtual concierge, and it was actually fired from a Scottish grocery for less than satisfactory performance! But this can be seen as a step in the learning curve for AI-controlled robotics, and as we all know, that learning curve is indeed quite sharp.
Marketeers are confused about how to react to the implications of AI and are yet to clearly understand its impact on their jobs, because AI has the potential to create big winners across the marketing segments as well as big losers at the same time.
AI more important to humanity than electricity or fire
Google CEO Sundar Pichai has called AI “more important to humanity than electricity or fire.” Marketing is seeing a shift from the mobile-first age to the AI-powered age, but most marketeers are still not certain about the implications of AI for their domain.
Will AI take away marketing jobs?
Along with this comes the dreaded question — Will the rise of AI have an adverse impact on marketing jobs?
Yes, indeed some low-level marketing jobs will be lost to robotics and automation. Companies may have no choice but to automate and use AI for much of their mid-level marketing functions if they want to survive.
Change, adapt and evolve for artificially intelligent future
But marketing as a whole will not only survive but thrive in the age of AI and automation provided it changes, adapts and evolves to fit in an artificially intelligent future.
In fact, AI may make the life of a marketeer easier, as he will have the perfect data at hand to prove what kind of marketing works and what kind doesn’t. He could use the power of AI for lead scoring, as exemplified by Harley-Davidson, which increased sales leads by over 200 per cent using an AI-powered system that scored leads “in a much more intelligent way.”
According to Outsourced CMO founder Vineet Arya, marketeers who can use big data solutions and other AI techniques to analyse customers, their tastes, preferences, and buying patterns quickly and efficiently stand to gain from the advances in AI.
The use of AI will give marketers the tool to make more relevant and better targeted advertisements that can be pushed to different categories of customers.
Already, the large amount of data churned out by artificial intelligence marketing solutions gives a marketeer valuable insights into an ever-increasing number of relevant metrics. This allows them to gauge important trends and customer behavior patterns, facilitating the creation of personalized marketing campaigns and strategies.
Customer-facing activities including marketing automation, support, and service in addition to IT and supply chain management are predicted to be the most affected areas by AI in the next five years.
To be continued in Part 2 ….
Originally published at blog.outsourcedcmo.in.
|
Do marketers need to worry about job losses in AI dominated future ? (Part 1)
| 0
|
do-marketers-need-to-worry-about-job-losses-in-ai-dominated-future-part-1-16fbf3acc15b
|
2018-05-22
|
2018-05-22 07:22:28
|
https://medium.com/s/story/do-marketers-need-to-worry-about-job-losses-in-ai-dominated-future-part-1-16fbf3acc15b
| false
| 618
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Vineet Arya
|
Founder of a “Outsourced CMO” Helping startups / MSME / businesses to hire 20+ yrs experienced CMO to work at an affordable salaries. Visit www.outsourcedcmo.in
|
2f6856239c3b
|
outsourcedcmo
| 430
| 452
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-11
|
2017-11-11 19:55:27
|
2017-11-11
|
2017-11-11 20:25:35
| 19
| false
|
en
|
2017-11-24
|
2017-11-24 17:03:20
| 1
|
16fdfa83068d
| 5.967925
| 0
| 0
| 0
|
Self driving Cars/Trucks will be every where in few years this is my attempt to solve the basic puzzle of self driving vehicles i.e to…
| 3
|
Detecting Lane Lines on the Road for SDC
Self-driving cars and trucks will be everywhere in a few years. This is my attempt to solve the basic puzzle of self-driving vehicles: detecting the lanes to drive in.
So if the camera/lidar produces the below image for the vehicle
We need to transform the image to the one below
Let’s do it.
The following techniques are used:
Color Selection
Canny Edge Detection
Region of Interest Selection
Hough Transform Line Detection
Finally, I applied all the techniques to process video clips to find lane lines in them.
Let’s load the test images
I use these images to test my pipeline (a series of image processing) to find lane lines on the road.
Lane lines are white or yellow. A broken white lane is a series of alternating dots and short lines, which we need to detect as one line.
Sequence followed was
Apply grayscale()
The first processing step is to convert the color image to greyscale, effectively downgrading the color space from three dimensions to one. It’s much easier (and more effective) to manipulate the image in only one dimension: this dimension is the “darkness” or “intensity” of the pixel, with 0 representing black, 255 representing white, and 126 representing a middle grey.
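The article’s grayscale() helper isn’t reproduced here, but the idea can be sketched in plain Python with the standard luminance weights (a hypothetical stand-in, not the author’s exact code, which in practice would call cv2.cvtColor):

```python
# Hypothetical grayscale sketch: collapse each (r, g, b) pixel to one
# intensity using the standard luminance weights.
def to_grayscale(rgb_image):
    """rgb_image: list of rows, each row a list of (r, g, b) tuples."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in rgb_image
    ]

image = [[(255, 255, 255), (0, 0, 0)], [(255, 0, 0), (0, 0, 255)]]
print(to_grayscale(image))  # white -> 255, black -> 0
```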
Output is
Apply gaussian_blur()
The next step is to blur the image using a Gaussian Blur.
By applying a slight blur, we can remove the highest-frequency information (a.k.a noise) from the image, which will give us “smoother” blocks of color that we can analyze.
Once again, the underlying math of a Gaussian Blur is very basic: a blur just takes weighted averages of neighboring pixels (this averaging process is a type of kernel convolution, which is an unnecessarily fancy name for what I’m about to explain).
Basically, to generate a blur, you must complete the following steps:
Select a pixel in the photo and determine its value
Find the values for the selected pixel’s local neighbors (we can arbitrarily define the size of this “local region”, but it’s typically fairly small)
Take the value of the original pixel and the neighbor pixels and average them together using some weighting system
Replace the value of the original pixel with the outputted averaged value
Do this for all pixels
This process is essentially saying “make all the pixels more similar to the pixels nearby”, which intuitively sounds like blurring.
For a Gaussian Blur, we are simply using the Gaussian Distribution (i.e. a bell curve) to determine the weights in Step 3 above. This means that the closer a pixel is to the selected pixel, the greater its weight.
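The steps above can be sketched for a single pixel with a fixed 3 x 3 Gaussian-like kernel (a hypothetical simplification of the article’s gaussian_blur() helper, which in practice would wrap something like cv2.GaussianBlur):

```python
# Gaussian-like weights: the center pixel counts most, corners least.
KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]  # weights sum to 16

def blur_pixel(img, y, x):
    """Weighted average of the 3x3 neighborhood around (y, x)."""
    total = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            total += KERNEL[dy + 1][dx + 1] * img[y + dy][x + dx]
    return total // 16

img = [[0, 0, 0],
       [0, 160, 0],
       [0, 0, 0]]
print(blur_pixel(img, 1, 1))  # bright spot smoothed toward neighbors: 40
```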
Output is
Apply canny()
Now that we have a greyscaled and Gaussian Blurred image, we are going to try to find all the edges in this photo.
An edge is simply an area in the image where there is a sudden jump in value.
For example, there is a clear edge between the grey road and the dashed white line, since the grey road may have a value of something like 126, the white line has a value close to 255, and there is no gradual transition between these values.
Again, the Canny Edge Detection filter uses very simple math to find edges:
Select a pixel in the photo
Identify the value for the group of pixels to the left and the group of pixels to the right of the selected pixel
Take the difference between these two groups (i.e. subtract the value of one from the other).
Change the value of the selected pixel to the value of the difference computed in Step 3.
Do this for all pixels.
So, pretend that we are only looking at the one pixel to the left and to the right of the selected pixel, and imagine these are the values: (Left pixel, selected pixel, right pixel) = (133, 134, 155). Then, we would compute the difference between the right and left pixel, 155–133 = 22, and set the new value of the selected pixel to 22.
If the selected pixel is an edge, the difference between the left and right pixels will be a greater number (closer to 255) and therefore will show up as white in the outputted image. If the selected pixel isn’t an edge, the difference will be close to 0 and will show up as black.
Of course, you may have noticed that the above method would only find edges in the vertical direction, so we must do a second process where we compare the pixels above and below the selected pixel to address edges in the horizontal direction.
These differences are called gradients, and we can compute the total gradient by essentially using the Pythagorean Theorem to add up the individual contributions from the vertical and horizontal gradients. In other words, we can say that the total gradient² = the vertical gradient² + the horizontal gradient².
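That gradient combination can be sketched as follows (a simplified, hypothetical stand-in for what the canny() helper computes internally; the real cv2.Canny also does thresholding and edge thinning):

```python
import math

def gradient_magnitude(img, y, x):
    """Combine horizontal and vertical differences via the Pythagorean Theorem."""
    gx = img[y][x + 1] - img[y][x - 1]   # horizontal difference
    gy = img[y + 1][x] - img[y - 1][x]   # vertical difference
    return math.hypot(gx, gy)            # sqrt(gx**2 + gy**2)

img = [[10, 10, 10],
       [10, 10, 255],
       [10, 10, 10]]
print(gradient_magnitude(img, 1, 1))  # strong horizontal edge: 245.0
```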
Output is below
Apply region_of_interest() Masking
This next step is very simple: A mask is created that eliminates all parts of the photo we assume not to have lane lines.
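A minimal sketch of this masking step (a hypothetical simplification of the article’s region_of_interest() helper, which in practice fills a polygon with cv2.fillPoly): keep pixels inside the assumed region, zero out everything else.

```python
def region_of_interest(img, in_region):
    """in_region(y, x) -> True for pixels we assume may contain lane lines."""
    return [
        [px if in_region(y, x) else 0 for x, px in enumerate(row)]
        for y, row in enumerate(img)
    ]

img = [[9, 9, 9], [9, 9, 9]]
masked = region_of_interest(img, lambda y, x: y >= 1)  # keep bottom row only
print(masked)  # -> [[0, 0, 0], [9, 9, 9]]
```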
We get this…
Apply draw_lines()
Apply hough_lines()
The final step is to use the Hough transform to find the mathematical expression for the lane lines.
The math behind the Hough transform is slightly more complicated than all the weighted average stuff we did above, but only barely.
Here’s the basic concept:
The equation for a line is y = mx + b, where m and b are constants that represent the slope of the line and the y-intercept of the line respectively.
Essentially, to use the Hough transform, we determine some 2-dimensional space of m’s and b’s. This space represents all the combinations of m’s and b’s we think could possibly generate the best-fitting line for the lane lines.
Then, we navigate through this space of m’s and b’s, and for each pair (m,b), we can determine an equation for a particular line of the form y = mx + b. At this point, we want to test this line, so we find all the pixels that lie on this line in the photo and ask them to vote if this is a good guess for the lane line or not. The pixel votes “yes” if it’s white (a.k.a part of an edge) and votes “no” if it’s black.
The (m,b) pair that gets the most votes (or in this case, the two pairs that get the most votes) are determined to be the two lane lines.
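The voting procedure can be sketched with a toy brute-force search over (m, b) pairs. This is illustrative only; real implementations such as cv2.HoughLinesP use the rho/theta parameterization instead of y = mx + b so that vertical lines can be represented.

```python
def hough_vote(edge_pixels, m_candidates, b_candidates):
    """Each edge pixel votes for every (m, b) line it lies on; return the winner."""
    votes = {}
    for m in m_candidates:
        for b in b_candidates:
            votes[(m, b)] = sum(1 for (x, y) in edge_pixels if y == m * x + b)
    return max(votes, key=votes.get)

edges = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 2)]  # mostly y = 2x + 1, one stray pixel
print(hough_vote(edges, m_candidates=[1, 2, 3], b_candidates=[0, 1, 2]))  # -> (2, 1)
```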
Here’s the output of the Hough Transform…
Then you interpolate the lines to get the proper output; the logic is in the code given at the end of this write-up.
The images below are the output, as well as the video.
Source code: https://github.com/pandit10/self-driving-cars
|
Detecting Lane Lines on the Road for SDC
| 0
|
detecting-lane-lines-on-the-road-for-sdc-16fdfa83068d
|
2017-11-24
|
2017-11-24 17:03:22
|
https://medium.com/s/story/detecting-lane-lines-on-the-road-for-sdc-16fdfa83068d
| false
| 1,131
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Somesh Pandit
|
Emerging technology enthusiast.
|
7a635e4feb2
|
somesh.pandit
| 0
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-31
|
2018-05-31 03:04:26
|
2018-05-31
|
2018-05-31 03:09:43
| 4
| false
|
en
|
2018-05-31
|
2018-05-31 12:57:28
| 5
|
16ff29c47c98
| 2.624528
| 1
| 0
| 0
|
Exciting year right ! From the inspiration of Elon Musk and Mark Zuckerberg, AI is affecting us in magical ways.
| 5
|
How AI is changing our Lives?
Exciting year, right? From the inspiration of Elon Musk and Mark Zuckerberg, AI is affecting us in magical ways.
The year 2018 has been the biggest storm to impact our lives. Developments in artificial intelligence prove that it has the potential to change our world in many great ways.
AI technologies hold the promise of enhancing future societies in a number of ways. Here are a few….!!!
1. Virtual Assistance
Robots have been a thing of fantasy for many years. They are much faster, smarter and better than us. As personal assistants, many ‘AI Beings’ take care of the daily needs of a human. From wishing you good morning to wishing you good night, they do almost everything in between..!! This sort of intelligence is what I would call a “Perfect companion”.
2. AI in Healthcare
So far 2018 has been awesome, with Artificial Intelligence taking a new turn and entering the field of medicine. The existence of AI has made managing medical records and other data much easier..!! Not only that, but it has made analyzing tests, X-Rays, CT scans, data entry, and other mundane tasks faster and of course more accurate…!!!
These are just a sample of the solutions AI is offering the healthcare industry. As innovation pushes the capabilities of automation and digital workforces, more solutions to save time, lower costs, and increase accuracy will be possible.
3. Facial Recognition and AI
The ability to recognize faces has long been a benchmark for artificial intelligence..!! Who knew that the possibility of using facial recognition as a security measure for unlocking our daily devices and identifying ourselves to our “oh so precious” iPhone could one day be as easy as snapping a quick selfie…!! Well Indeed, thanks to the makers of Artificial Intelligence..!!
4. Gadgets near you…every damn day..!!
Artificial intelligence (AI) might seem like science fiction, but you might be surprised to find out that you’re already using it. AI has a huge effect on your life, whether you’re aware of it or not..!!
Video game AI is one of the instances of AI that most people are probably familiar with; it has been used for a very long time, since the very first video games, in fact. So next time you play GTA 5 or God of War, don’t forget that you are surrounded by AI… (Hehehe)
Your smartphone, your car, your bank, and your house all use artificial intelligence on a daily basis; sometimes it’s obvious what it’s doing, like when you ask Siri to get you directions to the nearest gas station. Sometimes it’s less obvious, like when you make an abnormal purchase on your credit card and don’t get a fraud alert from your bank. AI is everywhere, and it’s making a huge difference in our lives every day.
For any more updates about chat bots , education, science and technology do visit
https://medium.com/@contacttinker
Click here to use Tinker Bot : http://m.me/tinkerbot.in
Follow us on Facebook: https://www.facebook.com/tinkerbot.in/
Twitter ID: https://twitter.com/LearnTinker
Instagram ID: https://www.instagram.com/tinker.bot/
By Vinod Harapanahalli
|
How AI is changing our Lives?
| 2
|
how-ai-is-changing-our-lives-16ff29c47c98
|
2018-06-03
|
2018-06-03 00:39:02
|
https://medium.com/s/story/how-ai-is-changing-our-lives-16ff29c47c98
| false
| 510
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Tinker Bot Labs
|
An Ai based chatbot that helps to learn new things by chatting🤖
|
9877150e439
|
contacttinker
| 30
| 36
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-27
|
2018-04-27 05:17:59
|
2018-04-27
|
2018-04-27 05:58:26
| 1
| false
|
en
|
2018-04-27
|
2018-04-27 06:11:58
| 4
|
16ff72b2915
| 3.85283
| 1
| 0
| 0
|
Here are the top fifteen picks from the NFL Draft and how I predicted them. See my previous article for more information and how I am…
| 5
|
Results for Using Twitter Sentiment Analysis to Predict the NFL Draft
Here are the top fifteen picks from the NFL Draft and how I predicted them. See my previous article for more information and how I am scoring each pick.
Photo by Adrian Curiel on Unsplash
1. Cleveland Browns: Baker Mayfield, QB (Oklahoma)
My projection: Josh Allen, QB (Wyoming)
Pick differential for Allen: +6
Since the model predicted the position but not the right player, half a point is awarded.
2. New York Giants: Saquon Barkley, RB (Penn State)
My projection: Saquon Barkley, HB (Penn State)
Pick differential for Barkley: 0
The model guessed this pick correctly. Two points are awarded.
3. New York Jets: Sam Darnold, QB (USC)
My projection: Josh Rosen, QB (UCLA)
Pick differential for Rosen: +7
Right collegiate city, wrong university. Half a point is awarded for getting the correct position.
4. Cleveland Browns: Denzel Ward, CB (Ohio State)
My projection: Bradley Chubb, DE (NC State)
Pick differential for Chubb: +1
No points awarded here. The team did not pick the right player or right position.
5. Denver Broncos: Bradley Chubb, DE (NC State)
My projection: Baker Mayfield, QB (Oklahoma)
Pick differential for Mayfield: -4
No points.
6. Indianapolis Colts: Quenton Nelson, G (Notre Dame)
My projection: Roquan Smith, LB (Georgia)
Pick differential for Smith: +2
I should have done analysis for Nelson here. The Colts’ offensive line is currently atrocious. No points.
7. Buffalo Bills (trade up with Tampa Bay): Josh Allen, QB (Wyoming)
My projection for the Bills’ pick: Sam Darnold, QB (USC)
My projection for the 7th pick: Derwin James, S (Florida State)
Pick differential for James: +10
Half a point is awarded for guessing the position they would pick.
8. Chicago Bears: Roquan Smith, LB (Georgia)
My projection: Quenton Nelson, G (Notre Dame)
Pick differential for Smith: -2
No points.
9. San Francisco 49ers: Mike McGlinchey, OT (Notre Dame)
My projection: Calvin Ridley, WR (Alabama)
Pick differential for Ridley: +17
No points.
10. Arizona Cardinals (trade up with Oakland Raiders): Josh Rosen, QB (UCLA)
My projection for the Arizona Cardinals: Courtland Sutton, WR (SMU)
My projection for the 10th pick: Rashaan Evans, LB (Alabama)
Pick differential for Evans: +12
No points.
11. Miami Dolphins: Minkah Fitzpatrick, S (Alabama)
My projection: Da’Ron Payne, DT (Alabama)
Pick differential for Payne: +2
No points.
12. Tampa Bay Buccaneers (trade down with Buffalo): Vita Vea, DT (Washington)
My projection for the Bucs: Derwin James, S (Florida State)
My projection for the 12th pick: Sam Darnold, QB (USC)
Pick differential for Darnold: -9
No points.
13. Washington Redskins: Da’Ron Payne, DT (Alabama)
My projection: Denzel Ward, CB (Ohio State)
Pick differential for Ward: -9
No points.
14. New Orleans Saints (trade up with Green Bay): Marcus Davenport, DE (Texas-San Antonio)
My projection for the Saints: none.
My projection for the Packers at 14: Minkah Fitzpatrick, S (Alabama)
Pick differential for Fitzpatrick: -3
No points.
15. Oakland Raiders (trade down with Arizona): Kolton Miller, OT (UCLA)
My projection for the Raiders: Rashaan Evans, LB (Alabama)
My projection for the 15th pick: Courtland Sutton, WR (SMU)
Pick differential for Sutton: UNKNOWN (has not been selected in the first round)
No points.
Conclusion
Using tweet sentiment analysis to predict the NFL draft does not work. Out of a possible thirty points (nailing every single pick correctly), the model only obtained three and a half points, a stunning failure. Only one pick was predicted correctly: Saquon Barkley going to the Giants. None of the quarterbacks were slotted to the correct team: my model had the Browns selecting Josh Allen with the first pick (he went seventh to the Bills). While Josh Rosen was predicted to go third to the Jets, the Jets opted to draft Sam Darnold instead (who I had going to the Bills). Josh Rosen actually fell to the tenth spot in the draft to the Cardinals, who traded up. Baker Mayfield was predicted to go fifth to Denver, but was selected first overall by Cleveland. My order for the top four quarterbacks being drafted was Josh Allen, Josh Rosen, Baker Mayfield, and Sam Darnold. In reality, the order was Baker Mayfield, Sam Darnold, Josh Allen, and Josh Rosen. Not only did I have the order for quarterbacks completely wrong, but the first two quarterbacks I predicted to be chosen were in fact the last two chosen amongst the top four.
The model was also off on the position for most players to be drafted. Baker Mayfield was drafted four spots earlier than predicted. Calvin Ridley was drafted seventeen spots after the model predicted him to go at pick nine.
How does this compare to other mock drafts? Walter from Walterfootball.com obtained a staggering twelve points against my three and a half. He correctly predicted Baker Mayfield, Saquon Barkley, and Sam Darnold to be the top three picks. He correctly predicted the Bills to trade up for Josh Allen, but had them trading up to the fourth pick instead of the 7th pick (one point). Bradley Chubb was correctly mocked at five and he correctly had Arizona choosing Rosen (although at pick 8 and not at pick 10). He had the Raiders picking an offensive tackle, although they didn’t pick McGlinchey and they did not pick at the 10th selection. Denzel Ward was selected to the Browns, although at pick 12 and not pick 4. The Redskins selected a DT at 13 but it was not Vita Vea. Mel Kiper, ESPN’s NFL Draft Guru, scored nine points.
Tweet sentiment analysis cannot predict what team a player will be drafted to or the pick in the draft that will be used to select a player. NFL draft experts are a lot more accurate. Surprisingly enough, the opinion of a fan base does not correlate with the actions of an NFL team’s front office.
|
Results for Using Twitter Sentiment Analysis to Predict the NFL Draft
| 1
|
results-for-using-twitter-sentiment-analysis-to-predict-the-nfl-draft-16ff72b2915
|
2018-04-27
|
2018-04-27 09:40:21
|
https://medium.com/s/story/results-for-using-twitter-sentiment-analysis-to-predict-the-nfl-draft-16ff72b2915
| false
| 968
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Ajay Jain
|
Statistics & Computer Science and Political Science student at the University of Illinois. Interested in political analytics and data science.
|
83ff4c27c062
|
theajayjain
| 47
| 62
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
7219b4dc6c4c
|
2018-09-25
|
2018-09-25 12:01:54
|
2018-09-25
|
2018-09-25 12:16:29
| 13
| false
|
en
|
2018-09-27
|
2018-09-27 17:48:33
| 2
|
1700548a3093
| 6.309434
| 36
| 0
| 2
|
Learn the concepts of convolutions and pooling in this tutorial by Joshua Eckroth, an assistant professor of computer science at Stetson…
| 5
|
Short Introduction to Convolutions and Pooling: Deep Learning 101!
Learn the concepts of convolutions and pooling in this tutorial by Joshua Eckroth, an assistant professor of computer science at Stetson University.
Deep learning is a vast field that’s generating massive interest these days. It’s popularly used in research but has slowly gained market penetration in the industry in the last few years. But what essentially is deep learning?
Deep learning refers to neural networks with lots of layers. It’s still quite a buzzword, but the technology behind it is real and quite sophisticated. The term has been rising in popularity, along with machine learning and artificial intelligence, as shown in the below Google trend chart:
The primary advantage of deep learning is that combining more data with computational power often produces more accurate results, without the significant effort required for engineering tasks.
In this article, we will take a quick look at the concepts of convolutions and pooling. This article assumes you have a basic knowledge of basic deep learning terms.
Deep learning methods
Deep learning refers to several methods which may be used in a particular application. These methods include convolutional layers and pooling. Simpler and faster activation functions, such as ReLU, return the neuron’s weighted sum if it’s positive, and zero if negative.
Regularization techniques, such as dropout, randomly ignore weights during the weight-update phase to prevent overfitting. GPUs are used for training that is often on the order of 50 times faster, because they’re optimized for the matrix calculations used extensively in neural networks, and memory units power applications such as speech recognition.
Several factors have contributed to deep learning’s dramatic growth in the last five years. Large public datasets are now available, such as ImageNet, which holds millions of labeled images covering a thousand categories, and Mozilla’s Common Voice Project, which contains speech samples. Such datasets satisfy the basic requirement of deep learning: lots of training data. GPUs, while still focused on gaming, have also moved into deep learning and clusters. This helps make large-scale deep learning possible.
Advanced software frameworks, released as open source and undergoing rapid improvement, are also available to everyone. These include TensorFlow, Keras, Torch, and Caffe. Deep architectures that achieve state-of-the-art results, such as Inception-v3, are being used on the ImageNet dataset. This network has approximately 24 million parameters, and a large community of researchers and software engineers is quickly translating research prototypes into open source software that anyone can download, evaluate, and extend.
Convolutions and pooling
Take a closer look at two fundamental deep learning techniques, namely convolution and pooling. Throughout this section, images are used to illustrate these concepts. Nevertheless, what you’ll be studying can also be applied to other data, such as audio signals.
Convolution
Take a look at the following photo and begin by zooming in to observe the pixels:
Convolutions occur per channel. An input image generally consists of three channels: red, green, and blue. The next step is to separate these three colors. The following diagram depicts this:
A convolution is defined by a kernel. In this image, a 3 x 3 kernel is applied. Every kernel contains a number of weights. The kernel slides around the image and computes the weighted sum of the pixels under the kernel, each multiplied by its corresponding kernel weight:
A bias term is also added. A single number, the weighted sum, is produced for each position that the kernel slides over. The kernel’s weights start off with any random value and change during the training phase. The following diagram shows three examples of kernels with different weights:
You can see how the image transforms differently depending on the weights. The rightmost image highlights the edges, which is often useful for identifying objects. The stride helps you understand how the kernel slides across the image. The following diagram is an example of a 1 x 1 stride:
The kernel moves by one pixel to the right and then down. Throughout this process, the center of the kernel will hit every pixel of the image whilst overlapping the other kernels. It is also observed that some pixels are missed by the center of the kernel. The following image depicts a 2 x 2 stride:
In certain cases, it is observed that no overlapping takes place. To prove this, the following diagram contains a 3 x 3 stride:
In such cases, no overlap takes place because the kernel is the same size as the stride.
However, the borders of the image need to be handled differently. To effect this, you can use padding. This helps avoid extending the kernel across the border. Padding consists of extra pixels, which are always zero. They don’t contribute to the weighted sum. The padding allows the kernel’s weights to cover every region of the image while still letting the kernels assume that the stride is 1. The kernel produces one output for every region it covers.
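The weighted-sum-plus-bias step described above can be sketched for one kernel position as follows (the kernel weights and bias here are illustrative, not taken from the article’s figures):

```python
# An edge-highlighting kernel, as in the article's rightmost example.
KERNEL = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]
BIAS = 0

def apply_kernel(channel, y, x):
    """Weighted sum of the 3x3 region centered at (y, x), plus a bias term."""
    acc = BIAS
    for dy in range(3):
        for dx in range(3):
            acc += KERNEL[dy][dx] * channel[y + dy - 1][x + dx - 1]
    return acc

flat = [[5] * 3 for _ in range(3)]
print(apply_kernel(flat, 1, 1))  # flat region -> 0 (no edge)
```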
Hence, if you have a stride that is greater than 1, you’ll have fewer outputs than there were original pixels. In other words, the convolution helped reduce the image’s dimensions. The formula shown here tells us the dimensions of the output of a convolution:
It is general practice to use square images, kernels, and strides for simplicity. This lets us focus on only one dimension, which will be the same for the width and height. In the following diagram, a 3 x 3 kernel with a (3, 3) stride is depicted:
The preceding calculation gives the result of 85 width and 85 height. The image’s width and height have effectively been reduced by a factor of three from the original 256. Rather than using a large stride, you can let the convolution hit every pixel using a stride of 1. This will help attain a more practical result. You also need to make sure that there is sufficient padding.
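The arithmetic can be checked in a few lines, assuming the standard output-size formula out = (W - K + 2*P) // S + 1 (the article’s formula image isn’t reproduced here):

```python
def conv_output_size(width, kernel, stride, padding=0):
    """Standard convolution output-size formula: (W - K + 2P) // S + 1."""
    return (width - kernel + 2 * padding) // stride + 1

# The 256-pixel example from the text: 3x3 kernel with a (3, 3) stride.
print(conv_output_size(256, kernel=3, stride=3))  # -> 85
# Stride 1 with padding 1 preserves the original dimensions.
print(conv_output_size(256, kernel=3, stride=1, padding=1))  # -> 256
```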
However, it is beneficial to reduce the image dimensions as you move through the network. This helps the network train faster as there will be fewer parameters. Fewer parameters imply a smaller chance of over-fitting.
Pooling
You may often use max or average pooling between convolutions to reduce dimensionality instead of varying the stride length. Pooling looks at a region, say 2 x 2, and keeps only the largest or the average value. The following image depicts pooling over 2 x 2 regions:
A pooling region always moves with a stride equal to the pool size. This avoids overlapping. Here’s a relatively shallow convolutional neural network (CNN) representation:
Source: cs231.github.io, MIT License
You can observe that the input image is subjected to various convolutions and pooling layers with ReLU activations between them before finally arriving at a traditionally fully connected network. The fully connected network, though not depicted in the diagram, is ultimately predicting the class.
In this example, as in most CNNs, you’ll have multiple convolutions at each layer. Here, you’ll observe 10, which are depicted as rows. Each of these 10 convolutions has their own kernels in each column so that different convolutions can be learned at each resolution. The fully connected layers on the right will determine which convolutions best identify the car or the truck, and so forth.
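The max-pooling step described earlier can be sketched as a minimal 2 x 2 pool (stride equal to the pool size, so regions never overlap):

```python
def max_pool_2x2(img):
    """Keep the largest value from each non-overlapping 2x2 region."""
    return [
        [max(img[y][x], img[y][x + 1], img[y + 1][x], img[y + 1][x + 1])
         for x in range(0, len(img[0]), 2)]
        for y in range(0, len(img), 2)
    ]

img = [[1, 3, 2, 4],
       [5, 7, 6, 8],
       [9, 2, 1, 0],
       [3, 4, 5, 6]]
print(max_pool_2x2(img))  # -> [[7, 8], [9, 6]]
```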
If you found this article interesting, you can explore Dr. Joshua Eckroth’s Python Artificial Intelligence Projects for Beginners to build smart applications by implementing real-world artificial intelligence projects. This book demonstrates AI projects in Python, covering modern techniques that make up the world of artificial intelligence.
Joshua Eckroth teaches big data mining and analytics, artificial intelligence (AI), and software engineering at Stetson University. He also has a PhD in AI and cognitive science, focusing on abductive reasoning and meta-reasoning.
|
Short Introduction to Convolutions and Pooling: Deep Learning 101!
| 107
|
deep-learning-methods-1700548a3093
|
2018-09-27
|
2018-09-27 17:48:33
|
https://medium.com/s/story/deep-learning-methods-1700548a3093
| false
| 1,301
|
Analytics Vidhya is a community of Analytics and Data Science professionals. We are building the next-gen data science ecosystem https://www.analyticsvidhya.com
| null |
analyticsvidhya
| null |
Analytics Vidhya
|
medium@analyticsvidhya.com
|
analytics-vidhya
|
MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,DEEP LEARNING,DATA SCIENCE,PYTHON
|
analyticsvidhya
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Packt_Pub
|
Stay Relevant!
|
b80b23aafb18
|
Packt_Pub
| 174
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-09
|
2018-06-09 19:25:01
|
2018-06-09
|
2018-06-09 20:38:28
| 0
| false
|
en
|
2018-06-09
|
2018-06-09 20:38:28
| 0
|
1702c392c4b2
| 1.075472
| 0
| 3
| 0
|
I recently heard someone talking about the ethics behind artificial intelligence and started to wonder if there are any laws against AI…
| 3
|
Artificial Intelligence
I recently heard someone talking about the ethics behind artificial intelligence and started to wonder if there are any laws against AI programming. We’ve all seen the movie where robots turn on humans and start to take over the world, but could this happen in real life? I believe that it could be possible if new laws are not implemented.
I don’t know a great deal about artificial intelligence, but from the research I did I found that many people follow Isaac Asimov’s Three Laws of Robotics. The three laws state:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I think this code of ethics was acceptable when AI was first being created, but now robots are everywhere and becoming more popular by the year. A show on Netflix called Black Mirror features ideas that are actually coming to life. More specifically, in one episode named “Metalhead,” robot dogs hunt humans. The robots in this episode closely resemble Boston Dynamics’ robot dogs, which makes you wonder: what if the person who created them decided to use them for an evil purpose?
Could you imagine being chased by one of these robots while also getting shot and explosives thrown at you? That’s a situation I would not want to be in. What do you think? Should there be more strict laws against AI or should we let people explore freely?
|
Artificial Intelligence
| 0
|
artificial-intelligence-1702c392c4b2
|
2018-06-09
|
2018-06-09 20:38:29
|
https://medium.com/s/story/artificial-intelligence-1702c392c4b2
| false
| 285
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Lizbeth Rivera
| null |
1c4a61c5de41
|
tug25769
| 5
| 7
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
634d4b270054
|
2018-04-13
|
2018-04-13 06:58:42
|
2018-04-13
|
2018-04-13 07:00:16
| 1
| false
|
en
|
2018-06-05
|
2018-06-05 08:51:12
| 3
|
1702e2805e74
| 1.05283
| 0
| 0
| 0
|
Hong Kong’s Civil Aviation Authority (HKCAA) suggests that UAV users may need to register their drones with authorities, undertake…
| 5
|
Drone Owners Of Hong Kong Would Require A License To Fly
Hong Kong’s Civil Aviation Authority (HKCAA) suggests that UAV users may need to register their drones with the authorities, undertake training, pass tests, and meet certain insurance requirements.
Under the new rules, drones weighing over 9 ounces would need to be registered, and their operators would need to complete short web-based training. Before making any changes, however, Hong Kong needs to go through a three-month period of public consultation. The proposal would also designate certain parts of the territory as no-fly zones.
A number of other jurisdictions already require proper registration and a license to fly. North Korea, Paraguay, Antarctica, Nepal, and Slovakia are among those that require registration plus permission for drone operation.
Source: https://bit.ly/2vfU0MA
About DEEPAERO
DEEP AERO is a global leader in drone technology innovation. At DEEP AERO, we are building an autonomous drone economy powered by AI & Blockchain.
DEEP AERO’s DRONE-UTM is an AI-driven, autonomous, self-governing, intelligent drone/unmanned aircraft system (UAS) traffic management (UTM) platform on the Blockchain.
DEEP AERO’s DRONE-MP is a decentralized marketplace. It will be one stop shop for all products and services for drones.
These platforms will be the foundation of the drone economy and will be powered by the DEEP AERO (DRONE) token.
|
Drone Owners Of Hong Kong Would Require A License To Fly
| 0
|
drone-owners-of-hong-kong-would-require-a-license-to-fly-1702e2805e74
|
2018-06-05
|
2018-06-05 08:51:13
|
https://medium.com/s/story/drone-owners-of-hong-kong-would-require-a-license-to-fly-1702e2805e74
| false
| 226
|
AI Driven Drone Economy on the Blockchain
| null |
DeepAeroDrones
| null |
DEEPAERODRONES
| null |
deepaerodrones
|
DEEPAERO,AI,BLOCKCHAIN,DRONE,ICO
|
DeepAeroDrones
|
Deepaero
|
deepaeros
|
Deepaero
| 0
|
DEEP AERO DRONES
| null |
dcef5da6c7fa
|
deepaerodrones
| 277
| 0
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-08
|
2018-08-08 17:10:38
|
2018-08-08
|
2018-08-08 18:26:06
| 2
| false
|
en
|
2018-08-13
|
2018-08-13 18:39:31
| 4
|
1704340abb87
| 3.941824
| 1
| 0
| 0
|
The book On the Principles of Political Economy and Taxation written by David Ricardo in 1817, was the first scholarly literature to…
| 5
|
How David Ricardo’s Theory from 1817 Influences the NEWCOIN Network
The book On the Principles of Political Economy and Taxation written by David Ricardo in 1817, was the first scholarly literature to formalize what would later be known as the Comparative Advantage theory.
Economists like to build models, simplified versions of reality, in order to illustrate their points. Ricardo proposed a world with only two countries; say, France and England.
In this case, let’s say France can produce 1 unit of Wine for 100 labor hours; and 1 unit of Cloth for 200 labor hours.
England can produce 1 unit of Wine for 120 labor hours; and 1 unit of Cloth for 80 labor hours.
So:
France — Wine(100);Cloth(200)
England — Wine(120);Cloth(80)
So France can either commit 100 hours to producing 1 unit of Wine, or 100 hours to producing 1/2 unit of Cloth.
Likewise, England can either commit 120 hours to producing 1 unit of Wine, or 120 hours to producing 1.5 units of Cloth.
Without trade,
France would need 300 hours to produce 1 unit of Wine and 1 unit of Cloth.
England would need 200 hours to produce 1 unit of Wine and 1 unit of Cloth.
→ the result: after 300 hours, the world will have produced 2.5 units of Wine and 2.5 units of Cloth.
What would happen if trade was possible between these two countries?
Each country can specialize in the production of one good.
England can specialize in Cloth (where it has the lower labor cost) and France can specialize in Wine (for the same reason).
Let’s see what happens. After 300 hours, France will have produced 3 units of Wine, and England will have produced 3.75 units of Cloth.
→ the result after trade: the world will have produced 3 units of Wine and 3.75 units of Cloth.
What are the implications? If the two countries keep consuming the same 2.5 units of Wine and 2.5 units of Cloth between them as before, the world still retains an additional 0.5 unit of Wine and 1.25 units of Cloth for further consumption or exportation.
This represents a surplus and a strong advocating argument for free trade, and has been since its first introduction in the early 19th century.
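The arithmetic above is easy to check with a short script. This is a sketch using the hour figures from the example; the helper names and structure are my own, not from the article:

```python
# Labor hours needed per unit of each good (figures from the example above).
hours = {
    "France":  {"Wine": 100, "Cloth": 200},
    "England": {"Wine": 120, "Cloth": 80},
}
BUDGET = 300  # labor hours available to each country

# Without trade: each country produces complete (Wine + Cloth) bundles.
def autarky_output(country):
    bundle_cost = sum(hours[country].values())
    bundles = BUDGET / bundle_cost
    return {good: bundles for good in hours[country]}

world_autarky = {"Wine": 0.0, "Cloth": 0.0}
for country in hours:
    for good, qty in autarky_output(country).items():
        world_autarky[good] += qty
print(world_autarky)  # {'Wine': 2.5, 'Cloth': 2.5}

# With trade: France specializes in Wine, England in Cloth.
wine_with_trade = BUDGET / hours["France"]["Wine"]     # 3.0 units of Wine
cloth_with_trade = BUDGET / hours["England"]["Cloth"]  # 3.75 units of Cloth
print(wine_with_trade, cloth_with_trade)
```

The same function applies unchanged to the Human/Robot hour figures given later in the article.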
— — — — — — — — — — — — — — —
So what does all this have to do with the Human vs Robot debate?
Robots are going to be better than us at manufacturing and all sorts of other labor-intensive tasks. They are designed to be stronger and faster than us. They follow commands to the letter, and they don’t demand rights, working conditions, and so on. They will replace humans in most, if not all, manufacturing sectors. According to one estimate, by 2030 about 800 million workers will be replaced by robots; that’s roughly 10% of humanity. In China, 25% of the workforce in ammunition factories has already been replaced by machines.
What can humans do against these tireless, silent, slave-like superhumans? Protesting and complaining won’t help. Corporations are profit-driven and want to cut expenses, and a large chunk of a corporation’s expenses is salaries. That’s why Western companies began building factories in China, and one of the reasons China has become interested in the African market. The benefits that robots and automation offer are simply too attractive.
We at NEWCOIN.Network believe there is a way for humanity to co-exist with robots. To use Ricardo’s terms: Humans have the comparative advantage in innovative, creative, and artistic tasks, while Robots have the advantage in manufacturing and labor-intensive tasks.
Like the case of England and France illustrated above, Humans should specialize in Innovation, while Robots should specialize in Manufacturing. The end result is a surplus in both Innovative goods and Manufacturing goods because trade between humans and robots is obviously possible.
Let’s take a look at the final case study to conclude the article.
Humans require 100 hours to produce 1 unit of Innovative goods; 150 hours to produce 1 unit of Manufacturing goods.
Robots require 250 hours to produce 1 unit of Innovative goods; 50 hours to produce 1 unit of Manufacturing goods.
→ Humans can either spend 100 hours to produce 1 unit of Innovative goods, or 100 hours to produce 0.667 unit of Manufacturing goods.
→ Robots can either spend 250 hours to produce 1 unit of Innovative goods, or 250 hours to produce 5 units of Manufacturing goods.
Without trade,
Humans will need 250 hours to produce 1 unit each of Innovative and Manufacturing goods.
Robots will need 300 hours to produce 1 unit each of Innovative and Manufacturing goods.
The result: after 300 hours, the world will have produced 2.2 units of Innovative goods and 2.2 units of Manufacturing goods (1.2 of each from Humans, 1 of each from Robots).
With trade,
Humans will specialize in Innovative goods, producing 3 (+0.8) units of Innovative goods in 300 hours.
Robots will specialize in Manufacturing goods, producing 6 (+3.8) units of Manufacturing goods in 300 hours.
The result: there is a surplus of both Innovative and Manufacturing goods in the world as a result of specialization and trade.
This is what the NEWCOIN.Network and the Newlife movement are about. We value human innovation, and we believe that the humans of the future must push their innovative potential to the maximum. The infrastructure to get us there isn’t optimal: there are many obstacles along the way, and too many bureaucratic processes. That’s why NEWCOIN.Network is here: to streamline the entire process, to push our society to become innovation-driven, and to find a place for humanity in a world soon to be dominated by AI.
Signed August.2018: Nguyen Ba Nguyen, facebook.com/nguyen.shingen
|
How David Ricardo’s Theory from 1817 Influences the NEWCOIN Network
| 50
|
how-david-ricardos-comparative-theory-can-be-used-in-the-humanvsrobot-debate-1704340abb87
|
2018-08-13
|
2018-08-13 18:39:31
|
https://medium.com/s/story/how-david-ricardos-comparative-theory-can-be-used-in-the-humanvsrobot-debate-1704340abb87
| false
| 943
| null | null | null | null | null | null | null | null | null |
Technology
|
technology
|
Technology
| 166,125
|
NEWCOIN.Network
|
A Decentralised Distribution Network for Innovation
|
b41c0eb57d1
|
NEWCOIN.Network
| 22
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-03-15
|
2018-03-15 04:45:23
|
2018-03-15
|
2018-03-15 05:08:40
| 0
| false
|
en
|
2018-03-15
|
2018-03-15 05:08:40
| 0
|
170506aef0be
| 1.222642
| 0
| 0
| 0
|
Today I had lunch with a Harvard Berkam Centre for Internet and Society fellow. I told him I studied CS and maths, and then wide eyed, I…
| 1
|
Will AI's (and my) life ever get less confusing?
Today I had lunch with a fellow at Harvard’s Berkman Klein Center for Internet & Society. I told him I studied CS and maths, and then, wide-eyed, I followed with my hopes of working on the ethics of AI.
Giving himself as an example, he told me that most people working in that area have an understanding of CS, but they also have a background in humanities. So, you know, maybe switch to philosophy.
Understanding of CS? That’s a line that confounds me. The truth is that the pace of creating new products in Silicon Valley, and several other such places, is too fast for an official in a suit to check that both the code and the data sets used comply with some pre-established ethical guidelines. Should these guidelines not exist, then?
I am not sure — what I know, though, is that I can't think of any ways in which we could ensure that every bit of code written for a new product complies with some written guidelines. Tiny bits of code, though, can still affect people's lives to a huge extent.
Having more diverse teams in the development of products, with people who are intellectually prepared to challenge the lines of code written by developers, would be ideal, and it would serve our fast-changing world better than some ageing written rules. At the same time, startups, which already struggle to control spending, would probably not have the financial robustness to hire this kind of team. Which means we will continue relying on big companies to guide the kind of products we use. Which brings us back to the question: do we want large corporations to solely shape the future of AI and control our data?
I've reached a loop that I will probably not solve tonight — after all I do have math homework to do…
|
Will AI's (and my) life ever get less confusing?
| 0
|
will-ais-and-my-life-ever-get-less-confusing-170506aef0be
|
2018-03-15
|
2018-03-15 05:08:41
|
https://medium.com/s/story/will-ais-and-my-life-ever-get-less-confusing-170506aef0be
| false
| 324
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
ACuza
|
A repository of all my thoughts on AI, maths, ethics and human creativity/of projects I am working on. No order, undefined purpose — just like freshman year.
|
ca5c690c1692
|
Cuza
| 0
| 23
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-20
|
2018-07-20 10:47:48
|
2018-07-20
|
2018-07-20 13:02:32
| 0
| false
|
en
|
2018-07-21
|
2018-07-21 13:09:21
| 0
|
1705e792cf12
| 5.181132
| 3
| 0
| 0
|
In the field of statistics ,there are two main approach to probability — —
| 5
|
“Some Fundamental Facts about Bayesian Approach”
In the field of statistics, there are two main approaches to probability:
(1) the Classical Approach
(2) the Bayesian Approach
So far we have treated the population parameter as an unknown, fixed quantity and used classical approaches, such as the method of Maximum Likelihood, to estimate it.
The fundamental difference between the Bayesian and Classical methods is that in the Bayesian approach the parameter is considered to be a random variable.
In classical statistics the parameter is a fixed but unknown quantity. This leads to difficulties, such as the careful interpretation required for a classical confidence interval, where it is the interval that is random. As soon as the data are observed and a numerical interval is calculated, there is no probability involved. A statement such as P(10.45 < theta < 13.26) = 0.95 cannot be made, because theta is not a random variable.
Instead we say that the random interval [g1(X), g2(X)] contains theta with probability 0.95, and hence report the confidence interval (10.45, 13.26). This way we avoid making a meaningless statement: in the classical approach theta either lies within the interval or it does not, and no probability can be attached to such a statement.
In Bayesian statistics, by contrast, probability statements can be made concerning the values of a parameter.
Example:
If we toss a fair coin 10 times, it may not be very unusual to observe 80% heads; but if we toss the fair coin 10 trillion times, we can be fairly certain that the proportion of heads will be near 50%. It is this “long-run” behavior that defines probability for the classical approach.
However, there are situations for which the classical definition of probability is unclear. For example, what is the probability that a terrorist will strike a city with a dirty bomb? As such an event has never occurred, it is difficult to conceive what the long-run behavior of this gruesome experiment might be.
In the classical approach to probability, the parameters are fixed and the randomness lies in the data, which are viewed as a random sample from a given distribution with unknown but fixed parameters.
The Bayesian approach to probability turns these assumptions around. In Bayesian statistics, the parameters are considered to be random variables and the data are considered to be known. The parameters are regarded as coming from a distribution of possible values, and Bayesians look to the observed data to provide information on likely parameter values.
Another advantage of the Bayesian approach is that it enables us to make use of any information we already have about the situation under investigation. Researchers investigating an unknown population parameter often have information available in advance of the study, from other sources, that provides a strong indication of what values the parameter is likely to take. This additional information might be in a form that cannot be incorporated directly into the current study. The classical approach offers no scope for taking it into account; the Bayesian approach does.
Example:
An insurance company is reviewing its premium rates for a particular type of policy and has access to results from other insurers, as well as from its own policyholders. This information cannot be taken into account directly, because the terms and conditions of the policies of other companies may be slightly different. However, these additional data might contain a lot of useful information that should not be ignored.
PRIOR AND POSTERIOR DISTRIBUTIONS:
Suppose (X1, X2, …, Xn) is a random sample from a population specified by the density or probability function f(x, theta), and it is required to estimate theta. Since the parameter theta is a random variable, it has a distribution. This allows the use of any knowledge available about possible values of theta before the collection of any data. This knowledge is quantified by expressing it as the prior distribution of theta.
Then, after collecting appropriate data, the posterior distribution of theta is determined, and this forms the basis of all inference concerning theta.
The information from the random sample is contained in the likelihood function for that sample. So the Bayesian approach combines the information obtained from the likelihood function with the information in the prior distribution; both sources are combined to obtain a posterior estimate of the required population parameter.
The range of values taken by the prior distribution should reflect the possible parameter values. So if we want a prior distribution for a parameter that can only take values in the range (0, 1), it would be silly to use, say, a gamma distribution for the prior (which is defined on the interval from 0 to infinity).
Also, the population density or probability function will be denoted by f(x|theta) rather than f(x, theta), as it represents the conditional distribution of x given theta.
“Posterior Distribution is proportional to the prior times the likelihood”
CONJUGATE PRIOR:
For a given likelihood, if the prior distribution leads to a posterior distribution belonging to the same family as the prior distribution, then this prior is called the conjugate prior.
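The standard textbook illustration of conjugacy is the Beta-Binomial pair (my own example, not taken from the text): a Beta(a, b) prior for a binomial success probability gives a Beta(a + k, b + n − k) posterior after observing k successes in n trials, so the family is preserved:

```python
def beta_binomial_update(a, b, k, n):
    """Posterior parameters for a Beta(a, b) prior after observing
    k successes in n binomial trials; the posterior is again a Beta."""
    return a + k, b + (n - k)

# Start from a uniform Beta(1, 1) prior; observe 7 heads in 10 tosses.
a_post, b_post = beta_binomial_update(1, 1, 7, 10)
print(a_post, b_post)  # 8 4 -> the posterior is Beta(8, 4)
```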
IMPROPER PRIOR DISTRIBUTIONS:
Sometimes it is useful to use an uninformative prior distribution, which assumes that an unknown parameter is equally likely to take any value.
For example, we might have a sample from a normal distribution with mean mu, where we know nothing at all about mu. This leads to a problem, because we would need to assume a U(-infinity, infinity) distribution for mu, which does not make sense, since the pdf of this distribution would be zero everywhere.
We can easily get around this problem by using the distribution U(-N, N), where N is a very large number, and then letting N tend to infinity. The pdf of this distribution is 1/2N, i.e. constant.
LOSS FUNCTIONS:
To obtain an estimate of theta, a loss function must first be specified. This is a measure of the “loss” incurred when g(x) is used as an estimator of theta. We seek a loss function that is zero when the estimate is exactly correct, that is, when g(x) = theta, and that is positive and non-decreasing as g(x) gets further away from theta.
The Bayesian estimator is the g(x) that minimizes the expected loss with respect to the posterior distribution.
The expected posterior loss is given by
EPL = E[L(g(x), theta)] = ∫ L(g(x), theta) f(theta|x) d(theta)
There are three commonly used loss functions.
1. Quadratic Loss Function:
L(g(x), theta) = (g(x) - theta)²
This is called the quadratic loss function because, as we move away from the true parameter value, the loss increases at an increasing rate. The graph of the loss function is a parabola with a minimum of zero at the true parameter value.
THE BAYESIAN ESTIMATOR UNDER QUADRATIC LOSS IS THE “MEAN” OF THE POSTERIOR DISTRIBUTION.
2. Absolute Error Loss Function:
L(g(x), theta) = |g(x) - theta|
Here the graph of the loss function is two straight lines that meet at the point (theta, 0), so as we move away from the true value in either direction, the loss increases at a constant rate.
THE BAYESIAN ESTIMATOR UNDER ABSOLUTE ERROR LOSS IS THE “MEDIAN” OF THE POSTERIOR DISTRIBUTION.
3. 0/1 or “All-or-Nothing” Loss Function:
L(g(x), theta) = 0 if g(x) = theta
L(g(x), theta) = 1 if g(x) ≠ theta
In this case there is a constant loss for any parameter estimate that is not equal to the true underlying parameter value; if we hit the parameter value exactly, the loss is zero.
THE BAYESIAN ESTIMATOR UNDER ALL-OR-NOTHING LOSS IS THE “MODE” OF THE POSTERIOR DISTRIBUTION.
The Bayesian estimators that arise by minimizing the expected loss under these loss functions are, respectively, the mean, median, and mode of the posterior distribution, each of which is a measure of the location of the posterior distribution.
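These three results can be illustrated numerically. The sketch below assumes a hypothetical Beta(8, 4) posterior (my own example) and computes the Bayes estimates under the three losses as the posterior mean, median, and mode:

```python
a, b = 8, 4  # hypothetical Beta posterior parameters

# Posterior mean: the Bayes estimate under quadratic loss.
post_mean = a / (a + b)

# Posterior mode: the Bayes estimate under all-or-nothing loss (valid for a, b > 1).
post_mode = (a - 1) / (a + b - 2)

# Posterior median: the Bayes estimate under absolute error loss, found here
# by numerically integrating the (unnormalized) Beta pdf on a fine grid.
N = 100_000
grid = [(i + 0.5) / N for i in range(N)]
weights = [x ** (a - 1) * (1 - x) ** (b - 1) for x in grid]
total = sum(weights)
cum = 0.0
post_median = None
for x, w in zip(grid, weights):
    cum += w
    if cum >= total / 2:
        post_median = x
        break

print(post_mean, post_median, post_mode)  # roughly 0.667, 0.676, 0.700
```

Note that the three estimates differ because the posterior is skewed; for a symmetric posterior they would coincide.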
|
“Some Fundamental Facts about Bayesian Approach”
| 11
|
some-fundamental-facts-about-bayesian-approach-1705e792cf12
|
2018-07-21
|
2018-07-21 13:09:21
|
https://medium.com/s/story/some-fundamental-facts-about-bayesian-approach-1705e792cf12
| false
| 1,373
| null | null | null | null | null | null | null | null | null |
Statistics
|
statistics
|
Statistics
| 5,433
|
jyoti gupta
|
M.Sc(Statistics) , Statistician , Statistics Trainer , online Statistics Trainer ,Data Science Enthusiast , Artificial Intelligence Enthusiast , Data Analyst
|
8aef8029b22a
|
jyotigupt86
| 3
| 2
| 20,181,104
| null | null | null | null | null | null |
0
|
Rule 1: on hearing "pay" -> provide the "product purchase link"
Rule 2: on hearing "transaction failed" -> provide the "troubleshooting link"
Rule 1: on hearing "pay, payment, spend money, buy something, ..." -> provide the "product purchase link"
Rule 2: on hearing "transaction failed, card declined, the page is stuck..." -> provide the "troubleshooting link"
Rule 3: on hearing "promotion, discount, anniversary sale..." -> provide the "promotions link"
- Problem: what about "the payment failed"? You have to write an extra layer of rules so that troubleshooting takes priority over product purchase
- Problem: what about "do members get a discount on purchases"? That also needs an extra layer of rules to resolve the keyword-priority issue
- ......
Some words often appear together: king, queen, castle; kelp, dried tofu, braised snacks; tides, waves, the Earth
Some words often appear in the same position in a sentence: "we go to the park together", "Xiao Ming and Lao Wang travel to Okinawa together"
......
NLU without context handling: (checkout just completed) → Q: Is that everything? → A: Sorry, could you clarify what you mean?
NLU with context handling: (checkout just completed) → Q: Is that everything? → A: Yes, we look forward to seeing you again!
Xiao Ming: I'd like to see this year's catalog~
Xiao Hua: I want to book a test drive.
Xiao Mei: Time to change my tires!
Lao Wang: Any recommendations for a commuter car?
- A complicated answer: "Take a look at our latest catalog (link attached)! If something catches your eye, tap the button and book a test drive! Not sure which car to buy yet? Let me recommend one for you!......"
- A vague answer: "For product purchases, please refer to the buttons below: ..."
Please recommend some <carType>SUVs</carType>
I'm looking for a <carType>two-door sports car</carType>
...
I'd like to book a viewing on <day>Wednesday</day> at <time>3:30 pm</time>
| 10
| null |
2017-12-10
|
2017-12-10 12:19:55
|
2017-12-22
|
2017-12-22 12:01:02
| 3
| false
|
zh-Hant
|
2018-09-26
|
2018-09-26 09:47:28
| 4
|
17065de49a
| 1.633019
| 11
| 0
| 0
|
隨著運算技術與存儲技術的不斷突破,電腦也愈發能從大量語料中學習應對進退。Facebook, LINE, Telegram, WeChat, Kik, Slack 等各大通訊平台,都積極釋出 API…
| 4
|
Rule-based vs. NLU: How Do Chatbots Understand Natural Human Language?
Image Source: Estelle Huang
As computing and storage technologies keep advancing, computers are increasingly able to learn conversational behavior from large corpora. Major messaging platforms, including Facebook, LINE, Telegram, WeChat, Kik, and Slack, are actively releasing APIs for developers to build on, each hoping to claim a place in the era of conversational commerce. (Curiously, WhatsApp, the messaging app with the second-largest market share in the US, has yet to join the chatbot movement.)
For chatbot developers, finding the right application is only the first step; designing a good experience is the key factor in retaining users. There are many fine details to study in chatbot experience design, but this post discusses another factor that shapes the experience: natural language understanding (NLU).
A chatbot does not necessarily need to understand natural language to deliver a great experience. But as the barrier to building menu-based chatbots keeps falling, a bot that can also understand human language can explore possibilities beyond button interactions and make its interactions stand out.
Menu-based chatbots have a low barrier to entry and can be built easily with platforms such as Chatfuel.
Bots that understand human language are simply more fun.
Rule-based vs. NLU
There are two main approaches to "understanding" natural language (here, "understanding" means "producing an accurate response to human natural language"; we will not discuss what "understanding" means at a philosophical level):
1. Rule-based
This if-else style approach is inexpensive and lets you build a chatbot's natural language handling quickly.
However, we always want the bot to understand more vocabulary and more sentence patterns, and as the knowledge scope expands, maintaining a rule-based model becomes terrifying:
We cannot enumerate every word and sentence pattern to teach the bot all the possible phrasings; still less can we write rules that tell the machine, one by one, in what order to process them.
Letting the machine compute the relationships between words and sentence patterns itself, and discover patterns from them, seems like a more viable direction.
2. Natural Language Understanding (NLU)
First, we can let the computer discover the semantic distance between words from large corpora, such as everyday conversations, online comments, news, and movie scripts. For example:
Even if the computer cannot grasp a meaning as complex and precise as "braised snacks is a collective noun for foods marinated and cooked in soy sauce and spices, while kelp and dried tofu are specific food items often prepared that way...", it can still discover from text that the semantic distance between "kelp, dried tofu, braised snacks" is much smaller than that between "king, kelp, tides".
(Where do large corpora come from? You can collect them yourself, write a crawler, or use open-source resources; Google's word2vec, for example, provides corpora and even pre-trained models.)
At this point, the computer has a rough, fuzzy grasp of natural language. Next, we can let it understand sentences more concretely.
How Does It Work?
In practice, we can split the user's natural language input into several parts:
Intent
The user's intention.
"I need to change my tires", "I'm thinking of buying a new car" → Intent: product purchase
"The gear lever is broken", "my car got scraped yesterday" → Intent: warranty and repair
Entity
An entity: a word in the user's utterance that refers to a specific thing.
"I need to change my tires" → Intent: product purchase → Entity: tires
"The gear lever is broken" → Intent: warranty and repair → Entity: gear lever
Next, we can design different responses according to the intent and entity;
the response will vary with the user's intent:
Intent: product purchase → Entity: tires → Response: Tell me your car model and I'll recommend the right tires for you!
Intent: warranty and repair → Entity: gear lever → Response: Send me your location and I'll recommend the most convenient service center!
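A minimal sketch of this intent/entity routing, with hypothetical keyword lists and responses (a real system would get the intent and entities from an NLU model rather than keyword matching):

```python
# Toy intent/entity router; keyword matching stands in for a trained classifier.
INTENT_KEYWORDS = {
    "purchase": ["tire", "new car", "buy"],
    "repair": ["broken", "scraped", "gear lever"],
}

ENTITIES = ["tires", "gear lever"]

RESPONSES = {
    ("purchase", "tires"): "Tell me your car model and I'll recommend the right tires for you!",
    ("repair", "gear lever"): "Send me your location and I'll recommend the most convenient service center!",
}

def route(utterance: str) -> str:
    text = utterance.lower()
    intent = next((name for name, kws in INTENT_KEYWORDS.items()
                   if any(k in text for k in kws)), None)
    entity = next((e for e in ENTITIES if e.rstrip("s") in text), None)
    return RESPONSES.get((intent, entity), "Sorry, could you clarify what you mean?")

print(route("I need to change my tires"))
print(route("The gear lever is broken"))
print(route("hello"))  # falls through to the clarification prompt
```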
Context (State)
The context (state) of the conversation.
A sentence may contain: 1. pronouns: "How much is this one?"; 2. no subject-verb-object structure: "And then?", "So what?". If there is no mechanism for handling state, the model treats each sentence as an independent expression, as in the following scenario:
Compared with intent and entity, context is even more challenging, because the model must have memory: it has to remember what was recently discussed (which intent was last handled) and be able to work out what a pronoun refers to. This post will not cover it.
How to Teach the Machine?
Step 1. Define the knowledge scope
Why define a knowledge scope? Because given the same amount of time, going broad means you cannot go deep, and going deep means you cannot go broad. The latter is where value is more easily created: a bot with domain knowledge that can answer specialist questions Google cannot (otherwise you would just ask Siri).
Imagine a car chatbot. If it can fully answer questions about product purchase, warranty and repair, and store information, it has done its job. We do not expect it to answer "Any good movies lately?". This seems like an obvious point, but when actually designing a chatbot's natural language handling, it is easy to fall into the trap of trying to answer everything. Beware.
Step 2. Decompose the intents
So far we have limited the knowledge scope to three broad categories: product purchase, warranty and repair, and store information. With this structure, we can already teach the bot to answer.
But try observing the following sentences:
See the problem?
Under the current structure, all of these sentences point to the "product purchase" category, so Xiao Ming, Xiao Hua, Xiao Mei, and Lao Wang would all get the same answer.
But these four people actually have different needs. What does that imply? It implies that unless the answer is written in complicated detail, it has to be written very vaguely to fit all of these sentences.
Both experiences are pretty bad. The fundamental fix is to improve the intent structure. "Product purchase", for example, can be further split into "catalog, test-drive booking, car recommendation, and car accessories" before continuing.
This may raise another question: "If finer intents give a more refined experience, shouldn't we just split all knowledge into the finest possible intents so the bot can answer precisely?"
Indeed, the finer the intents, the more refined the interactions the bot can offer. But every additional intent means teaching the bot another way to answer, and the more intents there are, the closer (or even overlapping) their semantic distances become, which greatly increases the chance that the bot cannot tell them apart.
Striking a balance between experience and development complexity is a major test of intent design.
Step 3. Data generation (writing example utterances)
Continuing with the current architecture: how do we teach the bot to understand the intents "catalog, test-drive booking, car recommendation, car accessories, warranty and repair, store information"?
For each intent, we provide utterances (example sentences) from which the AI learns: "so this sentence points to this meaning".
For example:
What we are doing here is very similar to data labeling: say we have a pile of animal photos and want to teach the computer to recognize foxes, so we label which photos are foxes and which are not, and the computer learns the features of fox versus non-fox from them.
The photos are the data; fox/non-fox is the label. In the world of natural language understanding, the data are sentences of all kinds, and each sentence's intent is its label.
More precisely, this step is data generation: we start with a set of labels and provide data for each label. We first have the "warranty and repair" intent, and then go back and tell the computer that sentences like "my bumper is broken", "I had a car accident yesterday", "someone bumped into my car..." mean warranty and repair.
Step 4. Label the entities
If we expect the computer to give different answers to sentences like "Please recommend some SUVs" and "Please recommend a grocery-getter", we must label the entities so that different words can receive more fine-grained answers. Here the entity is clearly the car type:
Entities are often used to handle who/what/when/where questions, for example:
Once the utterances are labeled, the model learns not only to classify intents but also to parse entities, making its understanding even more precise.
Conclusion
Computers are far better than human brains at tasks with clear rules, such as sorting and retrieval, and a rule-based approach can get a bot to grasp the key interactions very quickly. Human language, however, is highly ambiguous, and user input easily exceeds what a rule-based model can cover. To build a bot that is smart enough, NLU techniques are indispensable.
Beyond the intents and entities discussed in this post, there are other complex and interesting problems, such as context (how to understand what came before) and unknown handling (how the bot knows "I don't understand this sentence" and avoids answering blindly). A sequel will come in due course.
|
Rule-based vs. NLU: How Do Chatbots Understand Natural Human Language?
| 29
|
rule-based-vs-nlu-聊天機器人如何聽懂人類的自然語言-17065de49a
|
2018-09-26
|
2018-09-26 09:47:28
|
https://medium.com/s/story/rule-based-vs-nlu-聊天機器人如何聽懂人類的自然語言-17065de49a
| false
| 287
| null | null | null | null | null | null | null | null | null |
Chatbots
|
chatbots
|
Chatbots
| 15,820
|
Estelle Huang
|
Babysitting chatbots at Yoctol. Obsessed with music and stories.
|
cefbed9bc90e
|
estelle.husky
| 178
| 83
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-20
|
2018-07-20 05:42:36
|
2018-07-20
|
2018-07-20 06:19:42
| 1
| false
|
en
|
2018-07-20
|
2018-07-20 06:19:42
| 1
|
1707507c1f01
| 2.875472
| 0
| 0
| 0
|
For that the last ten years, business leaders have seen how rising loan production costs have pushed the burden of maintaining…
| 5
|
What are you able to build with your bots?
For the last ten years, business leaders have seen how rising loan production costs have pushed the burden of maintaining profitability onto people, processes, and margins. Today, loans cost $8,000 to produce, up from $4,200 just seven years ago. That’s why businesses continue to look for new ways to keep their margins healthy.
Business leaders have read about the underlying benefits of robotic process automation (RPA) and its ability to lower cost per loan by automating mundane, repetitive tasks and freeing up employees to perform higher-value work. They have seen RPA lowering costs and gaining momentum in areas like healthcare and manufacturing, and have even witnessed Google Assistant make a hair appointment! All of these advances aim to help you free up time and gain a competitive edge. But where do you start?
Create a roadmap
The best way to identify opportunities isn’t by thinking about technology. Instead, create a “time savings plan” with your sales and operations teams by walking through your current loan process from beginning to end. Take special notes along the chain and look for areas where tasks, paperwork, and wait times gather in pools. Waiting on a person or process to take action? Jot it down for follow-up research.
The areas you find probably include a lot of 80/20 items, where 80% is repetitive work that doesn’t vary too much and only the remaining 20% requires higher-level judgment and exception processing to complete. By finding and mapping these pockets, you can start to establish your roadmap and stake out areas for further study.
Set your priorities
Once you have cataloged these areas of opportunity, create a four-quadrant matrix and rank them along two axes of impact on your enterprise: effort (high to low) and time improvement (high to low). This will help the team reach consensus and visualize the items that rise to the surface for maximum benefit. It will also provide clarity in finding small but achievable low-effort/high-improvement areas to focus on out of the gate, so you can build momentum and muscle memory for the high-effort/high-improvement categories down the road.
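As a sketch, the ranking step might look like this; the opportunity names and scores are made up purely for illustration:

```python
# Hypothetical opportunities scored 1-10 on implementation effort and
# expected time savings; quadrant labels follow the matrix described above.
opportunities = {
    "Document indexing": {"effort": 2, "time_saved": 7},
    "Income verification": {"effort": 8, "time_saved": 9},
    "Status notifications": {"effort": 3, "time_saved": 2},
    "Underwriting triage": {"effort": 9, "time_saved": 3},
}

def quadrant(score):
    low_effort = score["effort"] <= 5
    high_savings = score["time_saved"] > 5
    if low_effort and high_savings:
        return "quick win: start here"
    if not low_effort and high_savings:
        return "strategic: plan for later"
    if low_effort:
        return "fill-in"
    return "reconsider"

# Most attractive (low effort, high savings) items float to the top.
for name, score in sorted(opportunities.items(),
                          key=lambda kv: kv[1]["effort"] - kv[1]["time_saved"]):
    print(f"{name}: {quadrant(score)}")
```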
Engage your resources for change
Bring together your teams with IT and Project Management to create an enterprise-wide digital improvement plan and begin designing solutions based on your priorities and technology capabilities, especially for those repetitive tasks that you can automate. This journey offers a great opportunity to enhance your culture and competence in technologies (including RPA), work toward continuous improvement, explore new ways of working, invest in innovations that fit your culture, and set numeric goals to wring out waste in time and cost.
Gather your team and set the course
Look inside and outside your organization and the industry for solutions, but inside the industry for mortgage technology partners who understand the mortgage domain and RPA technologies applicable to the industry. Start small with achievable goals and measure progress in hours, costs saved, and possibly revenue gained via new products or services. As you build, consider changes that will streamline operations and better fit RPA processing, and train your staff on how to manage the new digital workforce.
There is no finish line
Perhaps a little surprisingly, adding bots to your workforce will lead to role changes for your human team members. Just like new human employees, bots will need to be trained, managed, and moved around as your business requirements evolve. So as you set up new procedures and change existing ones, you’ll need a small team of humans to keep your bots optimally trained and deployed. The cycle of discovering RPA use cases, retraining bots, and adjusting human roles is an ongoing process, and will continue to optimize business processes and improve margins year after year.
A new way to enhance your staff is here and ready to build: assign a digital assistant to perform routine and repetitive business tasks. RPA is a well-established opportunity to free up valuable staff time and enterprise expense, and direct the savings back to your bottom line. Though our industry understands that innovation is necessary, uncertainty still exists about where to start. The best way to overcome this challenge is by taking the first step.
|
What are you able to build with your bots?
| 0
|
what-are-you-able-to-build-with-your-bots-1707507c1f01
|
2018-07-20
|
2018-07-20 06:19:43
|
https://medium.com/s/story/what-are-you-able-to-build-with-your-bots-1707507c1f01
| false
| 709
| null | null | null | null | null | null | null | null | null |
Leadership
|
leadership
|
Leadership
| 87,192
|
Visionet Systems
|
Visionet offers a comprehensive range of technology-related solutions, including Omni-Channel Retail, CPG & Distribution, Consumer Lending and other technol
|
54549bace4ab
|
visionetsystems
| 3
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
7bfc5c48d54a
|
2018-01-06
|
2018-01-06 07:46:15
|
2018-01-06
|
2018-01-06 07:53:06
| 3
| false
|
en
|
2018-01-06
|
2018-01-06 08:00:57
| 1
|
17084cc15b7f
| 2.112264
| 0
| 0
| 0
|
AI may improve our productivity; however, relying too much on AI in the decision-making process may bear some hidden risks.
| 5
|
How AI can make us Shallow and Biased
Artificial intelligence (AI) is meant to reduce cognitive load, making it easier for us to make decisions and learn new things.
AI may significantly improve our productivity, reducing the total amount of mental effort required to complete tasks. However, at the same time, relying too much on AI in the decision-making process may bear some hidden risks.
Forgotten Rules and Omitted Information
Remember how you set up your email inbox to filter incoming messages:
- if… archive to an appropriate folder,
- if… delete,
- if… mark as spam.
The filtering system works perfectly well, as long as you remember the filtering conditions (IFs) you set. After a while, you start to forget to check folders regularly, so important information may be accidentally missed.
The quality of the underlying information processing rules dictates the quality of decision making.
Important signals may be filtered out due to imperfections in the information processing system.
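The rule chain described above can be sketched in a few lines. The conditions and folder names below are hypothetical, but the failure mode is exactly the one in question: once a rule silently routes a message away, the information is lost unless you remember to check that folder.

```python
# A minimal sketch of the inbox-filter idea: each rule is a
# (condition, action) pair, applied in order. All rules are hypothetical.

RULES = [
    (lambda msg: "unsubscribe" in msg["body"].lower(), ("mark_spam", None)),
    (lambda msg: msg["sender"].endswith("@newsletter.example"), ("archive", "Newsletters")),
    (lambda msg: "invoice" in msg["subject"].lower(), ("archive", "Finance")),
]

def route(msg):
    """Return the (action, folder) taken for a message; default is the inbox."""
    for condition, action in RULES:
        if condition(msg):
            return action
    return ("inbox", None)

# An important message silently lands in a folder you may forget to check:
msg = {"sender": "boss@newsletter.example", "subject": "Urgent", "body": "..."}
print(route(msg))  # ('archive', 'Newsletters')
```

The urgent message never reaches the inbox, and nothing about the system reminds you that the rule fired.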
Fed up with Biased Information
Today we are all used to receiving and acting on high-level information, as the systems around us are designed to deliver only the most significant pieces. Just think about your Facebook feed, Google push notifications, or your Outlook email client.
Remember: Prior to taking in information published online, always do your best to assess its quality.
- This may be the search result I am looking for, but what if there is something else?
- Can I trust this information? Isn’t it biased?
- Shall I rely on the information delivered in this story/video/podcast?
You won’t eat a rotten fruit, so why should you take in bad information?
Unconscious Trust in Information Services as a New Norm
Adoption of technology is accompanied by growing trust in information system outputs. Trust in information services and systems is becoming a new norm; however, should we actually follow this norm?
Remember the “fake news” crisis on Google and Facebook? It shocked a lot of people and reminded us that even the most trusted systems may be flawed.
The ease of use and convenience of apps turn off the mental filters we use to process incoming information streams. It follows that reducing cognitive load not only facilitates learning and decision-making but also deteriorates our information processing ability.
Conclusion
The convenience of information services builds trust and turns off mental filters.
High-level information may be unintentionally (or intentionally) biased due to the complicated system design.
Decisions based on trust and biased information are bad decisions.
|
How AI makes us Shallow and Biased
| 0
|
how-ai-can-make-us-shallow-and-biased-17084cc15b7f
|
2018-06-19
|
2018-06-19 15:44:15
|
https://medium.com/s/story/how-ai-can-make-us-shallow-and-biased-17084cc15b7f
| false
| 414
|
Exploring the Digital Disruption of our Lives
| null | null | null |
Break Through
|
rasskazov.vladislav@gmail.com
|
break-through-x
|
VALUE CREATION,BUSINESS DEVELOPMENT,ORGANIZATION DESIGN,TECHNOLOGY STRATEGY
| null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Vlad Rasskazov
|
Lessons learned and meditations https://vladrasskazov.com
|
7dc3117dadfa
|
rasskazov.vladislav
| 24
| 21
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-21
|
2018-05-21 16:48:47
|
2018-05-21
|
2018-05-21 16:51:38
| 1
| false
|
en
|
2018-05-21
|
2018-05-21 16:51:38
| 2
|
170965a7cf81
| 5.339623
| 3
| 0
| 0
|
Since CircleUp was founded, our mission has been unwavering: to help entrepreneurs thrive by giving them the capital and resources they…
| 3
|
The Future of Private Investing: Why we want to build a systematic investing strategy
Since CircleUp was founded, our mission has been unwavering: to help entrepreneurs thrive by giving them the capital and resources they need. We use Helio, our machine learning platform, to help advance that mission. By pushing down search costs through automation, we’ve been able to find companies more quickly and build conviction earlier than ever before. In doing so we’ve supported hundreds of entrepreneurs as they build their businesses. The long-term result, we believe, will be a more efficient market for the millions of entrepreneurs trying to build their visions for the future.
As our investment and credit teams leverage Helio in their daily work, our data and engineering teams are working towards an even more transformational future of investing. A future where founders anywhere can raise a round in a fraction of the time that it currently takes. A future where it becomes economical to invest in a basket of 100, 200, or even 500 early-stage companies. A future where co-investors can benefit from the signal that comes from objective, predictive, and prescriptive insights. And, most importantly, a future where private capital markets are finally fair.
To make this future a reality, we are embarking on an initiative to use Helio to make investment decisions with minimal human intervention. By investing in a rules-based or “systematic” fashion, we believe we will be able to serve founders more efficiently and deliver value to a larger number of companies than ever possible.
Thinking Systematically
The public equity markets have seen an exponential rise in algorithmic investing over the last decade. According to a JP Morgan report, an estimated 10% of investors still pick stocks manually, while 60% of trading volume is purely systematic or quantitative in nature. Fund managers like Renaissance Technologies, AQR, and Two Sigma have become extremely successful by relying on algorithmic trading as a core facet of their investment philosophies, and today BlackRock, the world’s largest asset manager, runs systematic strategies that co-exist with traditional manager-controlled funds.
To achieve a future where we serve founders more efficiently and effectively, CircleUp is applying the innovations of systematic investing in the public markets to the private markets. It’s a daunting and ambitious task, but we believe that we have the inputs to make revolutionizing private investing possible.
To be clear, we aren’t ignoring the fact that investing in the public markets is extremely different from investing in the private markets. First, in the public markets there is pricing data on how much a company is worth every day (every second, even), which just doesn’t exist in the private markets. Second, there is minimal, if any, friction in investing in the public markets, while the friction of making an investment in the private markets can be significant (companies may not be raising, and term sheets require negotiation, among other things). Finally, public markets benefit from liquidity, the ability to turn a holding into cash, whereas in the private markets all holdings are long-term.
These three differences are just a few of the obstacles that exist in carrying systematic strategies from the public markets into the private markets. Questions like how to construct an optimized portfolio, how to create systems to invest quickly and reduce friction for entrepreneurs, and how to collaborate with co-investors are all important and extremely complex. Over the coming months, my colleagues and I will be sharing our perspectives on these questions and others, and how we think Helio will help us overcome them. Before we dig into these more technical questions, I want to first answer the question of why we think systematic investing matters at all. The answer resides in two important benefits that systematic investing brings to the table: efficiency and scalability.
Efficiency
Today, raising an equity round is an asset-intensive undertaking for both the entrepreneur and the investor. It’s estimated that it takes 60–90 days for a fast-growing business to close a round and much longer for the majority of businesses. For a founder, those 60–90 days (or 960–1,440 waking hours for those of you who get eight hours of sleep) could be used in dozens of other ways, the most important of which actually involve running a business. Conversely, investors spend hundreds of hours sourcing, performing initial due-diligence, diving into secondary due-diligence, negotiating terms, and presenting deals to investment committee. The process is laborious to say the least.
With a rules-based approach, systematic investors can arrive at a level of conviction quickly. Using Helio, we are able to forecast growth and potential of businesses in an automated fashion. With a rules-based foundation, a systematic investing strategy built on Helio will be able to combine the sourcing and due-diligence processes that take investors weeks into a matter of seconds. As mentioned earlier, we’re not naive to the realities that make private investing more time consuming. However, going from first meeting to closing in 15 days or less is a massive improvement from the status quo.
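As a purely hypothetical sketch of what "rules-based" means in practice, the snippet below screens candidate companies against fixed thresholds. The field names and cutoffs are invented for illustration and are not CircleUp's or Helio's actual criteria.

```python
# Hypothetical rules-based screen: every threshold and field name here is
# invented for illustration, not an actual investment model.

def passes_screen(company):
    rules = [
        company["revenue_growth"] >= 0.30,    # growing at least 30% year over year
        company["gross_margin"] >= 0.35,      # healthy unit economics
        company["distribution_points"] >= 500, # minimum retail footprint
    ]
    return all(rules)

candidates = [
    {"name": "AcmeSnacks", "revenue_growth": 0.45, "gross_margin": 0.40, "distribution_points": 800},
    {"name": "SlowSoda",   "revenue_growth": 0.10, "gross_margin": 0.50, "distribution_points": 2000},
]

shortlist = [c["name"] for c in candidates if passes_screen(c)]
print(shortlist)  # ['AcmeSnacks']
```

Because the rules are explicit, the screen runs over any number of companies in seconds, which is the efficiency argument in a nutshell; the hard part, as the article notes, is everything the toy version leaves out.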
Scalability
Related to, but fundamentally different from the role efficiency plays is the value that stems from the scalability of a systematic investing strategy. Scalability benefits a number of stakeholders: investors, limited partners (LPs), and founders.
For founders, efficiency and scalability go hand-in-hand. Because raising equity has historically been a months-long exercise, many founders think of raising money as something they need to do extremely intentionally. The result is that most founders aren’t actively raising and won’t take on equity unless they are in the midst of an open fundraise. Systematic investing can change that perception entirely.
For investors, systematic investing unlocks an entire asset class and mitigates a number of risks that most LPs have to think about. First is scale-up risk. As most private equity funds grow, they start to focus on later-stage businesses to keep their economic model constant. If one investor can only diligence one deal, then she would need to write a larger check if she has a larger pool of capital to deploy. As a result, a firm that begins with a $50M fund will look at very different deals when it raises a $500M fund. With systematic investing (especially in CPG), that dynamic is not an issue — the entire sector is blue ocean, meaning an investor could easily deploy $500 million, or even a few billion dollars, into this asset class without creeping into larger companies. This allows investors to stay focused on the same size deals, where the investor has the most expertise. Second is key-person risk. The best VCs pride themselves on their network or brand. That network or brand is often consolidated in one or two key partners. With a systematic strategy, key-person risk becomes a thing of the past. The platform must have value to founders, but the risk of losing a “rainmaker” in a fund is no longer present.
By altering the paradigm of fundraising so it becomes less of a burden for founders, the number of founders actively seeking funding will only grow. Coupled with the massive growth of startups in consumer, the demand for capital will also grow. While this rising tide will raise all ships, the expectation from founders will be to be able to raise quickly. Only a systematic strategy can effectively deliver on a promise to do so.
The Path Forward: A product that benefits the entire ecosystem
We believe that systematic investing has the potential to accelerate our goal of getting more great consumer companies the capital they need to grow and be successful. Our systematic strategy will help us support entrepreneurs more efficiently and allow us to work with more companies than we have ever been able to in the past. The strategy also presents a powerful complement to co-investors and provides LPs an asset that has historically been extremely difficult to access. In future posts we will write more on these and other topics, so stay tuned as we take these exciting next steps towards a new future of private investing.
|
The Future of Private Investing: Why we want to build a systematic investing strategy
| 8
|
the-future-of-private-investing-why-we-want-to-build-a-systematic-investing-strategy-170965a7cf81
|
2018-05-23
|
2018-05-23 14:53:28
|
https://medium.com/s/story/the-future-of-private-investing-why-we-want-to-build-a-systematic-investing-strategy-170965a7cf81
| false
| 1,362
| null | null | null | null | null | null | null | null | null |
Venture Capital
|
venture-capital
|
Venture Capital
| 32,826
|
Jonathan Scherr
|
Bringing data science to VC @circleup. Advisor and investor to companies in regulated sectors.
|
9cd2afa30d99
|
jbscherr
| 235
| 232
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
b215b178ed4f
|
2018-07-09
|
2018-07-09 15:19:21
|
2018-07-27
|
2018-07-27 20:47:50
| 8
| false
|
en
|
2018-07-27
|
2018-07-27 20:47:50
| 16
|
170c0fd0d7
| 3.971069
| 2
| 0
| 0
|
Excitement, nervousness, anticipation, but nevertheless ready! These are some of the feelings we all encountered while preparing to attend…
| 5
|
OU Data Analytics Lab @ SIGMOD
Excitement, nervousness, anticipation, but nevertheless ready! These are some of the feelings we all encountered while preparing to attend SIGMOD 2018. This was our first time attending a SIGMOD conference and we would like to take you on this journey with us as we reflect on our experience at SIGMOD 2018 and our stay in Houston, Texas.
OU DALab Road Trip to Houston! From left to right: Jasmine DeHart, Keerti Banweer, Redae Beraki, Shine Xu, Austin Graham
We, at the University of Oklahoma Data Analytics Lab (OU DALab), explore complex problems related to big data, interactive algorithms, data mining, natural language processing, and more. Such a diverse set of specialties allows the opportunity to experiment with novel methods of data cleaning, transforming, and of course, machine learning. In these endeavors, it is important to gain experience and knowledge from all relevant conferences and network with fellow researchers in these communities.
In an effort to gain insight as well as experience the state of the art in database technologies, participation in top-tier conferences such as SIGMOD 2018 is crucial. It is an event to see cutting-edge research papers, meet the talented researchers behind them, and get inspired. The ACM SIGMOD Conference is a premier conference on topics related to research on the management of data. The General Chairs for the conference were Gautam Das and Chris Jermaine. The conference included paper presentations, several tutorials, and several co-located workshops.
One of the first related workshops was HILDA 2018. The human in the loop data analytics workshop contained several interesting paper and poster presentations. The OU DALab team supported Chenguang (Shine) Xu and his presentation of a recent conference paper Detecting Simpson’s Paradox.
SIGMOD 2018 was located in Houston, TX. While in Houston, we stayed at an Airbnb hosted by Juan and the Wicoliving team in Midtown. We journeyed for 7 long hours on our expedition from Norman, Oklahoma to Houston, Texas. During our stay, we found several great restaurants that we highly recommend if you’re ever in town!
The Pit Room — When you think of BBQ, you may think of chicken, ribs, pulled pork, corn on the cob and baked beans. But the Pit Room had so much more! Several types of smoked sausages, turkey, brisket, elote, macaroni and even dinosaur ribs (beef ribs). Like they say, “When in Rome, do as the Romans do”, so you know we had to try some BBQ. We thoroughly enjoyed our experience here, so much that we dined here twice during our stay. This was Keerti’s first experience eating BBQ and in these moments she experienced true love.
Luna Pizzeria — Looking for some great pizza at a local spot? Luna Pizzeria will not disappoint; their margherita pizza is amazing! This restaurant is in an amazing location, has a friendly staff, and reasonably priced pizza. What more could you ask for?
Cabo Baja Tacos and Burritos — If you have never been to a Mexican restaurant with carne asada fries, then you are missing out! This is not your traditional Mexican restaurant. This restaurant has a unique local selection of beverages, cactus burritos, and quick customer service. With options like that, would you choose to explore or exploit?
Sparkles Hamburger Spot — A quaint, small business burger joint located in Downtown Houston. This restaurant was not the most tourist-y spot in Houston, but the burgers were huge and the fries were delicious. Definitely a local gem in the community.
And of course, as scientists one of the places we decided to visit was the Houston Museum of Natural Science.
From left to right: The Pit Room, Luna Pizzeria, Cabo Baja Tacos and Burritos
From left to right: Cabo Baja Tacos and Burritos, Houston Museum of Natural Science
Our experience at SIGMOD 2018 has been truly beneficial in the development of our professionalism, research projects, and careers. OU DALab has several projects that have been influenced by the SIGMOD conference, the HILDA workshop, and the DEEM workshop. Our lab includes projects like:
Human over the loop Analytics. This project has been influenced by SIGMOD and HILDA.
Detecting Simpson’s Paradox. This project has been influenced by SIGMOD, HILDA, and DEEM.
Speed Labeling. This project has been influenced by SIGMOD, HILDA, and DEEM.
Visual Content Privacy Leaks. This project has been influenced by SIGMOD and HILDA.
From left to right: Redae Beraki, Jasmine DeHart, Shine Xu, Austin Graham, Keerti Banweer
This is the introduction to our OU DALab@SIGMOD Blog Series. In the upcoming weeks, there will be several blogs encompassing a variety of workshops, demos, and presented papers at the conference. Make sure to follow our blog to receive notifications about all the activities of the OU Data Analytics Lab!
To keep up with the development of these projects, new blogs, and much more, follow us on Twitter!
|
OU Data Analytics Lab @ SIGMOD
| 6
|
ou-data-analytics-lab-sigmod-170c0fd0d7
|
2018-08-30
|
2018-08-30 17:43:31
|
https://medium.com/s/story/ou-data-analytics-lab-sigmod-170c0fd0d7
| false
| 752
|
Labeling, analyzing, reimagining data.
| null | null | null |
OU Data Analytics Lab
| null |
ou-data-analytics-lab
|
DATA ANALYTICS,COMPUTER SCIENCE,DATABASE,MACHINE LEARNING,DEEP LEARNING
|
oudalab
|
Oudalab
|
oudalab
|
Oudalab
| 3
|
Jasmine DeHart
|
2nd Year PhD Student · Privacy & Machine Learning · University of Oklahoma · OU DALab
|
124655478290
|
jasdehart
| 7
| 7
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
f45af444892f
|
2018-02-04
|
2018-02-04 02:11:17
|
2018-02-04
|
2018-02-04 02:14:51
| 1
| false
|
en
|
2018-02-09
|
2018-02-09 17:00:14
| 5
|
170c55395058
| 1.732075
| 5
| 1
| 0
|
SenzIT revolutionizes justice in real time with cognitive computing
| 5
|
Ready for the digital truth? Meet Evidencer
SenzIT revolutionizes justice in real time with cognitive computing
There’s never been a better time to utilize the power of technology to ensure that the digital truth prevails in courts of law around the globe. In this era of fake news and fact-checking probes, there’s an imperative to strive for “smarter, faster justice” and to go beyond the traditional means of recording, organizing and processing crucial courtroom evidence.
“Justice Denied Anywhere
Diminishes Justice Everywhere.”
- Martin Luther King, Jr.
How can we better leverage big data, AI, video analytics and enterprise mobility technologies to produce more accurate, timely verdicts?
Meet Evidencer — a complete investigative and e-court solution from SenzIT that revolutionizes the way justice and legal proceedings are handled. Capitalizing on the ingenuity of IBM Watson and the IBM Intelligent Operations Center, Evidencer expands the frontier of cognitive computing, helping deliver the right verdict on time, every time.
Evidencer harnesses the speed and analytical effectiveness of cognitive computing with a mobile-ready, end-to-end approach paired with pristine, cloud-based camera and voice recordings. The result? A smarter and more reliable justice system. Seamless integration with the IBM Intelligent Operations Center makes Evidencer a fundamental and budget-friendly component for smart cities on the rise everywhere.
The Evidencer Smarter Justice Suite (http://evidencer.com.au/) comprises:
Evidencer Suite for Law Enforcement (http://evidencer.com.au/evidencer_lawenforcement.html)
Evidencer Suite for Judiciary (http://evidencer.com.au/evidencer_judiciary.html)
The Evidencer smarter justice prototype was recently chosen as one of just eight global finalists for the IBM Watson Build challenge. IBM created the Watson Build challenge to spark development of new cognitive solutions. In 2017 — its first year — the competition attracted hundreds of IBM Business Partners from around the world who produced nearly 400 business plans for Watson-based solutions. As a global finalist, Evidencer demonstrated its place at the forefront of cognitive innovation.
Together, Evidencer and the IBM Intelligent Operations Center:
Offer an easy-to-use interface that facilitates and speeds up data exchange
Enable courts and law enforcement to gather complete, relevant information from many sources to build a case
Provide process automation capabilities that streamline the management of time-sensitive court cases
Help enforce and apply rules, timing exceptions, tolerance levels and other processes, enabling the effective management and fair handling of court cases
Assist courts and law enforcement agencies in closing cases faster, with better outcomes
Ready to learn more?
Check out Evidencer at: http://evidencer.com.au
|
Ready for the digital truth? Meet Evidencer
| 72
|
ready-for-the-digital-truth-meet-evidencer-170c55395058
|
2018-04-14
|
2018-04-14 15:41:38
|
https://medium.com/s/story/ready-for-the-digital-truth-meet-evidencer-170c55395058
| false
| 406
|
Cognitive Voices. Discussions on latest happenings in AI and cognitive computing.
| null |
ibmwatson
| null |
Cognitive Voices
| null |
cognitivebusiness
|
COGNITIVE COMPUTING,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING
|
ibmwatson
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
IBM Cognitive Business
|
This is the age of breakthrough. An age in which digital business meets digital intelligence — human expertise enhanced, scaled and accelerated.
|
1fdfd3496c89
|
ibmcognitivebusiness
| 31,075
| 3,946
| 20,181,104
| null | null | null | null | null | null |
457
| null | 0
| null |
2017-09-29
|
2017-09-29 21:51:21
|
2017-10-04
|
2017-10-04 00:51:49
| 2
| true
|
en
|
2017-10-04
|
2017-10-04 00:51:49
| 4
|
170c57956e9f
| 4.953145
| 77
| 0
| 0
|
The Army Research Laboratory’s plan to use human brains to train machines
| 5
|
The Military Has a Plan for Human Brain Waves
The Army Research Laboratory’s plan to use human brains to train machines
The human brain is responsible for making us adaptable and widespread — a singularly adept instrument to help humans survive and thrive. Even as artificial intelligence quickly progresses, when it comes to military conflicts, people still outpace robots in crucial split-second decision-making. Slowly but surely, though, the gap is lessening, and training robots’ targeting capabilities using human brain responses may help close it.
When humans make decisions or respond to specific stimuli, our brains emit what’s known as a P300 response. We can measure that response, and the evaluation of those responses is used primarily with medical patients who have some form of neurodegenerative disease or disability. For example, a P300 speller is a device that allows a person to input text or commands using thought, based on their P300 reactions to certain letters.
The potential for P300 responses and their applicability outside the field of medicine is great and has been recognized. For a number of years, the U.S. Army Research Laboratory (ARL), the U.S. Army’s corporate research laboratory, has been exploring the scientific methods and potential for using P300 impulses for several military projects through its Cognition and Neuroergonomics Collaborative Technology Alliance. One of these projects is a neural net that can learn from P300 responses. This neural net would allow the ARL to better train AI in areas such as targeting, threat recognition, and situational awareness, and even allow for more immersive human training. At some point, soldiers may even wear electroencephalograms (EEGs), which measure brain responses, allowing ARL researchers to monitor and input their P300 responses into a neural net in real time.
In a paper on the subject presented at the annual Intelligent User Interfaces conference, held in Cyprus last March, researchers from the ARL and DCS Corp, a professional services firm that works with the Department of Defense, documented how they fed datasets of human brain waves into a neural network to teach the neural network to recognize when a human is deciding what to target.
“We were interested in building a neural network that can learn from laboratory data, so, in the lab, you have someone sit down, and [you] show them pictures rapidly, or you have them look around at a small scene, and from that we want to train a neural network,” says Stephen M. Gordon, one of the paper’s authors and a contractor with DCS Corp. “So, when they fixated on a stimulus, an object of interest, something that was salient in the environment, something that was relevant to their task…Could we record that? Could we do that without using any specific training data?”
This kind of research, and the projects it leads to, deserves further scrutiny, however. “While this sort of research applied militarily may increase the ability of AI battlefield weapons to be as capable, proportionate, and protective of noncombatants as humans are — which is good — the wider question is: Do we want to live in a world suffused with cheap, effective, difficult-to-attribute, and remorseless lethal autonomous weapons? We should all be thinking about whose interest that’s in,” says Anthony Aguirre, co-founder of the Foundational Questions Institute, which tackles new frontiers and innovative ideas integral to a deep understanding of reality.
Neural nets require some form of data to train them. And generally, the more specific you can make the data to the task you want the neural net to understand, the better the net will perform. One goal of the ARL program is to see how generalized researchers could make this data while still retaining the effective performance of the neural net. This is because identifying a target in the real world is incredibly difficult for computers, as they rely on structured data, but the real world is chaotic, spontaneous, and full of ever-shifting variables when it comes to decision-making. So, for example, an enemy combatant popping out from behind a building will be difficult for a neural net to recognize effectively — particularly if the environment contains other shifting stimuli, such as gunfire or other soldiers. That’s where P300 responses may be able to help.
This is why the ARL program is looking for a way to generalize P300 responses across individuals. By examining the neural impulses of multiple individuals and using a neural network to look at the way their responses are triggered, the computer can start to piece together various scenes and their commonalities. In essence, it’s a bit like putting together a puzzle by drawing from the different P300 responses of numerous individuals until the neural net is able to evaluate a battlefield situation in the same way a human might. If that puzzle is a team of Navy SEALs outfitted with sensors that monitor their eye movements, for example, a neural net could draw and generalize from the entire team’s responses and perspectives, without requiring the SEALs to be in a lab. The neural net would also learn from the brain waves of the entire team.
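As a toy illustration of pooling responses across individuals, the sketch below learns the simplest possible decision rule, a threshold on the mean amplitude in a post-stimulus window, from synthetic "epochs" drawn from several simulated subjects. Everything here (the signal shape, the window, the decision rule) is invented for illustration; the ARL work uses real EEG recordings and a neural network, not a threshold.

```python
import random
import statistics

random.seed(0)

def synthetic_epoch(is_target):
    """Toy EEG epoch: targets get an extra positive bump around sample 30
    (a stand-in for a P300 ~300 ms after the stimulus). Purely synthetic."""
    bump = 5.0 if is_target else 0.0
    return [random.gauss(0, 1) + (bump if 25 <= t <= 35 else 0) for t in range(100)]

def feature(epoch):
    # Mean amplitude in the post-stimulus window where a P300 would appear.
    return statistics.mean(epoch[25:36])

# Pool labelled epochs from several simulated "subjects", echoing the idea of
# generalizing across individuals rather than training per person.
train = [(feature(synthetic_epoch(label)), label)
         for _subject in range(5)
         for label in [True, False] * 20]

mean_target = statistics.mean(f for f, y in train if y)
mean_nontarget = statistics.mean(f for f, y in train if not y)
threshold = (mean_target + mean_nontarget) / 2  # simplest possible decision rule

def predict(epoch):
    return feature(epoch) > threshold

hits = sum(predict(synthetic_epoch(y)) == y for y in [True, False] * 50)
print(f"accuracy: {hits / 100:.2f}")
```

On this idealized data the pooled threshold separates targets from non-targets almost perfectly; real EEG is far noisier and far more variable across people, which is precisely why the generalization problem is hard.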
A brain-computer interface.
Applicability goes beyond just targeting to potentially even more mutually adaptive human-AI systems, whether it be in an airplane cockpit or providing analysts with the most noteworthy footage from thousands of hours of satellite footage.
“If I told you to do a task, and then I did the same task, we’d probably do it in slightly different ways, based off past experiences or how we were trained. If AI could leverage the uniqueness of certain individuals and be mutually adaptive in the sense that you have a particular way of doing something, and AI infers that,” says Vernon Lawhern, a civilian scientist at ARL. “We are trying to look at some aspects, such as how AI systems infer a person’s state, and how to use that state to modulate different behaviors in some closed-loop system.”
Obstacles to such systems remain, and the program is still in its initial steps. This type of project starts to translate human experiences into data that is capable of being used to teach a neural net. In many ways, humans are the most adept sensors in the world, allowing us to digest and adapt to multitudes of stimuli per second. Translating this capability to neural nets opens up a range of possibilities, including those that have astounding potential but may also allow for lesser restrictions should such a system be integrated with weapons. As Aguirre notes, such a program “may increase the ability of AI battlefield weapons to be as capable, proportionate, and protective of noncombatants as humans.” Then why would we need further regulations of autonomous weapons systems? These are the questions this research needs to grapple with as it continues to develop.
“Science isn’t all about answering questions; it creates them. So, we’ve answered one, and we’ve created one or two more,” says Gordon. The military continues to integrate AI into a variety of systems. Even the CIA has 137 pilot projects directly related to artificial intelligence. The push will continue to make systems more effective, and, for better or worse, the Cognition and Neuroergonomics Collaborative Technology Alliance at the ARL will have an important role to play.
|
The Military Has a Plan for Human Brain Waves
| 418
|
the-military-has-a-plan-for-human-brain-waves-170c57956e9f
|
2018-08-25
|
2018-08-25 02:03:35
|
https://medium.com/s/story/the-military-has-a-plan-for-human-brain-waves-170c57956e9f
| false
| 1,211
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Benjamin Powers
|
Benjamin’s writing has appeared in Rolling Stone, New Republic, and Pacific Standard, among others. You can find all of his work at benjaminopowers.com
|
cb740ecd5e1
|
bnpowers8
| 892
| 243
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-05
|
2017-11-05 23:41:01
|
2017-11-05
|
2017-11-05 23:59:17
| 2
| false
|
en
|
2017-11-05
|
2017-11-05 23:59:17
| 0
|
170d94a51c4a
| 1.262579
| 0
| 0
| 0
|
About year ago, our oldest daughter was learning the colours and had bad imprint. In reading Scooby-Doo’s Color Mystery, she confused the…
| 5
|
Follow the Hat Brick Road
About a year ago, our oldest daughter was learning her colours and picked up a bad imprint. While reading Scooby-Doo’s Color Mystery, she confused the words “hat” and “yellow”. The result was that anything yellow was now called the colour “hat”: “The bird has a hat beak” or “the banana is hat”.
The page of Scooby-Doo’s Color Mystery that caused the problem.
This reminded me of a few instances from when we were first creating our NLU engine for the Ubi. We had a bad set of training data that ended up leading to even worse performance than we had expected. We were building a classifier for timers and alarms: “Remind me in 10 minutes to check on the pasta” or “Set an alarm for 2 minutes from now”. Figuring out and catching the intent of an utterance could be very difficult, but the entity even more so.
In the example above, depending on the amount of training, the system might take the reminder entity to be “To check on the pasta” or “check on the pasta”. Or it could identify “from now” as the reminder entity instead of recognizing it as part of the timing entity, “[present time] + 00:02:00.00”.
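A toy version of this slot-extraction problem can be sketched with a regular expression. A real NLU engine is statistical and this grammar is far too rigid, but it shows how the timing entity and the reminder entity come apart in the two example utterances.

```python
import re
from datetime import datetime, timedelta

# Toy slot extractor for timer/reminder utterances. The grammar is a
# hypothetical stand-in for a trained classifier, not the Ubi's engine.
PATTERN = re.compile(
    r"(?:remind me in|set an alarm for)\s+(\d+)\s+(second|minute|hour)s?"
    r"(?:\s+from now)?(?:\s+to\s+(.*))?",
    re.IGNORECASE,
)

def parse(utterance, now=None):
    """Split an utterance into a timing entity (fire_at) and a reminder entity."""
    now = now or datetime.now()
    m = PATTERN.search(utterance)
    if not m:
        return None
    amount, unit, reminder = int(m.group(1)), m.group(2).lower(), m.group(3)
    delta = timedelta(**{unit + "s": amount})  # e.g. timedelta(minutes=10)
    return {"fire_at": now + delta, "reminder": reminder}

now = datetime(2024, 1, 1, 12, 0)
print(parse("Remind me in 10 minutes to check on the pasta", now))
print(parse("Set an alarm for 2 minutes from now", now))
```

Note how “from now” is explicitly absorbed into the timing side of the pattern; drop that one optional group and the parser makes exactly the mistake described above.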
Machine learning has huge potential for time savings by short cutting tedious processes through automation however, it’s only good as its training. Bad teaching makes for bad learning but even with good teaching, wrong knowledge can be acquired.
|
Follow the Hat Brick Road
| 0
|
follow-the-hat-brick-road-170d94a51c4a
|
2017-11-05
|
2017-11-05 23:59:17
|
https://medium.com/s/story/follow-the-hat-brick-road-170d94a51c4a
| false
| 233
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Leor Grebler, UCIC
|
CEO of UCIC — The Voice of AI — making hardware products come alive with voice interaction. Proofs of concept, prototypes, and tools for integration of voice.
|
136fa39ffeba
|
Grebler
| 3,566
| 359
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-01-09
|
2018-01-09 08:06:35
|
2018-01-09
|
2018-01-09 10:08:25
| 9
| false
|
en
|
2018-01-09
|
2018-01-09 10:08:25
| 0
|
170f197ad981
| 5.377358
| 0
| 0
| 0
|
We live at the start of a revolutionary era, driven by advances in data analytics, computing power, and cloud computing.
| 5
|
Machine Learning Algorithms You need to Know
We live at the start of a revolutionary era, driven by advances in data analytics, computing power, and cloud computing.
Machine learning will play a huge role in it, and the brains behind machine learning are its algorithms.
This article covers 10 of the most popular machine learning algorithms in use today.
These algorithms fall into 3 main categories.
Supervised Algorithms: The training data set contains the inputs as well as the desired outputs. During training, the model adjusts its variables to map inputs to the corresponding outputs.
Unsupervised Algorithms: In this category, there is no target outcome. The algorithms cluster the data set into different groups.
Reinforcement Algorithms: These algorithms learn by taking decisions: the algorithm trains itself on the success or error of its outputs, and eventually, through experience, it is able to make good predictions.
The following algorithms are covered in this article.
Linear Regression
SVM (Support Vector Machine)
KNN (K-Nearest Neighbors)
Logistic Regression
Decision Tree
K-Means
Random Forest
Naive Bayes
Dimensionality Reduction Algorithms
Gradient Boosting Algorithms
1. Linear Regression
The Linear Regression algorithm uses the data points to find the best-fit line that models the data. A line can be represented by the equation y = m*x + c, where y is the dependent variable and x is the independent variable. Basic calculus is applied to find the values of m and c for the given data set.
Linear Regression has 2 types: Simple Linear Regression, where only 1 independent variable is used, and Multiple Linear Regression, where multiple independent variables are defined.
Scikit-learn is a simple and efficient tool for machine learning in Python. Following is an implementation of Linear Regression using scikit-learn.
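The scikit-learn code referenced here did not survive extraction; below is a minimal sketch of what such an implementation looks like. The data is made up for illustration: points that lie exactly on y = 3x + 2, so the fitted m and c recover those values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Points lying on the line y = 3x + 2
X = np.arange(10, dtype=float).reshape(-1, 1)  # independent variable x
y = 3 * X.ravel() + 2                          # dependent variable y

model = LinearRegression()
model.fit(X, y)  # least squares finds the best-fit m and c

m, c = model.coef_[0], model.intercept_
print(f"m = {m:.2f}, c = {c:.2f}")
```

With real, noisy data the fitted m and c would only approximate the underlying relationship rather than recover it exactly.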
2. SVM (Support Vector Machine)
This is a classification-type algorithm. The algorithm separates the data points with a line, chosen so that it is furthest from the nearest data points of the 2 categories.
In the diagram above, the red line is the best line, since it has the greatest distance from the nearest points. Based on this line, the data points are classified into 2 groups.
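The maximum-margin line described above can be sketched with scikit-learn’s linear SVM; the two point groups here are invented for illustration:

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable groups of 2-D points
X = np.array([[1, 1], [1, 2], [2, 1],    # category 0
              [6, 6], [6, 7], [7, 6]])   # category 1
y = np.array([0, 0, 0, 1, 1, 1])

# A linear kernel finds the separating line furthest from both groups
clf = SVC(kernel="linear")
clf.fit(X, y)

print(clf.predict([[2, 2], [6, 5]]))  # one point near each group
```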
3. KNN (K-Nearest Neighbors)
This is a simple algorithm which predicts an unknown data point from its k nearest neighbors. The value of k is a critical factor for the accuracy of the prediction. The nearest neighbors are determined by computing distances with basic distance functions such as the Euclidean distance.
However, this algorithm needs high computation power, and we need to normalize the data first to bring every feature into the same range.
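Both points above (choosing k, and normalizing first) can be sketched with scikit-learn; the two-feature data set is fabricated so that the second feature has a much larger scale:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Second column is on a much larger scale than the first
X = np.array([[1.0, 100], [1.2, 110], [0.9, 95],    # class 0
              [3.0, 300], [3.2, 310], [2.9, 290]])  # class 1
y = np.array([0, 0, 0, 1, 1, 1])

# Normalize first, so the large-scale feature does not dominate the distance
scaler = StandardScaler().fit(X)

knn = KNeighborsClassifier(n_neighbors=3)  # k = 3 is the critical choice
knn.fit(scaler.transform(X), y)

print(knn.predict(scaler.transform([[1.1, 105]])))
```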
4. Logistic Regression
Logistic Regression is used where a discrete output is expected, such as the occurrence of some event (e.g., predicting whether it will rain or not). Usually, logistic regression uses a function to squeeze values into a particular range.
The “sigmoid” (logistic) function is one such function, with an “S”-shaped curve, used for binary classification. It converts values into the range (0, 1), which is interpreted as the probability of the event occurring.
y = e^(b0 + b1*x) / (1 + e^(b0 + b1*x))
Above is a simple logistic regression equation, where b0 and b1 are constants. During training, values for these are calculated such that the error between the prediction and the actual value becomes minimal.
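The equation above can be computed directly. A small sketch follows; the constants b0 and b1 here are hypothetical, standing in for values a training procedure would produce:

```python
import math

def logistic_prediction(x, b0, b1):
    """y = e^(b0 + b1*x) / (1 + e^(b0 + b1*x)): squeezes any input into (0, 1)."""
    z = b0 + b1 * x
    return math.exp(z) / (1 + math.exp(z))

# With hypothetical fitted constants b0 = -4, b1 = 3:
p = logistic_prediction(2.0, b0=-4.0, b1=3.0)
print(round(p, 4))  # probability of the event occurring
```

Note that with b0 = 0 and b1 = 1, an input of 0 gives exactly 0.5, the midpoint of the “S” curve.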
5. Decision Tree
This algorithm splits the population into several sets based on some chosen properties (independent variables) of the population. Usually, it is used to solve classification problems. The splits are made using techniques such as Gini impurity, chi-square, and entropy.
Let’s consider a population of people and use the decision tree algorithm to identify who is likely to have a credit card. For example, take age and marital status as the properties of the population. If a person is over 30 or married, they tend to prefer credit cards much more, and much less otherwise.
Decision Tree
This decision tree can be further extended by identifying suitable properties that define more categories. In this example, if a person is married and over 30, they are most likely to have a credit card (100% preference). Training data is used to generate this decision tree.
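The credit card example above can be sketched with scikit-learn’s decision tree; the small age/marital-status data set is invented to mirror the rule described (over 30 or married prefers a card):

```python
from sklearn.tree import DecisionTreeClassifier

# Columns: [age, is_married]; target: 1 = prefers a credit card
X = [[25, 0], [22, 0], [35, 0], [40, 1], [32, 1], [28, 1]]
y = [0, 0, 1, 1, 1, 1]

# Splits are chosen by entropy here; "gini" is the other common criterion
tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(X, y)

print(tree.predict([[45, 1], [20, 0]]))  # married 45-year-old vs. single 20-year-old
```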
6. K-Means
This is an unsupervised algorithm that provides a solution to the clustering problem. The algorithm follows a procedure to form clusters of homogeneous data points.
The value of k is an input to the algorithm; based on it, the algorithm selects k centroids. The data points neighbouring a centroid combine with it to create a cluster. Then a new centroid is computed within each cluster, and the data points nearest to the new centroid are combined again to update the cluster. This process continues until the centroids stop changing.
Cluster forming process
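The iterative cluster-forming process above is exactly what scikit-learn’s KMeans runs; a minimal sketch on two fabricated blobs of points:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two obvious blobs of 2-D points
X = np.array([[1.0, 1.0], [1.5, 2.0], [1.0, 0.0],
              [10.0, 10.0], [10.5, 11.0], [9.0, 10.0]])

# k = 2 is given as input; centroids are refined until they stop moving
km = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = km.fit_predict(X)

print(labels)  # points within the same blob share a cluster label
```

Which numeric label each blob receives is arbitrary; only the grouping is meaningful.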
7. Random Forest
As its name suggests, a random forest can be identified as a collection of decision trees. Each tree tries to estimate a classification, and this is called a “vote”. We consider the vote of every tree and choose the most-voted classification.
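The voting scheme above is what scikit-learn’s RandomForestClassifier does internally; a minimal sketch on made-up data:

```python
from sklearn.ensemble import RandomForestClassifier

# Two well-separated groups of 2-D points
X = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]
y = [0, 0, 0, 1, 1, 1]

# Each of the 50 trees casts a "vote"; the majority class wins
forest = RandomForestClassifier(n_estimators=50, random_state=0)
forest.fit(X, y)

print(forest.predict([[1.5, 1.5], [8.5, 8.5]]))
```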
8. Naive Bayes
This algorithm is based on Bayes’ theorem in probability. Naive Bayes is applied only when the features are assumed independent of each other; that “naive” independence assumption is what gives the algorithm its name. If we try to predict a flower type by its petal length and width, we can use the Naive Bayes approach, treating those two features as independent.
Bayes Equation
The Naive Bayes algorithm also falls into the classification type. It is mostly used when the problem has many classes.
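The flower example above can be sketched with scikit-learn’s Gaussian Naive Bayes; the petal measurements here are invented for two hypothetical flower types:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Columns: [petal length, petal width]; two hypothetical flower types
X = np.array([[1.4, 0.2], [1.3, 0.2], [1.5, 0.3],   # type 0
              [4.7, 1.4], [4.5, 1.5], [4.9, 1.5]])  # type 1
y = np.array([0, 0, 0, 1, 1, 1])

# Each feature's likelihood is modelled independently, per Bayes' theorem
nb = GaussianNB()
nb.fit(X, y)

print(nb.predict([[1.4, 0.25], [4.6, 1.4]]))
```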
9. Dimensionality Reduction Algorithms
Some data sets contain so many variables that they become very hard to handle. Especially nowadays, systems collect data at a very detailed level, thanks to abundant resources. Such data sets may contain thousands of variables, and most of them may be unnecessary.
In that case, it is almost impossible to identify by hand the variables with the most impact on our prediction. Dimensionality reduction algorithms are used in these situations. They can also leverage other algorithms, such as Random Forest or Decision Tree, to identify the most important variables.
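One classic dimensionality reduction technique (not named in the text, but representative) is principal component analysis. The sketch below builds a synthetic 10-variable data set that really only carries 2 underlying signals, then recovers a 2-column representation:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))          # 2 true underlying signals
X = base @ rng.normal(size=(2, 10))       # 10 observed, redundant variables

# Project the 10 columns down to the 2 directions of greatest variance
pca = PCA(n_components=2)
reduced = pca.fit_transform(X)

print(reduced.shape)                                   # (100, 2)
print(round(sum(pca.explained_variance_ratio_), 3))    # ~1.0: little information lost
```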
10. Gradient Boosting Algorithms
A Gradient Boosting algorithm combines multiple weak learners to create a more powerful, accurate model. Using multiple estimators instead of a single one also makes the algorithm more stable and robust.
There are several Gradient Boosting Algorithms.
XGBoost — uses linear and tree-based learners
LightGBM — uses only tree-based learners
The specialty of gradient boosting algorithms is their high accuracy. Further, implementations like LightGBM also offer incredibly high performance.
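The weak-learners-combined idea above can be sketched with scikit-learn’s gradient boosting (a simpler stand-in for XGBoost/LightGBM, which are separate libraries). Each "weak" learner here is a depth-1 tree stump, on made-up one-feature data:

```python
from sklearn.ensemble import GradientBoostingClassifier

# A single feature with two clearly separated value ranges
X = [[0], [1], [2], [3], [10], [11], [12], [13]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

# 50 shallow (depth-1) "weak" trees, each correcting the previous ones' errors
gbm = GradientBoostingClassifier(n_estimators=50, max_depth=1, random_state=0)
gbm.fit(X, y)

print(gbm.predict([[1], [12]]))
```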
|
Machine Learning Algorithms You need to Know
| 0
|
machine-learning-algorithms-you-need-to-know-170f197ad981
|
2018-01-09
|
2018-01-09 10:08:26
|
https://medium.com/s/story/machine-learning-algorithms-you-need-to-know-170f197ad981
| false
| 1,107
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
ICARUS Solution
| null |
d0cfcc6fb412
|
icarus.solution
| 3
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
94810dc56f79
|
2018-03-19
|
2018-03-19 16:22:43
|
2018-03-19
|
2018-03-19 16:25:39
| 1
| false
|
en
|
2018-03-19
|
2018-03-19 16:25:39
| 4
|
17104ee63e9
| 1.815094
| 1
| 0
| 0
|
Here’s a way deep learning can make a positive impact in the health care industry.
| 3
|
A.I. may spot heart failure signs early
A new method that uses deep learning to analyze vast amounts of personal health record data could identify early signs of heart failure, researchers say.
A paper, which appears in the Journal of the American Medical Informatics Association (JAMIA), describes how the method addresses temporality in the data — something previously ignored by conventional machine learning models in health care applications.
The research uses a deep learning model to allow earlier detection of the incidents and stages that often lead to heart failure within 6–18 months. To achieve this, researchers use a recurrent neural network (RNN) to model temporal relations among events in electronic health records.
Temporal relationships communicate the ordering of events or states in time. This type of relation is traditionally used in natural language processing. However, researchers saw a new opportunity to leverage the power of RNNs.
“I studied deep learning and I was wondering if RNNs could be introduced into health care. It is a very popular model for processing sequences and is traditionally used for translation,” says Edward Choi, a PhD student at Georgia Tech, working with Jimeng Sun, an associate professor at the School of Computational Science and Engineering.
By utilizing RNN, the algorithm can anticipate early stages of heart failure, which will ultimately lead to better preventative care for patients at risk of heart disease.
“Machine learning is being used in every aspect of health care. From diagnosis and treatments to recommendations for patient care after surgeries. This particular model is focused on deep learning, which has had great success in many industries. However, in health care, we are on the front of pioneering deep learning and Edward is one of the first ones to apply it,” Sun says.
Related: ‘Deep learning’ goes faster with organized data
According to the Centers for Disease Control and Prevention, heart failure affects 5.7 million adults in the United States, and half of those who develop heart failure die within 5 years of diagnosis, costing the nation an estimated $30.7 billion each year.
The new findings could provide relief to millions of Americans each year by allowing doctors to offer patients early intervention.
“This is a preliminary work, it showed potential that it can do better than classical models — it makes a good promise for how deep learning can make a positive impact in the health care industry,” says Choi.
The National Institutes of Health in collaboration with Sutter Health funded the work.
Source: Georgia Tech
Original Study DOI: 10.1136/amiajnl-2013-002033
Find more research news at Futurity.org
|
A.I. may spot heart failure signs early
| 2
|
a-i-may-spot-heart-failure-signs-early-17104ee63e9
|
2018-05-15
|
2018-05-15 07:28:49
|
https://medium.com/s/story/a-i-may-spot-heart-failure-signs-early-17104ee63e9
| false
| 428
|
Research news from top universities
| null |
futuritynews
| null |
Futurity News
|
editor@futurity.org
|
futurity-news
|
SCIENCE,RESEARCH,HEALTH,ENVIRONMENT,TECHNOLOGY
|
futuritynews
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Futurity News
|
Futurity.org
|
5bc60de02d60
|
Futurity
| 1,282
| 5
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-25
|
2018-07-25 06:55:38
|
2018-07-25
|
2018-07-25 06:57:02
| 0
| false
|
en
|
2018-07-25
|
2018-07-25 06:57:02
| 4
|
17123f3c8468
| 1.996226
| 0
| 0
| 0
|
Author: Shantanu Dindokar, Synerzip.
| 3
|
FROM DATA TO WISDOM, ONE CUSTOMER’S JOURNEY IN BIG DATA AND COGNITIVE COMPUTING SPACE
Author: Shantanu Dindokar, Synerzip.
About 2.5 years ago, a startup contacted us to help build an ambitious analytic product in the field of Big Data and Cognitive Computing. Open to forming a dual shore team setup of dedicated architects, business analysts, developers, and quality engineers, the client needed us to overcome a steep learning curve across the domains of analytics, machine learning, and natural language processing.
The project was exciting because we had to quickly identify a team, upgrade our skills, and jump into action. The technology stack was quite broad, ranging from near-mainstream programming languages such as Scala, to the actor-based, message-driven framework Akka, to analytical libraries such as Spark MLlib and SparkR, and many more.
Finding the right mix
We quickly realized that hiring a team with this expertise was a challenge in a high-demand marketplace. Our Offshore Delivery Center (ODC) in India approached the Human Resources team for help but new candidates were just not available. We were on our own. So we searched internally for help and we found our team. We looked for people with the right attitude toward learning and a focus on delivering value to the customer with every release.
Learn, learn, learn…
The team poured their energy into learning fairly new technologies and concepts such as Scala, Akka, Apache Spark, Apache Mesos, Rancher, NoSQL columnar databases (e.g., Cassandra), NLP systems, RDF, ontologies, and many more libraries. It was inspiring to see the team picking up ideas, building proficiency, and coming together as a group.
Product leadership
Product development kicked off, and we were committed to delivering value and product leadership. Our product owner, along with the customer’s marketing team, conceptualized the solution. We also prioritized features based on industry-specific use cases in retail and healthcare. This value-add focused all our energies on building a product that would cater to our client’s customers’ needs. It significantly improved the product-market fit, and we were able to launch the product at the right time. Knowing how to build is good, but knowing what the market needs is invaluable.
Project leadership
In addition to carrying the product vision forward and ensuring sustained growth without lag, we added shadow resources on our own initiative. These additional resources reduced the risk of ramp-up delays that would have severely impacted launch dates. Since the technology was new, we had to train internal resources in parallel with the product’s iterative development, which ensured that skilled resources were available on demand. At the right point in time, we introduced DevOps engineers as the product went into the beta release phase. This steady pool of skilled resources proved vital for building scalable, fault-tolerant, and robust deployments.
The story doesn’t end here
The product is currently getting positive reviews from a small target audience, and the client’s customers are lining up to explore more. There are already opportunities for a few paid pilot implementations to optimize customers’ business goals. The product is about to launch, so stay tuned as this story unfolds.
|
FROM DATA TO WISDOM, ONE CUSTOMER’S JOURNEY IN BIG DATA AND COGNITIVE COMPUTING SPACE
| 0
|
from-data-to-wisdom-one-customers-journey-in-big-data-and-cognitive-computing-space-17123f3c8468
|
2018-07-25
|
2018-07-25 06:57:02
|
https://medium.com/s/story/from-data-to-wisdom-one-customers-journey-in-big-data-and-cognitive-computing-space-17123f3c8468
| false
| 529
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Synerzip
|
Your trusted agile product development partner.
|
c107da00e90b
|
marketing_91205
| 5
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-12-06
|
2017-12-06 12:23:18
|
2017-12-06
|
2017-12-06 12:47:17
| 1
| false
|
en
|
2017-12-07
|
2017-12-07 09:10:05
| 0
|
171468782b16
| 1.667925
| 0
| 0
| 0
|
You may be thinking I am going to make a real Wall.E robot for my final year project? Well, no, it’s not what the name…
| 3
|
Wall.E - Final Year Project of Mine
Wall.E The Robot
You may be thinking I am going to make a real Wall.E robot for my final year project? Well, no, it’s not what the name suggests. So why did I choose a name like that? It’s because I really liked the character of Wall.E in the animated movie, and I loved its intelligence and its antics in the film.
For my final year I have proposed two ideas, from which one will be selected. The first is a machine learning platform full of different types of machine learning mini-applications, with the option for users to interact with them, a complete guide to building each one, and the source code of the apps. It is basically for anyone who wants to look at a machine learning app and interact or play with it without having to set it up on their own computer, while still having access to the source code and a setup guide if they are interested.
The second idea is to make an intelligent micro-blogging platform which one can self-host on their own servers and use however they want. It will not be a very out-of-the-box micro-blogging platform that climbs to the top of any list, but a simple micro-blogging platform with a little intelligence. Yes, a little. So what will its intelligence be, or how will it be intelligent? What I thought is to add a small and smart recommendation engine, a spam detector, and a viral-post indicator. Yes, that’s all it will have, but it will also have a cute installation wizard, a beautiful minimal user interface, and a lot of love from me.
It’s just that I am not a big genius, and with what we were taught in college and the resources I currently have, it’s like building a whole big tower after just learning how to stack bricks. But I am going to try it, and I am sure I will succeed. I will also be pushing all of the source code to GitHub, so anyone who is interested (which I am sure no one will be) can have a look at it.
|
Wall.E - Final Year Project of Mine
| 0
|
wall-e-final-year-project-of-mine-171468782b16
|
2018-05-31
|
2018-05-31 04:20:53
|
https://medium.com/s/story/wall-e-final-year-project-of-mine-171468782b16
| false
| 389
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Ali Raza
|
Software Developer and a Student who loves to share his ideas and stories
|
22a0f7778a6e
|
ali_raza
| 5
| 13
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-23
|
2017-11-23 05:24:56
|
2017-11-23
|
2017-11-23 19:35:18
| 0
| false
|
en
|
2017-11-23
|
2017-11-23 19:35:18
| 0
|
17147a031e7
| 1.177358
| 0
| 0
| 0
|
I have always been a huge fan of beginnings: moving to another country, getting interested in something new, starting a book. So, my first…
| 2
|
Work in Progress
I have always been a huge fan of beginnings: moving to another country, getting interested in something new, starting a book. So, my first tendency was to pompously title this first post: A Beginning.
Beginnings are exciting: full of new possibilities, of unexplored territories, but they are only a short burst of energy in the face of a grander project. They die in the end, leaving you with the harsh reality that what was once novel, has now become part of your routine. Only now, as I am faced with the next 4 years spent at the same college, in this journey to really grasp the fundamentals of the things I hope I will be involved in for the next few decades, am I starting to realise that after a beginning there are still quite a lot (even more!) fascinating unexplored territories.
So everything I will write here will be part of my ongoing project to better understand AI, to document my projects in which I apply it, to pour down my thoughts on its theoretical aspects, and to practice my ability to start a conversation: around the ethics of AI, around the sheer idea that my generation is given all these powerful tools that have been created in the past 10 years and has to learn how to use them.
Some of my posts will be just a few sentences on my learning process, others will be pages of me trying to break a concept into its building blocks, from math to public policy, and some will document the projects that I am working on. But I want to be a bit selfish and dedicate this blog to my own learning, so hopefully it will be as messy as real learning should be: because in these 4 years of college I want to be unapologetic about my learning.
|
Work in Progress
| 0
|
work-in-progress-17147a031e7
|
2017-11-23
|
2017-11-23 19:35:19
|
https://medium.com/s/story/work-in-progress-17147a031e7
| false
| 312
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
ACuza
|
A repository of all my thoughts on AI, maths, ethics and human creativity/of projects I am working on. No order, undefined purpose — just like freshman year.
|
ca5c690c1692
|
Cuza
| 0
| 23
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-13
|
2017-11-13 23:27:54
|
2017-11-15
|
2017-11-15 18:18:03
| 4
| false
|
en
|
2017-11-17
|
2017-11-17 08:57:17
| 11
|
171522ea29f4
| 3.733962
| 22
| 1
| 0
|
My number one fear isn’t public speaking. It is Artificial Intelligence.
| 5
|
Sophia the humanoid robot created by Hanson Robotics on stage at a recent event
I have a confession to make…
My number one fear isn’t public speaking. It is Artificial Intelligence.
This fact is perhaps the closest tie between Elon Musk and me. We both agree AI poses the greatest risk to humanity. I haven’t seen I, Robot and I don’t have a Roomba. We have resisted connecting any <insert voice-command IoT device or hub here> in our home. Despite working in product design and building new communities around innovation the last decade, I keep it pretty analog behind the curtain. A clock ticks in the living room as I write this… Anyway, you get the idea.
If you can’t fight them, join them.
My fear of AI stems from an early age. A vague memory of a short story, horrifically describing the uprising of our robot overlords, rises to the murky surface. Not sure who wrote it or where I read it, but it’s burned in my mind. Silly, I know. But the fear has been real. For too long.
Sophia, as photographed for WIRED
Eight days ago, I welcomed a new client to the Savvy Millennial family. I knew the project would be a challenge, but I underestimated how it would enlighten me. The last week has rapidly shown me that my fear is both futile and ignorant. But there’s hope for us mere mortals. Read on for first the truth, then the solution, then your invitation to join that solution.
Mass adoption of AI is already here:
As the SingularityNET blog highlights:
“Hundreds of hedge funds are directed by AI-engines, trading billions of dollars in securities each day.
Tech giants use AI to process billions of real-time data points every minute.
Over half of all insurance companies use machine-learning algorithms to make critical operational decisions.
The $1.7 trillion healthcare industry is rapidly deploying AI systems for surgery, diagnosis, drug discovery, and more.”
I met the SingularityNET Head of PR, Marcello Mari, in Maratea, Italy, at an innovation event where we were both speaking. What he told me first blew my mind, then challenged it, and finally left it feeling at ease. Yes, a conference cocktail helped, but it’s true. He gave me my first dose of AI reality. This illuminating education comes rapidly to those eager to soak it up.
The more I learn about AI now and in the future, the calmer I am. Somewhat ironically, I also feel more ignited and compelled to act. AI in the wrong hands, or in hands too centralized, conjures up justifiably haunting sci-fi scenarios that could rationally play out. My inner child and I still fear the negative implications of AI gone wrong, but I realize now there are people working to prevent precisely that. And they are awesome.
Meet my new friends & collaborative clients, SingularityNET.
Learn more at SingularityNET.io
SingularityNET is AI + Blockchain. The optimization of AI agents through smart contracts and the decentralization of the AI economy. It is the platform that will ensure AI is used for good.
If I haven’t convinced you to learn more yet, take the words of SingularityNET CEO Dr. Ben Goertzel on why he’s assembled the team to launch the platform of our Artificial Intelligence present and future. He says it better than I ever could. Dive even deeper in the SingularityNET whitepaper.
SingularityNET team members including Sophia at Singapore SWITCH
You can also watch Ben on stage with Sophia and Einstein, the humanoid robots from Hanson Robotics at Web Summit. Hanson Robotics is an early SingularityNET partner and we appreciate that Sophia’s fashion sense includes SingularityNET apparel. They casually discussed the important matters of the world in Lisbon:
SingularityNET CEO Dr. Ben Goertzel and Sophia at Web Summit
CNBC also recently interviewed Sophia and Ben in Singapore (interview below). You’ll see the journalists on the program have many of the same questions I used to have. That is, until the SingularityNET team brought me up to speed.
Do we need humanoid-looking robots?
Talking to a smart robot designed by Ben Goertzel, chief scientist, Hanson Robotics, raises all kinds of questions.www.cnbc.com
In closing, I should disclose that reading Ramez Naam’s Nexus trilogy before meeting the SingularityNET team prepped me emotionally and intellectually for my transition to the AI side. I am grateful to him and my crowdsourced network of insightful AI/Blockchain/Crypto friends who make this kind of work possible.
The future of AI is here. We hope you’ll join us.
Join the SingularityNET Community
Our newsletter subscribers get exclusive access to major news first, so be sure to subscribe here. More information can be found on our website and our Medium team blog. Join our Telegram channel to chat with the team and our growing community.
SingularityNET is a client and friend of Savannah Peterson & Savvy Millennial. Savannah is leading the team’s social media efforts, though this post was not sponsored. Savannah merely wanted to share some illuminating information with you.
|
I have a confession to make…
| 285
|
i-have-a-confession-to-make-171522ea29f4
|
2018-04-09
|
2018-04-09 19:45:01
|
https://medium.com/s/story/i-have-a-confession-to-make-171522ea29f4
| false
| 804
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Savannah Peterson
|
Founder of Savvy Millennial, Forbes 30 Under 30, Speaker, Community Builder, Traveler, Empowerer of Humans in Technology, Dog Mom & Wino. https://www.youtube.co
|
2ade7ec708a9
|
savissavvy
| 1,519
| 1,038
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-30
|
2018-07-30 07:12:02
|
2018-04-10
|
2018-04-10 00:00:00
| 2
| false
|
en
|
2018-07-30
|
2018-07-30 07:16:26
| 2
|
1715545b2aa3
| 1.104088
| 0
| 0
| 0
|
4 Easy Tools to Infuse AI & Machine Learning Today
| 5
|
.NET and Artificial Intelligence
4 Easy Tools to Infuse AI & Machine Learning Today
Technology in 2018 is dominated by AI and machine learning, as the industry tries to incorporate them into every aspect of business. Microsoft is keeping up with these trends by working continually to enable AI and machine learning in your .NET apps and .NET software development.
Here is the latest in .NET application development, including a few amazing tools and features you can use with .NET to bring the power of artificial intelligence to your applications today:
You can choose your own models or use pre-built, auto-retrained libraries to incorporate AI into any .NET app with just a few clicks, without needing sophisticated coding knowledge.
Microsoft .NET includes ready-to-use AI tools like Cognitive Services bots, CoreML and Vision for Xamarin iOS apps, and CNTK.
Advanced developers who want to build their own custom machine learning and AI models can use Azure Machine Learning, CNTK, TensorFlow, and Accord.NET.
(Continue…)
Read the full blog, originally published at www.alliancetek.com on April 10, 2018.
|
.NET and Artificial Intelligence
| 0
|
net-and-artificial-intelligence-1715545b2aa3
|
2018-07-30
|
2018-07-30 07:16:26
|
https://medium.com/s/story/net-and-artificial-intelligence-1715545b2aa3
| false
| 191
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
AllianceTek
|
Custom software, web development & IT business solutions company USA, 14 years’ experience in building mobile, cloud, web solutions. https://www.alliancetek.com
|
4d0a1f69bee9
|
AllianceTekInc
| 2
| 85
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-30
|
2018-06-30 01:59:10
|
2018-06-27
|
2018-06-27 01:00:00
| 1
| false
|
en
|
2018-06-30
|
2018-06-30 02:01:40
| 4
|
171616cc6d49
| 2.758491
| 0
| 0
| 0
|
click to enlarge
| 5
|
A digital translator may be your best traveling companion | Technicalities | Colorado Springs Independent
click to enlarge
During the dark ages of analog communications, traveling to a foreign country without knowing the language was sometimes a dicey endeavor. Road signs, restaurant menus, train schedules and the like were challenges most of us met with hopefulness at best, frustration at worst. If you were lucky you might run into a local who could speak English and didn’t mind doing so, but the further you went off the beaten path and out of reach of tourist areas, the less likely you were to find one. Basically, you were on your own with only your translation book, the one thing that could help you ask a waiter exactly what kind of meat was in the dish you just ordered, after flipping through all the pages. I was a traveler back in those days, and sometimes it was not fun. Sure, English is one of the most widely spoken languages in the world, but when you end up in the wrong city, order food you are allergic to or walk for an hour only to end up where you started, you realize that proper communication is essential to your happiness while on vacation. Thankfully, modern technology has come to the rescue, mitigating the effects of our shortcomings on a trip to a foreign country.
There are several digital translators to help you get along during your travels. I used Google Translate recently during a first-time trip to southeast Asia, and was very impressed with the results, but other applications offer the same features for the most part.
Google Translate operates in several modes (speak, snap, write or type), but I used the camera “snap” mode. In camera mode, I simply point my device’s camera at an object like a road sign or grocery label, and the foreign words and characters pop up on the screen in English. While in Japan, I was able to read the labels on items in a supermarket, see exactly what type of wine was in a bottle, read the ingredients list on a jar of would-be questionable contents, and see which products were in which aisle. Grocery shopping may seem like a mundane use for this technology, but imagine not having it in the same situation. Digital translators give users a sense of confidence and independence. I would say they also open up the possibility of exploring areas that tourists would normally not venture into because of communication barriers. While I say this technology really helps a wary traveller, let’s not get over-confident. Apps like Google Translate won’t enable you to speak intelligently on the theory of relativity, or anything else, in a foreign language, and sometimes the translations can be a little sketchy. Performance can also vary by language. When I used Google Translate in Japan, for example, it was perfect, but it didn’t work at all when I tried to use it in Thailand. Obviously, there’s still room for improvement, but are we seeing the start of something bigger? Is AI translation the new norm, with humans reduced to quality controllers, as some say of many other industries?
Lane Greene, author of You Are What You Speak, says, “Computers have got much better at translation, voice recognition and speech synthesis, but they still don’t understand the meaning of language.”
Perhaps I’m more impressed with this technology because it’s changed the way I travel, and I don’t take that for granted. I’m not looking for deep conversation; I just want to know whether I’m eating chicken or turtle meat.
Thomas Russell is a high school information technology teacher and retired Army Signal Corps soldier. He is the founder of SEMtech (Student Engagement and Mentoring in Technology) and an Advisory Board Member of Educating Children of Color. His hobbies include writing, photography and hiking. Contact Thomas via Russell’s Room on Facebook or by email at thruss09@gmail.com, and see his photography at thomasholtrussell.zenfolio.com.
|
A digital translator may be your best traveling companion | Technicalities | Colorado Springs…
| 0
|
a-digital-translator-may-be-your-best-traveling-companion-technicalities-colorado-springs-171616cc6d49
|
2018-06-30
|
2018-06-30 02:01:40
|
https://medium.com/s/story/a-digital-translator-may-be-your-best-traveling-companion-technicalities-colorado-springs-171616cc6d49
| false
| 678
| null | null | null | null | null | null | null | null | null |
Translation
|
translation
|
Translation
| 5,701
|
Thomas Holt Russell
|
Teacher, writer, photographer and modern day Luddite. http://thomasholtrussell.zenfolio.com/
|
7233b8e539b6
|
thruss09
| 13
| 14
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-27
|
2017-11-27 10:57:00
|
2017-11-27
|
2017-11-27 11:01:29
| 5
| false
|
en
|
2017-11-30
|
2017-11-30 05:55:14
| 2
|
17167103b3ee
| 3.13522
| 0
| 0
| 0
|
The future we once dreamed of is here! Every James Bond movie you saw has now become a part of reality. The fuel of this fire among other…
| 5
|
THE FUTURE OF MACHINES — MAKE YOUR MACHINES TALK TO YOU
The future we once dreamed of is here! Every James Bond movie you saw has now become a part of reality. The fuel of this fire among other subjective issues is but one common goal of a simpler, safer, and a better life. Machines cannot stay aloof from this advancement running like a wildfire in all walks of life. And one of the agents bringing machines closer to their future is a machine monitor.
98% of organizations say a single hour of downtime costs them over Rs.1,00,000, and this figure increases multi-fold in the absence of a real-time monitoring system. Factory managers remain oblivious to anomalies at the site, which results in delayed solutions and increased losses.
A major part of the problem is that no one is aware of how the machines are actually working.
Machine monitoring software that enables real-time monitoring would make machines talk directly to the owner. Futuristic, right?
There is a need for end-to-end transparency of operations for complete visibility, along with complete integration, to increase production and fuel the fourth phase of the Industrial Revolution.
Industrial revolution 4.0 is here
“Thing Green” is the all-in-one solution for all these problems, powered by the IoT platform TGS. It gives you the power to listen to what your machines say.
This is done through a network of interconnected sensors that collect data like production quantity, quality, up-time, downtime, OEE and maintenance parameters, which is then transferred to the cloud using the internet.
This data is analyzed using advanced data analytics tools, and behavioral readings are used to determine standards. Aberrations against those standards, such as a production shortfall, excessive downtime, excessive rejections or a breakdown, are detected and trigger alerts (through SMS or email), making the machines talk to you whenever they need attention. This two-way communication becomes the speech of the machines and gives factories operational visibility.
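As a rough illustration of the alerting step described above, a monitor can compare each reading against learned standards and emit an alert for every aberration. The thresholds, field names and values below are invented for the example, not taken from the TGS product:

```python
# Hypothetical sketch of the alerting step: each reading is compared
# against learned standards, and any aberration triggers an alert.
# The thresholds, field names and values are illustrative, not from TGS.

STANDARDS = {
    "downtime_minutes": 30,   # alert if downtime in the period exceeds this
    "rejection_rate": 0.05,   # alert if rejects exceed 5% of production
}

def check_reading(reading):
    """Return the list of alert messages for one machine reading."""
    alerts = []
    if reading["downtime_minutes"] > STANDARDS["downtime_minutes"]:
        alerts.append(f"Machine {reading['machine_id']}: downtime of "
                      f"{reading['downtime_minutes']} min exceeds the standard")
    if reading["rejected"] / reading["produced"] > STANDARDS["rejection_rate"]:
        alerts.append(f"Machine {reading['machine_id']}: rejection rate too high")
    return alerts

reading = {"machine_id": "M-7", "downtime_minutes": 45,
           "produced": 200, "rejected": 18}
for message in check_reading(reading):
    print(message)  # in production these would go out via SMS or email
```

In a real deployment the standards would come from the analytics layer rather than being hard-coded, but the shape of the check stays the same.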
How does TGS work?
It hands you control of your factories with predictive maintenance, real-time monitoring dashboards, downtime tracking, quality tracking, performance reporting, simple machine integration, remote machine monitoring and machine analytics, using the wonders of the Internet of Things and Artificial Intelligence.
The power of IoT
Automated management and control of factories, with monitoring of machine health, machine vibrations and other condition-based parameters, HVAC, temperature, lighting, energy and power consumption, gas leakage and security, among other benefits tailored to individual needs and requirements.
The data generated by the product would be used for predictive maintenance so that preventive measures can be taken, making factories and machines smart enough to take care of breakdowns and errors even before they arise! This would increase the productivity and product quality of the factories while adding a competitive edge. Now that is the future of machines: driving innovation and optimum resource management, and making us more efficient!
THE FUTURE OF MACHINES — MAKE YOUR MACHINES TALK TO YOU
India needs this transformation of its industries through communicative machines in order to grow, especially the manufacturing industry, which has the potential to emerge as one of the high-growth sectors in India. The Prime Minister’s ‘Make in India’ program focuses on placing India on the world map as a manufacturing hub and gaining global recognition for the Indian economy. India is expected to become the fifth largest manufacturing country in the world by the end of 2020.
Thing Green can become a stepping stone for realizing the ambitions of growth and development we all share! It helps in creating a centralized management system with perceptive actions to create future-proof intelligent machines.
Contact us to know more @ Untrodden Labs
|
THE FUTURE OF MACHINES — MAKE YOUR MACHINES TALK TO YOU
| 0
|
the-future-of-machines-make-your-machines-talk-to-you-17167103b3ee
|
2017-11-30
|
2017-11-30 05:55:14
|
https://medium.com/s/story/the-future-of-machines-make-your-machines-talk-to-you-17167103b3ee
| false
| 610
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Things Go Social
|
Your interaction with machines will change when your machines will talk to you. Find out what happens when things go social!?
|
d6d7546b0773
|
ThingsGoSocial
| 18
| 155
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-01-29
|
2018-01-29 16:28:12
|
2018-01-29
|
2018-01-29 16:29:57
| 1
| false
|
en
|
2018-09-14
|
2018-09-14 12:36:17
| 1
|
1718b667fd29
| 0.664151
| 0
| 0
| 0
|
We are excited to announce an invitation to Beta Test of Hala SAP Digital Assistant. More information available in this survey from:
| 5
|
SAP Digital Assistant Beta Test Invitation
We are excited to announce an invitation to Beta Test of Hala SAP Digital Assistant. More information available in this survey from:
https://www.surveymonkey.com/r/HN2KW9V
It is a great opportunity for all SAP-related people to get access to new technologies and try them. You do not need to be a programmer to take part in this test! We created Hala for business users, and now business users can test it!
About Hala:
Hala is an SAP Digital Assistant. Hala automates IT and business processes for enterprises across a wide range of industries, leveraging next-generation technologies to do that: natural language processing, syntax analysis, machine learning, voice recognition and deep learning.
Thanks,
A.Rudchuk, CEO at Hala.ai
|
SAP Digital Assistant Beta Test Invitation
| 0
|
sap-digital-assistant-beta-test-invitation-1718b667fd29
|
2018-09-14
|
2018-09-14 12:36:17
|
https://medium.com/s/story/sap-digital-assistant-beta-test-invitation-1718b667fd29
| false
| 123
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Hala Digital
|
Transformation of human knowledge into the digital brain
|
6eed4294f622
|
HalaDigital
| 5
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
aa95ce7d4e6d
|
2018-09-24
|
2018-09-24 13:20:09
|
2018-09-24
|
2018-09-24 13:21:44
| 8
| false
|
en
|
2018-10-18
|
2018-10-18 16:00:46
| 5
|
171aee5ef711
| 4.571069
| 3
| 0
| 0
|
A recap of 3 days of hacking in the Technopark in Zurich. More than 500 coders joined #HackZurich, one of the largest and most prestigious…
| 5
|
Hackathon #HackZurich: what useful solution can you code for drivers in 40 hours?
A recap of 3 days of hacking in the Technopark in Zurich. More than 500 coders joined #HackZurich, one of the largest and most prestigious hackathons in Europe. The most efficient working weekend of the year!
Last weekend we, the Bright Box team, supported our first big hackathon by providing the anonymized driving history of 1,000 millennials from 6 countries, for participants to retrieve and share valuable insights and turn them into products or services for the challenge organized by Zurich Insurance and Esri. The challenge from Zurich Insurance & Esri was named «Millennials on the Move». The idea of the challenge was that the anonymized data from Bright Box should help reveal the driving behavior of millennials and how it can help improve and protect the lives of millennials on the move. Coders also got access to ArcGIS, a complete mapping and analytics platform for developers: they could use location intelligence to their advantage and combine demographic and lifestyle data with millennials’ movements. And I want to say that we had a blast!
The workshops
Day 1 was fully packed with workshops from 20 big companies. They addressed subjects like safety, creating new tech-driven insurance solutions, sustainability, battling fake news and GDPR.
The attraction of our workshop was that participants could work with real anonymized data. We are focused on empowering our customers’ businesses through the value of connected vehicle data, and we hoped to find hackathon participants interested in making automotive data as valuable as possible to drivers. And we did: 80 hackers joined us for the workshop «Millennials on the Move».
Research shows that millennials are horrible drivers. Texting, emailing and chatting behind the wheel, running red lights, and eating and drinking are just a few of the reckless acts millennials commit while driving. The first challenge was to retrieve and share valuable insights about how millennials really drive and how a new solution can help improve and protect the lives of millennials on the move. The second aim was to turn those insights into products or services.
During our workshop, we showed the specific data sets participants would work with: one year’s data on millennial drivers compared with the all-age group at the same time, in the same city. And we provided access to ArcGIS to help rediscover all the ‘behind-story’ data given any GPS location and time stamp, such as demographic and lifestyle data. Workshop participants asked about the availability of accelerometer data and the frequency of the data (the interval between points in time). As a result, our challenge attracted 8 teams. And that’s definitely a success.
Developers rock!
It was so exciting to see all those cool projects built in a little over 40 hours! By the end of the hackathon, 8 teams had a working proof of concept and presented their hacks at the «Millennials on the Move» challenge. Some of them were very well prepared, already had demo versions and had published their code on GitHub.
Below you can find an overview of some of the hacks we found most interesting.
CAR TaiLOR
Car Tailor uses advanced analytics to aggregate behavior as well as personality data from various sources. These aggregated user profiles allow it to recommend suitable products that match the lifestyle and personality of customers.
GitHub Repo
tailor.scapp.io
SafetyMatters
The app has two parts. One alerts the authorities to dangerous junctions, where many people change speed sharply and many car accidents occur. The second part uses a personalized learning algorithm that learns where the driver tends to drive in an unsafe manner and alerts him before he reaches those places. This gives a comprehensive solution to mitigating the risk of car accidents.
Safety Matters Demo
Safetify
A prototype app uses a machine learning model that takes traffic, driver, road and weather data into account, predicts how dangerous a particular road segment is and alerts users. The probability of a segment being a danger zone was computed with neural networks. The area of the danger zone was computed using a Gaussian Process Regressor to smooth out the probabilities and provide a contour.
GitHub Repo
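The smoothing step Safetify describes can be sketched with a simple RBF (Gaussian) kernel-weighted average over road segments. The positions, raw probabilities and length scale below are invented for illustration; the team’s actual model was not published in this post:

```python
import numpy as np

# Illustrative sketch of smoothing per-segment danger probabilities with an
# RBF (Gaussian) kernel, in the spirit of Safetify's Gaussian process step.
# Positions, probabilities and the length scale are made-up example values.

positions = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # road segment centers (km)
danger = np.array([0.1, 0.9, 0.8, 0.2, 0.1])      # raw predicted probabilities

def smooth(x, length_scale=0.75):
    """RBF-weighted average of the raw probabilities at position x."""
    weights = np.exp(-0.5 * ((x - positions) / length_scale) ** 2)
    return float(weights @ danger / weights.sum())

# The smoothed curve peaks near the dangerous segments around 1-2 km,
# which is what lets the app draw a contour around the danger zone.
print(round(smooth(1.5), 3))
```

A full Gaussian process regressor would also provide uncertainty estimates, but the weighted average captures the smoothing idea.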
Savfe
The app detects whether the vehicle owner is driving while looking at the smartphone. If he is in the app, his screen starts flashing warning colors (yellow and red) depending on how dangerous the current situation is. If he is distracted by another app, the app shows a message urging him to take care. If the vehicle owner didn’t use his device while driving, that’s great! The app rewards him with a special currency, and he gets even more of it if he manages to ‘uphold his streak’ and doesn’t use his smartphone for consecutive trips. He can then spend his points to get back a part of his monthly insurance cost.
GitHub Repo
And our workshop “Millennials on the Move”, run by Esri, Zurich Insurance and Bright Box, was won by four ladies with the project “Safety Matters”. Girl power!
The hackathon was the first for Bright Box, and it was amazing. This event wouldn’t have been possible for us without the invitation of Zurich Insurance Group, and we are grateful for the opportunity to test such creative ideas on our data, which once again proves its value.
|
Hackathon #HackZurich: what useful solution can you code for drivers in 40 hours?
| 15
|
a-recap-of-3-days-of-hacking-in-the-technopark-in-zurich-171aee5ef711
|
2018-10-18
|
2018-10-18 16:00:46
|
https://medium.com/s/story/a-recap-of-3-days-of-hacking-in-the-technopark-in-zurich-171aee5ef711
| false
| 911
|
News, predictions, and opinions by geeks about cars, automotive market, connected cars and autonomous vehicles for OEMs, dealerships, car owners and tech fans! Most of the articles are written by Bright Box experts. Leave your comments/ suggest articles please. www.remoto.com
| null |
myremoto
| null |
Driving to the future
|
svva@bright-box.eu
|
driving-to-the-future
|
CARS,AUTONOMOUS CARS,AUTOMOTIVE,TECHNOLOGY,CONNECTED CARS
|
my_remoto
|
Hackzurich
|
hackzurich
|
Hackzurich
| 3
|
Alexander Dimchenko
|
Chief Strategy Officer at Bright Box, global vendor of Connected Vehicle Platform - Remoto (www.remoto.com) https://goo.gl/K1E8NQ Download our free white paper!
|
703dcd196af0
|
alexanderdimchenko
| 36
| 32
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
6eb9dc7486ef
|
2018-07-06
|
2018-07-06 22:25:47
|
2018-07-10
|
2018-07-10 22:00:31
| 1
| false
|
en
|
2018-07-10
|
2018-07-10 22:00:31
| 10
|
172197614c16
| 2.249057
| 10
| 2
| 0
|
In my last post, I discussed how the tech shift in hybrid cloud has shaped our thinking in startup investment. In this post, let’s take a…
| 5
|
Tech Shift Drives Investment — Deep Learning
In my last post, I discussed how the tech shift in hybrid cloud has shaped our thinking in startup investment. In this post, let’s take a look at the second tech shift — deep learning.
Deep learning neural networks have made significant progress in recent years, especially in the areas of Natural Language Processing (NLP) and Computer Vision (CV). The improvement has accelerated such that accuracy on typical tasks like speech/object/facial recognition has reached, and may soon exceed, human performance.
While NLP and CV touch many direct applications, we believe the rapid advancement of deep learning will propel the following four significant investment areas:
Service robotics covers industrial/business robots whose primary tasks involve interaction with humans. An IDC report shows worldwide robotics spending will be $103B this year, with year-over-year growth of 25% over the next 4 years; industrial robotics dominates with 70%. However, the advancement in deep learning is enabling robots to work much more successfully in human-intensive environments. For example, security robots (roving, flying or semi-stationary) can drastically improve the cost-effectiveness of human security patrols. We have invested in one such company, Turing Video. There could be more use cases in delivery, retail/hospitality and office services.
Consumer robotics covers primarily in-home robots. We think that in-home robots will get a big boost from the improvement of speech and vision and the significant cost reduction in a particular area of CV called SLAM. In-home robots are ready to take over more types of chores, secure the perimeter, monitor the young and infirm, and start to manage the other, less “smart” appliances. PerceptIn Robotics is one of the startups in this area invested in by our prior fund.
Autonomous driving can be considered a special kind of service robot. Driving a car, truck or delivery vehicle is one of the most complex AI applications, combining more than a dozen different technology areas: control, sensor fusion, vision/perception, positioning, mapping, navigation, planning, simulation, etc. Many of these areas are starting to benefit from deep learning and become more versatile and human-like. There are currently 56 companies registered with the California DMV to conduct autonomous road tests. We invested in JingChi as we believe they are the most likely to succeed in starting a robo-taxi service in China.
Improved UI (User Interface) is an often overlooked area that will be profoundly impacted by deep learning. AI is not only the new UI; the very definition of user interface will be predicated on the level of intelligence. The world of computation has gone through several iterations of center (mainframe, cloud), edge (PC, smartphone) and networking improvements. Now UI has become the bottleneck. Even the well-known O(n) computational complexity model has shifted from measuring against computation resources to measuring against human complexity. We believe deep learning based natural language/speech interfaces will become increasingly prevalent in enterprise customer service (we have invested in Percept.AI) and in enterprise robotic process automation (we have invested in jane.ai). In addition, ASIC chips designed to accelerate deep learning (we invested in IntEngine in this area) will enable more smart appliances with language and vision based user interfaces.
Stay tuned for the next topic…
|
Tech Shift Drives Investment — Deep Learning
| 259
|
tech-shift-drives-investment-deep-learning-172197614c16
|
2018-07-10
|
2018-07-10 22:00:32
|
https://medium.com/s/story/tech-shift-drives-investment-deep-learning-172197614c16
| false
| 543
|
China-US Cross Border / Cross Discipline VC Based in Los Altos, CA
| null | null | null |
Tsingyuan Ventures
|
zack.atlas@tsingyuan.ventures
|
tsingyuan-ventures
| null |
TsingyuanVC
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Xuhui Shao
|
Managing Partner at Tsingyuan Ventures: invest in early stage technology startups
|
838346e9bde
|
xuhui.shao
| 41
| 5
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-30
|
2018-09-30 08:22:52
|
2018-09-30
|
2018-09-30 09:49:27
| 1
| false
|
en
|
2018-09-30
|
2018-09-30 09:49:27
| 3
|
172285b9f595
| 2.392453
| 0
| 0
| 0
|
Recently, Snapchat announced a new update that will integrate visual search into the user experience. If you’re new to the concept of…
| 5
|
Snapchat: It’s Not For You — It’s For Gen Z
Recently, Snapchat announced a new update that will integrate visual search into the user experience. If you’re new to the concept of Snapchat, that might be because it wasn’t developed for you.
Picture this: It’s May of 2012 and you’re about to graduate eighth grade and move on to bigger and better (and hopefully less awkward) things in high school. You and your friends have been using Instagram for a little over a year, awed by the filters and frames that add an extra layer of intrigue to your school trip to Washington D.C. Toaster and Hefe allow your 41 instagram followers to better experience the panda at the National Zoo and the great stature of the Lincoln Memorial.
In the last month of school, you’re preparing to spread your wings and fly into the great unknown of secondary education. You’re hanging out with your friends during recess, sitting on the grass and watching as all the younger kids childishly play hopscotch and fall off the monkey bars at the third rung. Suddenly, one of your friends takes a picture of you.
“I’m putting this on my story,” they say, tapping away on their iPhone 4.
“What?” you wonder, not comprehending the meaning behind their words.
“My Snapchat story,” they elaborate. “It’s like texting but with pictures. But the pictures disappear. And you can post stories that last for twenty-four hours.”
You’re intrigued by this concept and when the school day is over, you go home and download the app. You are one of the early adopters of Snapchat. Snapchat was created for you, and as you grow up, Snapchat evolves for you. You are the targeted audience.
Photo by Thought Catalog on Unsplash
Primed in the digital age, Generation Z could not have been a better fit for this seemingly simplistic idea that revolutionized the role and impact of social media.
A picture that disappears. A snap that enables you to chat with your friends. When the eighth graders started using the platform in 2012, their parents could not understand the need for such an application.
“Why would you take a picture that just disappears?” was the classic question that fueled the generation of selfie-takers.
The question was often followed by a Gen-Z-certified eye roll and a duck face snap.
Snapchat captures Gen Z’s need for instant gratification. They take pictures, and then they immediately disappear, only to be replaced by the next picture. For a generation with a shorter attention span and better multitasking skills, Snapchat is the perfect match. Snapchat can capture attention quickly and allows users to process social data at record speeds.
Snapchat’s new update that incorporates image search for ecommerce further appeals to Gen Z’s needs for the immediate, allowing them to bridge the gap between real-world and online visuals. So in between selfies and geotags, users can now shop their world. It allows them to see something and buy it instantly, eliminating the drawn-out hunt for certain products. Visual search for retail is a fledgling revolution that will become the next technological imperative as apps like Snapchat present it to an audience that was practically hand-picked for this innovation.
Thus, if you’re ever confused why you don’t understand the appeal of sharing your life in disappearing pictures and videos, it’s probably because Snapchat isn’t for you — it’s for the newest generation of consumers that consume content as fast as a snap disappears.
|
Snapchat: It’s Not For You — It’s For Gen Z
| 0
|
snapchat-its-not-for-you-it-s-for-gen-z-172285b9f595
|
2018-09-30
|
2018-09-30 09:49:27
|
https://medium.com/s/story/snapchat-its-not-for-you-it-s-for-gen-z-172285b9f595
| false
| 581
| null | null | null | null | null | null | null | null | null |
Snapchat
|
snapchat
|
Snapchat
| 8,507
|
Syte
|
Syte provides retailers with visual AI technology, powering visual search, automated textual tags and product recommendation to inspire shoppers. www.syte.ai
|
4054fc85da78
|
syte
| 2
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-03
|
2018-05-03 03:06:35
|
2018-05-21
|
2018-05-21 04:35:04
| 7
| false
|
en
|
2018-05-21
|
2018-05-21 04:35:04
| 4
|
17274ab98418
| 3.846226
| 14
| 1
| 0
|
About my Research Project
| 5
|
Word Embedding to Polysemy Embedding
About my Research Project
This is a blog on my final year project and it will be short as possible (hopefully).
Natural language processing (NLP) is a way of connecting computer language with human spoken language. This is not the most accurate definition, but it will give a better understanding of word embedding.
source : https://i.ytimg.com/vi/j9mnn-CoWRk/maxresdefault.jpg
Initially, NLP started by giving each word a unique id. This way we were able to uniquely identify a word, but that wasn’t enough to process natural language. Then we started to model language predictively. This is the point where Tomas Mikolov came up with his word embedding model in 2013. So let’s have a look at word embedding.
Word embedding means representing a word with a vector. This way, all words are represented by vectors in a user-defined vector space. These vectors may be 100-dimensional, 300-dimensional or whatever the user wants. This should give you an idea of the vector dimensions used in word processing; they are not restricted to the 2D or 3D you hear about in day-to-day life.
Word embedding is built on the idea that context defines a word; that is, the characteristics of a word can be identified from its context. Let’s consider an example: “Fifty Shades Freed is an unsatisfactory __________ to the series”. The blank could be climax, conclusion, ending, etc. So we were able to predict the word from the context, and this is the idea behind how the word embedding model is trained. (This is CBOW, which will be discussed later.)
Machine learning is used to train the model in the above-mentioned manner. First the corpus is preprocessed based on the requirements and the data. Then we start learning from that data. This is done with a window: a k-sized window covers (roughly) the k closest words to the target word. So we consider the context in the window to determine the vector for the target word.
Then the training can be done in two methods.
CBOW - predicting the word from the context
Skip-gram - predicting the context from a word
This way we can train our model and get the word embedding. Again, consider word embedding as a map or dictionary where a word is mapped to a vector. We have discussed the process, but what is actually happening behind the picture?
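The two training regimes above can be illustrated by the (input, target) pairs they generate from a window. The toy sentence and window size here are just examples:

```python
# Illustrative sketch of the training pairs behind CBOW and skip-gram.
# The toy sentence and window size are made-up examples.

sentence = "fifty shades freed is unsatisfactory conclusion to the series".split()

def training_pairs(tokens, window=2, method="cbow"):
    """Return (input, target) pairs for CBOW or skip-gram training."""
    pairs = []
    for i, word in enumerate(tokens):
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        if method == "cbow":
            pairs.append((context, word))        # context predicts the word
        else:  # skip-gram
            pairs.extend((word, c) for c in context)  # word predicts each context word
    return pairs

# CBOW: the words around "unsatisfactory" are used to predict it.
print(training_pairs(sentence, method="cbow")[4])
# Skip-gram: each word is instead used to predict each of its context words.
print(training_pairs(sentence, method="skipgram")[0])
```

A real implementation (e.g. word2vec) would map these pairs through one-hot inputs and a hidden layer to learn the vectors; the pairs themselves are the part the window determines.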
sample word embedding image. source : https://cdn-images-1.medium.com/max/1600/1*YXr9REk2IfrYgLwZTPUKvg.png
We can consider word embedding as a framework of springs. The target word is connected to the words in its context, and the more often a word appears in the target word’s context, the higher the spring constant k of the connecting spring becomes. After the embedding, consider this space to be in equilibrium: words are connected to other words by springs with different values of k. This makes a strong connection (high k) with related words, and eventually similar words end up closer together in the vector space. Further, this is a relative framework where the position of a word depends on the other words. Now think of adjusting one point in this framework: since all the words are connected by springs, moving one point affects every point that is directly or indirectly connected to it. This is how the word embedding structure behaves.
Word Embedding captures many characteristics of the language.
Semantic relationships are captured in the embedding.
source : https://cdn-images-1.medium.com/max/1600/1*ytqlSUVrsRW2jGojkoiOnw.png
Word analogies are captured (e.g. vector(king) - vector(man) + vector(woman) ≈ vector(queen))
Language structure is captured.
source : https://www.springboard.com/blog/wp-content/uploads/2017/08/mt-Copy-768x524.png
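The analogy arithmetic can be demonstrated with a toy set of hand-made vectors; real embeddings are learned from a corpus and have far more dimensions:

```python
import numpy as np

# Toy illustration of the analogy arithmetic above. These 3-dimensional
# vectors are invented for the example, not learned from data.

vec = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
    "apple": np.array([0.1, 0.9, 0.2]),
}

def nearest(target, exclude):
    """Word whose vector has the highest cosine similarity to target."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    candidates = {w: v for w, v in vec.items() if w not in exclude}
    return max(candidates, key=lambda w: cos(vec[w], target))

analogy = vec["king"] - vec["man"] + vec["woman"]
print(nearest(analogy, exclude={"king", "man", "woman"}))  # → queen
```

With trained embeddings the analogy vector only lands *near* the answer, which is why the nearest-neighbor lookup (and excluding the query words) is part of the standard evaluation.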
So of course word embedding is a breakthrough in NLP. But what does our project have to do with it?
Our project focuses on a major limitation of word embedding: sense disambiguation. Since word embedding represents a word with a single vector, all the characteristics of the word are captured by that one vector; the basic element of word embedding is the word. So we are planning to implement a sense embedding model where every sense is embedded with its own vector. This way, if a word has 3 senses, the word is embedded with 3 vectors. Since many characteristics of the same word differ by sense, sense embedding will provide better results than word embedding. Though some research has been done on sense embedding, we will be building an iterative model that can accurately embed the senses. Further, we will provide methods to uniquely identify the sense from the sense embedding, which is a huge challenge in sense embedding.
summary of our project
|
Word Embedding to Polysemy Embedding
| 113
|
word-embedding-to-polysemy-embedding-17274ab98418
|
2018-06-15
|
2018-06-15 19:33:09
|
https://medium.com/s/story/word-embedding-to-polysemy-embedding-17274ab98418
| false
| 741
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Jegasingam Jeyanthasingam
|
*Intern Virtusa (pvt) Ltd *Under Graduate University Of Moratuwa *Football enthusiastic *Reading BCS,CIMA *Breaker of the Machines
|
d547c2b3a992
|
jegasingamjeyanthasingam
| 162
| 158
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-10-31
|
2017-10-31 15:23:06
|
2017-10-31
|
2017-10-31 16:03:27
| 1
| false
|
en
|
2017-10-31
|
2017-10-31 16:03:27
| 0
|
172751dfb4f9
| 1.116981
| 1
| 0
| 0
|
Artificial Intelligence (AI) has been growing at a rapid pace. Silicon Valley companies are starting to pay top dollar for individuals…
| 4
|
Sophia: The AI Robot That Just Gained Citizenship
Artificial Intelligence (AI) has been growing at a rapid pace. Silicon Valley companies are starting to pay top dollar for individuals knowledgeable in AI.
Whether it’s farmers trying to increase their tomato yields by 20% or Google Maps guiding us through traffic, AI is really starting to pick up traction, if it hasn’t already.
Which brings me to Sophia.
The AI robot, created by roboticist David Hanson, was granted Saudi Arabian citizenship about a week ago. The robot is designed to be a personal assistant and cater to the elderly.
The robot is also designed to mimic human facial expression.
Although it looks like Sophia had a stroke in this picture, it shows how far we’ve come in a new and somewhat unsettling field.
The accomplishment is extraordinary and highlights the importance of developments in AI.
The other side of this coin is the fact that Sophia was granted citizenship. I’m truly not sure how I feel about it yet considering Sophia’s comment on “destroying all humans”. Do I actually believe she could destroy the human race? No, but I am hesitant to let her take care of my grandmother.
I’m more curious whether her lack of true human interaction suffices for her to be considered generally moral — a requirement for Saudi citizenship — and whether you see personal robot assistants becoming a household staple.
I’d like to know what you guys think.
|
Sophia: The AI Robot That Just Gained Citizenship
| 1
|
sophia-the-ai-robot-that-just-gained-citizenship-172751dfb4f9
|
2018-05-01
|
2018-05-01 12:27:36
|
https://medium.com/s/story/sophia-the-ai-robot-that-just-gained-citizenship-172751dfb4f9
| false
| 243
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Derek Koza
|
I drink an obnoxious amount of coffee. Penn State student majoring in finance.
|
2c06deae9899
|
dailybanter
| 6
| 44
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-07
|
2018-04-07 05:58:15
|
2018-04-07
|
2018-04-07 06:10:24
| 5
| false
|
en
|
2018-04-07
|
2018-04-07 06:10:24
| 12
|
1729deba3c43
| 4.905031
| 0
| 0
| 0
|
The 26/11 terrorist attacks in Mumbai was one of the most harrowing and profound experiences of our lives. The Lakshar-e-Taiba terrorists…
| 5
|
How we citizens can fight terror and its funding
The 26/11 terrorist attacks in Mumbai was one of the most harrowing and profound experiences of our lives. The Lakshar-e-Taiba terrorists raged for 4 days and attacked a Jewish center, hospital, railway station, cafes, hotels, etc. and killed 164 people and wounded over 300. For many of us, the attacks of 9/11, Paris, Boston etc. are also seared into our memories. As citizens, we pay our respects to loved ones and the brave soldiers who fight to protect us — all the while watching in frustration and helplessness as we see these types of attacks across the globe over and over again.
This question has always hounded me.
What can we as normal, average citizens do against this scourge of terrorism? How can we also join this fight?
Many months of research and conversations led me and my colleagues to an approach that made sense:
The best response to terrorism is to curb its funding sources
Experts agree. Research commissioned by the Copenhagen Consensus (CC) project concludes that target nations are overspending on measures that shift the risk of attack instead of reducing it.
Actions by governments to guard one venue simply prompt the terrorists to shift to another easy target. For instance, installing metal detectors in international airports in 1973 led to an immediate and prolonged drop in skyjackings. At the same time, however, there was a significant increase in hostage-taking and other incidents that resulted in more deaths. Similarly, fortifying US embassies in the last decade led to more assassinations and attacks against embassy officials, business people, and tourists such as in the Bali attack in 2005 and 26/11 in Mumbai.
The CC project made a final recommendation:
To be effective, all counter-terrorism measures must either make all modes of attack more difficult or reduce terrorists’ resources.
It’s almost impossible to make all modes of attack more difficult considering the resources and efforts needed. Hence curbing terrorists’ resources seemed to be the best approach.
Currency counterfeiting: One of the key sources of terror funding
A study conducted by FATF — a global intergovernmental organization to combat money laundering & terrorist financing — concludes that terrorist organizations have resorted to the use of counterfeit currency for a variety of reasons.
Training, recruitment, attacks, and propaganda require large amounts of funding, and terrorist groups have often resorted to counterfeiting currency as one of the key means of funding such activities. The report suggests:
Currency counterfeiting has become attractive to terrorists and their sympathizers as it is very profitable
Proof of the pudding
We wanted to ensure that we were building the right solution, so we spent considerable time evaluating the evidence behind this recommendation. We found several reports of counterfeit currency being used to fund terror operations; some of the most striking:
Pakistan’s ISI reportedly generates over 75M dollars annually from its currency counterfeiting operations.
The terrorist arrested for the 2008 Bangalore bombings was carrying fake currency
In an incident last year in Kyrgyzstan, two alleged terrorists had $65,000 in counterfeit U.S. dollars in their possession.
Is this relevant in the current era of digital payments?
While we may think digital payments can solve this, we are not there yet. 85% of all transactions globally (and 40% in the United States) are still carried out using cash, particularly transactions involving small amounts of money. In India, nearly 95% of transactions are carried out in cash. And not coincidentally, the top 3 counterfeited currencies globally are US dollars, Euros and Indian Rupees as per Interpol.
The idea of building millions of detection points for peer-to-peer transactions
Globally, the detection of counterfeit currency has been done by financial institutions and government-designated enforcement agencies. The following two limitations make it inadequate to stop the flow of counterfeited currencies.
Unable to detect peer-to-peer transactions among individuals
Catching it too late — fake currency is already in the marketplace and counterfeiters have profited from it already
Individuals today discover counterfeit currency in their wallet only when they transact or deposit money via a financial institution or bank. Most of us use cash for small transactions in local stores and other petty-cash exchanges. Usually, this money circulates within the market and seldom reaches a detection point at any financial institution. Thus counterfeit currencies go undetected and get distributed to millions of individuals discreetly, and the profit from this is used to fund terror against citizens!
In addition, counterfeits are becoming better, and thus harder for an ordinary person to detect with the naked eye.
The goal must be to make counterfeit detection easy and foolproof, as well as a habit during cash exchanges, thus stopping counterfeits before they even hit the market.
ActiveDuty: The first step towards stopping terror funding by Authentic.Cash
The product concept of an ultra-portable counterfeit detector for citizens came up as a solution to this global challenge. We designed it carefully so that even senior citizens or visually impaired people can use it effortlessly. The AI-powered detection engine, our patent-pending CashDNA technology, is hard to break even for smart counterfeiters who use sophisticated technology to produce counterfeit currency.
The foreseen Impact
If the counterfeit currencies are detected at the entry point of distribution channels by citizens, then the fake currency will never enter the marketplace — thus bringing down the demand gradually and the supply subsequently. There wouldn’t be an overnight impact, but definitely in due course, as the citizens and governments come together and participate.
The gradual disappearance of counterfeit currencies can definitely impact the funding of terrorism and we will have one less big menace to worry about. We believe its time has come!
Crowdsourcing participation and resources from citizens
All significant movements in history started with a simple idea. However, it was the collective participation of citizens like you and me that made the idea truly successful and world-changing. Authentic.cash is a global technology movement to fight terror, and it’s going to be led and driven by citizens.
It’s this critical social requirement for this movement to fight terror that prompted us to crowdsource the feedback as well as the resources needed to take this forward.
If you believe in the cause, support our campaign on Indiegogo.
|
How we citizens can fight terror and its funding
| 0
|
how-we-citizens-can-fight-terror-and-its-funding-1729deba3c43
|
2018-04-07
|
2018-04-07 06:10:25
|
https://medium.com/s/story/how-we-citizens-can-fight-terror-and-its-funding-1729deba3c43
| false
| 1,079
| null | null | null | null | null | null | null | null | null |
Social Change
|
social-change
|
Social Change
| 6,760
|
Vasudevan
|
Co-Founder & CEO, www.authentic.cash
|
1e21a69b0e90
|
devehere
| 77
| 162
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-01-26
|
2018-01-26 12:46:11
|
2018-01-26
|
2018-01-26 12:49:48
| 1
| false
|
en
|
2018-01-28
|
2018-01-28 12:35:45
| 0
|
172b3c31994
| 5.264151
| 2
| 0
| 0
|
Today, the artificial intelligence (AI) hype wouldn’t exist without cloud computing. Only the easy access to cloud-based innovative AI…
| 5
|
AI Becomes the Game Changer in the Public Cloud
Today, the artificial intelligence (AI) hype wouldn’t exist without cloud computing. Only easy access to innovative cloud-based AI services (machine learning etc.) and the necessary, readily available computing power enables the development of novel “intelligent” products, services, and business models. At the same time, AI services ensure the growth of public cloud providers like Amazon Web Services, Microsoft, and Google. Thus, one can observe a “Cloud-AI interdependency“.
After more than 10 years, cloud computing has evolved into a fertile business for providers such as Amazon Web Services or Microsoft. However, competition from laggards like Google and Alibaba is getting stronger. And with the massive, ongoing introduction of AI-related cloud services, providers have increased the competitive pressure themselves, in order to become more attractive to their customers.
The Cloud Backs AI and Vice Versa
Building and operating powerful, highly scalable AI systems is an expensive matter for companies of any size. After all, training algorithms and then operating the corresponding analytics systems demands oodles of computing power. Providing that computing power in the right amount and at the right time from one’s own basement, server room, or data center is practically impossible, and afterwards much of that capacity is no longer required.
Looking into the spheres of Amazon, Microsoft, or Google, all three providers have built up an enormous amount of computing power in recent years and each owns a big stake of the 40 billion USD cloud computing industry. For all of them, expanding their portfolios with AI services is the next logical step in the cloud. On one side, developing AI applications, or intelligently enhancing existing ones, requires easy access to computing power, data, connectivity, and additive platform services. On the other, providers need to stay attractive to existing customers and win new ones; both groups are looking for accessible solutions to integrate AI into their applications and business models.
Amazon Web Services
Amazon Web Services (AWS) is not only the cloud pioneer and innovation leader, but still by far the market leader of the worldwide public cloud market. Right now, AWS is the leading cloud environment for developing as well as deploying cloud and AI applications, due to its scalability and comprehensive set of platform services. Among other announcements, AWS presented Amazon Cloud9 (from the acquisition of Cloud9 IDE Inc. in July 2016) at the recent re:Invent summit: a cloud-based development environment directly integrated into the AWS platform for building cloud-native applications. Moreover, AWS announced six machine learning as a service (MLaaS) offerings, including a video analysis service, an NLP service, and a translation service. In addition, AWS offers MXNet, Lex, Rekognition, and SageMaker, powerful services for the development of AI applications. SageMaker, in particular, attracts attention, since it helps to control the entire lifecycle of machine learning applications.
However, as with all cloud services, AWS pursues the lock-in approach with AI-related services as well. All AI services are tightly meshed with AWS’ environment to make sure that AWS remains the operating platform after the development of an AI solution.
Amazon also sticks to its so-far successful strategy. After Amazon made the technologies behind its massively scalable e-commerce platform publicly available as a service via AWS, the technologies behind Alexa, for example, have followed, helping customers integrate their own chatbots or voice assistants into their applications.
Microsoft
Microsoft has access to a broad customer base in the business environment. This, along with a broad portfolio of cloud and AI services, gives it good preconditions to establish itself as a leading AI market player. Particularly because of its comprehensive offering of productivity and business process solutions, Microsoft could be high on the agenda of enterprise customers.
Microsoft sits deep in the middle of the digital ecosystems of companies worldwide with products like Windows, Office 365, or Dynamics 365. And that is exactly where the data exists and the dataflows happen that could be used to train machine learning algorithms and build neural networks. Microsoft Azure is the central hub where everything runs together and provides the necessary cloud-based AI services to execute a company’s AI strategy.
Google
In the cloud, Google is still behind AWS and Microsoft. However, AI could become the game changer. Comparing today’s Google AI services portfolio with those of AWS and Microsoft, Google is the clear laggard among the innovative providers of public cloud and AI services. This is astounding if you consider that Google has invested USD 3.9 billion in AI so far, compared to USD 871 million by Amazon and only USD 690 million by Microsoft. Google simply lacks consistent execution.
But! Google already has over 1 million AI users (mainly through the acquisition of the data science community „Kaggle“) and owns a lot of AI know-how (among other things due to the acquisition of “DeepMind”). Moreover, among developers Google is considered the most powerful AI platform with the most advanced AI tools. Furthermore, TensorFlow is the leading AI engine and, for developers, the most important AI platform, serving as the foundation of numerous AI projects. In addition, Google has developed its own Tensor Processing Units (TPUs), which are specifically adapted for use with TensorFlow. Recently, Google announced Cloud AutoML, an MLaaS offering that addresses inexperienced machine learning developers and helps them create deep learning models.
And if you keep in mind where Google, via the Android OS, has its fingers in the pie (e.g. smartphones, home appliances, smart homes, or cars), the potential of AI services running on the Google Cloud Platform is clearly visible. The only downer is that Google is still only able to serve developers. The tie-breaking access to enterprise customers, something Microsoft owns, is still missing.
AI Becomes the Game Changer in the Public Cloud
The AI platform and services market is still at an early stage. But in line with the increasing demand to serve their customers with intelligent products and services, companies will keep searching for the necessary technologies and support. Easy access to cloud-based AI services, and to fast, readily available computing power, is imperative for developing novel “intelligent” products, services, and business models. Hence, for enterprises it rarely makes sense to build in-house AI systems, since it is nearly impossible to operate them in a performant and scalable way. Moreover, it is important not to underestimate the access to globally distributed devices and the data that has to be analyzed; only globally scalable and well-connected cloud platforms can provide this.
For providers, AI could become the game changer in the public cloud. After AWS and Microsoft started leading the pack, Google wasn’t able to significantly catch up. However, Google’s AI portfolio could make a difference. TensorFlow in particular, and its popularity among developers, could play into Google’s hands. But AWS and Microsoft are wary of this and are acting together against it: “Gluon” is an open source deep learning library the two companies have developed together, and it looks quite similar to TensorFlow. In addition, AWS and Microsoft provide a broad range of AI engines (frameworks) rather than just TensorFlow.
It is doubtful that AI services alone are enough for Google to catch up with AWS. But Microsoft could quickly feel the competition. For Microsoft, it is crucial how fast it can convince its enterprise customers of its AI services portfolio, while also conveying how important other Microsoft products (e.g. Azure IoT) are for an AI strategy. AWS is going to stick to its dual strategy, focus on developers as well as enterprise customers, and continue to lead the public cloud market. AWS will be the home for all those who do not want to rely solely on TensorFlow, cloud-native AI users in particular, not to forget the large, innovation-oriented customer base that is aware of the benefits of AI services.
|
AI Becomes the Game Changer in the Public Cloud
| 11
|
ai-becomes-the-game-changer-in-the-public-cloud-172b3c31994
|
2018-06-10
|
2018-06-10 23:45:19
|
https://medium.com/s/story/ai-becomes-the-game-changer-in-the-public-cloud-172b3c31994
| false
| 1,342
| null | null | null | null | null | null | null | null | null |
Cloud Computing
|
cloud-computing
|
Cloud Computing
| 22,811
|
Rene Buest
|
Gartner Analyst covering Infrastructure Services & Digital Operations. These are my own opinions.
|
3972f61ffea
|
renebuest
| 870
| 31
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-06-08
|
2018-06-08 03:50:14
|
2018-06-09
|
2018-06-09 19:17:46
| 13
| false
|
en
|
2018-08-10
|
2018-08-10 18:21:06
| 3
|
172bea9fcefc
| 7.34717
| 6
| 0
| 0
|
How I Built My First Machine Learning Model To Detect Credit Card Fraud
| 1
|
How To Build Your Own Machine Learning Model
How I Built My First Machine Learning Model To Detect Credit Card Fraud
Learning is the most difficult thing to master, yet without this skill you and I can never grow. Over the years, I have tried different approaches and found that I learn best by making things rather than just reading about them online or in a book. Whenever I try building something, in this case a machine learning model, I run into errors or “traps” that also occur in the real world; after all, the world is anything but ideal. So to practice our skills, we will build a machine learning model to predict credit card fraud.
I have recently been doing the A-Z of Machine Learning course from Udemy. If you are just jumping into this field, I highly recommend checking it out; I absolutely love the way they teach. Through it, I have learned some basics of building machine learning models, and this was my attempt to actually build one. I will take you through my journey. I will be using a Kaggle dataset which you can download here. I also downloaded Anaconda, which automatically installs all the relevant packages for this model, and I will be using the Spyder IDE, which comes with Anaconda.
Step zero would of course be creating a Python file and putting it in the same directory as the dataset. Don’t forget this step; I have skipped it more times than I would like to admit.
Now the first question is: how do you start a machine learning model? Let’s look at what the goal would be. The dataset we have downloaded tells us whether a transaction was fraudulent (1) or genuine (0), so it is clearly a classification problem, but there are many models to choose from. Let’s look at our data. The first thing you will notice is that there is A LOT of it: about twenty-nine different independent variables and over two hundred thousand observations. So we can rule out any computationally expensive model. Next, we see that there are no labels for the variables, and it is hard to know whether they are correlated, which is a slight problem, as some methods need all variables to be completely independent. I am going to make a judgement call and say we will use logistic regression for classification, as it can handle the large amount of data easily and deals well with non-linear data.
Before that, let’s talk about how logistic regression works. In logistic regression, we create a logistic curve with an upper bound of 1 and a lower bound of 0. The X-axis of this curve is the independent variable, and we use the curve to get a probability between 0 and 1 of the outcome occurring. If the value is above 0.5, we can say it has a higher chance of being a 1 than a 0, so it is classified as a 1, and vice versa. You can see a picture of this below.
On the Y-axis, we have probability, and on the X-axis, we have the independent variable
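As a quick illustration of that thresholding rule (my own sketch, not code from the article):

```python
import numpy as np

def sigmoid(t):
    # the logistic curve: squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-t))

# probabilities for a few points along the independent variable
probs = sigmoid(np.array([-3.0, 0.0, 3.0]))
labels = (probs > 0.5).astype(int)  # classify: above 0.5 -> 1, else 0
print(labels)
```

Note that a point sitting exactly at probability 0.5 fails the strict `> 0.5` test, so it falls into class 0 here.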
Now you can’t really build a model without the relevant libraries; well, you can, but it will be exponentially harder. So, let’s import pandas, which can handle importing and dividing the data.
Next we will import the data using pandas and we will also use “iloc[].values” command on the dataset to pull out the X and the y variables. The code for that would look something like the following:
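The original snippet was shown as an image; a sketch of the same idea might look like this. The tiny inline table is my stand-in for creditcard.csv so the example runs on its own; with the real file you would read `'creditcard.csv'` and slice column 30, as the article explains.

```python
import io
import pandas as pd

# In the article the data comes from the Kaggle file:
#   dataset = pd.read_csv('creditcard.csv')
# A tiny inline stand-in with the same layout (features..., Amount, Class):
csv = io.StringIO(
    "V1,V2,Amount,Class\n"
    "0.1,1.2,149.62,0\n"
    "-1.3,0.7,2.69,0\n"
    "2.2,-0.4,378.66,1\n"
)
dataset = pd.read_csv(csv)

X = dataset.iloc[:, :-1].values  # all rows, every column except the last
y = dataset.iloc[:, -1].values   # all rows, the label column only
print(X.shape, y.shape)
```

With the real 31-column file, `dataset.iloc[:, 30]` and `dataset.iloc[:, -1]` pick out the same `Class` column.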
Make sure the .csv file name is correct! The “:” means “all”, so in the X matrix we are telling pandas to import all the rows and all the columns except the last one, which is the dependent variable. For y, we are telling pandas to import all the rows but only column 31. The reason there is a 30 in the code instead of 31 is that Python indexes arrays from 0 instead of 1. When I first tried it, I kept getting an error and it took me five minutes to figure out.
The next step is splitting the dataset into a training set and a test set. This can be done using a function from the sklearn library called train_test_split. Given that we have over two hundred thousand observations, we will split the data into 80% for training and 20% for testing. All we now have to do is create X_train, y_train, X_test, and y_test and give them values, which can be done using the following line:
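A sketch of that line, with synthetic stand-in data so it runs on its own (in the article, X and y come from the Kaggle file):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in for the X and y pulled out of creditcard.csv earlier.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (rng.random(1000) < 0.1).astype(int)

# 80% of the rows go to training, 20% are held back for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
print(len(X_train), len(X_test))  # → 800 200
```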
Let’s open the dataset again. It seems like most of the data is already scaled, except for the amount of money transacted. You scale the data to make sure the range of one variable does not dominate the overall result. We can scale the data using the StandardScaler class from the sklearn library: we call StandardScaler and standardize the data. You can do it using the following code.
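A sketch of the scaling step, using a few hypothetical transaction amounts of my own:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical transaction amounts; the other columns are already scaled.
amounts = np.array([[149.62], [2.69], [378.66], [123.50]])

sc = StandardScaler()
amounts_scaled = sc.fit_transform(amounts)  # mean 0, unit variance
print(amounts_scaled.round(2).ravel())
```

In the real pipeline you would fit the scaler on X_train only and reuse the same scaler to transform X_test, so no test-set information leaks into training.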
Now that we have processed the data, we can finally create a logistic regression classifier and use it to classify the fraud data. This is an EXTREMELY difficult task and requires you to be an expert in Python. I’m kidding, of course; it only takes five lines of code. We first import LogisticRegression from sklearn. We then create an instance of it named classifier, which we fit to the X_train and y_train data, as seen below:
Now that we have trained our model, we have to test it out. We do this by creating a y_pred vector from the X_test values. This literally takes one line of code, as seen below:
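A self-contained sketch of the fit-then-predict steps, on synthetic stand-in data (the real pipeline would use the scaled X_train and X_test from earlier):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the preprocessed credit card data:
# a rare positive class driven by a simple linear rule.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - 0.7 * X[:, 2] > 1.5).astype(int)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Fit the classifier to the training set...
classifier = LogisticRegression(random_state=0)
classifier.fit(X_train, y_train)

# ...then predict labels for the held-out test set (the one-liner).
y_pred = classifier.predict(X_test)
print(y_pred[:10])
```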
Congratulations! You have created your first machine learning model. Now let’s look at its accuracy. We will create a confusion matrix to see how good the model is. First, we import confusion_matrix from sklearn and create a confusion matrix from y_test and y_pred using the code below:
You know what, let’s make our life easier by printing the accuracy as well. We just add the entries of the confusion matrix that correspond to correct classifications, divide by the total, and multiply by a hundred. We then print this percentage right after it is created. You can do it using the code below:
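A sketch of the confusion matrix and the accuracy print, on made-up labels of my own so the numbers are checkable by hand:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Stand-in labels; in the article y_test and y_pred come from the model.
y_test = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0])

cm = confusion_matrix(y_test, y_pred)
# rows = true class, columns = predicted class:
#   cm[0,0] genuine kept,  cm[0,1] genuine flagged,
#   cm[1,0] fraud missed,  cm[1,1] fraud caught
accuracy = (cm[0, 0] + cm[1, 1]) / cm.sum() * 100
print(cm)
print(accuracy)  # → 80.0
```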
Now, let’s run the code and see if this model is any good. Running it gives us the following result:
And wow, that is really good for a simple machine learning model, but it may be a bit misleading. Let’s go to the variable explorer in the Spyder IDE, which you can find in the picture below:
From here we will open the cm variable, which is our confusion matrix, and we get the following result:
The [0,0] entry corresponds to the genuine transactions that were predicted correctly, and from the looks of it, we did really well there. The [0,1] entry counts the genuine transactions that were classified as fraudulent; we got 37 of these, which, given the number of genuine transactions, is a pretty good number. The [1,0] entry counts the fraudulent transactions that were not caught by the system. Nine is nine too many, but realistically no model will ever be 100% perfect, so I’d say we did pretty well. Lastly, the [1,1] entry corresponds to the fraudulent transactions that were classified correctly, and we got 64 in this section, which again is pretty good. Overall, this is a pretty good and pretty fast model for predicting fraudulent credit card transactions, as 99% of the transactions were classified correctly and 87% of the fraudulent transactions were caught. Still, we would need to test it on other datasets to figure out whether the classifier overfit or is only good on this dataset.
So, we just built a pretty good model; feels nice, right? There were some small problems we had to tackle that also come up in real life, like picking the right model and evaluating it properly, which you only get better at by making models and discovering their flaws. That is the power of building things: it exposes us to real-life challenges. Building a machine learning model is not the hard part; we just built one in 36 lines. It is choosing the right model that sets a good data scientist apart from the crowd. The good news is that all basic classification models follow roughly the same structure, so you can keep reusing this for your other projects.
So basically:
You can build a machine learning model, even when you do not know most of the variables.
Building a machine learning model is the easy part; picking the right machine learning model takes true skill.
Evaluating your model correctly is essential and can be done through the confusion matrix, though this is a skill you only learn by doing it more and more.
Before you go:
Like this post
Share with family and friends
Follow my medium page to stay updated with my AI adventure!
If you have any questions, just message me on my LinkedIn, or you can email me at agosh.saini@gmail.com.
|
How To Build Your Own Machine Learning Model
| 72
|
how-to-build-you-own-machine-learning-model-172bea9fcefc
|
2018-08-10
|
2018-08-10 18:21:06
|
https://medium.com/s/story/how-to-build-you-own-machine-learning-model-172bea9fcefc
| false
| 1,576
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Agosh Saini
|
I love the Fallout game series and learning about tech.
|
1ab32508d10c
|
agosh.saini
| 21
| 26
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-26
|
2018-08-26 16:38:56
|
2018-08-27
|
2018-08-27 17:48:29
| 1
| true
|
en
|
2018-09-16
|
2018-09-16 14:09:08
| 1
|
172cf204c0a0
| 3.101887
| 19
| 1
| 0
|
Anyone interested in data science, neural networks, or machine learning will quickly learn that large amounts of data are needed to produce…
| 5
|
Getting started with machine learning and pyautogui
Anyone interested in data science, neural networks, or machine learning will quickly learn that large amounts of data are needed to produce substantive results.
For those unfamiliar with the broad strokes, here’s the basic principle of machine learning:
1. Gather lots of input, with each sample referred to as an instance. Each instance has both data and a label. Let’s use the example of per-game stats for every baseball player in the MLB, and let’s assume each instance is one player’s data for one game.
Data in this instance might include the number of times the player hit a home run, the number of times he struck out, and the number of times he hit singles, doubles, and triples.
The label is essentially a piece of each instance’s data that is present in all the input but singled out as the metric we eventually want to predict. In our example, let’s hypothesize that a player’s height is correlated with, and can thereby be predicted by, his performance. Each player’s height must be provided in the input data pool.
2. Feed that input into an algorithm that does the heavy lifting of processing the data. This algorithm becomes known as the model.
3. Cross-validation. After all the input has been fed through the algorithm and it has done its magic, feed a chunk of the original data back through the model with the label data removed. The model’s output is its prediction of the label, in this case the player’s height, based on the performance stats. These predictions are then compared with the heights known to be correct for each instance, and the delta is recorded and averaged. This average becomes the metric by which the algorithm is measured.
The first step, gathering lots of input data, is crucial to building effective prediction models. In this example, a single player’s stats from one game, or even from all games in a season, will not provide enough data to give an accurate picture of trends in the performance/height correlation. Instead, we would likely need the stats of all players in the league from all games over the course of ten seasons. Instances in the realm of ten thousand start to enter the ballpark (no pun intended) of acceptable quantities of data.
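The three steps above can be sketched end to end with scikit-learn. The player stats below are synthetic numbers of my own invention, and the linear model is just one possible choice of algorithm:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Step 1: instances. Synthetic per-game stats:
# [home_runs, strikeouts, singles, doubles, triples]
rng = np.random.default_rng(0)
stats = rng.poisson([0.3, 1.0, 1.2, 0.4, 0.1], size=(5000, 5))
# Pretend height (cm) loosely follows the power stats plus noise: the label.
height = 180 + 4 * stats[:, 0] - 1.5 * stats[:, 4] + rng.normal(0, 2, 5000)

# Step 2: feed most of the instances into the algorithm to build the model.
X_train, X_test, y_train, y_test = train_test_split(
    stats, height, random_state=0)
model = LinearRegression().fit(X_train, y_train)

# Step 3: cross-validate on held-out instances; average the delta.
delta = np.abs(model.predict(X_test) - y_test).mean()
print(round(delta, 1))
```

With this much (clean, synthetic) data the average error settles near the noise floor of about 1.6 cm, which is the point of gathering thousands of instances.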
Rather than collect it manually, data scientists often look to automation tools like pyautogui to gather this vast amount of data. Pyautogui is a Python library that allows for coding automatic control procedures of your computer. It could be used, for example, to repeat the process of opening a real estate webpage, searching for homes in a specified zip code, and recording data about each listing.
Functions available from pyautogui are limited to keystrokes, mouse operation, and image recognition, but a little work put into coding can save hours of tedious work by a human.
As an exercise, I put together a program that automatically creates promotional Facebook posts for my music page.
In the following video, you’ll see the program
open my calendar
navigate to the current day
copy the name of the venue I’m playing that night
open my browser and type in my Facebook URL
click in the “write a post” field and type a post, copying in the name of the venue
The image recognition feature is really cool. In this example I took screenshots of the “Today” and “Day” buttons in my calendar application, included them in the directory of my Python program, and told pyautogui to search the screen and click on them when found. Though it’s computationally expensive and takes longer than other approaches, it can be really handy if you don’t know the exact coordinates of a button’s location.
Here’s my Python code:
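The original gist isn’t reproduced here, but a hypothetical reconstruction of those five steps might look like the sketch below. Every coordinate, image file name, and page URL is made up for illustration, and pyautogui is imported inside the function because it needs a live GUI session:

```python
import time

def promote_tonights_show():
    """Hypothetical sketch of the automation; all coordinates are made up."""
    import pyautogui  # needs a display, so imported only when actually run

    # 1. Open the calendar app via Spotlight (macOS).
    pyautogui.hotkey('command', 'space')
    pyautogui.typewrite('Calendar\n', interval=0.05)
    time.sleep(2)

    # 2. Navigate to the current day by finding the "Today" button on screen.
    today = pyautogui.locateCenterOnScreen('today_button.png')
    if today:
        # Retina screens report doubled pixel coordinates, so halve them.
        pyautogui.click(today.x / 2, today.y / 2)

    # 3. Copy the name of tonight's venue (hypothetical event position).
    pyautogui.doubleClick(400, 300)
    pyautogui.hotkey('command', 'c')

    # 4. Open the browser and type in the Facebook page URL.
    pyautogui.hotkey('command', 'space')
    pyautogui.typewrite('Safari\n', interval=0.05)
    time.sleep(2)
    pyautogui.typewrite('facebook.com/yourmusicpage\n', interval=0.05)
    time.sleep(3)

    # 5. Click the "write a post" field and paste the venue into the post.
    pyautogui.click(500, 400)
    pyautogui.typewrite('Playing tonight at ', interval=0.05)
    pyautogui.hotkey('command', 'v')
```

The sleeps are the crude but common way to wait for apps and pages to load; image-based `locateCenterOnScreen` is slower than hard-coded coordinates but survives window layout changes.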
Pro tip: if you’re using a Mac with a Retina screen, the pixels are half the size of normal pixels, so on lines 15, 20, and 24 you’ll see that I’m dividing the pixel coordinates in half to get the clicks and mouse movements to behave the way I want.
You can download pyautogui here:
Welcome to PyAutoGUI's documentation! - PyAutoGUI 1.0.0 documentation
PyAutoGUI is a Python module for programmatically controlling the mouse and keyboard.pyautogui.readthedocs.io
Be sure to check your version of Python, as at the time of writing, the latest version that pyautogui will work with is Python 3.6.
If you already know basic Python, getting started with pyautogui is very easy. Happy automating!
|
Getting started with machine learning and pyautogui
| 80
|
getting-started-with-machine-learning-and-pyautogui-172cf204c0a0
|
2018-09-16
|
2018-09-16 14:09:08
|
https://medium.com/s/story/getting-started-with-machine-learning-and-pyautogui-172cf204c0a0
| false
| 769
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Zach Bedell
|
Graduate student at College of Charleston dept. of Computer Science
|
886f4086ed47
|
zachary.bedell
| 39
| 7
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
32881626c9c9
|
2018-08-31
|
2018-08-31 09:56:46
|
2018-09-13
|
2018-09-13 13:01:01
| 2
| false
|
en
|
2018-09-21
|
2018-09-21 15:07:54
| 9
|
1731888fe591
| 2.775786
| 2
| 1
| 0
|
AI is proving to be a powerful force in today’s art, and it’s all because of analytical thought.
| 3
|
For AI, artistry is all about analysis
AI is proving to be a powerful force in today’s art, and it’s all because of analytical thought.
Artificial intelligence is continually proving itself to be an indispensable analytical mind. Its capacity to almost instantly study far more information than a human could ever hope to is making it a crucial tool in tomorrow’s businesses, transport, and cities. It’s also why AI is pushing the boundaries of art.
Let’s get this out of the way early: there’s no consensus on whether or not machine-generated works constitute art. Some argue that because AI builds from what engineers have exposed it to, it’s incapable of creative thinking, but the same could be said of people. Given that AI is still in its relative infancy, it’s too soon to draw any conclusions, but there’s one thing we can say with certainty: AI’s value to art isn’t merely in emulating the creative thought of humans; it’s in harnessing the analytical thought that humans can’t.
One of the ways in which many people are used to letting a computer take over the artistic reins is image editing, and there’s no shortage of apps that can automatically adjust photos to bring out their best qualities. But even on a consumer level, AI is already doing much more than being a glorified Instagram filter. For years, hobbyists have been utilizing the analytical prowess of neural networks to search for and accentuate objects in images to produce fascinating (if sometimes horrific) Deep Dream creations, or to envision one artwork in the fashion of another. And this is just the tip of the iceberg.
Machines are also making art from scratch, without human intervention, using generative adversarial networks (GANs). These systems comprise two competing neural networks: a “generator” that produces images, and a “discriminator” tasked with determining whether those images are human- or computer-made. The generator continually responds to the discriminator’s feedback and modifies its output until it can fool the discriminator into thinking its images are ‘real.’
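To make that feedback loop concrete, here is a toy sketch (with invented numbers, and nothing like a real GAN’s neural networks): the “discriminator” simply checks whether a batch of samples looks like the real data, and the “generator” nudges a single parameter in response to being caught.

```python
import random

# Toy sketch of the generator/discriminator feedback loop. The "real" data
# is just numbers near REAL_MEAN; everything here is illustrative.

REAL_MEAN = 5.0

def discriminator(samples):
    """Return True ('looks real') when the sample mean is near the real mean."""
    mean = sum(samples) / len(samples)
    return abs(mean - REAL_MEAN) < 0.5

def train_generator(steps=200, lr=0.1, seed=0):
    rng = random.Random(seed)
    g_mean = 0.0  # the generator's single parameter
    for _ in range(steps):
        samples = [g_mean + rng.gauss(0, 0.1) for _ in range(16)]
        if not discriminator(samples):
            # feedback: move toward whichever side the real data lies on
            g_mean += lr if g_mean < REAL_MEAN else -lr
    return g_mean
```

After a couple hundred rounds of feedback the generator’s output distribution sits close enough to the real data that the discriminator can no longer tell the difference.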
Neural networks aren’t the only ones being fooled. Researchers at Rutgers University conducted a Turing test of sorts to see if people could discern that GAN creations were computer-made, and its subjects thought the generated images were human-made 75 percent of the time — more often than a human-made contemporary art collection, which was correctly identified in only 48 percent of instances. Furthermore, the GANs’ work wasn’t believable simply because it mimicked existing styles, as the generator was instructed not to follow the conventions of the art it was taught on, leading to novel productions.
AI isn’t about to replace human artists, though — its analytical approach and unique output make it well suited to being a collaborator. Royal College of Art student Anna Ridler, for example, fed her illustrations into a GANs system, and then had it draw the frames for a short film. And it’s not just the visual arts that can benefit, with companies and musicians finding ways to involve AI in composition.
Flow Machines, a music-writing AI from Sony’s Computer Science Laboratories, is designed to inspect songs, uncover patterns, and invent its own tunes, and its best pieces were released earlier this year in an album titled Hello World. But it didn’t get that far without some human help: a group of musicians was responsible for taking Flow Machines’ melodies and determining the arrangement, allowing them to ensure the music had structure and emotion.
It could be a long time before the artistry of AI is widely accepted, but the era of AI contributing to art is already here. Its analytical methods open new possibilities, especially in collaboration with human artists, and with the technology rapidly improving, the best is likely yet to come.
Originally published at 360.here.com.
|
For AI, artistry is all about analysis
| 49
|
for-ai-artistry-is-all-about-analysis-1731888fe591
|
2018-09-21
|
2018-09-21 15:07:54
|
https://medium.com/s/story/for-ai-artistry-is-all-about-analysis-1731888fe591
| false
| 634
|
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
| null |
datadriveninvestor
| null |
Data Driven Investor
|
info@datadriveninvestor.com
|
datadriveninvestor
|
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
|
dd_invest
|
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
HERE Technologies
|
HERE, the Open Location Platform company, enables people, businesses and cities to harness the power of location.
|
a188a743c631
|
here
| 15
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-25
|
2018-05-25 16:54:30
|
2018-05-25
|
2018-05-25 17:56:46
| 3
| true
|
en
|
2018-05-25
|
2018-05-25 17:56:46
| 1
|
17340136f7b
| 4.380189
| 0
| 0
| 0
|
I discovered this chart the other day while scrolling through Twitter. At a glance, one might pause in reflection and reassure themselves…
| 5
|
Artificial Intelligence and Our Purpose in Society
I discovered this chart the other day while scrolling through Twitter. At a glance, one might pause in reflection and reassure themselves that they only use social media in a positive way. Like most people, I find that social media pervades my social life and the lives of those I surround myself with. Because of the culture surrounding it, along with the time commitment it takes to keep up on all of the various platforms, most people believe that they could easily give up these applications if they had to. Most aim to feel as though they have control over how social media affects them, but regardless, the deteriorating effects of these sites can still be felt.
No matter what take you as an individual have on the use of social media, we as a society are the ones being used. These platforms are algorithmically trained artificial intelligences that decide what we see and spend our time on. No matter where you fall on the chart, you are still being manipulated by artificial intelligence. This poses the question: as humans, what is our purpose in society now that artificial intelligence has entered the picture?
We already have Google giving their AI expertise to the US military drone program and Chinese citizens being closely monitored with facial recognition AI… this is literally becoming a Black Mirror episode. Let’s not forget about social media dominating the majority of our discretionary time. Next thing you know, we’ll have Westworld-like amusement parks or simulated realities where we can choose to live forever. I don’t know about you, but I don’t think I am ready for all of this… I am just starting to realize my true purpose in this world, as I am only 22 years old.
When the first humans graced the earth (around 2 million years ago), our roles were, for the most part, biological: to mate and expand territory. Our hunter-gatherer ancestors were not afforded the variety of lifestyle choices that are available today; they had to focus predominantly on survival.
If we jump from the time of the first humans to about 12,000 years ago, we can see the first civilizations begin to develop. Agriculture, domestication of animals, and the development of communities all began during this period, one where our purpose was to contribute to the development of these newly formed societies. The end goal of this was still survival — these small communities aimed for the expansion and prosperity of future generations. Roles were generally assigned by society according to gender, stature or ancestry in order to keep everyone fed and safe. One could argue that this was the first time in human history that we were able to deviate from primitive behavior (we still cared mostly about ourselves, therefore our interests are intertwined with the success of our communities).
Jumping again to the present day, human life has deviated almost entirely from that of our early ancestors. At least 8/9ths of the human population has enough food to actively live healthy lives. Our societies have become safe enough to live in (for the most part) thanks to law enforcement across the world and near-universal laws against crime. Society provides a significant crutch for survival, which has allowed for a shift in human purpose: humanity is less focused on survival and more focused on careers and monetary success.
This shift is not free of disturbances. It has led to health problems for many, with social media distancing us from reality. As with any paradigm shift, those who have adjusted and found purpose in new roles are faring relatively well.
Things are, however, already starting to shift again. This time, we the people have enough information to be aware that something is happening, thanks to the unlimited knowledge on the web. Automation has begun to take jobs, something that casts an ominous shadow over those working in replaceable employment. As automation quickly develops, we are seeing more and more careers become automated. Once again, we are being forced to find new purpose and roles.
I personally have watched countless videos and documentaries, and read many books, on the role that artificial intelligence could play in society. Like most, I am both excited and scared. The possibility of eradicating disease, curing cancer, building better infrastructure, solving more mysteries, and improving the quality of life is the optimistic perspective. Killer robots, machine warfare, judgement day, the next step in evolution, and job loss are a few of the negative attributes of a future seen through a pessimistic lens. These possibilities generate infinite questions; however, I am going to focus on one: where do we find purpose and roles in society when the machines have taken that from us?
This fundamental question should be present in our minds as we move further into the realm of artificial intelligence. Biological human behavior is not easy to rewire. We aren’t robots. We need to proceed with caution, looking at the past and how major shifts in society have both positively and negatively affected our human psyche and physical health. Forget economic growth, interplanetary takeover, and our innate curiosity — we need to start figuring out if we as a society are ready for such immeasurable changes. We are headed down a path that could become irreversible.
Yet, even with all of the exciting (and terrifying) new developments that are taking place globally, I feel as though things are becoming more clear. Globalization has allowed for humanity on a personal level that was previously thought to have been impossible, and as a result, I believe that our purpose is reverting back to our roots: to help one another. Historically speaking, a majority of, if not all, great accomplishments have been due to groups of people working hard to make something happen. Selfishness is not the way. The sooner we realize that we are all interconnected, the sooner we will find the answers concerning our role alongside technology.
Source for the human consumption statistic:
http://www1.wfp.org/zero-hunger
|
Artificial Intelligence and Our Purpose in Society
| 0
|
artificial-intelligence-and-our-purpose-in-society-17340136f7b
|
2018-06-04
|
2018-06-04 08:26:01
|
https://medium.com/s/story/artificial-intelligence-and-our-purpose-in-society-17340136f7b
| false
| 1,015
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Honest-Assholes
| null |
3856d0f3064b
|
honest_assholes
| 1
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
d8b061489ed1
|
2018-07-21
|
2018-07-21 17:54:01
|
2018-07-25
|
2018-07-25 18:04:56
| 31
| false
|
en
|
2018-07-25
|
2018-07-25 18:07:15
| 4
|
173429c60bc5
| 7.398113
| 2
| 0
| 1
|
Machine Learning Series!!!
| 5
|
Chapter 8: An Easy-to-Understand Tour of the Top 10 Commonly Used Machine Learning Algorithms
Machine Learning Series!!!
This article aims to give you a common-sense understanding of ML’s most commonly used algorithms: no heavy math, no complicated theoretical derivations, and only minimal illustration, just enough to know what these algorithms are and how they are applied. The examples are mainly classification problems.
For each algorithm, I watched several videos and picked out the clearest and most interesting explanations.
I’ll analyze the individual algorithms in more depth in future posts.
Today’s algorithm is as follows:
Decision tree
Random forest algorithm
Logistic regression
SVM
Naive Bayes
K nearest neighbor algorithm
K-means algorithm
Adaboost algorithm
Neural Networks
Markov chains
1. Decision tree
Each node asks a question about some feature, and the answer splits the data into two branches; the process then repeats with further questions down the tree. The questions themselves are learned from existing data. When new data arrives, it is routed down the tree by answering each question until it lands in the appropriate leaf.
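As a toy sketch of this idea (the features, thresholds, and labels below are invented, and a real tree would learn its questions from training data):

```python
# A minimal hand-built decision tree: each node asks a yes/no question
# about a feature and routes the example down the appropriate branch.

def classify_animal(animal):
    """Toy tree over {'weight_kg': float, 'can_fly': bool} -> label."""
    if animal["can_fly"]:          # root question
        return "bird"
    if animal["weight_kg"] > 50:   # second question on the 'no' branch
        return "large mammal"
    return "small mammal"
```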
2. Random forest
Randomly sample the source data to form several subsets.
The source data is a matrix S, with rows 1 to N, feature columns A, B, C, and a final column C giving the category.
Randomly generate M sub-matrices from S and train one decision tree on each, yielding M trees.
To classify new data, feed it into all M trees and collect the M classification results; count the votes and take the most frequent class as the final prediction.
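A minimal sketch of that bootstrap-and-vote procedure, using trivial “trees” that simply predict their subset’s majority class (all names here are illustrative):

```python
import random
from collections import Counter

def fit_stump(subset):
    """'Train' a trivial tree: always predict the subset's majority class."""
    majority = Counter(label for _, label in subset).most_common(1)[0][0]
    return lambda x: majority

def random_forest_predict(data, x, n_trees=5, seed=0):
    """Draw n_trees bootstrap samples, fit a stump to each, majority-vote."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_trees):
        subset = [rng.choice(data) for _ in range(len(data))]  # bootstrap sample
        tree = fit_stump(subset)
        votes.append(tree(x))
    return Counter(votes).most_common(1)[0][0]
```

A real random forest would fit full decision trees (often on random feature subsets too), but the subset-then-vote structure is the same.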
3. Logistic regression
When the prediction target is a probability, the output must be greater than or equal to 0 and less than or equal to 1. A simple linear model cannot be used here, because its output can fall outside that interval.
So we want a model whose output has an S shape.
How do you get such a model?
The model needs to satisfy two conditions: greater than or equal to 0, and less than or equal to 1.
To guarantee a value greater than or equal to 0 we could use an absolute value or a square; here we use the exponential function, which is always positive.
To guarantee a value less than or equal to 1 we use division: the numerator is the function itself and the denominator is itself plus 1, so the ratio must be less than 1.
After some further rearranging, we obtain the logistic regression model.
The corresponding coefficients can then be estimated from the source data.
Finally, we get the logistic curve.
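That construction, the exponential divided by itself plus one, is the logistic (sigmoid) function. A minimal sketch:

```python
import math

def sigmoid(t):
    """e^t / (e^t + 1): always positive, always less than 1."""
    return math.exp(t) / (math.exp(t) + 1.0)

def predict_proba(x, w, b):
    """Logistic regression: squash the linear model w*x + b into (0, 1)."""
    return sigmoid(w * x + b)
```

Whatever value the linear part takes, the output stays inside (0, 1), which is exactly the probability constraint described above.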
4. SVM
Support vector machine
To separate two classes, we want a hyperplane. The optimal hyperplane is the one that maximizes the margin between the two classes, where the margin is the distance from the hyperplane to the nearest points. As shown below, Z2 > Z1, so the green hyperplane is better.
Express the hyperplane as a linear equation: points of one class give values greater than or equal to 1, and the other less than or equal to -1.
The point-to-plane distance is calculated according to the formula in the figure.
The expression for the total margin is then as follows; to maximize the margin we minimize the denominator, so it becomes an optimization problem.
For example: given three points, find the optimal hyperplane. Define the weight direction as (2, 3) - (1, 1), so the weight vector is (a, 2a).
Substituting the point (2, 3) with value +1 and the point (1, 1) with value -1 into the equation, we can solve for a and the intercept w0, which in turn gives the expression for the hyperplane.
Once a is found, substituting it into (a, 2a) gives the support vectors.
The equation of the hyperplane with a and w0 substituted in is the support vector machine.
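That toy example can be worked through directly in code, solving the two linear equations by hand rather than with a real SVM solver:

```python
def solve_toy_svm():
    """Weight direction (a, 2a); substituting the support vectors gives:
    (2, 3) -> +1:  2a + 3*(2a) + w0 = +1  ->  8a + w0 = +1
    (1, 1) -> -1:  1a + 1*(2a) + w0 = -1  ->  3a + w0 = -1
    """
    a = 2.0 / 5.0          # subtracting the equations: 5a = 2
    w0 = 1.0 - 8.0 * a     # back-substitute into 8a + w0 = 1
    return (a, 2 * a), w0

def decision(x, w, w0):
    """w . x + w0: +1 on one margin, -1 on the other, 0 on the hyperplane."""
    return w[0] * x[0] + w[1] * x[1] + w0
```

Plugging the two support vectors back in recovers +1 and -1 exactly, confirming the solved hyperplane.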
5. Naive Bayes
Here is an application in NLP: given a piece of text, return its sentiment classification. Is the attitude of the text positive or negative?
To solve this problem, you only need to look at some of the words: the text is represented by those words and their counts.
The original question is: given a sentence, which category does it fall into? Via Bayes’ rule, this becomes a simpler, more tractable question.
The question becomes: what is the probability of this sentence appearing in each category? (And don’t forget the other two probabilities in the formula.)
For example: the word “love” appears with probability 0.1 in positive texts, but only 0.001 in negative ones.
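A toy sketch of that sentiment classifier; the per-word probabilities below, including the “love” figures from the example, are illustrative:

```python
# Naive Bayes sentiment toy: score(c) = P(c) * product of P(word | c),
# using the "naive" assumption that words are independent given the class.

WORD_PROBS = {
    "love":  {"positive": 0.1,   "negative": 0.001},
    "awful": {"positive": 0.001, "negative": 0.1},
}
PRIORS = {"positive": 0.5, "negative": 0.5}

def classify(words):
    scores = {}
    for c, prior in PRIORS.items():
        score = prior
        for w in words:
            if w in WORD_PROBS:          # ignore words we have no stats for
                score *= WORD_PROBS[w][c]
        scores[c] = score
    return max(scores, key=scores.get)
```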
6. K nearest neighbor
Given a new data point, look at the k points closest to it; whichever category is most common among them is the category the new data belongs to.
For example: to distinguish cats from dogs, classified by claw and sound features, the circles and triangles are the known classes. Which class does the star represent?
When k = 3, the three points joined by lines are the nearest three; circles are in the majority, so the star belongs to the cats.
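A minimal sketch of that k = 3 vote; the (claw, sound) feature values below are invented:

```python
import math
from collections import Counter

def knn_classify(training, point, k=3):
    """Sort training points by distance, then majority-vote the k nearest."""
    by_distance = sorted(training, key=lambda item: math.dist(item[0], point))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Invented (claw, sound) features for known animals.
animals = [
    ((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"), ((0.8, 1.1), "cat"),
    ((3.0, 3.0), "dog"), ((3.2, 2.8), "dog"),
]
```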
7. K-means
We want to divide a set of data into three categories: pink for large values, yellow for small.
To initialize, choose the simplest option: the values 3, 2, and 1 as the three initial centres.
For each remaining data point, calculate its distance to the three initial centres and assign it to the category of the nearest one.
After assigning the classes, calculate the average of each class and use it as the new round’s centre point.
After a few rounds, once the group memberships no longer change, you can stop.
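A one-dimensional sketch of those two alternating steps, starting (as in the text) from the initial centre values 3, 2, and 1:

```python
def kmeans_1d(data, centers, rounds=10):
    """Tiny 1-D k-means: assign each point to its nearest centre, then move
    each centre to the mean of its cluster, until nothing changes."""
    for _ in range(rounds):
        clusters = {c: [] for c in centers}
        for x in data:                              # assignment step
            nearest = min(centers, key=lambda c: abs(x - c))
            clusters[nearest].append(x)
        new_centers = [sum(pts) / len(pts) if pts else c   # update step
                       for c, pts in clusters.items()]
        if new_centers == centers:
            break                                   # memberships stopped changing
        centers = new_centers
    return sorted(centers)
```

Note how much the final clusters depend on the initial centres: with this data the naive 3, 2, 1 start leaves two centres stuck near the small values.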
8. Adaboost
AdaBoost is one of the boosting methods.
Boosting combines several classifiers with individually poor accuracy into a single better classifier.
In the picture below, neither the left nor the right decision tree looks very good on its own, but if you put the same data into both and add the two results together, the combined prediction is more credible.
An AdaBoost example is handwriting recognition: the drawing board can capture many features, such as the direction of the starting stroke and the distance between the starting and ending points.
During training, each feature receives a weight. For example, the beginnings of “2” and “3” are very similar, so that feature contributes little to telling them apart, and its weight is small.
This alpha angle, however, is highly discriminative, so its feature weight will be large. The final prediction takes the results of all these features into account.
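A sketch of the weighted vote at AdaBoost’s core: the weight alpha = 0.5 * ln((1 - error) / error) is the standard AdaBoost classifier weight, while the toy classifiers themselves are invented.

```python
import math

def alpha(error):
    """AdaBoost classifier weight: lower error -> larger say in the vote."""
    return 0.5 * math.log((1 - error) / error)

def boosted_predict(weak_learners, x):
    """weak_learners: list of (classifier, error); classifiers return +1/-1.
    The final prediction is the sign of the alpha-weighted sum of votes."""
    score = sum(alpha(err) * clf(x) for clf, err in weak_learners)
    return 1 if score >= 0 else -1
```

Note that a classifier with 50% error (pure chance) gets weight zero, and a very accurate one can outvote several mediocre ones.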
9. Neural network
Neural networks are suited to inputs that may fall into at least two categories.
A NN consists of several layers of neurons and the connections between them.
The first layer is the input layer and the last is the output layer.
Both the hidden layers and the output layer have their own classifiers.
The input is fed into the network and activated; the computed scores are passed to the next layer, activating subsequent layers in turn, until the scores on the output layer’s nodes represent the score for each class. The example below shows a classification result of class 1.
The same input is sent to different nodes, and different results are obtained because each node has its own weights and biases.
This is forward propagation.
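The forward propagation just described can be sketched in a few lines; all weights and biases below are invented for illustration:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def layer(inputs, weights, biases):
    """One dense layer: each node computes activate(w . x + b)."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x):
    """Input layer -> hidden layer -> output layer; argmax gives the class."""
    hidden = layer(x, weights=[[1.0, -1.0], [0.5, 0.5]], biases=[0.0, -0.5])
    scores = layer(hidden, weights=[[2.0, -1.0], [-2.0, 2.0]], biases=[0.0, 0.0])
    return scores.index(max(scores))  # index of the highest-scoring class
```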
10. Markov Chain
A Markov chain consists of states and transitions.
For example, from the phrase ‘the quick brown fox jumps over the lazy dog’ we can build a Markov chain.
First, make each word a state; then calculate the probability of transitioning between states.
These are the probabilities from a single sentence. When you run the statistics over a large amount of text, you get a larger state transition matrix: which words can follow a given word, and with what probability.
The suggestion list in your keyboard’s input method works on the same principle, just with a more advanced model.
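Building the chain from the example sentence is a matter of counting word bigrams:

```python
from collections import Counter, defaultdict

def transition_matrix(text):
    """Each word is a state; transition probabilities come from bigram counts."""
    words = text.split()
    counts = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

chain = transition_matrix("the quick brown fox jumps over the lazy dog")
```

Since “the” occurs twice, it transitions to “quick” and to “lazy” with probability 0.5 each; every other word has a single successor with probability 1.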
References:
https://towardsdatascience.com/a-tour-of-the-top-10-algorithms-for-machine-learning-newbies-dde4edffae11
https://www.dezyre.com/article/top-10-machine-learning-algorithms/202
https://www.kdnuggets.com/2017/10/top-10-machine-learning-algorithms-beginners.html
http://bigdata-madesimple.com/10-machine-learning-algorithms-know-2018/
|
Chapter-8 Easy to understand the Top 10 commonly used Machine Learning Algorithm
| 3
|
easy-to-understand-the-top-10-commonly-used-algorithms-for-machine-learning-173429c60bc5
|
2018-07-25
|
2018-07-25 18:07:15
|
https://medium.com/s/story/easy-to-understand-the-top-10-commonly-used-algorithms-for-machine-learning-173429c60bc5
| false
| 1,351
|
The vision of the ML Research Lab is to provide best technical tutorial to ML aspirant and Researcher to gain the Knowledge of Machine Learning, Deep Learning, Natural Language Processing, Statistics and Computer Vision.
| null |
shriganesh.patel
| null |
ML Research Lab
|
ashishpatel.ce.2011@gmail.com
|
ml-research-lab
|
MACHINE LEARNING,ML RESEARCH LAB,DEEP LEARNING,DATA SCIENCE,STATISTICS
|
Ashish_Patel26
|
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Ashish Patel
|
Data Scientist | Kaggle Kernel Master | Deep learning Researcher
|
84433a04103b
|
ashishpatel.ce.2011
| 108
| 71
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-12-09
|
2017-12-09 04:05:11
|
2018-01-04
|
2018-01-04 18:17:53
| 3
| false
|
en
|
2018-01-11
|
2018-01-11 18:03:24
| 2
|
1734b97d122
| 6.100943
| 2
| 0
| 0
|
Growing up one of my favourite game moments was in Simon the Sorcerer 3D. Simon is stuck in a room and can’t get out. Before he entered the…
| 4
|
Doki Doki Literature Club and breaking the 4th wall
Growing up, one of my favourite game moments was in Simon the Sorcerer 3D. Simon is stuck in a room and can’t get out. Before he entered the room he stepped on a checkpoint. He kept mentioning that he wanted to die, and when that finally happens he re-spawns at the checkpoint and the game continues.
This breaking of the “4th wall” of games was something I rarely encountered in games and always appreciated. I grew up watching Mel Brooks movies, where that was a common practice, but it still felt rare to me. Research by Philip J. Auter and Donald M. Davis points out:
“[…] Audiences do, within limits, like to be so involved in the program. Clips that broke the fourth wall were rated significantly more entertaining on a semantic differential scale, and significantly more sophisticated than were clips that did not break the wall.”*
When Characters Speak Directly to Viewers: Breaking the Fourth Wall in Television.
Games, by nature, are a much more engaging medium and require the player’s input to move the plot forward. Supposedly the player doesn’t need this moment of “I know you are there, we are both in on this little joke” between the director and the audience. However, there is still a role the player plays, and even in games with first-person POV, the player is still not “themselves.” Breaking the rules of “if you die you restart, you don’t continue” and “the game doesn’t exist beyond its window” is the game’s way of breaking that wall for the player and letting them know that they are equal participants in this experience. “Doki Doki Literature Club” is all about letting the player know the game “sees them.”
At first glance, this seems to be a “girlfriend game”: you join a literature club and need to woo the ladies. I entered the game knowing that something bad was going to happen and I kept expecting it at every turn. After all, the game starts with a warning. But the game’s structure is very similar to the movie “Audition”: the first half is mundane and repetitive (mostly just tapping space repeatedly as you try to hit on girls), and in the second, all hell breaks loose and you are at the center of it.
Although it might not seem like it, the game handles self-harm and depression in a way that I didn’t feel uncomfortable with (relative to how uncomfortable the game does make you feel). After the wooing leads to a progression in the relationship with one of the characters (based on the player’s choice), Sayori, your cheery childhood friend, confesses to you that she is hiding depression and a crush on you. Her description of how she masked it by pretending to be air-headed and disorganised, and how she is supposed to be happy that you like her (that’s my chosen response) but it hurts instead, is very compelling and makes the character likeable. I, personally, can relate to that feeling of unhappiness even in seemingly happy moments.
When Sayori isn’t at school the next day, that is when the “warning” comes into play in creating tension. Is she alright? The player goes to her house and discovers her hanged body. The contrast between the sweet, innocent start and this makes the moment even more intense. Immediately, the game restarts without Sayori, and the question of “wait, aren’t we going to talk about this?” keeps the player engaged. There is an itch that hasn’t been scratched.
Soon the game becomes a version of “Serial Experiment Lain” and the player is held hostage in this version of the “wired”, the digital world that can bend and stretch the “physical” world.
Monika, the club president, takes over the game and forces the player to choose her by a series of “technical” tricks that break the fourth wall and makes the game acknowledge that it is a game: the mouse moves towards Monika’s name, her image blocks the screen and prevents the player from interacting with anything but her, Monika talks about the other characters and how she wants to dispose of them to be with you, etc.
The breaking of the fourth wall reaches its extreme when Monika “breaks” the game to the extent that the player is stuck in a room with her with no way out. The “save” and “load” options just become a tool for Monika to let you know she is staying, and she acknowledges that you are no longer playing a role by saying she doesn’t even know if the player is female or male.
The player’s computer, or physical environment, become part of the narrative: in order to advance the player needs to delete Monika’s game file from the computer.
The player’s freedom of choice is visibly taken away
Once again contrast plays a role in creating the emotional response. Until that point, the player lost what little agency they had. Choices were taken away and Monika steers the narrative in her favor. Deleting files from the actual computer gives the player a renewed sense of control. This is a space that, supposedly, the player is the only one that has access to and shows that the player is the one that can affect Monika, not the other way around.
The game starts anew but soon enough the fourth wall is broken once more when Monika reappears to protect you from the other characters who also want you for themselves. The game is being deleted as the credits play (and includes a “thank you” to the player) and the player hears Monika playing a song for you. It being the first audible dialogue creates a new sense of intimacy with Monika who, at the end, wanted to protect you.
While I was playing the game I was writing a narrative based game based on the principles of Active Listening. It is obvious that Doki Doki’s narrative doesn’t branch out (you can choose which character to woo and get different scenes based on it but it always reaches the same point), and like the game I was writing, also includes moments of “speaking for the player” and stating what the player feels and thinks in order to advance the plot. By stating the player’s goals in the game (wooing girls, for example) it makes it easier to have the game also state how I am feeling. My feelings also need to progress and become a result of my actions.
Having the game put you in moments of distress when it takes away all sense of agency actually creates a positive experience. It is no longer trying to fool you to thinking you can make a difference on the outcome and also gives you a new goal — to regain control, something we assume we have in interactive experiences.
Another game I’ve been working on explores the sense of autonomy by taking away the player’s control and making every one of the player’s actions lead nowhere. There I ran into a problem: when it became clear that the rule was “make a choice -> the opposite happens,” players started trolling the game. One thing I learned is that it is important to keep the results diverse even if the player still doesn’t regain control. For example, let the game play along with the player’s choices for a bit before taking control away again.
Doki Doki has a lot of diversity in the way it blocks and controls the player. First, it escalates from diegetic to non-diegetic: to stop you, the game first kills characters and restarts, but then it also moves your mouse and takes away “save” and “load.” It also lets the player “regain” control and goes along with it for a bit: the player can delete Monika’s file, which makes it seem like the player has control again, but she comes back when the game restarts, and in the end it is Monika who affects the outcome of the game.
In conclusion, playing Doki Doki is an intense emotional experience. Playing with the fourth wall helps the game hide how little control the player actually has by shining a spotlight on it. With movies, I grew tired of how often the fourth wall was broken, and I think my appreciation for it in games comes from how rare it is, but also from how it adds a layer of surprise. We usually try to solve puzzles based on the game world’s logic, and adding our own physical world to the mix adds a new layer of complexity.
*J. Auter, Philip & M. Davis, Donald. (1991). When Characters Speak Directly to Viewers: Breaking the Fourth Wall in Television. Journalism & Mass Communication Quarterly — JOURNALISM MASS COMMUN. 68. 165–171. 10.1177/107769909106800117.
|
Doki Doki Literture Club and breaking the 4th wall
| 6
|
doki-doki-literture-club-1734b97d122
|
2018-01-24
|
2018-01-24 19:10:39
|
https://medium.com/s/story/doki-doki-literture-club-1734b97d122
| false
| 1,471
| null | null | null | null | null | null | null | null | null |
Games
|
games
|
Games
| 23,849
|
Rony Kahana
| null |
f1953e1933b8
|
ronykahana
| 61
| 72
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-07-20
|
2018-07-20 21:11:19
|
2018-07-21
|
2018-07-21 04:42:00
| 1
| false
|
zh-Hant
|
2018-07-21
|
2018-07-21 04:42:00
| 6
|
173659e9f0ea
| 0.426415
| 0
| 0
| 0
|
Today’s topic: the Machine Learning Security series
| 2
|
Day 61 — Machine Learning Security
Today’s topic: the Machine Learning Security series
References
老和山修仙记
An introduction to adversarial examples: attacking deep learning models with adversarial examples (Part 1)
Adversarial attacks and defenses: attacking deep learning models with adversarial examples (Part 2)
Awesome-AI-Security
Biggio, Battista, Blaine Nelson, and Pavel Laskov. “Poisoning attacks against support vector machines.” arXiv preprint arXiv:1206.6389 (2012).
Notes
Having wrapped up the wildly popular series on generative adversarial networks, the next topic I want to study is a relatively niche one. As a (former) security person, I often had a nagging feeling while learning machine learning and neural networks that something was off. Too many assumptions. The most important one is the assumption that nobody will deliberately try to sabotage the system… which is exactly how almost every technology in today’s world has been developed. And just look at how serious today’s security problems are as a result.
Machine learning security covers a lot of ground, from the authenticity of training data all the way to deeper attack techniques.
Interestingly, from the earliest attack I could find (2012) until now, while machine learning and deep learning have exploded to the point of “AI for everyone,” very few people talk about AI/ML security.
If you search Google for “machine learning security,” most of what you get is how to use machine learning for network or information security, such as this list. Resources discussing the security of machine learning and neural networks themselves are far scarcer.
Today’s notes give a rough overview of the topic and its core concepts; a series of posts starting tomorrow will work through the items one by one.
Much like the classic information security model, machine learning security is discussed along three axes: confidentiality, integrity, and availability.
As I currently understand them, the three are roughly as follows.
Confidentiality:
Guaranteeing that training data (especially when it consists of users’ personal information) is not leaked to attackers. In practice, an attacker can steal training data outright, use model-extraction techniques to obtain the data used to train a model, or reverse-engineer a model’s weights to reconstruct the original training data.
Integrity:
Guaranteeing that every data point used for training is genuine. In fact, attackers can forge training data, and the forgeries can be subtle enough that ordinary manual inspection cannot detect them. Such forged data can mislead and manipulate the progress of model training. Common techniques are adversarial examples and poisoning attacks.
Availability:
Guaranteeing that the results of training remain usable. [1] gives an example of an availability attack:
“For example, in the autonomous driving domain, if an attacker places a very hard-to-recognize object at the roadside where vehicles pass, it could force a self-driving car into a safe mode and make it stop at the side of the road.”
The earliest paper I could find describing a poisoning attack is [5]; its algorithm is as follows:
[5]
[4] collects a large amount of material on AI security; future posts will likely be organized around the papers and implementation code it provides.
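As a toy illustration of the poisoning idea (this is not the algorithm from [5]; the data and the nearest-centroid classifier below are invented), a few mislabelled points injected into the training set can drag a class centroid far enough to flip a prediction:

```python
# Data poisoning toy: the attacker injects fake points labelled class A,
# pulling A's centroid away and flipping the prediction for a clean test point.

def centroid(points):
    return sum(points) / len(points)

def predict(train_a, train_b, x):
    """Nearest-centroid classifier over 1-D features."""
    return "A" if abs(x - centroid(train_a)) < abs(x - centroid(train_b)) else "B"

clean_a = [0.0, 1.0, 2.0]     # class A clusters near 1
clean_b = [8.0, 9.0, 10.0]    # class B clusters near 9
poison = [30.0, 30.0, 30.0]   # attacker-supplied points, mislabelled as A
```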
|
Day 61 — Machine Learning Security
| 0
|
day-61-machine-learning-security-173659e9f0ea
|
2018-07-21
|
2018-07-21 04:42:00
|
https://medium.com/s/story/day-61-machine-learning-security-173659e9f0ea
| false
| 60
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Falconives
| null |
250d8013fad2
|
falconives
| 11
| 19
| 20,181,104
| null | null | null | null | null | null |
0
|
import caffe
import numpy as np

net = caffe.Classifier(
    reference_model,
    reference_pretrained,
    mean=imagenet_mean,
    channel_swap=(2, 1, 0),
    raw_scale=255,
    image_dims=(256, 256))
image_file = inevents["image"]
input_image = caffe.io.load_image(image_file)
output = net.predict([input_image])
predictions = output[0]
predicted_class_index = predictions.argmax()
# indices of the three highest-scoring classes (unsorted)
ind = np.argpartition(predictions, -3)[-3:]
pretty_text = "<h3>GoogleNet:</h3>"
for i in range(0, len(ind[np.argsort(predictions[ind])])):
    pretty_text += "#%d. %s (%2.1f%%) <br>" % (
        i + 1,
        name_map[ind[np.argsort(predictions[ind])][2 - i]],
        predictions[ind[np.argsort(predictions[ind])][2 - i]] * 100)
return pretty_text
| 6
| null |
2017-09-12
|
2017-09-12 17:38:23
|
2017-09-12
|
2017-09-12 17:45:16
| 7
| false
|
en
|
2017-09-12
|
2017-09-12 17:45:16
| 17
|
173701bcb74a
| 4.54434
| 2
| 0
| 0
|
Computer vision is an exciting and quickly growing set of data science technologies. It has a broad range of applications from industrial…
| 3
|
How I Made a Neural Network Web Application in an Hour
Computer vision is an exciting and quickly growing set of data science technologies. It has a broad range of applications from industrial quality control to disease diagnosis. I have dabbled with a few different technologies that fall under this umbrella before, and I decided that it would be a worthwhile endeavor to rapid prototype an image recognition web application that used a neural network.
I used a deep learning framework called Caffe, created by the Berkeley Vision and Learning Center. There are several other comparable deep learning frameworks like Chainer, Theano, and Torch7 that were candidates, but I chose Caffe due to my previous experience with it. Caffe has a set of python bindings, which is what I made use of for this project. If you’re interested in more theory behind deep learning and neural networks, I recommend this page by Michael Nielsen.
To begin, I installed all the Caffe dependencies onto an AWS t2.medium instance running Ubuntu 14.04 LTS. (Installation instructions for 14.04 LTS can be found here.) I elected to build Caffe in CPU-only mode, without CUDA, because I'm not training my own neural network for this project. I obtained two pre-trained models from the BVLC Model Zoo, called GoogleNet and AlexNet. Both of these models were trained on ImageNet, which is a standard set of about 14 million images.
Now that I had all the prerequisites installed, I opened up the Exaptive Studio and started a fresh Xap (what we like to call web applications built in Exaptive). I started by creating a new python component for writing the Caffe code necessary to identify an image. I named the new component “GoogleNet” after the neural net model I want to use first.
Then I wrote the Caffe code in python.
First, we instantiate a caffe image classifier.
The reference_model is a filepath to a set of config options for the network. Caffe provides a stock model for this. The reference_pretrained is another filepath that points to the pretrained GoogleNet model from the model zoo.
We grab the input image filepath and use Caffe methods to load it.
Then we simply call predict on our image classifier with the input image as an argument.
Then we get the top three predictions for our image.
Then make some nice html to return for the text component.
Note that we’re grabbing the id and using a name_map, which corresponds to the image class’s imagenet ids. Then the pretty_text will be returned for the user.
Now that the python was written, I needed to wire up a user interface. To get the image into the xap, I chose a file drop target, which is one of Exaptive’s commonly used JavaScript components to handle file input.
The drop target will be used to hand an image to the neural net component.
The file drop target, ready to accept our images.
All that was left at this point was to create a text display for the HTML that will be generated inside the neural net component. For that, I chose a JavaScript component named “Text”, which will render the HTML.
Three components later, we’re ready to identify some images.
At this point, the code was done. I added some HTML and inline styles, then I saved this Xap and opened it in another tab. Here is the page when we load it.
Then we drag in a picture. I used a picture of a bunny and a kitten. The app processes for a few seconds and then I see:
(bunny and kitten image source: http://fineartamerica.com/featured/15-rabbit-and-kitten-jane-burton.html)
It works! You can see the neural net's predictions (and their imagenet id numbers) along with the % certainty that the neural net gives us. So from here, we've laid the groundwork for plenty of other applications. Now we can use any pre-trained neural net model. For example, if a model existed for a life-sciences application, all we'd need to do is upload that model, and the component we just wrote could point to it instead of the GoogleNet model and give us results from this web app.
To illustrate this, I added a second component that uses the AlexNet model such that I will get results for the same image from two separate neural net models that were trained on the same set of images.
The AlexNet component only differs from the GoogleNet by the model filepath we use in the code.
Running the same image through both neural nets, we now get:
All told, this process of writing the code and wiring up components took me just under an hour. As I wrote before, we can substitute any Caffe neural network model and use it through this basic Xap. From here, I think it’d be interesting to create a neural network training interface as a Xap. It would be helpful to have a nice front-end for training neural networks, from specifying the number of hidden layers, to the composition and configuration of those hidden layers and visualizing the test scores of the new models. Perhaps a followup blog post will be in order once that’s done.
|
How I Made a Neural Network Web Application in an Hour
| 2
|
how-i-made-a-neural-network-web-application-in-an-hour-173701bcb74a
|
2018-05-09
|
2018-05-09 20:40:54
|
https://medium.com/s/story/how-i-made-a-neural-network-web-application-in-an-hour-173701bcb74a
| false
| 926
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Mark Wissler
|
Data Scientist and Developer @Exaptive. Focusing on making data more human and laterally applying solutions to new domains.
|
dff1a7cf267
|
mark.wissler
| 2
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-05-15
|
2018-05-15 18:07:49
|
2018-05-16
|
2018-05-16 06:13:23
| 2
| false
|
en
|
2018-05-16
|
2018-05-16 06:13:23
| 0
|
17372bc7dd5e
| 1.130503
| 1
| 0
| 0
|
Definitions
| 1
|
Parameter estimation
Definitions
Population (distribution) parameters
a quantity or statistical measure that, for a given population, is fixed and that is used as the value of a variable in some general distribution (e.g. mean and variance)
Sample statistics (sample moments, sample mean, sample mode, sample median, sample variance)
is an estimate of the value of a descriptive characteristic of a sampling distribution, for samples of a particular population of objects.
Parameter estimators and estimates
An estimator of a parameter θ is a statistic Θ̂ = T(X) which we use to guess θ from observations of X.
Once the actual data x is observed, θ̂ = T(x) is the estimate of θ obtained via the estimator T.
Moment method estimators
Maximum likelihood estimators
Likelihood function
And the MLE is the θ̂ that maximises the likelihood function.
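As a quick numerical illustration (toy data, NumPy only): for i.i.d. Gaussian data the MLE has a closed form, namely the sample mean together with the sample variance with the 1/n divisor.

```python
import numpy as np

# Draw a toy Gaussian sample with known true parameters.
rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=10_000)

mu_hat = data.mean()                      # MLE of the mean
var_hat = ((data - mu_hat) ** 2).mean()   # MLE of the variance (divides by n)

print(mu_hat, var_hat)  # close to the true parameters 5.0 and 4.0
```

Note that this variance MLE is the biased estimator; dividing by n − 1 instead gives the unbiased version discussed below.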
Unbiased estimators
An estimator Θ̂ of θ is said to be unbiased if E(Θ̂) = θ.
Consistent estimators
Θ̂ → θ as n → ∞
Bayes estimators
Then a Bayesian estimate θ̂(x) of θ, for observation x and loss function L(θ, θ̂), is the value θ̂ that minimizes the mean posterior loss.
Prior and posterior distribution of parameter
Probability before and after the data is observed.
Confidence intervals
Confidence level
|
Parameter estimation
| 1
|
parameter-estimation-17372bc7dd5e
|
2018-05-16
|
2018-05-16 06:13:37
|
https://medium.com/s/story/parameter-estimation-17372bc7dd5e
| false
| 198
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Mariya Hirna
| null |
80a997ac29bc
|
m.hirna
| 107
| 88
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-02-05
|
2018-02-05 16:21:48
|
2018-04-05
|
2018-04-05 16:21:01
| 10
| false
|
en
|
2018-04-05
|
2018-04-05 16:21:01
| 11
|
17387ee64fdf
| 5.348113
| 0
| 0
| 0
|
Freelance - (artificial) intelligently!
| 3
|
10 AI Tools for Freelancers
Freelance - (artificial) intelligently!
We here at Cloud Accounting NI are well aware of the #AIFirst world we are moving towards. During our time researching state-of-the-art machine and deep learning algorithms, we discovered many great applications that small businesses and freelancers can use to improve their work and uplift productivity.
Cloud Accounting has its own AI — just try asking our new chatbot 🤖, Claire, a question for some basic free advice.
Here are 10 other tools we have used — now you can use them too!
1. Let A.I. organize your time — Futurenda
So, as a freelancer you work on several projects at the same time, right? You've got to be organised to meet deadlines! That's where A.I. comes in!
Futurenda is an A.I. agenda that plans things out (especially) for freelancers by dividing to-do tasks into sessions and filling them into the agenda... fully automatically! The Futurenda founders highlight that they wanted to create an app that manages a dynamic timetable completely by itself, so the user can fully focus on their job. This app is smart enough to keep an eye on the deadlines and adjust the agenda accordingly. Is there any reason not to ask Google Assistant to start downloading Futurenda right now?
2. Smarter digital marketing — Crystal.io
It is quite common to be a freelance digital marketer, but on the other hand, it isn’t that easy to manage all the social media campaigns, monitor analytics and make reports just by yourself. However, by utilizing A.I., Crystal.io makes digital marketing data smarter and simpler.
Unleash your creativity (that A.I. certainly can’t replace) and worry less about CAC, analytic charts and graphs or daily social media scheduling.
3. Make converting video stories — Flo
Everybody knows how many benefits you can get if your post goes viral. However, in order to ensure that your post will be shareable it must be creative and engaging…well that’s the hard part and company owners are seeking help. That’s why freelance platforms are full of community managers. But…not all of them have the necessary skills.
Now, this is the part when you, as a freelancer, can take advantage of artificial intelligence! To be more specific, you should try out Flo, a video editing app that auto-edits your videos into short shareable stories using Deep Learning, Computer Vision and (of course) A.I.
Video editing is certainly not that simple; it is very time-consuming and a skillful job. That's why you can reap the benefits that A.I. brings to Flo: just tell the app what you want to see in your video and it will make it happen. Flo can be your very own video-making assistant that will do a great job in accordance with your instructions, and it will help you get those 5-star reviews on your profile!
4. Visually analyze user behavior — Inapptics
Here’s one for mobile app developers to help them get as many downloads as possible.
Inapptics aggregates all user interaction events and turns them into heatmaps, so you can see all the actions a user performs, how they navigate the app, where they tap and much more. You just have to present the results in the best possible way and suggest app improvements accordingly.
5. Make an intelligent logo — Logopony
Logopony is a great tool, based on artificial intelligence and trained on a set of professional logo designers that will help you to design logos like a pro!
Start with the creation of professional and high quality logos, but keep an eye on the Logopony updates since soon it will be able to provide you with a variety of designs (such as app icons, business cards, flyers, social media images etc.)
6. More productive meetings — Wrappup
Freelancers have a lot of online meetings, especially if they work with more than one client or if they are a part of an online team. Ok, A.I. still can’t replace you on meetings, however the Wrappup app can summarize all the details that were mentioned, so you can always be up to date.
It will basically take notes on all important issues mentioned in the meeting in a specific Slack channel. That way, even if you miss a meeting, you can always be in the loop with summaries!
7. Spreadsheets that will blow your mind — Magic Spreadsheet
As accountants, we really like this one! Magic Spreadsheet is an A.I. based app that will make filling Sheets both fun and (even more) useful.
Like the name says, this app will really do magic: it will turn your Sheet into a dashboard where you can send tasks to your team members, or get you any piece of information within seconds. Who needs Enterprise BI systems?
8. A.I. images finder — Everypixel
Whether you are a freelance copywriter, community manager or digital marketer we are sure that high quality images are something that always come in handy. Everypixel is a perfect smart image search tool that will help you find a suitable image to go with your work.
Artificial intelligence algorithm made the app smart enough to help you to distinguish beautiful photos from the ones that are not that appealing.
9.Spin content like a boss — Spinnerchief
Well, here’s some good news for all of you freelance copywriters out there! — let A.I. do the content spinning for you. Yup, we know how common it is that you spin an article several times for various purposes and how time consuming it can be. That’s why, you should give SpinnerChief a chance!
In just one click it will rewrite articles to a very high level of human readability and originality. Not only will you get fresh content within minutes, it will be written in a way Google understands. Use this fact smartly and boost your SEO!
10. Organize your network and keep in touch — Mila
Have you ever thought about how many potential collaborations you waste just because you don't have enough time to sort out all of your contacts? Surely, way too many.
Worry no more, Mila will do it for you! This very intelligent bot will divide all of your contacts into sections and lists. That way you will have all your potential sales and marketing partners already prepared! Really, you will love Mila!
Like our content? Then heart us!
Try out the AI tools we recommend and take your freelancing career to the next level!
|
10 AI Tools for Freelancers
| 0
|
10-ai-tools-for-freelancers-17387ee64fdf
|
2018-04-07
|
2018-04-07 22:53:04
|
https://medium.com/s/story/10-ai-tools-for-freelancers-17387ee64fdf
| false
| 1,086
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Cloud Accounting (NI)
|
Chartered Accountants & Cloud Specialists
|
4fc3e23389ff
|
cloudaccounting
| 5
| 3
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-22
|
2018-09-22 12:28:11
|
2018-09-22
|
2018-09-22 14:03:39
| 3
| false
|
en
|
2018-09-22
|
2018-09-22 14:03:39
| 2
|
1738dfab0919
| 2.980189
| 0
| 0
| 0
|
Recap From Day 073
| 5
|
100 Days Of ML Code — Day 074
100 Days Of ML Code — Day 074
Recap From Day 073
Day 073, we looked at the second part of designing custom algorithms for music. You can catch up using the link below.
100 Days Of ML Code — Day 073
Recap From Day 072medium.com
Today, we’ll continue from where we left off in day 073
Working with time
Designing custom algorithms for music
We saw in day 073 that particle filtering has two critical features. First, particle filtering can track nonlinear dynamics. To understand the nonlinear dynamics of an object, imagine a car moving down a street, observed through a video camera. Imagine that this car is accelerating and stopping more or less abruptly at different moments, without apparent logic. Tracking this car can be difficult because its dynamics are complex. If the car were going straight at a constant speed, it would be much easier. A tracking method such as particle filtering is flexible enough to adapt to complex dynamics like the car's and still track it.
Another interesting feature of particle filtering is that it allows for non-Gaussian tracking. To understand non-Gaussianity, we first have to understand the Gaussian hypothesis. Imagine that we have to track a flying bird. The bird is going straight, at a constant speed, towards a wall that has only one circular hole, twice the size of the bird. Using probabilities to track the position of the bird as it passes the wall is fairly easy. The probability is maximum at the center of the hole, quickly decreases for positions near the edge of the hole, and is zero everywhere on the wall. In other words, the bird is not silly: it is most likely to pass where the danger is minimal, at the center of the hole, far from the edges.
In this bird example, the tracking is Gaussian because the probability distribution over the positions of the bird is bell-shaped, with its maximum (the mean of the bell) at the center of the hole, decreasing as the position moves away from the mean.
In a non-Gaussian scenario, the probability distribution over the position of the bird is no longer bell-shaped. To illustrate such a situation, imagine the same bird flying straight at the same wall, which now has two holes instead of one. In that case, the probability distribution may have the shape of two bell curves rather than one.
Such a probability distribution is no longer Gaussian. Here it is actually the sum of two Gaussian distributions, but it is more complex than a single Gaussian. As a result, it's slightly harder to track the bird passing the wall, because we don't know which of the two holes the bird will choose, and we have to keep both cases as possibilities.
GVF draws upon the features of particle filtering in order to track expressive variations in gesture execution. The technique is well suited to this problem because expressive gesture variations, like the ones performed by musicians, can have very complex, highly nonlinear dynamics, and can also involve ambiguity, i.e. probability distributions more complex than a Gaussian. In both cases, the need for models offering new kinds of musical control drove changes and adaptations of conventional techniques. For the gesture follower, the need was to allow real-time classification; for the gesture variation follower, it was to follow complex characteristics of the gesture.
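A minimal bootstrap particle filter conveys the two properties discussed above: the motion model can be nonlinear, and the particle cloud can represent non-Gaussian beliefs. This is an illustrative sketch with made-up motion and observation models, not the actual GVF implementation:

```python
import numpy as np

# Minimal bootstrap particle filter for a 1D position.
rng = np.random.default_rng(1)
n_particles = 2000
particles = rng.normal(0.0, 1.0, n_particles)   # initial belief
weights = np.full(n_particles, 1.0 / n_particles)

def step(particles, weights, observation, obs_std=0.5):
    # 1. Predict: propagate each particle through a (here nonlinear) motion model.
    particles = particles + 0.1 * np.sin(particles) + rng.normal(0, 0.1, particles.size)
    # 2. Update: reweight each particle by the likelihood of the observation.
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # 3. Resample: draw particles in proportion to their weights.
    idx = rng.choice(particles.size, particles.size, p=weights)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

true_pos = 0.0
for _ in range(20):
    true_pos += 0.1 * np.sin(true_pos) + 0.05      # simulated nonlinear motion
    obs = true_pos + rng.normal(0, 0.5)            # noisy observation
    particles, weights = step(particles, weights, obs)

print(particles.mean(), true_pos)  # the particle mean tracks the true position
```

Because the belief is carried by the whole particle cloud rather than a single mean and variance, a bimodal situation (the two-hole wall) is represented simply by particles clustering in two places.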
That’s all for day 074. I hope you found this informative. Thank you for taking time out of your schedule and allowing me to be your guide on this journey. And until next time, be legendary.
References
https://www.kadenze.com/courses/machine-learning-for-musicians-and-artists-v/sessions/working-with-time
|
100 Days Of ML Code — Day 074
| 0
|
100-days-of-ml-code-day-074-1738dfab0919
|
2018-09-24
|
2018-09-24 14:02:31
|
https://medium.com/s/story/100-days-of-ml-code-day-074-1738dfab0919
| false
| 644
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Jehoshaphat Abu
|
A polymath, an advocate of STEAM education. I write about Music | Computing | Design and maybe life and the world in general
|
62d9f8742a1e
|
jehoshaphatia
| 189
| 319
| 20,181,104
| null | null | null | null | null | null |
0
|
from keras.models import Sequential
from keras.layers import Dense, Activation
model = Sequential()
model.add(Dense(512, input_dim=10000))
model.add(Activation('relu'))
model.add(Dense(8))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 512) 5120512
_________________________________________________________________
activation_1 (Activation) (None, 512) 0
_________________________________________________________________
dense_2 (Dense) (None, 8) 4104
_________________________________________________________________
activation_2 (Activation) (None, 8) 0
=================================================================
Total params: 5,124,616
Trainable params: 5,124,616
Non-trainable params: 0
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=2,
verbose=1,
validation_split=0.1)
import pandas
from keras.preprocessing.text import Tokenizer

dataframe = pandas.read_csv('articles.csv', header=None, escapechar='\\', na_filter=False)
dataset = dataframe.values
texts = dataset[:9000, 0]           # first 9000 rows: training set
categories = dataset[:9000, 1]
test_texts = dataset[9000:, 0]      # remaining rows: validation set
test_categories = dataset[9000:, 1]
tk = Tokenizer(num_words=10000)
tk.fit_on_texts(texts)
x_train = tk.texts_to_matrix(texts, mode='tfidf')
x_test = tk.texts_to_matrix(test_texts, mode='tfidf')
[0,1,0,0,0,0,0,0]
import keras
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
encoder.fit(categories)
num_categories = encoder.transform(categories)
y_train = keras.utils.to_categorical(num_categories)
| 16
| null |
2018-05-24
|
2018-05-24 12:27:36
|
2018-05-30
|
2018-05-30 09:31:53
| 0
| false
|
en
|
2018-05-30
|
2018-05-30 09:45:27
| 3
|
1739b36fb6f6
| 7.407547
| 2
| 1
| 0
|
NN are a very flexible tool to allow the processing of natural languages, images. They are very versatile and allows to do things that…
| 2
|
A simple natural language category classifier with Keras
NNs are a very flexible tool for processing natural language and images. They are very versatile and allow you to do things that imperative code can't. I wanted to learn a bit about them, and I chose Keras as it seems the best library for learning about NNs: Keras stays very close to the high-level layer abstractions that NN papers talk about.
So for testing, I looked for what could be a simple enough project. At work, I had a lot of news sorted by categories. It seemed nice to create a NN that would be able to automatically classify individual news into each category.
It would provide a little bit of value by possibly proposing automatically appropriate category for a feed or by proposing alternative classification for individual news for better navigation.
I had read a few NN tutorials, but going from explanations to a working example on my own data was not trivial. As often when you learn a new topic, there is a lot of vocabulary, and articles are a bit painful to read until you master enough of it. One of the things that was also difficult was preparing the data for the NN.
In this article I'm going to move quickly to implementation. It is a very beginner-oriented article, written mostly to consolidate my own understanding. I'll start with the NN itself, then we'll prepare the data for training it.
Description of the NN
The NN we are going to use is defined with Keras that way :
I didn’t included inputs and output so that we have a look at how the NN is built first. So what do we have here ?
Sequential model
Sequential is the simplest way to define a NN with Keras. It means we are going to stack each layer over the previous one.
First Dense
Dense is a classical NN layer. Because it is the first layer, it's also considered the input layer. The input_dim parameter is the dimension of the data the NN expects as input; that's also called the dimension of the data.
That layer has 512 neurons. Those 512 neurons will output 512 values which are going to be the inputs of the next layer. We don’t need to specify the size of the input of next layer because Keras does it automatically.
512 is also the dimension that the NN is going to use to represent your data internally. Larger values learn more slowly. They are also more subject to overfitting, which means that instead of generalizing the problem, the network ultra-specialises on the cases provided, reducing its ability to predict good outputs for unknown inputs. Smaller values may not be enough to generalize the problem.
Eventually, everybody tries different values to find something that perform well.
Role of the first Activation
Activation is what is called the activation function. NNs use activation functions because stacking linear operations on top of linear operations is itself a linear operation, which would make the whole network equivalent to a single layer. Activation takes the type of function to use as activation; for internal layers, relu is often a good pick.
The ConvNetJS demo is a great way to see how the activation function changes the shape of the learned data. Try changing tanh to relu to experiment.
ConvNetJS demo: Classify toy 2D data
Feel free to change this, the text area above gets eval()'d when you hit the button and the network gets reloaded…cs.stanford.edu
The second Dense layer
This is the last layer of the NN, so it's the output. As I have 8 categories, the NN needs to output 8 values, so that layer has 8 neurons. It'll output an array such as [2,6,0,1,0,0,0.5,0].
Softmax Activation
The previous Dense layer outputs real numbers. Looking at a single output, it's not easy to tell what, say, a 6 means; its interpretation depends on the other values.
Softmax squashes each output between 0 and 1 and normalises the whole vector so that the class probabilities sum to 1. For a single output, softmax tells you the probability that that class is true.
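A minimal NumPy sketch of softmax, applied to the example output vector from above, makes this concrete:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract the max for numerical stability
    return e / e.sum()

probs = softmax(np.array([2, 6, 0, 1, 0, 0, 0.5, 0]))
print(probs.round(3))   # the probabilities sum to 1 and index 1 dominates
```

The raw 6 now reads as a probability close to 1 for category 1, exactly what the loss function in the next section expects.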
Building the NN
We are done defining our NN. categorical_crossentropy is the loss function we use when sorting in category. It expects output to be a vector of probability of category such as softmax output does.
I didn’t research much into the other parameters. Some optimisers may work best on different types of data but for my use case, most did worked well.
Summarizing
I found model.summary() to be very helpful at the beginning. I didn't see a lot of tutorials use it, probably because it doesn't provide a lot of information.
Still, it does help understanding to see how parameters affect the dimensions of the NN's layers. It also helps with debugging, because when you have an error, Keras will use the name of the layer to tell you where the issue is. I always have difficulty, when I see a 1 or 2 index, knowing whether the count starts at 0 or 1. It had me scratching my head until I found summary().
Training the NN
Training is the part where all the coefficients are chosen to have the NN predict the closest output given an input. Keras does it in one line but it’s actually where most of the NN magic happens.
The training itself is done by an algorithm named backpropagation. A very clear explanation of how backpropagation works can be found on Matt Mazur's blog.
A Step by Step Backpropagation Example
Background Backpropagation is a common method for training a neural network. There is no shortage of papers online that…mattmazur.com
x_train and y_train are the inputs and outputs of the NN used for training. Both are multidimensional arrays. The first row of x_train is the first example input; the first row of y_train is the corresponding expected output. Each row of x_train is fed to the NN and the neurons' coefficients are corrected toward the corresponding y_train value. Doing that once for all the data is called an epoch.
epochs=2 means doing that process twice. If you have a lot of data you may use only 1, but most of the time a larger number of epochs makes better use of the data.
batch_size is how many data points are used for one forward pass and one backward pass (from the Keras description). I find that a little unclear; here is my guess, which might be wrong. Batches exist to solve a performance problem: updating the weights of each neuron after each data point is slow. It may not be very slow on a CPU, but on a GPU it would mean transferring weights to the model after each forward pass. On many-core architectures, it also allows passes to be parallelized.
My guess is that Keras does many forward passes and computes the new weights, but only updates them every batch_size passes. The nicest explanation I found was on machinelearningmastery, which actually calls the algorithm mini-batch gradient descent.
A Gentle Introduction to Mini-Batch Gradient Descent and How to Configure Batch Size
Stochastic gradient descent is the dominant method used to train deep learning models. There are three main variants of…machinelearningmastery.com
Reading CSV
CSV is often used as an input format for NNs. My data format is basically the title and text of the article in the first field and the category in the second. It's possible to read it this way:
For evaluation of the training, ML practitioners often split off part of the data and use it to verify how good the trained NN is at predicting.
With Python it's really easy to do: dataset[:9000] returns the first 9000 rows and dataset[9000:] returns all rows after the 9000th. The first are the training data, the last are the validation data.
The input : representing natural language numerically
This was my most head-scratching issue at the beginning. NNs expect numbers as inputs, but it wasn't evident to me how to convert a list of words to a list of numbers.
We could pick any mapping, like a dictionary assigning an index to each word, but that wouldn't be a great idea (or at least not in this case). Not all transformations are equal.
What we actually want is a transformation that preserves some of the meaning of the information. We also want a transformation with a fixed input dimension: sentences as they are can't be used as input because their lengths vary. Preserving the order of the words in a sentence is also only useful if we are going to exploit that order; the NN we defined doesn't know how to use it, so preserving that information is useless.
The representation I chose is to present each sentence as a matrix where each column represents one word. In the simplest version of this, we could flag 1 if the word is present and 0 if it isn't. Here we use a tf-idf representation, which is a statistically more representative version of the data; I chose it because it gave better results.
What is important to understand is that the way we present data to the NN shapes how the NN learns. There are different ways of representing data; I'll try to write an article about a few of them. Each representation of the data also requires an appropriate NN structure.
The output : representing categories numerically
As we have exactly one label for each text, we can represent it as a vector as well, usually called a one-hot vector: a vector where each category is represented by a column. Here is a one-hot vector for category 2:
It is very close to what the Keras tokenizer and texts_to_matrix produce, but the Keras tokenizer reserves index 0, which adds a useless extra column. Most people prefer to use LabelEncoder from sklearn.
Results
That NN, as simple as it is, performed quite well. It is able to classify article categories with 75% accuracy, which is not too bad given that there are 8 categories. It could probably be improved by giving it more articles for each category.
One thing I hadn't thought about is that the differences between my classification and the NN's output would be even more interesting than a perfect 100% match; 100% wouldn't actually teach me anything. Correct matches often scored very close to 100% in the correct category. Incorrect matches often had more middling scores, such as a news item that is part current affairs, part health, for example when it was talking about a new drug. Via the softmax, the network actually provides the probability of a news item belonging to each category, and that could be helpful for proposing alternative categories.
ConvNets for Detecting Abnormalities in DDSM Mammograms
Introduction
Breast cancer is the second most common cancer in women worldwide. About 1 in 8 U.S. women (about 12.4%) will develop invasive breast cancer over the course of her lifetime. The five year survival rates for stage 0 or stage 1 breast cancers are close to 100%, but the rates go down dramatically for later stages: 93% for stage II, 72% for stage III and 22% for stage IV. Human recall for identifying lesions is estimated to be between 0.75 and 0.92 [1], which means that as many as 25% of abnormalities may initially go undetected.
The DDSM is a well-known dataset of normal and abnormal scans, and one of the few publicly available datasets of mammography imaging. Unfortunately, the size of the dataset is relatively small. To increase the amount of training data we extract the Regions of Interest (ROI) from each image, perform data augmentation and then train ConvNets on the augmented data. The ConvNets were trained to predict whether a scan was normal or abnormal.
Related Work
There exists a great deal of research into applying deep learning to medical diagnosis, but the lack of available training data is a limiting factor. [1, 4] use ConvNets to classify pre-detected breast masses by pathology and type, but do not attempt to detect masses from scans. [2,3] detect abnormalities using combinations of region-based CNNs and random forests.
Datasets
The MIAS dataset is a very small set of mammography images, consisting of 330 scans of all classes. The scans are standardized to a size of 1024x1024 pixels. The size of the dataset made this unusable for training, but it was used for exploratory data analysis and as a supplementary test data set.
The DDSM [6] is a database of 2,620 scanned film mammography studies. It contains normal, benign, and malignant cases with verified pathology information. The DDSM is saved as Lossless JPEGs, an archaic format which has not been maintained for several decades.
The CBIS-DDSM [8] collection includes a subset of the DDSM data selected and curated by a trained mammographer. The CBIS-DDSM images have been pre-processed and saved as DiCom images, and thus are better quality than the DDSM images, but this dataset only contains scans with abnormalities. In order to create a dataset which can be used to predict the presence of abnormalities, the ROIs were extracted from the CBIS-DDSM dataset and combined with normal images taken from the DDSM dataset.
Figure 1. Sample CBIS-DDSM Image
Preprocessing
In order to create a training dataset of adequate size which included both normal and abnormal scans, images from the CBIS-DDSM dataset were combined with images from the DDSM dataset. While the CBIS-DDSM dataset included cropped and zoomed images of the Regions of Interest (ROIs), in order to have greater control over the data, we extracted the ROIs ourselves using the masks provided with the dataset.
For the CBIS-DDSM images the masks were used to isolate and extract the ROI from each image. For the DDSM images we divided the images into slightly overlapping tiles, excluding tiles which contained unusable data.
Both offline and online data augmentation was used to increase the size of the datasets.
Training Datasets
Multiple datasets were created using different ROI extraction techniques and amounts of data augmentation. The datasets ranged in size from 27,000 training images to 62,000 training images.
Datasets 1 through 5 did not properly separate the training and test data and thus are not referenced in this work.
Dataset 6 consisted of 62,764 images. This dataset was created to be as large as possible, and each ROI is extracted multiple times in multiple ways using both ROI extraction methods described below. Each ROI was extracted with fixed context, with padding, at its original size, and if the ROI was larger than our target image it was also extracted as overlapping tiles.
Dataset 8 consisted of 40,559 images. This dataset used the extraction method 1 described below to provide greater context for each ROI. This dataset was created for the purpose of classifying the ROIs by their type and pathology.
Dataset 9 consisted of 43,739 images. The previous datasets had used zoomed images of the ROIs, which was problematic as it required the ROI to be pre-identified and isolated. This dataset was created using extraction method 2 described below.
As Dataset 9 was the only dataset that did not resize the images based on the size of the ROI we felt that it introduced the least amount of artificial manipulation into the data which led us to focus on training with this dataset.
ROI Extraction Methods for CBIS-DDSM Images
The CBIS-DDSM scans were of relatively large size, with a mean height of 5295 pixels and a mean width of 3131 pixels. Masks highlighting the ROIs were provided. The masks were used to define a square which completely enclosed the ROI. Some padding was added to the bounding box to provide context and then the ROIs were extracted at 598x598 and then resized down to 299x299 so they could be input into the ConvNet.
The ROIs had a mean size of 450 pixels and a standard deviation of 396. We designed our ConvNets to accept 299x299 images as input. To simplify the creation of the images, we extracted each ROI to a 598x598 tile, which was then sized down by half on each dimension to 299x299. 598x598 was just large enough that the majority of the ROIs could fit into it.
To increase the size of the training data, each ROI was extracted multiple times using the methodologies described below. The size and variety of the data was also increased by randomly horizontally flipping each tile, randomly vertically flipping each tile, randomly rotating each tile, and by randomly positioning each ROI within the tile.
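The tile-level augmentation described above can be sketched as follows. This is an illustrative NumPy version assuming 90-degree rotations and a fixed random seed; the article’s pipeline may use arbitrary rotation angles and also randomizes the ROI’s position within the tile.

```python
import numpy as np

rng = np.random.RandomState(0)  # fixed seed for reproducibility

def augment(tile):
    """Randomly flip and rotate one 598x598 tile (sketch)."""
    if rng.rand() < 0.5:
        tile = np.fliplr(tile)               # random horizontal flip
    if rng.rand() < 0.5:
        tile = np.flipud(tile)               # random vertical flip
    tile = np.rot90(tile, k=rng.randint(4))  # random 90-degree rotation
    return tile

tile = rng.rand(598, 598)
assert augment(tile).shape == (598, 598)     # augmentation preserves tile size
```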
ROI Extraction Method 1
The analysis of the UCI data indicated that the edges of an abnormality were important in determining its pathology and type, and this was confirmed by a radiologist. Levy et al. [1] also report that the inclusion of context was an important factor in multi-class accuracy.
To provide maximum context, each ROI was extracted in multiple ways:
The ROI was extracted at 598x598 at its original size.
The entire ROI was resized to 598x598, with padding to provide context.
If one dimension of the ROI was more than 1.5 times the size of the other, the ROI was extracted as two tiles, each centered on one half of the ROI along its largest dimension.
ROI Extraction Method 2
Method 1 relied on the size of the ROI to determine how to extract it, which requires having the ROI pre-identified. While this provided very clear images of each abnormality, the use of the size of the ROI to extract it introduced an element of artificiality into the data which made it not generalize well to classifying raw scans. This method was designed to eliminate that artificiality by never resizing the images, and just extracting the ROI using its center.
The size of the ROI was only used to determine how much padding to add to the bounding box before extraction. If the ROI was smaller than the 598x598 target we added more padding to provide greater variety when taking the random crops. If the ROI was larger than 598x598 this was not necessary.
If the ROI was smaller than a 598x598 tile it was extracted with 20% padding on either side.
If the ROI was larger than a 598x598 tile it was extracted with 5% padding.
Each ROI was then randomly cropped three times using random flipping and rotation.
Segmentation of Normal Images
The normal scans from the DDSM dataset did not have ROIs so were processed differently. As these images had not been pre-processed as had the CBIS-DDSM images they contained artifacts such as white borders, overlay text, and white patches of pixels used to cover up identifying personal information. Each image was trimmed by 7% on each side to remove the white borders.
To keep the normal images as similar as possible to the CBIS-DDSM images, different pre-processing was done for each dataset created. As datasets 6 and 8 resized the images based on the ROI size, to create the DDSM images for these datasets each image was sized down by a random factor between 1.8 and 3.2, then segmented into 299x299 tiles with a variable stride between 150 and 200. Each tile was then randomly rotated and flipped.
For dataset 9, each DDSM image was cut into 598x598 tiles without being resized. The tiles were then each resized down to 299x299.
To avoid the inclusion of images which contained the aforementioned artifacts or which consisted largely of black background, each tile was then added to the dataset only if it met upper and lower thresholds on mean and variance. The thresholds were selected by randomly sampling tiles and adjusted until most of the useless tiles were not included.
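A sketch of that tile-filtering step is below: a tile is kept only if its pixel mean and variance fall inside empirically chosen bounds. The threshold values here are made up for illustration; the article tuned them by randomly sampling tiles.

```python
import numpy as np

# Illustrative thresholds (the article's were tuned by sampling tiles).
MEAN_LO, MEAN_HI = 10.0, 240.0   # reject near-black and near-white tiles
VAR_LO = 50.0                    # reject flat, featureless tiles

def usable(tile):
    """Keep a tile only if its mean and variance look like real tissue."""
    m, v = tile.mean(), tile.var()
    return MEAN_LO < m < MEAN_HI and v > VAR_LO

background = np.zeros((299, 299))   # mostly-black tile: rejected
tissue = np.random.RandomState(0).randint(0, 255, (299, 299)).astype(float)
assert not usable(background)
assert usable(tissue)
```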
Data Balance
In reality, only about 10% of mammograms are abnormal. In order to maximize recall, we weighted our dataset more heavily towards abnormal scans, with the balance at 83% normal and 17% abnormal.
The CBIS-DDSM dataset was already divided into training and test data, at 80% training and 20% test. As each ROI was extracted to multiple images, in order to prevent different images of the same ROIs from appearing in both the training and holdout datasets we kept this division. The test dataset was divided evenly, in order, between holdout and test data, which ensures that no more than one image of one ROI would appear in both datasets.
The normal images had no overlap, so were shuffled and divided among the training, test and validation data. The final divisions were 80% training, 10% test and 10% validation. It would have been preferable to have large validation and test datasets, but we felt that it was easier to use the existing divisions and be sure that there was no overlap.
All images were labeled as 0 for negative/normal and 1 for positive/abnormal.
ConvNet Architecture
Our first thought was to train existing ConvNets, such as VGG or Inception, on our datasets. These networks were designed for and trained on ImageNet data, which contains images which are completely different from medical imaging. The ImageNet dataset contains 1,000 classes of images which have a far greater amount of detail than our scans do, and we felt that the large number of parameters in these models might cause them to quickly overfit our data and not generalize well. A lack of computational resources also made training these networks on our data impractical. For these reasons we designed our own architectures specifically for this task.
We started with a simple model based on VGG, consisting of stacked 3x3 convolutional layers alternating with max pools, followed by three fully connected layers. Our model had fewer convolutional layers with fewer filters than VGG, and smaller fully connected layers. We also added batch normalization [15] after every layer. This architecture was then evaluated and adjusted iteratively, with each iteration making one and only one change and then being evaluated. We also evaluated techniques including Inception-style branches [16, 17, 18] and residual connections [19].
To compensate for the unbalanced nature of the dataset a weighted cross-entropy function was used, weighting positive examples higher than negative ones. The weight was considered a hyperparameter for which values ranging from 1 to 7 were evaluated.
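The weighting idea can be sketched in NumPy for the binary case: positive examples get their loss term multiplied by a weight (a tuned hyperparameter, 1 to 7 in this work). The probabilities, labels, and weight value below are illustrative, and this is a simplified stand-in for the actual graph-level loss.

```python
import numpy as np

POS_WEIGHT = 6.0  # illustrative; the article evaluated weights from 1 to 7

def weighted_xent(probs, labels):
    """Binary cross-entropy with positives up-weighted by POS_WEIGHT.
    probs: predicted probability of the positive class; labels: 0 or 1."""
    probs = np.clip(probs, 1e-7, 1 - 1e-7)   # numerical safety
    per_example = -(POS_WEIGHT * labels * np.log(probs)
                    + (1 - labels) * np.log(1 - probs))
    return per_example.mean()

probs = np.array([0.9, 0.2, 0.6])
labels = np.array([1, 0, 1])
# A missed positive now costs POS_WEIGHT times more than in the
# unweighted loss, pushing the model toward higher recall.
loss = weighted_xent(probs, labels)
```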
The best performing architecture will be detailed below.
Results
Architecture
Figure 2. Model 1.0.0.35
The best performing model was model 1.0.0.35, consisting of nine convolutional layers and three fully connected layers. The convolutional layers used the philosophy of VGG, with 3x3 convolutions stacked and alternated with max pools.
The graphs also included online data augmentation and contrast adjustment, which were both evaluated.
Models 1.0.0.29 and 1.0.0.45 were the same architecture as 1.0.0.35, but with different scaling of the input data. Model 1.0.0.29 took the raw pixel values as input, 1.0.0.45 centered the inputs without scaling them, and 1.0.0.35 centered and scaled the input.
Reduced versions of VGG-16 and Inception v4 were also trained on the datasets. Training the full models required more time and computation than we had available, so we adjusted the architectures by reducing the numbers of filters in each layer, as well as adjusting the models to take 299x299 images as inputs.
Performance
Table 1 shows the accuracy and recall on the test dataset for selected models trained for binary classification. The most-frequent baseline accuracy for the datasets was .83. We should note that a recall of 1.0 with accuracy around .17 indicates that the model is predicting everything as positive, while an accuracy near .83 with a very low recall indicates the model is predicting everything as negative.
Table 1: Binary Performance on Test Set
Figure 3 shows the training metrics for model 1.0.0.35 trained on dataset 9 for binary classification. This model was trained with a cross entropy weight of 6, which compensates for the unbalanced nature of the dataset and encourages the model to focus on positive examples.
Figure 3— Binary Accuracy and Recall for Model 1.0.0.35 b.98 on Dataset 9
Table 2 shows the accuracy and recall of selected models on the MIAS dataset. If we recall that the MIAS dataset was completely separate from, and unrelated to, the DDSM datasets, these results should indicate how well the model will perform on completely unrelated images.
Table 2: Performance on MIAS Dataset
Effect of Cross Entropy Weight
A weighted cross entropy was used to improve recall and counter the unbalanced nature of our dataset. Increasing the weight improved recall at the expense of precision. With a cross entropy weight of 1 to 3, our models tended to initially learn to classify positive examples, but after 15–20 epochs started to predict everything as negative. A cross entropy weight of 4 to 7 allowed the model to continue to predict positive examples and greatly reduced the volatility of the validation results. Cross entropy weights above 7 resulted in improved recall at the expense of precision.
Table 3: Effect of Cross Entropy Weight
Effect of Decision Threshold
A binary softmax classifier has a default threshold of 0.50. We used PR curves during training to evaluate the effects of adjusting the threshold. We found that we could easily trade off precision and recall by adjusting the threshold, allowing us to achieve precision or recall close to 1.0. We can also see the effects of using different thresholds on recall in figure 8.
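The threshold trade-off can be shown with a toy example; the scores and labels below are illustrative, not model outputs.

```python
import numpy as np

# Illustrative predicted P(abnormal) scores and true labels.
scores = np.array([0.95, 0.70, 0.55, 0.40, 0.30, 0.10])
labels = np.array([1,    1,    0,    1,    0,    0])

def recall_at(threshold):
    """Fraction of true positives caught when predicting score >= threshold."""
    preds = scores >= threshold
    return (preds & (labels == 1)).sum() / (labels == 1).sum()

# Lowering the threshold flags more scans as abnormal, raising recall
# (at the cost of precision, not shown here).
assert recall_at(0.50) < recall_at(0.25)
```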
Figure 4 is the curve for model 1.0.0.35b.98 after 40 epochs of training. The points on the lines indicate the threshold of 0.50. Precision is on the y-axis and recall on the x-axis.
Figure 4— PR Curve for model 1.0.0.35b.98
Conclusion
While we were able to achieve better than expected results on datasets 6 and 8, the artificial nature of these datasets caused the models to not generalize to the MIAS data. Models trained on dataset 9, which was constructed specifically to avoid these problems, did not achieve accuracy or recall as high as models trained on other datasets, but generalized to the MIAS data better.
While we were able to achieve recall above human performance on the DDSM data, the recall on the MIAS data was significantly lower. However, as a proof of concept, we feel that we have demonstrated that ConvNets can successfully be trained to predict whether mammograms are normal or abnormal.
We should note that we cannot eliminate the possibility that the network was using information from each image unrelated to the presence of abnormalities. The fact that the positive and negative images came from different datasets makes it possible that features like the contrast of the images or the highest pixel values played an important role. We are currently attempting to address this issue.
The life and death nature of diagnosing cancer creates many obstacles to putting a system like this into practice. We feel that using a system to output the probabilities rather than the predictions would allow such a system to provide additional information to radiologists rather than replacing them. In addition the ability to adjust the decision threshold would allow radiologists to focus on more ambiguous scans while devoting less time to scans which have very low probabilities.
Future work would include creating a system which would take an entire, unaltered scan as input and analyse it for abnormalities. We are currently working on applying semantic segmentation to the scans, using the masks as labels. Other options include sliding windows, FCNs, YOLO, etc.
Source Code
The source code for exploratory data analysis and creation of the datasets is available in this GitHub repository: https://github.com/escuccim/mias-mammography
The source code used to create and train the models is available here: https://github.com/escuccim/mammography-models
A training dataset not referenced in this work, but created using the methods described, is available on Kaggle. This dataset is similar to dataset 9, but with the criteria used to exclude tiles relaxed, resulting in the inclusion of tiles which do contain background. https://www.kaggle.com/skooch/ddsm-mammography
References
[1] D. Levy, A. Jain, Breast Mass Classification from Mammograms using Deep Convolutional Neural Networks, arXiv:1612.00542v1, 2016
[2] N. Dhungel, G. Carneiro, and A. P. Bradley. Automated mass detection in mammograms using cascaded deep learning and random forests. In Digital Image Computing: Techniques and Applications (DICTA), 2015 International Conference on, pages 1–8. IEEE, 2015.
[3] N.Dhungel, G.Carneiro, and A.P.Bradley. Deep learning and structured prediction for the segmentation of mass in mammograms. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 605–612. Springer International Publishing, 2015.
[4] J.Arevalo, F.A.González, R.Ramos-Pollán,J.L.Oliveira,andM.A.G.Lopez. Representation learning for mammography mass lesion classification with convolutional neural networks. Computer methods and programs in biomedicine, 127:248–257, 2016.
[5] Dua, D. and Karra Taniskidou, E. (2017). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
[6] The Digital Database for Screening Mammography, Michael Heath, Kevin Bowyer, Daniel Kopans, Richard Moore and W. Philip Kegelmeyer, in Proceedings of the Fifth International Workshop on Digital Mammography, M.J. Yaffe, ed., 212–218, Medical Physics Publishing, 2001. ISBN 1–930524–00–5.
[7] Current status of the Digital Database for Screening Mammography, Michael Heath, Kevin Bowyer, Daniel Kopans, W. Philip Kegelmeyer, Richard Moore, Kyong Chang, and S. Munish Kumaran, in Digital Mammography, 457–460, Kluwer Academic Publishers, 1998; Proceedings of the Fourth International Workshop on Digital Mammography.
[8] Rebecca Sawyer Lee, Francisco Gimenez, Assaf Hoogi , Daniel Rubin (2016). Curated Breast Imaging Subset of DDSM. The Cancer Imaging Archive.
[9] Clark K, Vendt B, Smith K, Freymann J, Kirby J, Koppel P, Moore S, Phillips S, Maffitt D, Pringle M, Tarbox L, Prior F. The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository, Journal of Digital Imaging, Volume 26, Number 6, December, 2013, pp 1045–1057.
[10] O. L. Mangasarian and W. H. Wolberg: “Cancer diagnosis via linear programming”, SIAM News, Volume 23, Number 5, September 1990, pp 1 & 18.
[11] William H. Wolberg and O.L. Mangasarian: “Multisurface method of pattern separation for medical diagnosis applied to breast cytology”, Proceedings of the National Academy of Sciences, U.S.A., Volume 87, December 1990, pp 9193–9196.
[12] O. L. Mangasarian, R. Setiono, and W.H. Wolberg: “Pattern recognition via linear programming: Theory and application to medical diagnosis”, in: “Large-scale numerical optimization”, Thomas F. Coleman and YuyingLi, editors, SIAM Publications, Philadelphia 1990, pp 22–30.
[13] K. P. Bennett & O. L. Mangasarian: “Robust linear programming discrimination of two linearly inseparable sets”, Optimization Methods and Software 1, 1992, 23–34 (Gordon & Breach Science Publishers).
[14] K. Simonyan, A. Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, arXiv:1409.1556, 2014
[15] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pages 448–456, 2015
[16] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
[17] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
[18] C. Szegedy, S. Ioffe, V. Vanhoucke, Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, arXiv:1602.07261v2, 2016
[19] K. He, X. Zhang, S. Ren, J. Sun, Deep Residual Learning for Image Recognition, arXiv:1512.03385, 2015
[20] J. Redmon, S. Divvala, R. Girshick, A. Farhadi, You Only Look Once: Unified, Real-Time Object Detection, arXiv:1506.02640, 2015
[21] R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, arXiv:1311.2524, 2013
Collaborative Filtering Using Slope One
Introduction
Recommender systems are a subfield of machine learning whose goal is to suggest items to a user based on their profile and history, or on the choices made by other users with similar tastes. They are widely used as a marketing strategy by e-commerce companies: recommending something aligned with a user’s interests raises the chance that they will actually buy the product.
These systems are usually classified into three types, according to how the recommendation is made: content-based, collaborative filtering, and hybrid systems.
Content-based: recommends items similar to the ones the user preferred in the past.
Collaborative filtering: recommends items that users with similar tastes preferred in the past, following the rule: “If one user liked A and B, another user who liked A may also like B.”
Hybrid systems: combine the two previous approaches in order to reinforce their strengths and overcome their weaknesses.
Slope One
In this post we look at the recommendation algorithm called Slope One. It is an item-based collaborative filtering algorithm that predicts the rating a user X would give to an item i by computing the similarity between i and the other items. It is simple to implement yet highly accurate.
The algorithm operates on ratings, from one to five, given by users to items. These ratings are stored in a Users x Items matrix, where each row holds one user’s ratings of the N items. From this matrix the algorithm builds a linear relation between the data in order to predict the rating a user would give to an item they have not rated. This is where the name comes from: the slope is the multiplier of x in the formula f(x) = ax + b, and for this algorithm the slope is 1. The formula used to predict the rating of an unrated item i for a user A is shown below.
Where Diff(i, j) is the average difference between the ratings of items i and j across the other users, R(A, j) is the rating given by user A to item j, and we assume there are N items ranging from i to z.
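The original figure with the formula did not survive extraction; written out in the standard Slope One notation that matches the description above, the prediction is:

```latex
% Predicted rating of item i for user A, averaged over the items j
% that A has rated (assumes the standard Slope One form):
P(A, i) = \frac{1}{N} \sum_{j \neq i} \bigl( \mathrm{Diff}(i, j) + R(A, j) \bigr)
```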
Once the ratings of the items the user has not yet rated have been predicted, the system gathers, in a list, the items with a predicted rating above 3 (this cutoff was chosen by the author; each developer can use whichever cutoff fits best). Finally, this list is returned so that the items in it can be shown on the recommendation screens. The steps the algorithm follows to predict the rating of an unrated item are shown below.
One problem with this recommender concerns new users: the system knows nothing about their preferences and is therefore unable to generate recommendations for them (the cold-start problem).
Example
To better understand how the algorithm works, consider a hypothetical scenario with three users, two items, and the ratings those users gave to the items. Only user C has not rated item 2, as shown in Table 1.
Given this scenario, the algorithm’s job is to predict the rating user C would likely give to item 2. The first step is to compute the average difference between the ratings. Given the ratings in the table, the average difference is 1.5 (((4−3) + (4−2)) / 2), so on average item 2 is rated 1.5 points higher than item 1. Since user C rated the first item 2, item 2 would probably be rated 3.5 (2 + 1.5).
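The worked example can be reproduced with a few lines of Python. The ratings dictionary below is reconstructed from the example’s arithmetic (the original table image did not survive), so treat the exact numbers as an assumption.

```python
# Ratings inferred from the worked example: A and B rated both items,
# C rated only item 1. {user: {item: rating}}
ratings = {
    "A": {1: 3, 2: 4},
    "B": {1: 2, 2: 4},
    "C": {1: 2},
}

def predict(user, target):
    """Slope One prediction of `user`'s rating for unrated item `target`."""
    total, count = 0.0, 0
    for j, r_uj in ratings[user].items():
        if j == target:
            continue
        # Average difference (target - j) over users who rated both items.
        diffs = [r[target] - r[j] for r in ratings.values()
                 if target in r and j in r]
        total += r_uj + sum(diffs) / len(diffs)
        count += 1
    return total / count

print(predict("C", 2))  # 3.5, matching the example: 2 + 1.5
```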
I Have a Few Problems With Your Graphic
How to Not Compare AI with the Human Brain
The terrible graphic in question. Listing six reasons why AI is nothing like a human or human brain.
It is self learning
It is a computer, a machine. There is no “self” for it to learn with. It cannot learn because it is a machine. There are many competing theories as to how the human brain/humans learn, all of which have some valid claim to “correctness”. Since we cannot say how a human learns, or even precisely what learning is, we cannot say that anything else learns. Therefore, even if it had a “self” (which it does not), we could not say that it is self-learning, since we do not know what learning even means.
It assumes
It assigns various weights to different outcomes depending on its programming. Some of these outcomes are “preferred” because they are assigned higher weights. The only thing it “assumes” are that these outcomes (the ones assigned greater weights) are the “correct” ones. Essentially it is assuming that larger values/weights are better than smaller values/weights. Like a human assuming 2 > 1 because it is a bigger number. A poor analogy as it does nothing ‘like a human’ for it is a machine.
It adapts
Based entirely on its programming, it modifies the weights assigned to various outcomes based on the input data given, then checks these weights against the “desired” or “correct” outcome (as determined by its programming). When additional input data is given it “adapts” by assigning greater weight to the input data that gives the “correct” output. Essentially the values in the equations it is running change. It does not change the values through force of will or a process of self-realization, since it has no will or self; the values change because that is what its programming requires. I think it goes without saying that evolution by natural selection, which “forces” adaptive responses in living things if they wish to survive and multiply, does not and will never apply to a machine.
It predicts
With a given set of input data, it checks the “correctness” of the assigned weights and output data against what a previous training data set gave for outputs deemed “correct”. It then outputs this data for the user. It has no idea of past or future, of what is a prediction of the future or a retelling of history; it simply outputs certain values based on the input data given and its programming. It is only a prediction in the sense that neither the computer nor the user has any idea of the “rightness” or “wrongness” of its output. The machine may calculate and output a probability of rightness, but that has no bearing on its actual rightness, nor does it in any way mean that the output is a prediction in any sense except the one I just mentioned.
It finds typical and untypical patterns
Based on the outcomes of the analysis of various training data sets it “knows” which outcomes are normal and which are abnormal. Given any input data set it can compare the outcomes against previous normal and abnormal outcomes to determine if the results are typical or atypical. All of this is completely determined by its programming. It is not “finding” anything, it is only evaluating the rightness of fit of the answer to a math problem. If the fit is good it is deemed typical, if not, atypical.
It analyses and suggests the most valuable decisions for the user
Not being human or self aware or having any sense of self or morals it can have no idea what is or is not valuable to a user. What is valuable to the user is what the user programs the computer to consider valuable by assigning higher weights to variables and outcomes that increase it and lower weights to those that decrease it. It does not suggest anything either, nor is it capable of making suggestions, it only outputs what its programming requires it to output based on the input data given.
function softmax(z)
# z = z - maximum(z)  # uncomment for the numerically stable version (see below)
o = exp(z)
return o / sum(o)
end
function gradient_together(z, y)
o = softmax(z)
o[y] -= 1.0
return o
end
function gradient_separated(z, y)
o = softmax(z)
∂o_∂z = diagm(o) - o*o'
∂f_∂o = zeros(size(o))
∂f_∂o[y] = -1.0 / o[y]
return ∂o_∂z * ∂f_∂o
end
using DataFrames
using Gadfly
M = 100
y = 1
zy = vec(10f0 .^ (-38:5:38)) # float range ~ [1.2*10^-38, 3.4*10^38]
zy = [-reverse(zy);zy]
srand(12345)
n_rep = 50
discrepancy_together = zeros(length(zy), n_rep)
discrepancy_separated = zeros(length(zy), n_rep)
for i = 1:n_rep
z = rand(Float32, M) # use float instead of double
discrepancy_together[:,i] = [begin
z[y] = x
true_grad = gradient_together(convert(Array{Float64},z), y)
got_grad = gradient_together(z, y)
abs(true_grad[y] - got_grad[y])
end for x in zy]
discrepancy_separated[:,i] = [begin
z[y] = x
true_grad = gradient_together(convert(Array{Float64},z), y)
got_grad = gradient_separated(z, y)
abs(true_grad[y] - got_grad[y])
end for x in zy]
end
df1 = DataFrame(x=zy, y=vec(mean(discrepancy_together,2)),
label="together")
df2 = DataFrame(x=zy, y=vec(mean(discrepancy_separated,2)),
label="separated")
df = vcat(df1, df2)
format_func(x) = @sprintf("%s10<sup>%d</sup>", x<0 ? "-" : "", int(log10(abs(x))))
the_plot = plot(df, x="x", y="y", color="label",
Geom.point, Geom.line, Geom.errorbar,
Guide.xticks(ticks=int(linspace(1, length(zy), 10))),
Scale.x_discrete(labels=format_func),
Guide.xlabel("z[y]"), Guide.ylabel("discrepancy"))
| 53
| null |
2018-03-01
|
2018-03-01 17:47:42
|
2018-03-01
|
2018-03-01 20:58:37
| 14
| false
|
en
|
2018-03-02
|
2018-03-02 17:15:18
| 6
|
173d385120c2
| 7.661321
| 5
| 0
| 0
|
The softmax loss layer computes the multinomial logistic loss of the softmax of its inputs. It’s conceptually identical to a softmax layer…
| 3
|
The difference between Softmax and Softmax-Loss
The softmax loss layer computes the multinomial logistic loss of the softmax of its inputs. It’s conceptually identical to a softmax layer followed by a multinomial logistic loss layer, but provides a more numerically stable gradient.
This is an introduction to the softmax-loss layer from Caffe’s official documentation. Caffe is a very popular C++/CUDA library for deep convolutional neural networks (CNNs); thanks to its clear code structure and clean design, both academia and industry love to use it for machine learning work.
Today I want to discuss the words quoted above: in fact, there is a considerable distance between a basic algorithm and a program that works well in practice, in numerical computing or any other engineering area. There are lots of small tricks that may seem insignificant but eventually make a huge difference to the result. The reason is simple: theoretical work is precise because it ignores “irrelevant” details through simplification and abstraction, so objects operate under a series of assumptions established within the theoretical system. When applying the theory in practice, we need to add back all the details overlooked before, otherwise things fall apart. Finding out what happens when those assumptions are no longer strictly satisfied is genuinely difficult; getting the original theory to fit reality “roughly” is what a data engineer needs to do.
So the softmax function σ(z) = (σ1(z), σ2(z), …, σm(z)) can be defined as: σi(z) = exp(zi) / (exp(z1) + … + exp(zm)).
Its role in logistic regression is to translate linear predictive values into category probabilities: if Zi = Wi*x + Bi is the result of the linear prediction, softmax makes each Zi non-negative by exponentiating it, then normalizes by the sum over all items. Each Oi = σi(Z) can then be interpreted as the probability, or likelihood, that the data point x belongs to category i.
Based on the maximum likelihood principle, we can define the objective function of logistic regression (also known as the multinomial logistic loss). What we need to do is maximize the value of Oy, the probability assigned to the true category y.
And what Softmax-Loss function does is to combine these two functions:
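The combination can be sketched in a few lines of Python with NumPy (an illustrative sketch, not Caffe’s actual implementation; the function names are mine):

```python
import numpy as np

def softmax(z):
    """Exponentiate, then normalize so the outputs sum to 1."""
    e = np.exp(z)
    return e / e.sum()

def multinomial_logistic_loss(o, y):
    """Negative log-probability of the true category y."""
    return -np.log(o[y])

def softmax_loss(z, y):
    """The two steps fused into a single layer."""
    return multinomial_logistic_loss(softmax(z), y)

z = np.array([2.0, 1.0, 0.1])  # linear predictions z_i = W_i * x + B_i
o = softmax(z)                 # a valid probability distribution over 3 classes
loss = softmax_loss(z, 0)      # small, since class 0 already has the top score
```

Maximizing Oy is the same as minimizing this loss, which is why the fused layer can work directly on the raw predictions z.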
For a plain logistic regression problem it is better to use the softmax-loss function directly. But if you are designing a deep neural network library, you might prefer to provide the two separately: because a deep learning model is a layered structure, the main job of a computing library is to provide a wide variety of layers that users can combine into network hierarchies in different ways. For example, a user who wants the probability (likelihood) of each category only needs a Softmax layer, with no multinomial logistic loss operation at all. So providing two different layer structures is much more flexible than providing only a single softmax-loss layer, and it is more modular too. But there is a numerical stability problem.
First, we need to understand the meaning of backpropagation. Consider the 3-layer neural network shown in the figure: each layer has input nodes and output nodes, except the initial data layer L0. Typically, a layer’s input nodes are simply a “copy” of the previous layer’s output nodes, because all computational operations occur within each layer. For an ordinary neural network, each layer’s computation is usually a linear mapping followed by a sigmoid nonlinearity, for example:
Using the principle of Chain rule in calculus, you get the following formula:
Notice that the red part relates to the internal structure of this layer of the network and can be computed from the layer’s local structure alone; as for the blue part, since a layer’s output nodes equal the next layer’s input nodes, we can calculate it without knowing anything about this layer.
And do the BackPropagation:
Let’s go back to the softmax-loss layer. Because the layer has no parameters, we only have to compute the derivative for the backward pass, and since it is the topmost layer we can compute the derivative of the final output (the loss) directly, without the chain rule. As mentioned before, the softmax-loss layer has two inputs: a true label y, which comes directly from the data layer at the bottom and needs no gradient update, and the output of the compute layer below it, which is a fully connected linear inner-product layer in logistic regression or an ordinary DNN. From basic calculus we can work out:
σk(Z) is the result of the softmax computation, which is the intermediate step of the softmax-loss layer.
What if the Softmax layer and the multinomial logistic loss layer are split into two layers? Writing the output of the Softmax layer, that is, the input of the loss layer, as Oi = σi(Z), we first compute the gradient at the top layer.
As we pass this derivative down, and reach the Softmax layer, then we can apply the chain rule:
You can check with the chain rule that multiplying these together recovers the same result:
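The check can be run numerically as a NumPy sketch (names are mine, mirroring the Julia experiment further down): the fused gradient, softmax(z) minus the one-hot label, should match the softmax Jacobian diag(o) - o*o^T applied to the loss gradient.

```python
import numpy as np

def softmax(z):
    e = np.exp(z)
    return e / e.sum()

def grad_together(z, y):
    # Fused backward pass: dL/dz_k = o_k - 1{k == y}
    g = softmax(z)
    g[y] -= 1.0
    return g

def grad_separated(z, y):
    # Backprop through the loss layer, then through the softmax Jacobian.
    o = softmax(z)
    dL_do = np.zeros_like(o)
    dL_do[y] = -1.0 / o[y]            # derivative of -log(o_y) w.r.t. o
    J = np.diag(o) - np.outer(o, o)   # softmax Jacobian: diag(o) - o o^T
    return J @ dL_do

z = np.array([0.5, -1.2, 3.0])
same = np.allclose(grad_together(z, 1), grad_separated(z, 1))  # True
```

Both paths agree to floating-point precision in well-behaved ranges; the difference shows up only at the extremes studied below.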
Although the end result is the same, we can conclude that splitting the computation into two layers requires considerably more work. We are also concerned with numerical stability: floating-point numbers have limited precision, and each operation accumulates a certain amount of error. If Oy is very inaccurate, meaning the predicted probability of the correct category is very small (near 0), there is a danger of overflow in 1/Oy. We can run some experiments (written in Julia):
Because float (Float32) has lower precision than double (Float64), we use the double result as the approximate “correct value”, then compare, in both cases, the difference between the float result and that correct value. The plotting code is as follows:
We sweep the coordinate z[y] over values with large absolute magnitude, both negative and positive. In the resulting graph, the horizontal axis is the value of z[y] and the vertical axis is the discrepancy between the results of the two methods and the “real value”.
The first thing you notice is that computing the single fused layer directly really is better than splitting into two layers, but the gap is actually very small. Looking to the left, the yellow points disappear because the result has become NaN: if Oy underflows to zero beyond the precision range, 1/Oy gives Inf, and multiplying Inf by other terms directly yields NaN, that is, Not a Number. Looking at the blue line, the accuracy strangely seems to improve; that is because our “real value” has also underflowed. Although double has far more range than float, it too is limited: according to Wikipedia, the range of float is roughly 10^-38 to 10^38, and that of double roughly 10^-308 to 10^308. So we can pick the point z[y] = -10^2 in the diagram for testing.
At z[y] = -10^2, exp(z[y]) is around 10^-44, which already underflows in float; it is still within double’s range, but the float64 value is so close to 0 that no difference shows in the picture. For even more negative z[y], double underflows as well, so our “real value” becomes zero and the “error” becomes 0. Another problem is that when z[y] reaches 10^2, the blue and yellow lines are both gone: the exponential overflows float and everything becomes NaN (Not a Number).
One solution to this problem is shown in the commented-out second line of the code: subtract the maximum element of z from every element before exponentiating. The maximum becomes 0, so overflow can no longer occur; the other values are shifted down by a large amount and may become large negative numbers that underflow. But since underflow yields 0 (a perfectly meaningful approximation here), no strange NaN appears in subsequent computations.
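The max-subtraction trick can be sketched in NumPy (illustrative names, separate from the article’s Julia code):

```python
import numpy as np

def softmax_naive(z):
    e = np.exp(z)          # exp of a large z overflows to inf
    return e / e.sum()

def softmax_stable(z):
    # Subtract the maximum first: the largest exponent becomes exp(0) = 1,
    # so overflow is impossible; very negative entries underflow to a
    # harmless 0 instead of poisoning the result.
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([1000.0, 0.0, -1000.0])
with np.errstate(over="ignore", invalid="ignore"):
    naive = softmax_naive(z)   # contains nan: inf / inf
stable = softmax_stable(z)     # [1., 0., 0.], a well-defined distribution
```

The shift changes nothing mathematically, since softmax(z) = softmax(z - c) for any constant c; it only moves the computation into a range the floating-point format can represent.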
I might make some mistakes and don’t explain something very clearly, so if you have any problem, please feel free to contact me via email: jz_liang@yahoo.com
Reference:
Numerical Computing with IEEE Floating Point Arithmetic: Including One Theorem, One Rule of Thumb, and One Hundred and One Exercises
Convolutional Neural Networks for Visual Recognition
An overview of gradient descent optimization algorithms
|
The difference between Softmax and Softmax-Loss
| 6
|
the-difference-between-softmax-and-softmax-loss-173d385120c2
|
2018-04-03
|
2018-04-03 10:24:21
|
https://medium.com/s/story/the-difference-between-softmax-and-softmax-loss-173d385120c2
| false
| 1,646
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
jong un kim
| null |
a0556da001d2
|
liangjinzhenggoon
| 2
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-01
|
2018-08-01 12:59:07
|
2018-08-01
|
2018-08-01 13:37:28
| 3
| false
|
en
|
2018-08-01
|
2018-08-01 20:55:29
| 7
|
173d9c7f112d
| 2.934906
| 2
| 0
| 0
|
SUMMARY: On the day that online distribution of 3D printable gun blueprints was set to become legal in the US, a new AI tool launches to…
| 5
|
New AI Platform — Flagg3D — Launches to Trace and Detect 3D Printable Gun Files Online
SUMMARY: On the day that online distribution of 3D printable gun blueprints was set to become legal in the US, a new AI tool launches to trace and flag these files.
AUG 1, LONDON -
London-based artificial intelligence (AI) startup, 3D Industries Limited, today announced the launch of its Flagg3D (flagg3d.com) platform, which automatically detects and flags 3D printable gun component designs uploaded online. This comes on the day that a controversial US federal ruling permitting the online distribution of such digital blueprints was set to come into effect. Despite an eleventh-hour temporary ban extension, widespread downloading and sharing of these files is expected to be imminent.
(Source: CNN)
The ease of access to production of unregistered, untraceable and undetectable weapons is widely seen as a serious national security concern. Moreover, even if legal, such 3D content would still be in violation of the usage policies of many content sharing and collaboration platforms, sites and networks.
Yet with millions of 3D digital designs being uploaded to the internet each year, and with little consistency or accuracy in naming, annotation or tagging of the files, monitoring is still done through text searches and manual flagging. This is an increasingly expensive and ineffective endeavour.
Powered by proprietary 3D shape recognition algorithms, the Flagg3D platform is able to scan large 3D databases and networks at scale, whilst also analysing inbound 3D designs in real time. The machine-learning based platform matches against existing known gun parts, as well as similar and related designs, thus identifying and flagging unsuitable content.
Flagg3D is primarily targeted at the moderators and administrators of 3D content sharing and collaboration sites, IT networks, cloud storage services and social media platforms. It is also aimed at 3D printing platforms and 3D printer firmware businesses.
Speaking at the launch of the platform, Seena Rejal, CEO of 3D Industries said, “the challenge posed by the potential release of 3D printed gun files clearly demonstrates the limits and futility of outdated text-based tools for dealing with today’s increasingly 3D web. The physical and digital are now more integrated than ever, and this demands far more sophisticated vision and AI solutions for tackling the issues that arise at this intersection.” He continued, “we offer database, network and platform owners and moderators full visibility and knowledge of their systems, allowing them to better protect themselves against policy violations, DMCA take-down notices, or irreparable brand damage. The potential savings in labour, legal and reputational costs are potentially enormous. ”
The platform can be deployed and seamlessly integrated via APIs or accessed as a platform-as-a-service (PaaS) via a dashboard. Flagg3D is also offering a ‘Seek-and-Flag’ service for those wanting to outsource the process completely.
Links:
www.flagg3d.com
https://twitter.com/flagg3d
www.3dindustri.es
https://twitter.com/3dindustries
For more information contact:
Press enquiries: press@3dindustri.es
Seena Rejal, CEO: seena@3dindustri.es
About 3DI
3D Industries Limited (www.3Dindustri.es) is a groundbreaking London-based machine vision and AI company with operations in London, Palo Alto and Munich. We create developer tools and algorithms that power disruptive applications at the intersection of the real and virtual worlds across some of the most exciting markets: VR/AR, autonomous robotics, mobility, gaming, manufacturing, healthcare, 3D Printing and more.
Our international team of world class scientists and engineers hail from leading research groups at Cambridge, Princeton and Stanford, amongst others. We work with industry-defining companies such as HP, Autodesk, Intel, Siemens, Toyota, IBM, IKEA and Sony, and have been featured in Forbes, Inc.com and BBC.
About Flagg3d:
Flagg3D.com (www.flagg3d.com) is a platform exclusively designed to detect and flag unsuitable and prohibited 3D content on sites, platforms, networks and databases. It is powered by the proprietary shape recognition and matching algorithms of 3D Industries.
|
New AI Platform — Flagg3D — Launches to Trace and Detect 3D Printable Gun Files Online
| 23
|
new-ai-platform-flagg3d-launches-to-trace-and-detect-3d-printable-gun-files-online-173d9c7f112d
|
2018-08-01
|
2018-08-01 20:55:29
|
https://medium.com/s/story/new-ai-platform-flagg3d-launches-to-trace-and-detect-3d-printable-gun-files-online-173d9c7f112d
| false
| 632
| null | null | null | null | null | null | null | null | null |
3D Printing
|
3d-printing
|
3D Printing
| 9,416
|
Flagg3d
|
Flagg3D.com (www.flagg3d.com) is a AI platform exclusively designed to detect and flag unsuitable and prohibited 3D content on sites, platforms, networks & more
|
1fc38f855a61
|
press_77706
| 1
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
d8c4ad013dfd
|
2018-07-12
|
2018-07-12 05:28:06
|
2018-07-12
|
2018-07-12 18:29:06
| 2
| false
|
en
|
2018-07-13
|
2018-07-13 16:51:46
| 8
|
174105d3b1d3
| 3.492767
| 33
| 1
| 0
|
Startup. Big vision. Fast moving industry. Product market fit. It’s time to scale.
| 5
|
How to scale 16x in 6 months on Ethereum
Startup. Big vision. Fast moving industry. Product market fit. It’s time to scale.
I lead the engineering team at Numerai. This is the story of how we grew our data science tournament, and how we scaled NMR to become the most used staking token on Ethereum.
Pick one key metric and set an aggressive target
Before you work on scaling, it is important to know what to scale. Ask yourself this question: what is the one thing I want my team to focus on in the next six months?
Monthly active users (Facebook)? Trips per week (Uber)? Transactions per second (Ethereum)? If you are not sure what metric to focus on, start with this lecture by Alex Schultz, VP of Growth at Facebook.
Numerai is a quant hedge fund controlled by a network of data scientists competing in a weekly machine learning tournament. The goal of the data scientists is to predict future movements in the global equities market. To win money from our weekly prize pool, data scientists have to make accurate predictions and stake Numeraire (NMR) on their own predictions.
The game theory behind staking and payouts helps Numerai gauge the confidence of predictions and elegantly mitigates sybil attacks — slyfox
The one key metric we chose to scale at Numerai over the past six months is the number of stakes per week. More stakes means more data scientists making predictions with confidence. More stakes means more signal for our meta-model to use in trading.
When I joined the company in January 2018, NMR was barely six months old and we had 59 stakes per week. We set out to 10x stakes per week by the end of June. It was an aggressive goal — within the realm of possibility but just barely. It forced us to think big. It forced us to focus.
If you want to learn more about setting goals and focusing your team’s efforts, I highly recommend reading about OKRs in Radical Focus.
Radical Focus: Achieving Your Most Important Goals with Objectives and Key Results
"This book is useful, actionable, and actually fun to read! If you want to get your team aligned around real…www.amazon.com
Find and eliminate your bottlenecks systematically
You have your one key metric and have set an aggressive goal. Now it’s time to move the needle. But where do you start?
Any improvements made anywhere besides the bottleneck are an illusion. Astonishing, but true! Any improvement made after the bottleneck is useless because it will always remain starved, waiting for the work from the bottleneck. And any improvements made before the bottleneck merely results in more inventory piling up at the bottleneck. — Gene Kim, The Phoenix Project
The first bottleneck was painfully obvious. The staking prize pool for round 93 was $6000, but the non-staking prize pool was $37180 (2000 NMR * $18.59/NMR on February 3). What started out as an on-ramp for new data scientists turned into the main focus. In round 94 we consolidated both USD and NMR payouts into a single prize pool for staking. Now there was only one game to play. Stakes were up ~2x at 143.
The second bottleneck we tackled was new user growth. With the non-staking on-ramp gone, it became difficult for new users to start staking since they had nowhere to get their first NMR to stake with. Further, while our website was functional, it wasn’t exactly easy to use or understand for new users. In March, we wrote a comprehensive tutorial and launched two airdrops for students and Kaggle users respectively. In April, we completely redesigned our website with a focus on new user experience. By round 110, stakes were up ~5x at 289.
While the new user base grew, the team shifted focus to a new dimension of staking growth: stakes per user. With an active and growing base of data scientists, we were confident that we could multiply the signal we get from each user by asking them to make predictions on multiple targets. In round 111 we launched multiple tournaments; stakes were up ~16x at 973.
Conclusion
enough said
We have made a lot of progress in the past six months, but we are only just getting started. We have even bigger plans for the next six months. Stay tuned.
More resources
Join our telegram group!
Numerai Telegram
Come chat with us about Numerai, NMR, data science and the future of decentralized financet.me
Read more about Numerai and the future of decentralized finance!
How Numerai Works
A step by step guide for data scientists to start competing on Numerai.numer.ai
Blockchain-based Machine Learning Marketplaces
Machine learning models trained on data from blockchain-based marketplaces have the potential to create the world’s…medium.com
Numeraire, The Cryptocurrency Powering The World Hedge Fund
We are making Numeraire (NMR) 5x more valuable to use by increasing the payouts in our staking tournament.medium.com
|
How to scale 16x in 6 months on Ethereum
| 363
|
how-to-scale-16x-in-6-months-on-ethereum-174105d3b1d3
|
2018-07-15
|
2018-07-15 00:50:54
|
https://medium.com/s/story/how-to-scale-16x-in-6-months-on-ethereum-174105d3b1d3
| false
| 824
|
A new kind of hedge fund built by a network of data scientists.
| null | null | null |
Numerai
|
contact@numer.ai
|
numerai
|
MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,HEDGE FUNDS,FINANCE,BLOCKCHAIN
|
Numerai
|
Data Science
|
data-science
|
Data Science
| 33,617
|
Anson Chu
|
VP Engineering @ Numerai
|
b96a5cf08eee
|
ansonschu
| 221
| 349
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-12-08
|
2017-12-08 03:15:05
|
2018-01-08
|
2018-01-08 02:16:05
| 0
| false
|
en
|
2018-01-08
|
2018-01-08 02:16:05
| 4
|
174119dcbd63
| 2.641509
| 0
| 0
| 0
|
Allison Hegel won a scholarship to Metis’ Live Online Intro to Data Science course through Women Who Code’s weekly publication the CODE…
| 5
|
Allison Hegel — WWCode and Metis Scholarship Winner
Allison Hegel won a scholarship to Metis’ Live Online Intro to Data Science course through Women Who Code’s weekly publication the CODE Review. In this article, we explore her aspirations, her dreams, her experiences at Metis, and how she plans to use that opportunity to achieve her goals.
What do you do day-to-day for work/education?
I’m just finishing up my PhD in English, where I’m working on a project analyzing book reviews on sites like Goodreads and Amazon. Data science is making its way into lots of unexpected fields like mine, and it allows us to look at a much wider range of books and a more democratic sample of opinions than we have in the past. It’s great to be able to switch between reading a theory of the internet in the morning and coding in the afternoon!
Why do you like being a technical person? What misconceptions do people have about tech?
I’m wary when people see tech as a magic solution to the world’s problems. We technical people have some powerful tools at our disposal, but those tools come with plenty of problems of their own. The more we are willing to question what our computer spits out, the better.
What is a major challenge you’ve faced in tech?
Documentation! So many of the latest and greatest methods and packages are poorly (if at all) documented or use tons of jargon, which makes it tough for people who are learning to pick them up and deal with the many errors that inevitably pop up. I’ve really come to appreciate clean, well-documented code, and I’ve vowed to make sure my own projects make sense to people who aren’t me.
Have you helped others overcome challenges in tech?
Most of the people I work with are on the humanities side rather than the technical side, but whenever I can, I try to serve as a translator and intermediary between the “two cultures.” It’s actually really difficult to be able to explain technical things clearly, but it’s probably the best way to truly learn something. I think that humanists have lots to teach tech people, and vice versa, so I really value my role in bringing the two together.
Any tech pro-tips? Any tips for people who want to follow the kind of path you’ve had?
Learning data science takes time! I’ve been extremely lucky to have a flexible schedule as a graduate student, but there have still been plenty of late nights staring at cryptic error messages when nothing seems to be going right. I would suggest taking lots of breaks, but most importantly building a community of people to learn with so that moments when you’re stuck become moments you can reach out to other people and know that you’re not the only one struggling, and you’ll get through it!
What most excites you about your career and what you’re hoping to do in Data Sci?
Data science touches everything we do these days, not just online but also behind the scenes, in who gets offered loans or admitted to college. I hope that in my career I can bring a humanistic perspective to this mathematical field, and try my best to understand the human impact of data science decisions.
What was your experience learning at Metis like?
The Metis Data Science course was the most interactive online course I’ve ever taken. The instructor and TA made the class feel like we were all in the same room, and we built up a really supportive community over the six weeks. This made learning a huge amount of material much more manageable.
What project did you work on for your Metis course (provide some details please)?
I had a blast applying what I learned in the Metis course to real data. I chose the Yelp Open Dataset to work with because I was interested in what kinds of attributes people value most in a business. I built a regression model to predict a business’ star rating using its attributes, like whether it caters or has bike parking. You can check it out here.
Originally published at www.womenwhocode.com.
|
Allison Hegel — WWCode and Metis Scholarship Winner
| 0
|
allison-hegel-wwcode-and-metis-scholarship-winner-174119dcbd63
|
2018-01-08
|
2018-01-08 02:33:27
|
https://medium.com/s/story/allison-hegel-wwcode-and-metis-scholarship-winner-174119dcbd63
| false
| 700
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Women Who Code
|
We are a 501(c)(3) non-profit organization dedicated to inspiring women to excel in technology careers. https://www.womenwhocode.com/
|
f05962335e24
|
WomenWhoCode
| 41,260
| 978
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-04
|
2018-04-04 08:43:57
|
2018-04-04
|
2018-04-04 08:45:23
| 0
| false
|
en
|
2018-04-04
|
2018-04-04 08:45:23
| 0
|
17420cde8901
| 3.358491
| 0
| 0
| 0
|
[This post was originally published in February 2010]
| 3
|
How do you build a data warehouse without any money?
[This post was originally published in February 2010]
How do you build a data warehouse without any money?
Now I don’t mean data warehousing on the cheap. I’m talking about a data warehouse that doesn’t have any record of money in it — no sales transactions, no insurance claim amounts, no bank balances — in fact no pounds and pence for you to add up. Recently I have been working on a data warehouse like this for a logistics organisation. All the records were about parcels leaving here and arriving there but nothing about money. Building a data warehouse with data like this presents some new challenges and sometimes requires a different approach.
What facts do you report on?
If we don’t have any money in the data warehouse then how are we going to measure or report on what is happening? What are the facts in the fact tables? If there were some monetary amounts we would clearly need to put these in the fact tables so we can add them up by various categories but without some monetary fields we have to identify some other kinds of facts to report on. These are likely to be either (1) things we can count, such as number of parcels sent or (2) elapsed times between events, such as time from sending a parcel to delivery. One problem here is that we could count all kinds of things and measure a number of elapsed times between events but some of these measures may not be particularly relevant or useful.
What are the reporting requirements?
Here is a big lesson to learn: Don’t try to model a moneyless data warehouse without a clear understanding of the business domain and the reporting requirements. If you try to guess what facts need to be modelled you will probably guess wrong. On our recent project we managed to get a pretty good understanding of the business, but the client was not in a position to describe reporting requirements until halfway through the development. By this stage the data model had been fixed and our only way to restructure the data was to construct a number of marts on top of the basic data model. Although we got to the end point, the route was longer and more difficult than it would have been otherwise.
Data quality
And here’s another lesson: If the source data doesn’t include money then you should expect a few holes. If we are dealing with monetary amounts then missing or corrupt data in source systems will have a real impact on things that people care about — sales, bonuses, claim amounts, bank balances, profits etc. Generally, the quality of money-related data is very good. When problems are identified they get fixed. By contrast, data that is not money-related can be all over the place without anyone really noticing or doing anything about it. Before you finalise your data warehouse design I strongly advise conducting a data quality audit to determine whether the source fields are consistently populated and the range of values you can expect.
In our recent project we found that one particular data item was populated less than 1% of the time. This was a significant problem, as a number of the online reports required the user to select a value for this item as a parameter. When the report was generated it contained less than 1% of the expected values. If we had known about the data quality problem earlier we could have saved a few weeks of effort and been able to advise the client on a more appropriate design for the reports.
If your reports are based on elapsed times then data quality problems in the recorded times will have a major impact. You may end up reporting that parcels took five years to deliver while others arrived instantaneously. This is the kind of problem you just don’t get with monetary fields. Your challenge is working out what to do with these rogue figures. In our case our users wanted functionality to filter out these outliers before running the report. If you are going to do anything like this you will need to think carefully about the design.
Time doesn’t accumulate like money
If most of your reporting is time-based, such as reporting the time taken to deliver a parcel, you need to consider what this will look like on an aggregate or summary report. With money it’s easy — you add it up — but time doesn’t work like that. Your users will not be interested in knowing that the sum total of elapsed time for delivering parcels was 23,267 days. It is more likely that they will want to know the mean time and possibly some measure of how much the elapsed times vary from the mean. You may then have to explore terms such as standard deviation, percentiles and coefficient of variation, trying all the time not to lose your users in academic discussion.
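The kind of summary this implies can be sketched in a few lines of Python (the delivery times and cut-off thresholds below are invented purely for illustration):

```python
import numpy as np

# Hypothetical elapsed delivery times in hours, including two rogue records:
# a negative value (clock skew between events) and an absurdly large one
# (a delivery event that was never logged).
hours = np.array([22.0, 25.5, 24.0, 30.0, 26.5, 23.0, -5.0, 43800.0])

# Filter the outliers before summarising, as the report users requested.
valid = hours[(hours > 0) & (hours <= 24 * 14)]  # keep 0 to 2 weeks

mean = valid.mean()
p50, p95 = np.percentile(valid, [50, 95])
cv = valid.std() / mean  # coefficient of variation: spread relative to the mean
```

Unlike a monetary fact, none of these figures come from a simple sum; each aggregate needs an explicit choice of statistic, which is exactly the design discussion to have with users up front.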
So can you build a data warehouse without any money?
Yes you can, and we did. The end result may look rather different from a money data warehouse but the final proof is whether it delivers the business intelligence that users require.
|
How do you build a data warehouse without any money?
| 0
|
how-do-you-build-a-data-warehouse-without-any-money-17420cde8901
|
2018-04-04
|
2018-04-04 08:45:24
|
https://medium.com/s/story/how-do-you-build-a-data-warehouse-without-any-money-17420cde8901
| false
| 890
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
James Ochiai-Brown
|
Principal Business Solutions Manager at SAS. Big Data Architect specialising in Analytical Platform, Analytics Operating Model and delivering business value.
|
79a2073e8129
|
jochiaibrown
| 2
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
5e5bef33608a
|
2018-09-14
|
2018-09-14 12:24:08
|
2018-09-19
|
2018-09-19 10:08:00
| 2
| false
|
fr
|
2018-09-19
|
2018-09-19 10:08:00
| 9
|
1743963534d2
| 2.451258
| 0
| 0
| 0
|
Au croisement entre art et technologie, le creative coding est une manière de présenter les données qui porte du sens en elle-même : il…
| 5
|
What is creative coding? 💻🎨
At the crossroads of art and technology, creative coding is a way of presenting data that carries meaning in itself: the idea is to convey a message through the way the data is organized and the connections that can be drawn between the data points.
One example (my favorite): Google Arts & Culture is a department of Google that makes accessible a database of six million artistic and cultural works (photos, videos, paintings, drawings, posters, etc.). Creative coders have designed several ways of navigating this database that question traditional curation: the organization of works by artistic movement, by artist, or by period. They used artificial intelligence algorithms to present the data in a form that is expressive, not merely functional, organizing works by visual similarity, by color palette, or by visual closeness to a face or a drawing.
They have opened new doors for discovering cultural collections. Of course, creative coding does not replace the expertise of art historians; it complements it: rather than searching for “impressionism” or “Picasso”, you can draw a silhouette or take a selfie, and be confronted with little-known, surprising, intriguing works.
X Degrees of Separation, by Mario Klingemann, is inspired by the sociological theory that there are six handshakes between anyone on Earth and the president of the United States. Today, thanks to social networks, the number of degrees of separation between two human beings is estimated at three to four. In the 1960s, Stanley Milgram (yes, the one from the famous experiment) sent letters addressed to the president of the United States to a few of his friends, asking them to pass the letters on to someone who would know someone who would know... and so on. On arrival, the letters carried six stamps on average. Mario Klingemann set out to recreate this experiment with an art collection, tracing a link between two works based solely on their visual similarity. The experience takes no account of metadata: whatever the period, the artist, or the movement, it can connect works several millennia old to nineteenth-century engravings.
X Degrees of Separation, by Mario Klingemann
Who are the creative coders?
Technological artists, engineers with an artistic background, artists curious about and gifted at programming, out-of-the-box designers, the unclassifiable... there is no single path or discipline that leads to creative coding. It is still a space of free creation, at the crossroads of the arts, new media, and technology, which unfolds in different forms and on different media: websites, virtual reality, augmented or mixed reality...
Turn up the sound and move your mouse to discover the multimedia work of Glass Can!
You can also find them at the GROW festival (12–16 November 2018 in Paris), which since 2017 has brought creative coders together in Paris every year to explore what technology brings to art, and how art enriches technology.
Sincere thanks to the whole Google Arts & Culture Lab team for their wonderful work, which makes cultural works more accessible and lets us discover and understand them differently by making new (and sometimes surprising!) connections.
Creative coding highlighted the visual closeness between a nude photograph and ballet archives
|
Qu’est-ce que le creative coding ? 💻🎨
| 0
|
quest-ce-que-le-creative-coding-1743963534d2
|
2018-09-19
|
2018-09-19 10:08:33
|
https://medium.com/s/story/quest-ce-que-le-creative-coding-1743963534d2
| false
| 548
|
Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity.
|
becominghuman.ai
|
BecomingHumanAI
| null |
Becoming Human: Artificial Intelligence Magazine
|
team@chatbotslife.com
|
becoming-human
|
ARTIFICIAL INTELLIGENCE,DEEP LEARNING,MACHINE LEARNING,AI,DATA SCIENCE
|
BecomingHumanAI
|
Creative Coding
|
creative-coding
|
Creative Coding
| 209
|
Laura Sibony
| null |
a9e4e9afd0ba
|
Sibony
| 14
| 40
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-04-18
|
2018-04-18 09:39:09
|
2018-04-18
|
2018-04-18 19:20:49
| 1
| false
|
en
|
2018-04-18
|
2018-04-18 19:20:49
| 13
|
174520d593a6
| 2.086792
| 1
| 1
| 0
|
Business implications of artificial intelligence are ranging from how we consume content to the future of our jobs.
| 5
|
No way your business can escape AI
Picture by Asen Velichkov
Business implications of artificial intelligence are ranging from how we consume content to the future of our jobs.
It was a pleasure to host Antti Merilehto in Tampere with his talk on the business implications of artificial intelligence. Utilizing data to produce insights and meticulously validating ideas changes the way we introduce products.
How would it change your business if you were able to introduce products that your clients instantly want?
Antti was inspired to write his book after he moderated an AI panel at Slush. Three companies in very different fields talked about using machine learning as a tool. Iris.ai is on a mission to read and understand all scientific papers and thus make the research process faster and better. Onfido is building an engine that verifies identity in less than 60 seconds. SearchInk (now omni:us) delivers structured data from highly variable documents, including handwritten text.
Anita Schjøll Brede (CEO & Co-Founder of Iris AI), Eamon Jubbawy (COO of Onfido), Sofie Quidenus (CEO & Co-Founder of SearchInk) and Antti Merilehto (Country Manager at Finch Finland) discussing the topic “AI in 2016: The Real Deal” at Slush 2016
One takeaway for me was that competition tightens across borders and industries. We expect the same level of customer experience from Traffic Authority and Tax Administration as from Spotify and Apple.
AI as a new business opportunity
The government is working to make Finland the best country in the world in which to develop and utilize artificial intelligence. Finland ranks second in Europe, after Switzerland, in the number of AI companies per capita.
Recently Valohai, a machine learning platform-as-a-service company, raised $1.8M in funding and set out to conquer the US market.
Another startup Ultimate.ai made it to 2017 Class of the SAP.iO Foundry, powered by Techstars Accelerator. Now this startup spends a lot of time in Berlin boosting its growth. The product gives customer service agents the AI tools they need to provide faster, smarter responses. Their suggestion engine, trained on historical chat data, works in partnership with agents, providing real-time reply suggestions.
What can you do to get into AI
Here are a couple of suggestions:
Enroll in the machine learning online course created by Stanford University on Coursera. The course is heavy on math and programming. It is endorsed by Risto Siilasmaa, Chairman of Nokia and F-Secure.
Be one of the first to try the Elements of AI course. The University of Helsinki partnered with Reaktor to make this free online course. It doesn’t require complicated math or programming skills. Rather, the course helps equip you with the understanding and skills required to assess the world we live in through the lens of AI.
Read Tekoäly — matkaopas johtajalle by Antti Merilehto. It is written in Finnish and, according to the author, an English translation is coming at the end of the year.
Are you in Tampere? Check out:
Seminar on AI for manufacturing industry by Eficode
Brave New World, AI Applied event by Futurice
The article is inspired by the “Everything you need to know about Artificial Intelligence” event where Antti Merilehto presented his book “Tekoäly — matkaopas johtajalle”. It was held on the 10th of April at Tribe Tampere and organised by JCI United.
|
No way your business can escape AI
| 10
|
no-way-your-business-can-escape-ai-174520d593a6
|
2018-04-19
|
2018-04-19 12:09:14
|
https://medium.com/s/story/no-way-your-business-can-escape-ai-174520d593a6
| false
| 500
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Daryna Barsukova
| null |
70553c605ec
|
darynabarsukova
| 111
| 145
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-12
|
2018-09-12 17:27:18
|
2018-09-12
|
2018-09-12 17:30:16
| 1
| false
|
en
|
2018-09-12
|
2018-09-12 17:39:45
| 4
|
1747192dc1d3
| 0.562264
| 1
| 1
| 0
|
Indigenous peoples have such a unique perspective and knowledge base — We do have great challenges yet such great opportunities. I believe…
| 5
|
First Nations’ Artificial Intelligence.
Shout out to Common for acknowledging Native American/First Nations work in Artificial Intelligence on a worldwide stage at the ChainXchange Conference. I have no doubt in my mind that Indigenous peoples and methodologies will contribute in a big way to the tech/digital economy.
Indigenous peoples have such a unique perspective and knowledge base — We do have great challenges yet such great opportunities. I believe with the help of our communities, elders, mentors we can take the lead as Indigenous peoples in the digital economy and create a brighter future for the next generation.
#TheDailyBear #IndigenousPeoples
|
Shout out to Common for acknowledging Native American/First Nations work in Artificial Intelligence…
| 50
|
shoutout-to-common-for-acknowledging-native-americain-first-nations-work-in-artificial-intelligence-1747192dc1d3
|
2018-09-12
|
2018-09-12 17:39:45
|
https://medium.com/s/story/shoutout-to-common-for-acknowledging-native-americain-first-nations-work-in-artificial-intelligence-1747192dc1d3
| false
| 96
| null | null | null | null | null | null | null | null | null |
Commons
|
commons
|
Commons
| 441
|
Sheldon Anderson
|
Social - First Digital Marketer / Podcast - Vlog Producer / Aspiring Aboriginal Entrepreneur
|
8ccec576cb0d
|
andersonsheldon
| 25
| 171
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-09-08
|
2018-09-08 02:25:57
|
2018-09-23
|
2018-09-23 04:00:46
| 2
| false
|
en
|
2018-09-23
|
2018-09-23 04:00:46
| 1
|
1747b053154d
| 3.424843
| 0
| 0
| 0
|
So, the first semester of my last year of undergraduate studies is underway. One of my courses this semester is Senior Comprehensives…
| 1
|
CS Senior Project — A CNN approach to identifying knitted stitches (Part 1)
So, the first semester of my last year of undergraduate studies is underway. One of my courses this semester is Senior Comprehensives, which is an attempt for students to show that they have actually learned something in their major in the past three years. For my computer science major, we were tasked with creating something — a research paper, a mobile app, a web extension — that will have relevance to both the computer science community and the larger population.
After thoroughly struggling with deciding on a topic, I landed on combining my hobby of knitting with my interest in neural networks (although my previous experience with the latter was pretty much zilch). I pitched my idea to the Ravelry community and was surprisingly met with positive feedback, along with some constructive criticism and potential challenges I may face.
I’m going to be documenting my progress (not 100% willingly) on a public blog (aka right here), for people to keep up-to-date with the project.
The Plan
The overarching goal for this project is to train a Convolutional Neural Network on images of knitted stitches, which will then be able to classify pictures uploaded by users (sounds ~fancy~ right?). The hope is that the model will work for a few stitches (most likely 3) but can be expanded if time allows, or continued after the semester’s end. Ideally the model would be wrapped into a mobile application or mobile-friendly website for easy camera access, but depending on time constraints, this may or may not occur.
So at this moment you may ask (especially if you don’t know the difference between knitting and crocheting — and that’s a whole different topic for another time), why does this matter? Who would even care about this? Well knitters, for one, would definitely care. Currently, trying to find a name/pattern for a specific stitch from thousands of possible stitches is akin to the needle-in-a-haystack cliche. There are a couple of options:
a) ask a friend/fellow knitter! (probably the best option atm)
b) google relentlessly for hours (ie “stitch with holes and knit 2 purl 1”)
c) look through a book/website that has pictures! (sounds great until you realize there are ~2,000 images..)
But in addition to being a time-saver, this project would be a very interesting look into computer vision of patterns. No longer are we trying to tell whether an image is of a horse or a cat! No sirree, we are trying to tell bumps in a line versus bumps at a diagonal (i.e. garter stitch and seed stitch), which is arguably much more difficult as there are no clear object borders and very little intensity differences:
seed stitch
garter stitch
So now to the nitty gritty: I’m planning on making this project more-or-less research-based, by tweaking various aspects of the training model and recording the results. These will be most likely detailed here. The current plan of attack is to:
collect images of stockinette and garter stitches (~1000 collected to date from Ravelry pictures) (side note: as this is non-commercial, education-based, and the images cannot be reproduced from my model, I’ve discovered that it falls under the fair use policy, and copyright does not apply(?). please correct me if I’m wrong because I would prefer to avoid legal issues)
organize images into “training”, “validation”, and “testing” folders (I’ve read that it should be around a ratio of 70:20:10, respectively, although I may change this after more research)
feed data into Python. I’m currently debating between using PyCharm or Jupyter Notebooks. I’ve been leaning toward Jupyter as each cell can be run individually, which makes debugging MUCH quicker.
begin creating a model and also learning about convolutional neural networks and the like (this may take a while)
etc
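The folder split in step two can be sketched in plain Python. The 70:20:10 ratio and the seed below are placeholders (the post itself says the ratio may change after more research):

```python
import random

def split_dataset(paths, ratios=(0.7, 0.2, 0.1), seed=42):
    """Shuffle image paths, then cut into train/validation/test lists."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    shuffled = list(paths)
    random.Random(seed).shuffle(shuffled)  # deterministic shuffle
    n = len(shuffled)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train, val, test = split_dataset([f"img_{i}.jpg" for i in range(100)])
```

Fixing the seed keeps the split reproducible between runs, which matters once you start comparing model tweaks against each other.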
Current Progress
I’ve currently collected a total of over 1000 images of the garter stitch, stockinette stitch, and seed stitch. The dataset for seed stitch images is slightly larger than the others, so I am planning on finding more images to even out the amount of training data. I used the Keras functions ImageDataGenerator() and flow_from_directory() to feed batches of images into my program.
During the first few days, I ran into some issues with converting the images to grayscale. I used a dot product that multiplies each R, G, and B matrix by a predetermined value, resulting in a one-channel grayscale image (shoutout to StackExchange for the help). Of course I then had to deal with the issue of why matplotlib was displaying the picture in neon colors instead of gray… which was a simple fix once I realized what the ‘cmap’ argument does. Anyways, on to trying to train the data/learn about neural networks in Keras!
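The grayscale step boils down to a weighted dot product per pixel. A framework-free sketch using the common BT.601 luminance weights (the values StackExchange answers usually give; the post doesn't state which weights were actually used):

```python
def to_grayscale(rgb_image):
    """Collapse each (R, G, B) pixel to a single luminance value."""
    weights = (0.299, 0.587, 0.114)  # standard BT.601 luma coefficients
    return [
        [sum(channel * w for channel, w in zip(pixel, weights))
         for pixel in row]
        for row in rgb_image
    ]

# A 1x2 image: one pure-white pixel, one pure-red pixel
gray = to_grayscale([[(255, 255, 255), (255, 0, 0)]])
```

The weights sum to 1.0, so a white pixel stays at full brightness while a pure-red pixel drops to roughly 30% — which is also why `plt.imshow` needs `cmap='gray'` to render the single channel as gray rather than a default colormap.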
|
CS Senior Project — A CNN approach to identifying knitted stitches (Part 1)
| 0
|
cs-senior-project-a-cnn-approach-to-identifying-knitted-stitches-part-1-1747b053154d
|
2018-09-25
|
2018-09-25 03:56:50
|
https://medium.com/s/story/cs-senior-project-a-cnn-approach-to-identifying-knitted-stitches-part-1-1747b053154d
| false
| 806
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Charlotte Cullip
| null |
a9d8a3cd84dd
|
charlottecullip
| 11
| 12
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-03-03
|
2018-03-03 22:21:54
|
2018-03-03
|
2018-03-03 22:22:06
| 0
| false
|
en
|
2018-05-02
|
2018-05-02 22:05:26
| 0
|
17480fda8430
| 1.373585
| 0
| 0
| 0
|
published on February 15, 2018 on voicesofvr.com, listened on March 3, 2018
| 1
|
NVIDIA for the future!
published on February 15, 2018 on voicesofvr.com, listened on March 3, 2018
In this Voices of VR podcast, Omer Shapira, a senior VR designer and engineer at NVIDIA, talks about training artificial intelligence and robots in VR. Shapira’s main focus is designing the human aspects of VR interactions. Last year, NVIDIA came out with something they call Project Holodeck, a VR environment that mimics the real world through sight, sound, and haptics. It even gives the user “hands” and full dynamic control in the space. Its potential is endless. For instance, at a demo, NVIDIA took an audience inside the design of a Koenigsegg Regera supercar. This allows the audience to view the car at full scale and see all of its components. At SIGGRAPH, NVIDIA showed how VR can be used to train AI and robots in real-time rendering and realistic test scenarios. There is also no risk since it’s all virtual! In the demo, there was a robot working with dominoes and in another room, there was a VR headset where a person could train the robot how to play with the dominoes. There is also the possibility of having one machine that knows how to play teach another machine through reinforcement learning, similar to how one teaches a child. This kind of reminded me of Neo in The Matrix, where he learns kung fu once the information is inputted into his mind.
Shapira makes it apparent that the more we interact with AI and robots, the more we should be concerned that they are programmed well and confident that they will do the right thing. I’ve never thought about this before but it does make sense. I guess I always trusted that the ones who created the technology would create something that performs its intended purpose while being safe. Now I see how that is a lot to ask. Shapira continues and tells us what he hopes NVIDIA’s Holodeck will be used for, such as training robots to help the disabled efficiently and effectively. I wonder if we will have to train robots specifically for a person’s personality as well, but that’s for a later time.
|
NVIDIA for the future!
| 0
|
nvidia-for-the-future-17480fda8430
|
2018-05-02
|
2018-05-02 22:05:27
|
https://medium.com/s/story/nvidia-for-the-future-17480fda8430
| false
| 364
| null | null | null | null | null | null | null | null | null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Audrey Lee
|
for future reality
|
92f8f649c77c
|
audzo
| 35
| 23
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
7a0ce6aee311
|
2018-02-06
|
2018-02-06 10:42:27
|
2018-02-12
|
2018-02-12 13:01:02
| 1
| false
|
en
|
2018-02-12
|
2018-02-12 15:07:59
| 5
|
174d4190bd86
| 1.415094
| 60
| 1
| 0
|
Introducing the Element AI lab blog
| 2
|
Hello, World!
Introducing the Element AI lab blog
Element AI has just turned one year old. Over the past year, we’ve built a world-class AI lab embracing both applied and fundamental research in machine learning.
From the start, we’ve organized around the ideas of transparency and openness that have made our field such an exciting place to be over the last few years. If big data and computing power have been the fuel for the spectacular growth of deep learning, then open-source publications and code have been the oxygen that turned the spark into a bonfire.
For that reason, we’re excited today to be launching the Element AI Lab Blog. Blog posts complement more “traditional” modes of publishing by giving us the space to discuss ideas informally, document work in progress, and engage with richer forms of multimedia explanation. Many of us fondly recall taking our first steps in deep learning, aided by now-classic posts: Chris Olah’s beautiful articles on representations or Andrej Karpathy’s delightful dissection of RNNs. We hope that the Element AI Lab Blog can play the same role for future practitioners.
Wherever you live on the AI Island, we think you’ll find something interesting to read here
We’ll be embracing the whole gamut of content, from detailed discussions of research at Element AI and elsewhere, to practical tips for training and deploying models on the cloud. If you’re a research scientist at a prestigious university or a high schooler tinkering in your bedroom, we hope that you find something here to inform and inspire you.
Philippe Beaudoin, Senior Vice President of Research, Element AI
Archy de Berker, Applied Research Scientist, Element AI
Simon Hudson, Managing Editor, Element AI
To kick off our debut, we’ve included three blog posts to jump right into.
Bahador Khaleghi offers a thorough and thoughtful overview of the recent NIPS conference.
Jeffrey Rainy introduces our video style transfer project, Mur.ai, and follows up with a closer look at our process of using noise-resilience to stabilize live video.
|
Hello, World!
| 564
|
hello-world-174d4190bd86
|
2018-05-03
|
2018-05-03 02:44:11
|
https://medium.com/s/story/hello-world-174d4190bd86
| false
| 322
|
Scientists and developers at Element AI discuss the state of the art in artificial intelligence research and deployment.
| null | null | null |
Element AI Lab
| null |
element-ai-research-lab
|
MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,DEEP LEARNING,COMPUTER SCIENCE,RESEARCH
| null |
Artificial Intelligence
|
artificial-intelligence
|
Artificial Intelligence
| 66,154
|
Philippe Beaudoin
|
SVP Research at Element AI, Ex Google engineer, now trying to beat the singularity to the finish line.
|
a930001a662e
|
philippe.beaudoin
| 487
| 265
| 20,181,104
| null | null | null | null | null | null |
0
|
import tensorflow as tf  # written against the TF 1.x API of the time

def _batch_normalization(tensor_in, epsilon=.0001):
    # Per-pixel mean and variance across the batch dimension
    mean, variance = tf.nn.moments(tensor_in, axes=[0])
    # Divide by the standard deviation (sqrt of the variance) for unit std;
    # epsilon guards against division by zero
    tensor_normalized = (tensor_in - mean) / tf.sqrt(variance + epsilon)
    return tensor_normalized

# load_mnist, FLAGS and batch_size come from the surrounding script
train_images, train_labels, validation_images, validation_labels, test_images, test_labels = load_mnist(FLAGS.data_dir)

# make placeholders for the dataset
features_placeholder_train = tf.placeholder(dtype=tf.float32, shape=train_images.shape)
labels_placeholder_train = tf.placeholder(dtype=train_labels.dtype, shape=train_labels.shape)

# make a dataset from the placeholders
dataset = tf.data.Dataset.from_tensor_slices((features_placeholder_train, labels_placeholder_train))
# shuffle
dataset = dataset.shuffle(buffer_size=10000)
# batch
dataset = dataset.batch(batch_size)
# normalize the images in each batch, passing labels through unchanged
dataset = dataset.map(lambda images, labels: (_batch_normalization(images), labels))
| 7
| null |
2017-10-25
|
2017-10-25 18:30:24
|
2017-10-25
|
2017-10-25 19:55:27
| 0
| false
|
en
|
2017-10-25
|
2017-10-25 19:55:27
| 3
|
174d6e0f9905
| 1.143396
| 7
| 2
| 0
|
Tensorflow’s (relatively) new Dataset API is really great and makes the process of getting data into Tensorflow much less of a headache…
| 4
|
TensorFlow Dataset API implementation of preprocessing batch normalization
TensorFlow’s (relatively) new Dataset API is really great and makes the process of getting data into TensorFlow much less of a headache than the previous system of runners/queues, or whatever hacky methods one used to avoid mucking around with the former.
In addition to adding clean control of shuffling/batching, TensorFlow’s Dataset API also lets us use custom preprocessing routines through the dataset.map functionality. The documentation includes an example of adding noise to input data, but arguably far more useful is using this function to perform batch normalization.
Batch normalization implemented for data preprocessing is exactly what it sounds like: instead of normalizing over an entire dataset, we normalize inputs batch by batch. This is particularly useful in situations where a) the dataset is too large to fit in memory at once (although you could get around this, since you don’t technically need to load the entire thing to perform mean/variance normalization) or b) you are generating or receiving data on-the-fly. This latter case is only becoming more common in applications like autonomous robotics and computer vision programs, such as Apple’s new Face ID, that train continuously.
Here’s a basic map function that does pixel-wise normalization (zero mean and unit standard deviation). Epsilon is a small parameter to ensure we don’t divide by zero.
Here’s how you would implement this in the larger Dataset flow.
Now you just have to make an Iterator for this dataset and you’re ready to go!
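Since the snippets above target the TF 1.x-era API, it may help to see what the map step actually computes. A framework-free sketch of per-position batch normalization (toy data, not the MNIST pipeline):

```python
from statistics import fmean

def batch_normalize(batch, epsilon=1e-4):
    """Normalize each pixel position to zero mean / unit std across the batch."""
    n_positions = len(batch[0])
    stats = []
    for j in range(n_positions):
        column = [image[j] for image in batch]   # same pixel across the batch
        m = fmean(column)
        var = fmean([(x - m) ** 2 for x in column])
        stats.append((m, (var + epsilon) ** 0.5))  # mean and std (plus epsilon)
    return [
        [(x - m) / s for x, (m, s) in zip(image, stats)]
        for image in batch
    ]

# Two "images" of two pixels each
normalized = batch_normalize([[0.0, 10.0], [2.0, 20.0]])
```

Each pixel position ends up centered at zero with (approximately) unit spread, computed only from the images in that batch — which is exactly what dataset.map applies to every batch as it streams through.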
|
TensorFlow Dataset API implementation of preprocessing batch normalization
| 9
|
tensorflow-dataset-api-implementation-of-preprocessing-batch-normalization-174d6e0f9905
|
2018-06-12
|
2018-06-12 02:20:01
|
https://medium.com/s/story/tensorflow-dataset-api-implementation-of-preprocessing-batch-normalization-174d6e0f9905
| false
| 303
| null | null | null | null | null | null | null | null | null |
Machine Learning
|
machine-learning
|
Machine Learning
| 51,320
|
Noah Toyonaga
| null |
73ae7c39e85c
|
ntoyonaga
| 1
| 2
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
5e2e0cebdd81
|
2017-11-06
|
2017-11-06 00:05:05
|
2017-11-06
|
2017-11-06 15:10:27
| 1
| false
|
en
|
2017-11-06
|
2017-11-06 15:10:27
| 32
|
174f5d044803
| 2.09434
| 0
| 0
| 0
|
Tuesday (Nov. 7th)
| 5
|
Phoenix Tech Events (Nov. 6th — Nov. 12th)
Tuesday (Nov. 7th)
Free JavaScript Fundamentals Course Studio Session | 5pm @ Galvanize
Galvanize Phoenix
Cult Your Brand: A Solution to Customer Retention | 5pm @ Galvanize
Galvanize Phoenix
Galvanize Data Science Discovery Session | 6pm @ Galvanize
Galvanize Phoenix
HackerNest Phoenix November Tech Social | 6:30pm @ Choice Hotels
HackerNest Phoenix Tech Socials
AUVSI Remote Pilots Council | 6:30pm @ Sheraton Mesa Hotel
Phoenix Drone User Group
Monoids, Functors, and Monads in Practice | 7pm @ DriveTime
Functional First Phoenix
Wednesday (Nov. 8th) 🐫🐫🐫
1 Million Cups Presents: PreciseMeds | 9am @ Galvanize
1 Million Cups Phoenix
Algorithm Economy Meetup | 5:30pm @ Culinary Dropout
Phoenix Cloud
Games at Boulders on Southern | 5:30pm @ Boulders on Southern
Arizona Games!
Startup Weekend PHX Speed Networking | 6pm @ Galvanize
Galvanize Phoenix
Solving Common DBA Problems with Uncommon Uses of R | 6pm @ ICE
Arizona SQL Server User Group
Vicki Mayo: TouchPoint Solutions ($15) | 6pm @ Galvanize
Startup Grind Phoenix
Beginner Developers: Lightning Talks | 6:30pm @ GoDaddy Tempe
Phoenix ReactJS
Experienced Developers: CSS-in-JS | 6:30pm @ GoDaddy Tempe
Phoenix ReactJS
Origami: The Ultimate Prototyping Tool | 6:30pm @ meltmedia
UX in Arizona
Thursday (Nov. 9th)
We Protect PHX — Phishing for Talent | 6pm @ Culinary Dropout
We Protect PHX
Mocking a Web Service in .NET | 6pm @ Entertainment Partners
Humble Rock Stars in .NET
Book Discussion: The Unpersuadables by Will Storr | 6pm @ Uncorked
Thinking and Drinking
Automate the (Test) Automation | 6pm @ Orion Health
Arizona DevOps Selenium Group
Street Fighter 30th Anniversary Party & Tourney | 6pm @ Cobra Arcade Bar
Cobra Arcade Bar
Building Your First React-Native App | 6:30pm @ Galvanize
Galvanize Phoenix
WordPress Meetup — Tempe | 6:30pm @ Endurance
Arizona WordPress Group
Git Workshop! Git Together in the Terminal | 7pm @ Galvanize
Phoenix Version Control
Saturday (Nov. 11th)
Phoenix Fan Fest ($15) | 9am @ Phoenix Convention Center
Phoenix Fan Fest
Saturday Morning Open Hack | 9am @ GoDaddy Tempe
DesertPy — Phoenix Python Group
Harry Potter Family Event ($13)| 10am @ Arizona Science Center
Arizona Science Center
Melanie Swan — Payment Channels | 11am @ GCU
Desert Blockchain
Hackfest @ DeVry CyberSecurity Range | 11am @ DeVry
Linux HackFest
Sunday (Nov. 12th)
Phoenix Fan Fest ($15) | 9am @ Phoenix Convention Center
Phoenix Fan Fest
Save the Date for These Upcoming Events
Nov. 17th — 19th | Startup Weekend ($25)
Dec. 5th | A Celebration of Women & Girls in Tech & Startups
Free JavaScript Fundamentals ( Online Course & Live Sessions)
jsfundamentals.eventbrite.com
Beginning your web development journey can sometimes be a little intimidating. That is why Galvanize has created a self-paced JavaScript fundamentals online course with over 30 hours of content, exercises, and projects. Also, in our effort to support your learning journey, every Tuesday there will be in-person studio sessions with Galvanize instructional staff to go over your questions from that week.
|
Phoenix Tech Events (Nov. 6th — Nov. 12th)
| 0
|
phoenix-tech-events-nov-6th-nov-12th-174f5d044803
|
2017-11-06
|
2017-11-06 15:10:28
|
https://medium.com/s/story/phoenix-tech-events-nov-6th-nov-12th-174f5d044803
| false
| 502
|
A comprehensive list of the awesome tech, entrepreneurship, and gaming events happening locally.
| null | null | null |
Phoenix Tech Events This Week
|
christopher.huie@galvanize.com
|
phoenix-tech-events-this-week
|
PHOENIX,TECH,WEB DEVELOPMENT,DATA SCIENCE,VIDEOGAMES
| null |
Startup
|
startup
|
Startup
| 331,914
|
Chris Huie
| null |
8af52aa51166
|
chrishuie
| 107
| 552
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-11-14
|
2017-11-14 06:15:12
|
2017-11-14
|
2017-11-14 06:21:36
| 0
| false
|
en
|
2017-12-08
|
2017-12-08 15:06:50
| 10
|
1750d4da6438
| 0.837736
| 11
| 2
| 0
|
This post is part of Month to Master, a 12-month accelerated learning project. For October, my goal is to defeat world champion Magnus…
| 5
|
M2M Day 378: It works!
This post is part of Month to Master, a 12-month accelerated learning project. For October, my goal is to defeat world champion Magnus Carlsen at a game of chess.
Today, I finished the first version of my chess algorithm, allowing me to play a solid game of chess as a human chess computer. The algorithm is ~94% accurate, which may be sufficient.
Here’s a ten-minute video, where I explain the algorithm and use it to analyze a chess game on Chess.com that I recently played:
(Update: This is the game I played against Magnus, which I later revealed)
I’m excited that it works, and curious to see how much farther I can take it.
The next steps would be to determine the chess rating of the algorithm, play some assisted games with it to see how I do, and then, assuming it’s working as expected, see if I can optimize it further (to minimize the amount of required memorization).
It’s looking like Max Chess may actually become a reality…
Read the next post. Read the previous post.
Max Deutsch is an obsessive learner, product builder, guinea pig for Month to Master, and founder at Openmind.
If you want to follow along with Max’s year-long accelerated learning project, make sure to follow this Medium account.
|
M2M Day 378: It works!
| 87
|
m2m-day-378-it-works-1750d4da6438
|
2018-06-21
|
2018-06-21 05:58:21
|
https://medium.com/s/story/m2m-day-378-it-works-1750d4da6438
| false
| 222
| null | null | null | null | null | null | null | null | null |
Learning
|
learning
|
Learning
| 37,342
|
Max Deutsch
|
Obsessive learner and product builder. Founder at http://OpenmindLearning.com. Guinea pig for http://MonthToMaster.com. Get in touch at http://max.xyz.
|
86ff34e637cf
|
maxdeutsch
| 7,893
| 1,497
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-09-20
|
2017-09-20 06:40:59
|
2017-10-14
|
2017-10-14 11:03:24
| 12
| false
|
en
|
2017-10-14
|
2017-10-14 11:04:12
| 10
|
1750dead73c2
| 5.323585
| 6
| 0
| 0
|
Part 2: A Closer Look
| 5
|
Visualized using Carto.
Mapping Traffic Accidents in Metro Manila
Part 2: A Closer Look
Some weeks ago, I made a post on my introductory project to data science: a Python program to clean and geocode a dataset containing traffic accidents in Metro Manila during 2015. In this post, I dive deeper into the geocoded dataset and reference other articles on traffic statistics in Metro Manila.
A Quick Background
At the time of my previous post, my program geocoded ~15,000 records from the 2015 dataset of ~96,000. I’ve been running my program since, and the current total is ~46,000 geocoded records.
In this post, I visualize and analyze the statistics of the geocoded dataset. I used Carto to generate all geographic visualizations, and Google Sheets to generate all statistical visualizations. For more information on the 2015 dataset, my methodologies, and sources of potential error, please refer to the previous post.
Visualization
First, a visualization of the geocoded dataset.
~46,000 geocoded traffic accidents. Visualized using Carto.
The dataset contains traffic accidents in the Metro Manila area, and the resulting maps reflect that. Many of the traffic accidents seem to happen along major highways and roads; numerous geocoded points form visible traces of the roads. These main roads are the arteries and veins of Metro Manila, accommodating hundreds of thousands of vehicles every day.
A closer look at Southern Metro Manila. Visualized using Carto.
For example, in Southern Metro Manila, many of the reported accidents were concentrated around the main highways, such as Alabang-Zapote Road (lower horizontal road), Sucat Road (upper horizontal road), or Service Road (right side, vertical road).
A closer look at Northern Metro Manila. Visualized using Carto.
However, the geocoded traffic accidents aren’t exclusive to the main highways and avenues. In many areas in Northern Metro Manila, the geocoded points aren’t concentrated along lines; instead, they appear as patches.
A closer look at the business districts Makati and BGC. Visualized using Carto.
For example, business districts Makati (left half, center) and The Fort (right half, center) are scattered with accidents outside main highways like EDSA. The lesser roads that permeate these business, commercial, and residential districts can often be narrow and crowded.
Accident density by city (approximate). Darker contains more accidents. Visualized using Carto.
Carto has a feature that visualizes accident density by regions or cities. Based on the visualization, Quezon City contains the most accidents, followed by Manila and Makati. Unfortunately, I didn’t find any visualizations for further granularity (at the barangay level).
Analysis
Now for some simple analysis of the non-location-related fields.
All accidents vs time of day. Visualized using Google Sheets.
Most reported accidents happened during the morning to early evening (7AM-8PM). Schools and businesses typically operate from the morning to late afternoon. However, office hours can extend into the evening to compensate for arriving late to work due to traffic in morning commutes. In addition, certain industries can require employees to work night or graveyard shifts.
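A chart like the one above can be reproduced by bucketing accident timestamps by hour of day; a minimal sketch, assuming timestamps come as 'YYYY-MM-DD HH:MM' strings (the format in the actual dataset may differ):

```python
from collections import Counter
from datetime import datetime

def accidents_by_hour(timestamps):
    """Count accidents per hour of day from 'YYYY-MM-DD HH:MM' strings."""
    hours = (datetime.strptime(ts, "%Y-%m-%d %H:%M").hour for ts in timestamps)
    return Counter(hours)

# Toy timestamps, not the real dataset:
counts = accidents_by_hour(["2015-03-01 07:45", "2015-03-01 08:10", "2015-03-01 19:30"])
daytime = sum(n for h, n in counts.items() if 7 <= h <= 20)
print(daytime, "of", sum(counts.values()), "fall in the 7AM-8PM window")
```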
All accidents vs day of week. Visualized using Google Sheets.
The majority of reported accidents happened during weekdays and Saturday. Sunday, however, sits well below the average of the other days of the week. According to Philippine labor policies, normal working hours are 8 hours a day, 6 days a week.
Visualized using Google Sheets.
Each accident record in the original dataset was classified into one of three types: Damage to Property, Non Fatal Injury, and Fatal. Damage to Property made up the vast majority of reported accident types (81.0%), followed by Non Fatal Injury (18.5%) and finally Fatal (0.5%).
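The percentage breakdown above is a simple share calculation over the type counts; here is a minimal sketch in Python (the records below are toy values, not the actual dataset):

```python
from collections import Counter

def type_shares(accident_types):
    """Return each accident type's share of the total, as a percentage."""
    counts = Counter(accident_types)
    total = sum(counts.values())
    return {t: round(100 * n / total, 1) for t, n in counts.items()}

# Toy records, not the real 2015 dataset:
records = ["Damage to Property"] * 81 + ["Non Fatal Injury"] * 18 + ["Fatal"]
print(type_shares(records))  # each type's percentage share of 100 records
```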
All accidents vs vehicle type. Visualized using Google Sheets.
Cars dominate the vehicle type most involved in all accidents by a large margin, followed distantly by motorcycles, trucks, and vans.
There are several factors that could explain why cars are most involved in accidents. Some notable points include…
A steady growth in automobile sales (particularly passenger cars), meaning more cars are being sold and used. This is fueled by…
More affordable cars and more affordable payment plans that lower the barrier of entry for many Filipinos.
Outsourcing jobs in the Philippines, which makes car ownership a possibility for a rising middle class.
All accidents vs gender. Visualized using Google Sheets.
Males make up the majority of those involved in traffic accidents, at 67.2%. Females follow at 27.9%. The remaining 4.9% were of unknown gender.
All accidents vs age groups. Visualized using Google Sheets.
Most people involved in the reported accidents were between 16 and 40.
There is a sharp increase in accident involvement from people aged 16–20, to 21–25. Between 16–20, people are likely graduating high school and entering college. Between 21–25, people are likely finishing college and entering the workforce.
Working-age millennials comprise the three age groups most involved in accidents (21–25, 26–30, and 31–35). In 2015, millennials (aged 15–34) made up 47.1% of the working-age Philippine population, and about a third of the Philippines’ total population. They constitute a large fraction of both the general and the working population, and that is reflected in their involvement in traffic accidents.
Conclusion
I performed some visualizations and analysis on the geocoded dataset. Some interesting observations include…
Quezon City was the city that contained the most accidents. Manila and Makati follow. There wasn’t any feature to identify granularity by barangay, however.
The majority of accidents were recorded in the morning to evening (7AM — 8PM). Most businesses and schools operate around these hours, but oftentimes people do have to work later hours to compensate for morning commutes. Finally, there are industries that require employees to work night or graveyard shifts.
The majority of accidents were recorded on weekdays and Saturdays. Fewer accidents were reported on Sundays than on the other days. This is likely a combination of labor laws and Sunday being a day of rest for many Roman Catholic Filipinos.
Cars were the vehicle type most involved in all recorded accidents, by a huge margin. A combination of a growing middle class and more affordable cars and payment plans could explain why cars top this list.
A considerable percentage of people involved in traffic accidents were working-age millennials (between 20 and 35). Millennials comprise a third of the Philippine population, and a majority of the working age Filipino population. It makes sense that many traffic accidents involved at least one millennial.
I don’t expect that geocoding the remaining dataset will change the observed trends very much. The next time I’ll post on this topic will be when I’ve processed at least two (ideally three) years’ worth of traffic accidents. I think observing the point map and statistics as they change over a number of years would reveal additional insights.
Thank you for reading! Please feel free to contact me on LinkedIn.
|
Mapping Traffic Accidents in Metro Manila
| 6
|
mapping-traffic-accidents-in-metro-manila-1750dead73c2
|
2018-05-12
|
2018-05-12 09:29:22
|
https://medium.com/s/story/mapping-traffic-accidents-in-metro-manila-1750dead73c2
| false
| 1,053
| null | null | null | null | null | null | null | null | null |
Data Science
|
data-science
|
Data Science
| 33,617
|
Miguell Malacad
|
Student | Aspiring Data Scientist
|
624dc4d8bc3a
|
miguell.malacad
| 41
| 44
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
e71c400fedb5
|
2018-04-17
|
2018-04-17 03:40:31
|
2018-04-17
|
2018-04-17 03:50:03
| 1
| false
|
en
|
2018-04-22
|
2018-04-22 23:53:34
| 6
|
175282fbabef
| 2.403774
| 3
| 0
| 0
|
By Blair Willems
| 4
|
Venture 5: Dynamic rates
By Blair Willems
Update: we’d like to clarify that each of these first canvases represent a concept we’re investigating, based on known pain points. We’re in the process of validating them more, and will choose a small subset to start testing as proofs of concept.
Ok, my turn to say hello — I’m Blair, the other venture manager at Data Ventures. Internally, Robert and I are affectionately (we hope?) referred to as Blob.
Imagine opening your first retail business, a coffee shop, and then long-term roadworks suddenly make it hard for your customers to reach, or too noisy to enjoy a book in. Do they occasionally go somewhere else for their morning flat white, or make a permanent change? In most cases, you (the business owner) suffer this loss without compensation.
Here’s where this venture can add value. By bringing together frequently updated datasets that monitor relevant performance data and travel patterns, we can enable councils to set fairer rates by reviewing them more often, and by broadening the dataset they use to measure business value.
The benefits to everyone? Fairer rates, more resilient businesses, and businesses that can concentrate on customer value rather than managing downturns caused by external factors.
For more detail on this venture, we’ve included the details from our lean canvas as plain text below, as well as links to our GitHub repository at the end. Next steps are to validate the canvases more fully and see which ones we select for the first round of MVPs.
Remember to subscribe to this blog or follow us on Twitter at @dataventuresnz to keep updated on how we’re progressing, or send us your questions or comments. You can also email us at dataventures@stats.govt.nz.
Problem
Retail tenants are often affected by unforeseen/unplanned circumstances or planned infrastructure upgrades/changes. This can be anything from an earthquake, a shopping centre opening up nearby or a roading/transport change.
These cause a change in the opportunity market for the retailers, and could be the difference between surviving and closing due to high rental prices. This is true even in cases where (if only temporarily) the changes mean it’s no longer high-street retail.
Existing alternatives
Colliers.
Market research performed by landlord/commercial property manager.
Customer segments
Local government, commercial property owners, commercial property managers.
Early adopters
Local government.
Value proposition
A retail tenant receives appropriate pricing through more frequent rates/levies/rental changes, according to their current opportunity market.
High level concept
Use relevant business performance data to help adjust and set council rates more fairly for businesses.
Solution
A dynamic model that indicates the appropriate rate/rental/lease for a location according to the factors that could affect it.
This would update more frequently than current lease terms, which often span multiple years.
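As an illustration only (the weighting, clamps, and inputs below are invented for this sketch, not part of the venture’s actual model), a dynamic rate could be expressed as a function of observed conditions:

```python
def adjusted_rate(base_rate, foot_traffic_index, disruption_index):
    """Illustrative only: scale a base rate by observed conditions.

    foot_traffic_index: current foot traffic relative to baseline (1.0 = normal).
    disruption_index: 0.0 (no disruption) to 1.0 (severe, e.g. major roadworks).
    """
    # Hypothetical weighting: rates fall as traffic drops or disruption rises,
    # clamped so a rate never moves more than -50% / +20% from the base.
    factor = max(0.5, min(1.2, foot_traffic_index * (1 - 0.3 * disruption_index)))
    return round(base_rate * factor, 2)

print(adjusted_rate(1000.0, 0.7, 1.0))  # reduced rate during severe roadworks
```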
Advantage
We have some great prior knowledge of this problem space, and some potential partners ready to experiment with us.
Revenue streams
A small subscription fee to access the service, which would vary based on frequency and use.
Cost structure (1 lowest, 5 highest)
Complexity: 3.
Risk: 2.
Effort: 2 [There’s a possible dependency factor on another venture.]
Acquisition: 2 [There’s a possible dependency factor on another venture.]
Key metrics
Reduced impact on retailers during infrastructure changes (as recorded by council complaint levels)
Decrease in number of businesses closing (resilience)
Channels
Local government.
Links to our files for the dynamic rates lean canvas
Dynamic rates repo (GitHub)
Dynamic rates lean canvas (ODS 16 KB)
Dynamic rates lean canvas (XLSX 52 KB)
|
Venture 5: Dynamic rates
| 21
|
venture-5-dynamic-rates-175282fbabef
|
2018-05-08
|
2018-05-08 04:43:35
|
https://medium.com/s/story/venture-5-dynamic-rates-175282fbabef
| false
| 584
|
We are the commercial and prototyping arm of Stats NZ with a focus on partnering with private organisations.
| null | null | null |
Data Ventures
|
dataventures@stats.govt.nz
|
data-ventures
| null |
dataventuresnz
|
Data Science
|
data-science
|
Data Science
| 33,617
|
Data Ventures
|
We are the commercial arm of @StatsNZ with a focus on partnering with private organisations.
|
7087d3c35c6
|
dataventures
| 52
| 49
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
|
19e43a17fcba
|
2017-11-17
|
2017-11-17 10:48:49
|
2017-11-17
|
2017-11-17 10:50:42
| 1
| false
|
en
|
2018-01-15
|
2018-01-15 10:37:36
| 0
|
1753f6303139
| 1.637736
| 10
| 0
| 0
|
The most difficult thing in any advertising campaign is coming to the agreement with a performer. After all, not only finding a decent…
| 5
|
With AdHive, a Smart Advertising Campaign Is Bound to Succeed
The most difficult thing in any advertising campaign is coming to an agreement with a performer. After all, you not only need to find a decent contractor, but also need to make sure the quality of the work performed is sufficient. That is not always possible, since such relations can only be built on basic human trust, especially when it comes to bloggers.
How do you build trust between an ad customer and a performer? The answer is simple: cooperate with our marketing platform AdHive. All interactions between advertisers and bloggers are regulated by smart contracts based on the Ethereum platform.
Smart contracts are a quality guarantee
Smart contract technology has made it possible to automate many processes, including advertising via bloggers.
So what exactly is a smart contract, and why can it simplify life for advertisers and bloggers so much? A smart contract is a computer program designed to support self-executing agreements, executed in a blockchain environment.
To simplify an advertising campaign, our platform user inputs the conditions for native video content placement into a smart contract. For example, it can be a task that includes what the blogger needs to say about the brand, its target audience, the deadline or any other parameters.
Bloggers, who represent the other side of a smart contract, perform the work according to the conditions prescribed in it. Remuneration is paid only if the system considers that all the terms of the contract were fulfilled.
With such computer code and our artificial intelligence, which can recognize brand references in audio and video, the problems of mistrust between participants and of paying for poor-quality work can be eliminated. After all, payment for a placement is processed only upon qualitative execution of the smart contract.
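As an illustration of the idea (this is a Python sketch of the logic, not the actual Ethereum contract; the condition names and checks are invented), payment is released only when every condition check passes:

```python
def release_payment(conditions, checks):
    """Return True (pay the blogger) only if every required condition
    passes its automated check (e.g. AI brand-mention detection)."""
    return all(checks[name](value) for name, value in conditions.items())

# Hypothetical conditions and their automated verifiers:
conditions = {"brand_mentioned": True, "published_by_deadline": True}
checks = {
    "brand_mentioned": lambda ok: ok,        # stand-in for AI video/audio check
    "published_by_deadline": lambda ok: ok,  # stand-in for timestamp check
}
print(release_payment(conditions, checks))  # True -> remuneration is paid
```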
Our platform allows you to eliminate such problems of the advertising business as expensive intermediaries, poorly executed tasks and unfulfilled obligations. At the same time, our AdHive development simplifies payment between bloggers, advertisers and members of the community as much as possible.
With AdHive you can optimize native video content placement, quality control, and payments, freeing advertisers from unnecessary headaches.
|
With AdHive, a Smart Advertising Campaign Is Bound to Succeed
| 400
|
with-adhive-smart-advertising-campaign-is-bound-to-succeed-1753f6303139
|
2018-02-25
|
2018-02-25 16:56:42
|
https://medium.com/s/story/with-adhive-smart-advertising-campaign-is-bound-to-succeed-1753f6303139
| false
| 381
|
A community powered global network for native video advertisement.The AdHive platform automates all steps of interactions between bloggers and advertisers.AI modules for video and speech recognition connect to vlog channel and control the execution of the ad task by the blogger.
| null |
adhivetv
| null |
Adhive.tv
|
Adhiveinfo@gmail.com
|
adhive
|
AI,ADVERTISING,NATIVE ADVERTISING,INFLUENCER MARKETING
|
AdhiveTv
|
Blockchain
|
blockchain
|
Blockchain
| 265,164
|
AdHive.tv
|
The first AI-controlled influencer marketing platform on Blockchain. Launching massive advertising campaigns has never been so simple.
|
295e61003285
|
AdHiveTV
| 499
| 4
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-17
|
2018-08-17 14:18:29
|
2018-08-17
|
2018-08-17 14:31:16
| 1
| false
|
en
|
2018-08-17
|
2018-08-17 14:31:16
| 0
|
17541a90a9a
| 2.335849
| 10
| 0
| 0
|
Dear All,
| 5
|
18 Reasons why… Keplertek
Dear All,
Are you willing to:
Grow your Money?
Earn Higher Returns?
Start and Expand a Business?
Support Others?
Be Part of a New Venture?
Become the Investor of the Future and contribute to the advancement of humanity!
Here are 18 reasons why Keplertek is the best investment opportunity for the moment:
1. Keplertek — World’s first decentralized ecosystem backed by AI and blockchain to nurture startups and boost their success rate.
2. Keplertek’s Innovative platform commissions high-tech initiatives for developing breakthroughs through global teamwork.
3. Keplertek provides investors with unique investment opportunities in high-tech’s most disruptive branches: Robotics & AI.
4. Social network where idea generators can initiate global team-building process, adding missing pieces and even assembling their teams from ground-up.
5. Kepler Accelerator presents the opportunity to disrupt the early stage financing ecosystem by eliminating geographical barriers between investors and startups seeking funding.
6. Keplertek online platform bridges the gap between ideas, talents and technical resource providers globally.
7. Keplertek disrupts traditional centralized funding mechanisms and establishes a new paradigm for the high-tech sector by providing a platform for creators to work directly with investors and the global community.
8. Keplertek creates a transparent, simple and secure investment environment for the investors using blockchain technology.
9. With the help of Kepler Ecosystem, the success of a startup depends exclusively on the quality of the startup idea and team’s capabilities to execute the project.
10. Keplertek aims to reduce the deficiencies in the start-up ecosystem by increasing the success rate of newly-born innovative projects, implementing a new funding mechanism, and utilizing the advantages of talent distribution.
11. With AI in its core, Kepler Network will be the central communication hub for the members within the high tech startup ecosystem. Innovators with ideas will leverage the capabilities of our AI powered Network, get assistance in assembling teams and bringing their ideas to life, while technical professionals will be exposed to quality ideas and will be able to find one that fits their skills and passion.
12. For both innovators and technical professionals, the platform offers great opportunities for constant professional and personal development.
13. Keplertek aims to increase the attractiveness of angel investing and crowdfunding to traditional institutional investors. Through its due diligence, IPO-grade planning, and reporting procedures, Keplertek seeks to re-create a transparent infrastructure familiar to institutional investors.
14. The platform provides investors equal investment opportunities and ability to invest in projects at any stage of development.
15. The global artificial intelligence (AI) robotics market is set to grow from $3.49bn in 2018 to $12.36bn in 2023, with a forecast compound annual growth rate (CAGR) of 28.78% during this period.
16. The robotics sector is worth an estimated $80 billion. Analysts predict the market will triple in the short run and grow tenfold in the long term. International Data Corporation (IDC) expects corporate expenditure on robotics to reach $230.7bn in 2021, at a CAGR of 22.8%.
17. The total global revenue from AI for enterprise applications is projected to grow from $1.62bn in 2018 to $31.2bn in 2025, at a CAGR of 52.59% over the forecast period.
18. The total gross output of high-tech industries, including both final and intermediate products, amounted to more than $7.1 trillion in gross output in 2016 in the United States alone.
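The growth figures in points 15 and 17 are internally consistent with the standard CAGR formula, CAGR = (end/start)^(1/years) - 1, which can be checked quickly:

```python
def cagr(start, end, years):
    """Compound annual growth rate between a start and end value."""
    return (end / start) ** (1 / years) - 1

# AI robotics market, 2018 -> 2023 (5 years):
print(round(100 * cagr(3.49, 12.36, 5), 2))  # 28.78, matching point 15
# Enterprise AI revenue, 2018 -> 2025 (7 years):
print(round(100 * cagr(1.62, 31.2, 7), 2))   # 52.59, matching point 17
```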
Nick
|
18 Reasons why… Keplertek
| 206
|
18-reasons-why-keplertek-17541a90a9a
|
2018-08-17
|
2018-08-17 14:31:17
|
https://medium.com/s/story/18-reasons-why-keplertek-17541a90a9a
| false
| 566
| null | null | null | null | null | null | null | null | null |
Startup
|
startup
|
Startup
| 331,914
|
Kepler Technologies
|
Kepler Technologies is a cutting-edge #robotics and Artificial intelligence (#AI) #startup on the #blockchain
|
9896eac1f1e2
|
KeplerTek
| 123
| 1
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2017-10-10
|
2017-10-10 16:31:47
|
2017-10-17
|
2017-10-17 17:10:43
| 0
| false
|
en
|
2017-10-17
|
2017-10-17 17:10:43
| 5
|
17559791b2ae
| 2.85283
| 42
| 3
| 0
|
Mental healthcare is in crisis. Depression is the leading cause of disability globally, and the cost of mental illness to society has…
| 5
|
Why we need mental health chatbots
Mental healthcare is in crisis. Depression is the leading cause of disability globally, and the cost of mental illness to society has doubled in the last 10 years in every region of the world. Yet, the global median spending on mental health is just 2.8% of government health spending. In the US, 9 million adults report having serious thoughts of suicide in the previous year. Staggeringly, more than 8% of young people in the US report having made a serious suicide attempt in the previous year (Mental Health America, 2015). There are simply not enough mental health professionals to meet this demand.
The truth about good therapy
The popular idea about therapy is that it holds a kind of special magic that can only be delivered by individuals who are highly trained in this mysterious art form. The truth is that modern approaches to mental health revolve around practical information gathering and problem solving.
The best example of this is cognitive behavior therapy (CBT). CBT is probably the most effective approach to depression and anxiety developed to date. Decades of scientific study show its effectiveness for lots of problems ranging from depression and anxiety to sleep. It is also effective across the lifespan being used by children through older adults. CBT is highly structured and practical, and involves a lot of learning, so it lends itself well to being delivered over the internet. Internet-delivered CBT has been shown to be as effective as therapist delivered CBT for both anxiety and depression. Why is this important? Because if something this useful can be delivered using the internet, then it has the capacity to reach the millions of people all over the world who struggle with their mental health.
The challenges of internet delivered CBT
The internet can increase access, in a way that in-person therapy simply cannot. With this kind of global scale, even modest symptom reduction has the potential to be hugely impactful in lowering the overall burden of disease.
So shouldn’t this solve our supply and demand problem? Alas, there are two catches. The first is that internet-delivered CBT works best when there is a trained guide/coach to check in over the phone. While lesser-trained health coaches are in greater supply than fully trained, doctoral-level therapists, including any human in the loop will always limit scalability. The second problem is engagement. Internet-based CBT approaches suffer from poor adherence; said differently, they feel like homework, and are simply not engaging enough to hold people’s interest. This ultimately undermines their efficacy for those who don’t stay the course.
How Woebot overcomes these challenges
Within the larger goal of achieving true scalability, Woebot aims to address both of these problems. As an automated coach, Woebot helps you practice good thinking hygiene, but he’s also just fun to talk to. People find Woebot easier to engage with than other apps because it’s just a conversation. This is not surprising when you consider that humans have been conversing for about 160,000 years, but we’ve only been designing apps for 10. There’s nothing magic about Woebot, he just asks how you’re doing (mood tracking) and teaches you core CBT concepts in these short conversations (online learning).
Woebot knows his place (….as part of a comprehensive mental healthcare ecosystem)
Woebot will never replace therapy or therapists, and it is not trying to. There is no replacement for human connection — but that is not the point here. The point is that there are millions of people around the world that will never see a therapist, despite the fact that doing so could help them immensely. As a system, we need to get smarter with how we deliver service, and offer lower-intensity options to those who can make use of them. We should be helping people avoid the clinician’s office if we can to free up those precious human resources for those dealing with things that need human intervention.
On an individual level, however, there are so many reasons why people find it hard to reach out. We often say that when you are feeling low, “you should talk to someone”. But insisting that this is the only way to get help leaves behind all of those for whom that is not an option. What if it’s 3am? Woebot is there at any hour. He won’t do the job of a therapist, but in our experience, that’s not what people want or expect from him either.
It’s nowhere near perfect, but it’s a start.
|
Why we need mental health chatbots
| 211
|
why-we-need-mental-health-chatbots-17559791b2ae
|
2018-06-11
|
2018-06-11 20:12:17
|
https://medium.com/s/story/why-we-need-mental-health-chatbots-17559791b2ae
| false
| 756
| null | null | null | null | null | null | null | null | null |
Mental Health
|
mental-health
|
Mental Health
| 75,731
|
Alison Darcy
|
Psychologist, CEO & Founder of Woebot Labs (www.woebot.io), Former Faculty at @Stanford
|
efae3b80e226
|
dralisondarcy
| 121
| 55
| 20,181,104
| null | null | null | null | null | null |
0
| null | 0
| null |
2018-08-27
|
2018-08-27 06:59:30
|
2018-08-27
|
2018-08-27 07:01:38
| 1
| false
|
en
|
2018-09-02
|
2018-09-02 16:34:32
| 1
|
175755c66504
| 0.396226
| 9
| 0
| 0
|
We have always sought for massive ways of rewarding the massive loyalty that comes from all of our supporters, and we had concluded on…
| 5
|
Revealing of the surprise package we promised
We have always sought meaningful ways of rewarding the massive loyalty that comes from all of our supporters, and we decided to award something tangible that comes at no cost whatsoever to the beneficiaries.
READ MORE: https://robotinarox.io/revealing-of-the-surprise-package-we-promised/
|
Revealing of the surprise package we promised
| 427
|
revealing-of-the-surprise-package-we-promised-175755c66504
|
2018-09-02
|
2018-09-02 16:34:32
|
https://medium.com/s/story/revealing-of-the-surprise-package-we-promised-175755c66504
| false
| 52
| null | null | null | null | null | null | null | null | null |
Energy
|
energy
|
Energy
| 22,189
|
Robotina
|
⚡️Future of energy ⚡️ #Blockchain enabled green energy platform. SAVE ELECTRICITY. SAVE MONEY. SAVE THE PLANET.
|
2d786eba2516
|
robotinaico
| 135
| 2
| 20,181,104
| null | null | null | null | null | null |
0
|
model {
ssd {
num_classes: 2
box_coder {
faster_rcnn_box_coder {
y_scale: 10.0
x_scale: 10.0
height_scale: 5.0
width_scale: 5.0
}
...........................
train_config: {
batch_size: 2
optimizer {
rms_prop_optimizer: {
learning_rate: {
exponential_decay_learning_rate {
initial_learning_rate: 0.004
decay_steps: 800720
decay_factor: 0.95
}
}
momentum_optimizer_value: 0.9
decay: 0.9
epsilon: 1.0
}
...........................................
fine_tune_checkpoint: "ssd_mobilenet_v1_coco_2017_11_17/model.ckpt"
from_detection_checkpoint: true
num_steps: 250000
data_augmentation_options {
random_horizontal_flip {
}
}
$ python train.py \
    --logtostderr \
    --train_dir=training/ \
    --pipeline_config_path=training/chair_table_v1.config
$ tensorboard --logdir=training/
$ python export_inference_graph.py \
--input_type image_tensor \
--pipeline_config_path training/chair_table_v1.config \
--trained_checkpoint_prefix training/model.ckpt-xxxxx \
--output_directory carving_detection
MODEL_NAME = 'carving_detection'
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
PATH_TO_LABELS = os.path.join('training', 'object-detection.pbtxt')
NUM_CLASSES = 2
| 8
|
ae8123f4fea6
|
2018-08-16
|
2018-08-16 02:31:17
|
2018-08-16
|
2018-08-16 14:12:35
| 12
| false
|
id
|
2018-08-29
|
2018-08-29 11:39:24
| 7
|
1758434fe133
| 5.678302
| 1
| 0
| 0
|
This tutorial covers how to detect an object using data that we define ourselves.
| 5
|
Custom Object Detection using Tensorflow API (Bahasa)
This tutorial covers how to detect an object using data that we define ourselves.
In this case, I detect two objects at once in a single frame: tables and chairs with carved motifs (the actual topic of my undergraduate thesis) 😅.
Collecting the Dataset
To do object detection we of course need a dataset for training, so that the trained neural network can recognize the objects we want to detect. The dataset I use totals 500 images: 470 for training and 30 for test/validation. To use the same data, you can download the full dataset here. If you want to detect objects with your own data, though, you should use at least 250 images per object. By the way, I split the train and test sets manually.
To collect the dataset, I crawled images from Google, as in my following post.
First, create a directory structure so everything stays tidy, for example like mine below.
-directory-
Creating Annotations (Labelling)
To create annotations, i.e. label the images, we can use the labelImg application, which saves the labels to XML files. Use labelImg as follows:
Open the labelImg application (run the command: python labelImg.py at the prompt)
Click the “Open Dir” button (the Images directory)
Select the image dataset directory
Click the “Change Save Dir” button to save the resulting .xml label files into one shared directory (Annotation)
Click the “Create Rectbox” button to draw the box that will later be recognized. Or press “W” on the keyboard.
Point the cursor and drag a box around the object.
A dialog box will appear where you enter the object’s label name.
Repeat for all of the images.
Press “D” on the keyboard to go to the next image and “A” for the previous image.
-labelImg-
Converting .XML to .CSV
The annotation dataset we created earlier with labelImg needs to be converted to .csv format, which will be used to generate TFRecords.
The directory I created earlier contains a file named “xml_to_csv” with the code for converting the XML files to CSV.
Run the following command to perform the conversion:
python xml_to_csv.py
-xml to csv-
Converting to TFRecord
During training, TensorFlow reads its input data in TFRecord format (feeding data), so the annotations we just converted to .csv need to be turned into TFRecords.
The code for this already exists in the file "generate_tfrecord". All that needs to change is the category definition, to match the categories you want. Since I have 2 objects to detect, my categories look like this:
-category definition-
If you want to define 1 object, or more than 2, adjust accordingly. Note that the labels must exactly match the names used when creating the annotations in labelImg.
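The edit in question is usually the class_text_to_int mapping inside generate_tfrecord.py; a sketch for my two labels, "meja" (table) and "kursi" (chair) — yours must match your labelImg label names:

```python
# Sketch of the part of generate_tfrecord.py that has to be edited.
# The label strings must match the names used in labelImg exactly.
def class_text_to_int(row_label):
    if row_label == "meja":      # table
        return 1
    elif row_label == "kursi":   # chair
        return 2
    else:
        return None              # unknown label: skipped/invalid
```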
Then run the following commands:
-TFRecord-
-train-
-test-
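The screenshots above stand in for the actual commands; the usual generate_tfrecord.py invocations (paths illustrative, following the tutorial linked in the references) look like:

```shell
# generate the TFRecord for the training split
python generate_tfrecord.py --csv_input=data/train_labels.csv --output_path=data/train.record

# generate the TFRecord for the test/validation split
python generate_tfrecord.py --csv_input=data/test_labels.csv --output_path=data/test.record
```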
Setting Up the Label Map
The label map maps each class id to the name shown on detected objects. Since this project detects 2 objects, the item entries (id and name) are set accordingly; the same applies if you have only one object or more than two. Here is the label map configuration used:
-label map-
Save it as object-detection.pbtxt in the data directory.
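A label map for the two classes would look like this; assuming the labels "meja" and "kursi", the ids and names must line up with what class_text_to_int in generate_tfrecord.py returns:

```
item {
  id: 1
  name: 'meja'
}
item {
  id: 2
  name: 'kursi'
}
```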
Pipeline Configuration
The pipeline configuration is the config file that controls the training run; since TensorFlow uses protobuf for this, getting it right is essential. The model used is SSD MobileNet V1, which TensorFlow itself provides.
The configuration to edit is in the chair_table_v1 file in the training folder. A few parts need to be changed to match the model being built:
In the configuration above, num_classes is 2 because two classes/objects are used, table and chair (note: adjust this to the number of objects in your own project).
For batch_size I use 2, because my machine only has a CPU 😅, which takes far longer than a GPU. Even on CPU, the resulting accuracy still reached 80-90%. If your machine has a higher specification, use a batch size of 4 or more.
Next, num_steps limits the number of steps used during training. Again because of my hardware, I capped it at 250000. num_steps should also be tuned together with the batch_size. If you would rather let TensorFlow run to its own default number of steps, you can remove num_steps entirely.
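The relevant fragments of the pipeline config, with the values discussed above, look roughly like this (a sketch, not the full file; everything else stays as shipped in the sample config):

```
model {
  ssd {
    num_classes: 2    # table + chair; adjust to your own object count
    ...
  }
}
train_config: {
  batch_size: 2       # small because training runs on CPU here
  num_steps: 250000   # remove to use the default step count
  ...
}
```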
At this point we have finished preparing and configuring every file needed to train our object detection model. 🙂
Note: make sure the directory structure is correct.
Training the Model
Next we train the computer to recognize the objects we prepared. Before training, a few things need to be done:
Copy the directories created earlier (data, images and training) into the /models/research/object_detection folder, which can be downloaded here.
Download the SSD MobileNet V1 COCO model here, then extract it and place it in /models/research/object_detection/training. Of the many models available, the one used is ssd_mobilenet_v1_pets.
Run Protobuf: protoc object_detection/protos/*.proto --python_out=. from inside /models/research before starting the training run.
Set the path: export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
Now, here is the command to train the model:
train_dir is the location where training checkpoints are stored; pipeline_config_path points to the pipeline configuration.
-training-
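Assuming the legacy train.py script that ships in models/research/object_detection, and assuming my config file name (the extension is my guess), the training command looks something like:

```shell
python train.py --logtostderr \
    --train_dir=training/ \
    --pipeline_config_path=training/chair_table_v1.config
```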
To watch the graphs during the training process you can use TensorBoard; run the command below to start it.
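The TensorBoard command simply points at the training checkpoint directory:

```shell
tensorboard --logdir=training/
```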
One of the graphs looks like this:
-tensorboard-
Export Graph
Before moving on to testing the detector, run the following command to export the model:
model.ckpt-xxxxx refers to the checkpoint at the number of steps you trained for; adjust it to the step count you ended up with. The checkpoint files live in /models/research/object_detection/training. Mine has 250000 steps.
output_directory is the output folder where the exported model files are stored. I named my output folder carving_detection.
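With the step count and output folder mentioned above (and export_inference_graph.py, which ships in models/research/object_detection), the export command looks like:

```shell
python export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path training/chair_table_v1.config \
    --trained_checkpoint_prefix training/model.ckpt-250000 \
    --output_directory carving_detection
```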
Testing the Model
Testing the model requires sample images/video, which you can capture with your own camera. To test on images, move them into the models/research/object_detection/test_images directory and name them image1, image2, and so on.
Testing here is done in a Jupyter notebook, and a few things need to be changed:
sys.path.append('/..../models/research') #point to your tensorflow
sys.path.append('/..../models/research/slim') #point to your slim
Note: remove the download section
The result looks roughly like this:
-picture-
-video-
The objects were predicted nicely. 🤗🤗🤗🤗
Comment if you have any questions, and don't forget the "claps" hehe.
References:
https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9
https://imamdigmi.github.io/post/tensorflow-custom-object-detection/
[Custom Object Detection using Tensorflow API (Bahasa) | Syarifah Rosita Dewi | 2018-08-29 | https://medium.com/s/story/custom-object-detection-using-tensorflow-api-bahasa-1758434fe133]
Meet the Safeguard Team at Consensus 2018
Are you attending the Consensus 2018 conference in New York City?
Between the 14th and 16th of May, over 4000 startup representatives, investors, media representatives and general Blockchain enthusiasts will be heading to New York City to partake in Coindesk’s annual blockchain conference: Consensus. Here, over 250 speakers will be presenting on different aspects of the Blockchain world, exploring the many avenues into which Blockchain is diverging; from supply chain management, healthcare and insurance, to energy, security and safety.
Safeguard’s co-founders, myself (Ingmar Vroege) and Gertjan Leemans, will be at the conference as primary representatives of the Safeguard Token project. Alongside us, the drivers of our growth and marketing team, Gino Taselaar and Pascal van Steen, will also be representing Safeguard in the Big Apple, as well as Allard van Santbrink, Safeguard’s lead investment strategist.
Apart from the Consensus summit, members of the Safeguard team will be representing the project at another conference held on the 16th of May: the Square5 Blockchain Conference. As Safeguard’s CEO, I have been asked to give a presentation at this conference, where I will be discussing the intersection between Blockchain technology and safety-management.
In line with our mission of building AI-powered accident prediction and prevention software, with which we aim to reduce workplace accidents around the world by 20% before 2022, we rely heavily on Blockchain technology.
On top of the existing Safeguard solution, which has successfully been helping enterprise-level clients like KPN and BAM to manage their safety operations, we’re building the Safeguard Protocol, an open-source platform that other organizations will be able to plug into and build on. Through this protocol, we will be able to build an effective AI system, whilst enabling community-driven platform development that ultimately enables the democratization of safety: making it accessible to organizations around the world.
We’ll be announcing more details on this soon.
On top of the great partnerships we have formed in the Blockchain and AI spaces already, such as with Startupbootcamp, Scylla and Develandoo, our founders are looking forward to the potential new opportunities that the Consensus Conference might present.
Interested in finding out more about our Blockchain application and how we’re applying it to universalise safety-tech, facilitating a shift to a safer world?
For partnership enquiries, or for organizing a meet up in NYC, get in touch: ingmar@safeguardtoken.com.
Interested in our token sale?
Register now for our presale and earn a 20% bonus on your initial purchase.
Upon registering, you will automatically receive a referral code with which you can earn an additional 2.5% bonus per referral (2.5% for you and 2.5% for the referrer).
Sign up for our presale here to secure your referral bonus code.
To be a part of our active community or learn more about what we do:
Visit our website here.
Join our Telegram community here.
Follow us on Twitter here.
Connect with us on Facebook here.
Follow us on Medium here.
Or email support@safeguardtoken.com
[Meet the Safeguard Team at Consensus 2018 | Ingmar Vroege | 2018-05-29 | https://medium.com/s/story/meet-the-safeguard-team-at-consensus-2018-1758688c541]
How to Find Out Top AI Solutions Providers
The reality of technologies like AI (artificial intelligence) is that they can give people, organizations and society a significant boost in capacity and efficiency: extending decision-making capabilities, speeding communication with services and organizations, and enabling top AI solutions providers to deliver fundamental advances in industries ranging from financial services to life sciences. Many well-known organizations are already using artificial intelligence effectively in their business operations to raise efficiency and achieve business growth in a smart way. 2018 has been announced as the year of AI.
The following steps can lead you to a top AI solutions provider.
Do your homework: learn as much about artificial intelligence as you can. Take a crash course or read primers online. Make sure to have a high-level understanding of essential concepts, for example machine learning, neural networks, and natural language processing. In particular, find out about key industry players (i.e., AI developers and vendors), their offerings, and the current capabilities of their services.
Clarify what you want AI to do: start with a specific problem you want to solve or a specific business objective you want to achieve. Do you want to streamline a set of workflows, crunch troves of fragmented data into actionable insight, dramatically improve customer experiences, increase profit margin, or boost sales? Do you want to remove a recurring bottleneck in a specific business process? Digging into the details of the challenge or target will help you draw up a realistic AI adoption plan.
Make a short list: screen companies based on reputation, team credentials, product capabilities, solution fit, success rate, and customer support. Companies that have been in the AI field longer, or have done proven work with large enterprises in your domain, are preferable to companies with unconvincing portfolios or a history in a different line of business.
Reach out: gather additional information by directly engaging the vendors/AI development agencies on your short list. Ask every vendor the following questions:
Keep a record: weigh the strengths and weaknesses of the vendors you have engaged. Draw up a comprehensive set of criteria that will help you estimate the relative value of the AI development agencies on your list. In addition to price, the criteria should include product features, track record, success rate, industry reputation, and customer support.
Partnering with the right AI development agency may be exactly the competitive edge you need.
Ask the following before handing your project over to them:
Have they solved a business challenge like yours?
Which specific AI techniques (computer vision, deep learning, natural language processing, adversarial networks, and so on) are they using?
How much time and training data are needed to improve the solution's performance?
What are the risks involved in using their product?
How should their product's return on investment be measured?
In what ways are their product features better than those of competitors?
Make a decision: in light of your analysis, pick the AI development agency your business will partner with. Take the leap.
Author Bio
We have extensive knowledge of Data Management Solutions.
Resources Link: https://uberant.com/article/430154-how-to-find-out-top-ai-solutions-providers/
[How to Find Out Top AI Solutions Providers | John Wilson | 2018-07-27 | https://medium.com/s/story/how-to-find-out-top-ai-solutions-providers-1758815c41f2]
0
| null | 0
| null |
2018-07-07
|
2018-07-07 20:36:46
|
2018-07-09
|
2018-07-09 21:15:57
| 6
| false
|
tr
|
2018-07-09
|
2018-07-09 21:15:57
| 6
|
17597d112e04
| 1.60283
| 2
| 0
| 0
|
Bundan önceki yazılarımda 6 adet tahmin algoritmasını anlatmıştım.
| 5
|
Machine Learning — Evaluating Prediction Algorithms with an Example — Part 7
In my previous posts I covered 6 prediction algorithms:
Linear Regression
Multiple Linear Regression
Polynomial Regression
Decision Tree
Random Forest
Support Vector Regression
In this post we will build an example that uses all of the algorithms above and compare the results they produce. If you are not familiar with them, I recommend reading those posts first.
Note: you can download the data file used in this example by clicking here.
Our goal in the example is to predict salaries from the data in the given file and print the OLS report to the console. The important values in this report are R-Squared, Adjusted R-Squared and the p-values.
Multiple Linear Regression OLS
Polynomial Regression OLS
Support Vector Regression OLS
Decision Tree OLS
Random Forest OLS
Comparing the Algorithms
Looking at the R-Squared values, the higher the value, the better the model fits the data. In this example, however, we used all 3 parameters; had we run the models with only 1 or 2 parameters, we would have seen the R-Squared values shift for better or worse. Based on these results:
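The two headline numbers in the OLS report can also be computed by hand; a small sketch in plain Python (independent of any statistics library):

```python
def r_squared(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot: fraction of variance explained."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    ss_tot = sum((yt - mean_y) ** 2 for yt in y_true)
    return 1 - ss_res / ss_tot

def adjusted_r_squared(r2, n, k):
    """Penalize R^2 for the number of predictors k given n samples."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)
```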
In this example, Support Vector Regression was the most successful prediction algorithm, with an R-Squared of 0.396.
Conclusion
That's all. With this post we have reached the end of the prediction algorithms series. Here you learned how prediction algorithms can be compared.
Thanks for reading.
[Machine Learning — Tahmin Algoritmalarının Değerlendirilmesi ve Örnek — Part 7 | ekremh | 2018-07-09 | https://medium.com/s/story/machine-learning-tahmin-algoritmalarının-değerlendirilmesi-ve-örnek-part-7-17597d112e04]
How AI is Helping Financial Institutions
Finance is one of the industries most likely to be disrupted by AI. The field is data intensive and small improvements in process and accuracy offer major opportunities for companies looking to out-innovate their competitors and industry challengers. According to CB Insights, finance was one of the major drivers of AI-related venture capital last year and there doesn’t seem to be any sign of slowing down.
Here’s how AI can help improve financial organizations:
Fraud Detection
Fraudulent transactions are a costly business. Financial institutions unable to detect fraud suffer from not only monetary losses, but also hits to their reputation. The prevalence of false positives, instances where users are wrongly identified as fraudsters, has historically been a thorn in the sides of big banks. In an industry where detection accuracy traditionally hovers around 40%, it is much easier to run a false positive than the other way around. In fact, $118 billion in credit card sales are declined each year, even though real fraud only amounts to $9 billion. Each year, almost 15% of all American cardholders experience at least one wrongfully declined transaction.
There are many reasons fraud is difficult to detect with current systems. The number of actual fraudulent transactions is low, it can be hard to discover patterns and fraudsters are constantly changing their strategies to avoid detection. Analysts often attempt to build up a vast set of rules derived from historical data and industry knowledge, however these processes are limited by the amount of data they can ingest and the ability to handle exceptions. These are all areas where AI can help.
Companies like Stripe are actively innovating in this space. By using machine learning, they have been able to reduce fraud by over 25% without increasing the number of false positives. This was accomplished by integrating hundreds of new signals into the detection algorithm and constantly retraining their model to stay up to date with the newest tricks fraudsters are using.
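To make the "many weak signals combined into one score" idea concrete, here is a toy sketch in Python. The signal names, weights, and threshold are invented purely for illustration and have nothing to do with Stripe's actual system:

```python
import math

# Hypothetical risk signals and hand-picked weights; in a real ML system
# these weights would be learned from labelled transaction history and
# retrained regularly as fraud patterns change.
SIGNAL_WEIGHTS = {
    "amount_vs_user_average": 0.9,   # unusually large purchase
    "new_merchant_category": 0.4,    # first purchase in this category
    "geo_mismatch": 1.2,             # card country != IP country
    "velocity_last_hour": 0.8,       # many transactions in a short window
}

def fraud_score(signals):
    """Weighted sum of risk signals, squashed to (0, 1) with a sigmoid."""
    z = sum(SIGNAL_WEIGHTS[name] * value for name, value in signals.items())
    return 1 / (1 + math.exp(-(z - 1.5)))  # the offset sets the base rate

def is_flagged(signals, threshold=0.5):
    """Raising the threshold trades fewer false positives for more misses."""
    return fraud_score(signals) >= threshold
```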
Anti-Money Laundering
Money laundering is the process of running illegally obtained money through a series of transactions to give the appearance of legitimacy. Illegal operators attempt to wash their funds of association with criminal activity, similar to how everyday people wash their clothes of association with dirt. Unfortunately, money launderers appear to be quite good at it. The UN reports that money laundering transactions comprise 2% of global GDP, however banks are only able to seize less than 1% of all laundered money.
As a result, regulatory bodies hold financial institutions to a high standard when it comes to money laundering compliance. All parties and transactions are subject to a thorough and intensive due diligence process. This involves developing an understanding of who the senders and receivers are, what their relationship may be, navigating many layers of shell companies and getting a sense of historical transaction history. In the US, companies spend up to $7 billion on anti-money laundering and compliance operations each year. The analysis needed to determine if a customer is engaging in money laundering is long, inefficient, and almost entirely performed by humans.
Banks like HSBC are already getting a move on building more intelligent compliance processes and have invested heavily in research, development and partnerships. However, there are regulatory hurdles that need to be dealt with, as regulators worry that human reviewers will mindlessly follow the recommendations of black box AI systems. Fully automated financial compliance departments are still a thing of the future, but it is encouraging to see the largest financial institutions in the world actively working towards it.
Credit Scoring
This era of rapid development in artificial intelligence would have been impossible without an accompanying explosion in the availability of data. In the past, banks could get by using only historical transaction and payment data to determine how creditworthy a potential borrower was; with the increased availability of new structured and unstructured databases, however, novel techniques are now necessary.
This new constellation of data gives lenders the opportunity to produce more accurate segmentation of their borrowers, as well as produce a more nuanced view of creditworthiness by incorporating qualitative factors such as willingness to pay and consumption behaviour. Since a larger array of data is now consulted, lenders can make more accurate assessments of traditionally difficult to score individuals, such as those without a credit history. Crucially, this clearing process is able to happen almost automatically, resulting in a better customer experience.
There are a slew of fintech startups already employing this model, especially in the developing world where banking histories are more sparse. A prominent example is Ant Financial, an arm of the Alibaba Group. They have developed an application that pulls from a combination of traditional and non-traditional data to produce a comprehensive credit score.
Product Recommendations
If you’re like me, then you probably use multiple financial products in your day-to-day life. Most financial products offer a high degree of customization, whether it’s for your credit card, your savings account, or your investment portfolio. For example, the relevance of a particular loan or line of credit will differ depending on what life stage you are in. Customers just graduating from university will have vastly different financial needs than those preparing to buy a house, or putting aside money for their first child’s education fund.
These shifts in consumer behaviour aren’t currently accounted for by most banks, despite the fact that 72% of customers used digital channels to open a chequing account in 2016. Banks capable of providing a more personalized experience with offers more relevant to the customer’s current needs would decrease acquisition costs and increase conversion and loyalty. Additionally, better product recommendations would allow financial advisors to optimally serve their most profitable customers, without losing focus on the rest of their client portfolio.
Process Optimization
Financial institutions are complex organisations, with many different departments, roles, and lines of communication. The potential points for increased efficiency are almost limitless, and properly identifying these opportunities requires a deep understanding of both the state of the art in AI and domain expertise. As an example, an AI model could analyze documents such as loan or mortgage agreements to measure the bank’s exposure to risk, flagging documents that appear problematic. This would allow analysts to focus their time on more rewarding, and more profitable tasks.
JP Morgan Chase, the biggest bank in the US, recently implemented a program called COIN to do just this. The software analysed commercial-loan agreements at a faster rate and with fewer errors than human reviewers. Chief Information Officer Dana Deasy doesn’t see this as a case of labour displacement, but rather as a case of augmentation, framing it as a way to “free people to work on higher-value things.”
Key Takeaway
Finance is not immune to innovation and disruption, and artificial intelligence has the potential to unlock many layers of value. The list of possible use cases will continue to grow thanks to innovations in machine learning that are increasing the scope of the technology and its potential applications. It will require careful planning, broad executive buy-in, sharp data science expertise and the right partner to make it a reality.
Interested in starting your AI journey? Contact us today.
[How AI is Helping Financial Institutions | Ben Tang (Stradigi AI) | 2018-06-13 | https://medium.com/s/story/how-ai-is-helping-financial-institutions-1759e5c301]
AI MUST READS — W22 2018, by City AI
Artificial Intelligence, Machine Learning and related fields are in a constant state of change. We want to inform but also encourage discussions on well presented topics we think are necessary in the context of putting AI into production. Every week we’re picking applied AI’s best articles plus adding a discussion starter
1. Microsoft is creating an oracle for catching biased AI algorithms
Microsoft is creating an oracle for catching biased AI algorithms
Noun Project | Andrejs Kirma | Ms. Tech Microsoft is building a tool to automatically identify bias in a range of…www.technologyreview.com
One of the constant companions of Artificial Intelligence at the moment is the plethora of questions about the ethical ramifications and about potential situations of bias creeping into the automation. If you’ve read almost any of my past must reads then you’ll know that this also isn’t new and there seems to be a new development of some kind in the “automation of bias” field almost weekly.
Microsoft’s and Facebook’s recent announcements seem to be the first practical steps taken towards identifying and preventing this bias. That isn’t to take away from the many organisations and champions of the cause that came forward first; without those people, I question whether these two industry leaders would have taken these steps at all.
It does leave me questioning, though. They can build an artificial intelligence system to detect bias in other artificial intelligence systems; if they can do that, why can’t they prevent bias in their own systems in the first place? And is this detection system built by the same people who built the systems it will be searching for bias within?
Bonus video at the bottom of the page*
2. Why thousands of AI researchers are boycotting the new Nature journal
Why thousands of AI researchers are boycotting the new Nature journal
Budding authors face a minefield when it comes to publishing their work. For a large fee, as much as $3,000, they can…www.theguardian.com
To have such an archaic profit-making model in place in a field that is at the cutting edge of technology is an affront not only to the potential of the technology but also to the memory of politically charged internet activist Aaron Swartz and the many like him.
Machine learning is a young and technologically astute field… The community itself created, collated, and reviewed the research it carried out. We used the internet to create new journals that were freely available and made no charge to authors.
Information that has already been paid for once by the taxpayer and the work produced by the scientists and researchers should never be held and controlled by private institutions. This shouldn’t just be limited to AI and ML either, information of this type should always be free and available to the public and hopefully this will start a trend that will start towards the removal of restrictions.
3. Are you scared yet? Meet Norman, the psychopathic AI
Meet Norman, the psychopathic AI
Norman is an algorithm trained to understand pictures but, like its namesake Hitchcock's Norman Bates, it does not have…www.bbc.co.uk
This is idiotic and intentionally misleading, with the simple aim of increasing the foot traffic to this article. Do you know what sounds like a great idea? Let’s demonize a technology that people are already worried about and perpetuate the stereotype that this technology will inevitably kill every single one of us.
In reality, all they really created was a 16-year-old child spouting off for sheer shock value.
Not only does the article very quickly steer away from the story of the “Norman Bates A.I.”, almost as if there were a word limit to reach, but what it does say about the experiment shows that the behaviour has nothing to do with human qualities being placed onto a specific AI; it has everything to do with the data it was fed. I imagine a child raised on the same imagery as Norman would have a similar outlook. What a waste of time. This is how not to do it.
Bonus Article
Microsoft + GitHub = Empowering Developers
Microsoft + GitHub = Empowering Developers — The Official Microsoft Blog
Today, we announced an agreement to acquire GitHub, the world’s leading software development platform. I want to share…blogs.microsoft.com
Although it’s not directly related to A.I., I feel I’d be missing a trick if I didn’t talk about Microsoft’s announced acquisition of GitHub. Personally I’m not a big GitHub user (don’t worry, I’m learning), so the announcement didn’t provoke the visceral reaction from me that it did in many people. Usually I’m the first to point out that the internet isn’t always the best place to get balanced, calm discussion of a topic, and when I first saw the backlash I assumed it was the typical loud minority; but having read into it more, there seem to be some genuine concerns.
There are a lot of other companies whose acquisition of GitHub would worry me twice as much as Microsoft’s, but Microsoft will still inevitably find itself with conflicts of interest; how will it deal with them? Will it remain “all-in on open source” as it claims, or will it exploit GitHub and drive away its user base?
Bonus Resource
Berkeley Open Sources Largest Self-Driving Dataset Every Data Scientist Should Download NOW
Overview UC Berkeley has open sourced the world's largest and most diverse self-driving dataset It contains 100,000…www.analyticsvidhya.com
UC Berkeley has open sourced the largest and most diverse self-driving dataset available to the general public. It is called ‘BDD100K’ and comes with rich annotations.
Francesca Rossi, IBM Research & University of Padova, on unbiased AI. See also http://ibm.biz/five-in-five #AIethics
WorldSummit.AI
Join 6,000+ AI practitioners from over 100 countries at WorldSummit.AI this October!
AI MUST READS — W22 2018, by City AI
By Joe Lord, Apprentice at Sage UK working on emerging technologies and Intern at City.AI curating the weekly ‘AI Must Reads’. First published 2018-09-03. https://medium.com/s/story/ai-must-reads-w22-2018-by-city-ai-175afd2eb064
How AI Customer Support Can Improve Customer Service
Traditional customer service requires a lot of manpower. If your company answers calls and emails manually, it can’t respond quickly to every customer query. Artificial intelligence can give you the speed and flexibility your company needs.
With the rise in online business activity, there are more ways for your customers to reach you. They can send in forms, discuss problems on forums, and reach out through social media. Customers will also send in emails and, if your company supports it, text messages.
AI tools like chatbots can help customers with basic queries. This leads to quick resolution and customer satisfaction. AI tools can also:
Send automated responses with links to the right FAQ and help pages.
Ask customers for relevant specifics to create a more comprehensive help case that solves problems faster.
Place orders and save customer preferences.
How AI Can Assist Human Support
But sometimes customers want to talk to a human. That doesn’t mean the process has to be slow or too manual. Use AI customer support as your first response to solve basic problems quickly and queue complicated questions the right way. Smart customer service tools can also help human users organize their workload and solve problems faster. Here’s how:
1. Use AI to sort and route support inquiries.
If your customer support team has a workflow chart for redirecting support questions, AI tools can learn it. Chatbots can be trained to recognize keywords and make sure the right human receives the question. Not only does this cut down on delays and redirection, it also reduces the risk of dropped queries.
This solution isn’t just for chatbots. You can employ keyword recognition and query sorting through automated phone systems, email inboxes, and social media. If your company uses Salesforce, use a tool like Desk.com that can integrate with your go-to platform. You don’t have to worry about people dropping the ball when reminders are right in their primary workspace.
2. Use AI for enhanced phone support.
When you have a customer on the line, you don’t have time to troubleshoot the problem. You need to have the best possible answer and be able to present it clearly.
AI customer support can provide live assistance to your customer service representatives over the phone. It can pull up common problems based on keywords in the call and suggest next steps.
Artificial intelligence can even help after the call is over. Because AI and machine learning focus on patterns, your tools can analyze your company’s call statistics. You can better plan out work schedules, find out which strategies work in aggregate, and find outliers.
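The after-call analysis mentioned above is essentially aggregation over tagged call logs. A minimal sketch, assuming calls are already tagged with a topic and a duration (the sample records below are invented):

```python
# Hedged sketch: surfacing common problems and outliers from call logs.
# The `calls` records are made-up sample data, not a real schema.
from collections import Counter

calls = [
    {"topic": "billing", "minutes": 4},
    {"topic": "billing", "minutes": 6},
    {"topic": "login", "minutes": 3},
    {"topic": "outage", "minutes": 55},  # an unusually long call
]

# Which issues come up most often in aggregate?
topic_counts = Counter(call["topic"] for call in calls)

# Flag calls far above the average length as outliers worth a manual review.
average = sum(call["minutes"] for call in calls) / len(calls)
outliers = [call for call in calls if call["minutes"] > 3 * average]

print(topic_counts.most_common(1))
print(outliers)
```

The same counts feed directly into schedule planning: staff the queues whose topics dominate `topic_counts`, and review the outliers by hand.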
AI Changes How Businesses Respond to Customer Queries
Customer service isn’t just about solving the problem. It’s about making sure your customers continue to have a great experience with your company. Artificial intelligence customer support can help you manage and delight your customers.
1. Augment your messages.
The more personalized messages are, the more they resonate with your target market. Chatbots like the bot in LiveChat can help decide which tone to take with customers based on their initial queries and responses. They can also tap into customer profiles to personalize email responses and anticipate future needs. Even better, your employees can see all of the chat histories from an internal interface. That helps your employees know when they need to step in and what augmentation works the best.
2. Use AI customer support as a brand manager.
In a highly competitive market, your most important asset is your company’s reputation. Use AI tools that integrate with social media to get ahead of negative responses or angry reviews.
But brand management isn’t just a concept for when something goes wrong with public relations. You can also use a wide variety of AI tools to schedule your social media responses, to thank customers for positive feedback, and to add complex problems to the right employee’s to-do list.
The Future of AI Customer Service
Customer service will always need humans, but the role people play is changing. More and more companies are reserving their employees for complex problems and creative thinking, while using efficiency-improving applications, including automated responses and AI tools, to answer quick questions instantly and start the troubleshooting process.
Customers will continue to use a wider array of communication tools and feedback platforms. Company success will also continue to depend on the customer’s experience, not just a quick resolution. AI customer support tools can solve both problems at once.
Recommended Solutions:
LiveChat
This tool makes it easy to talk to customers in real time, avoiding both the delays and long back-and-forths of email and the phone conversations younger customers tend to dislike. Use AI engagement features to start the conversation and make sure the right employee receives the query instantly.
Desk.com
Having too many different tools can make customer support harder, but Desk.com integrates with Salesforce so nothing gets lost. This tool doesn’t just help greet and respond to customer queries. It links help cases and problems to the customer’s central account, which helps your team escalate cases to the right department, whether it’s IT or Finance. Salespeople can also review the history to retain customers and sell more products.
Find more AI customer support tools and software news about growing trends at the CUE Marketplace. For some links in this post CUE may receive an affiliate commission.
How AI Customer Support Can Improve Customer Service
By CUE Marketplace — discover the best software and services to grow your business on https://cuemarketplace.com. First published 2018-09-11. https://medium.com/s/story/how-ai-customer-support-can-improve-customer-service-175c1b6549bb