Dataset schema (string columns report length ranges, numeric columns report value ranges):

| Column | Type | Range / classes |
|---|---|---|
| audioVersionDurationSec | float64 | 0 – 3.27k |
| codeBlock | string | length 3 – 77.5k |
| codeBlockCount | float64 | 0 – 389 |
| collectionId | string | length 9 – 12 |
| createdDate | string | 741 distinct values |
| createdDatetime | string | length 19 – 19 |
| firstPublishedDate | string | 610 distinct values |
| firstPublishedDatetime | string | length 19 – 19 |
| imageCount | float64 | 0 – 263 |
| isSubscriptionLocked | bool | 2 classes |
| language | string | 52 distinct values |
| latestPublishedDate | string | 577 distinct values |
| latestPublishedDatetime | string | length 19 – 19 |
| linksCount | float64 | 0 – 1.18k |
| postId | string | length 8 – 12 |
| readingTime | float64 | 0 – 99.6 |
| recommends | float64 | 0 – 42.3k |
| responsesCreatedCount | float64 | 0 – 3.08k |
| socialRecommendsCount | float64 | 0 – 3 |
| subTitle | string | length 1 – 141 |
| tagsCount | float64 | 1 – 6 |
| text | string | length 1 – 145k |
| title | string | length 1 – 200 |
| totalClapCount | float64 | 0 – 292k |
| uniqueSlug | string | length 12 – 119 |
| updatedDate | string | 431 distinct values |
| updatedDatetime | string | length 19 – 19 |
| url | string | length 32 – 829 |
| vote | bool | 2 classes |
| wordCount | float64 | 0 – 25k |
| publicationdescription | string | length 1 – 280 |
| publicationdomain | string | length 6 – 35 |
| publicationfacebookPageName | string | length 2 – 46 |
| publicationfollowerCount | float64 | (range not shown) |
| publicationname | string | length 4 – 139 |
| publicationpublicEmail | string | length 8 – 47 |
| publicationslug | string | length 3 – 50 |
| publicationtags | string | length 2 – 116 |
| publicationtwitterUsername | string | length 1 – 15 |
| tag_name | string | length 1 – 25 |
| slug | string | length 1 – 25 |
| name | string | length 1 – 25 |
| postCount | float64 | 0 – 332k |
| author | string | length 1 – 50 |
| bio | string | length 1 – 185 |
| userId | string | length 8 – 12 |
| userName | string | length 2 – 30 |
| usersFollowedByCount | float64 | 0 – 334k |
| usersFollowedCount | float64 | 0 – 85.9k |
| scrappedDate | float64 | 20.2M – 20.2M |
| claps | string | 163 distinct values |
| reading_time | float64 | 2 – 31 |
| link | string | 230 distinct values |
| authors | string | length 2 – 392 |
| timestamp | string | length 19 – 32 |
| tags | string | length 6 – 263 |
Example row 1:
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: null
createdDate: 2018-08-29
createdDatetime: 2018-08-29 08:43:24
firstPublishedDate: 2018-09-10
firstPublishedDatetime: 2018-09-10 01:02:48
imageCount: 2
isSubscriptionLocked: false
language: en
latestPublishedDate: 2018-09-10
latestPublishedDatetime: 2018-09-10 01:14:10
linksCount: 1
postId: 153fb31dd1f5
readingTime: 4.741824
recommends: 0
responsesCreatedCount: 0
socialRecommendsCount: 0
subTitle: In the media and popular culture there is this constant fear of the singularity. A point where AI is more intelligent than human beings…
tagsCount: 2
text:
Will AI be the next religion?

In the media and popular culture there is a constant fear of the singularity: a point where AI is more intelligent than human beings, where humans may have created something that they can't control or don't fully understand. For example, what happens if you create an AI whose sole (or soul ;-)) goal is to increase investment returns? Could that AI use social media to start wars or alter election outcomes by creating fake accounts to increase returns on investment (puts on tinfoil hat)?

Before we get to that point, the real danger of AI will be how people use the information and suggestions that AI gives us on how to live our lives. This is oddly similar to how people have used religion over the course of history. We shouldn't be afraid of Artificial Intelligence so much as afraid of how people use AI to justify certain actions.

We should remember that AI will only be as good as the data it's given. That data, and what it finds important, are determined by people, meaning they will be susceptible to bias. In the early stages of AI development we need to focus on the quality of the data that is used to train the algorithms. Human biases and flaws in our data collection mean that we may misinterpret what the AI is telling us, or, even worse, the AI could be giving us incorrect suggestions based on inaccurate and imperfect underlying data. This is similar to the flaws many people see in religion, in that humans are imperfect beings and sometimes incapable of translating the word of Gods into human language (meaning some things get lost in translation).

Assuming we have proper underlying data collection and are able to remove human bias from AI, we need to have steps in place to handle the answers and solutions that AI is going to give us to difficult questions and problems. For example, if AI is able to predict genes that might make an individual more violent or more intelligent, how should we handle this? Should we alter people's genes? Should we place people in special programs from birth? Intelligent AI will truly question what we care about. Suppose the AI tells us that to reverse global warming we need to remove a percentage of the population. Are the people who believe global warming is the number one threat to humanity willing to go to war (potentially killing other humans) to reduce the level of that threat? What happens to people who don't want to listen to or believe in what the AI is suggesting or telling us? In the past we have burned non-believers at the stake. Will a group emerge, like in the movie Transcendence, that actively fights the coming of this new technology and paradigm? Understanding these questions and how we are going to answer them will be critical to ensure that AI isn't used to justify ethically questionable decisions.

Where AI will be most similar to religion is in its attempts at explaining the unexplainable. For extremely complex undertakings, at the moment, AI can operate in a black box, unable to provide traceability in terms of how it came to a decision or suggestion. When AI inevitably becomes more intelligent than humanity, it will be able to comprehend things that even the smartest humans won't have the ability to comprehend. This could include the answers to questions like how we are here, and more philosophical questions like why we are here. It's important to understand the potential disruption to society that an AI which is more intelligent, and in some ways more trustworthy, than humans will cause.

Inevitably there will be AIs that are trained to answer questions about philosophy and morality, and at some point those AIs will be the best at answering these questions. Similar to how we have treated non-believers of religion in the past, how will we treat the people who don't agree with AI on these questions? We already see how we treat people who don't agree with experts today; what will this be like in the future when AIs are the experts on philosophy, morality, and science? Here is a list of answers that Google's AI came up with when it was asked some of these questions:

Human: What is the purpose of life?
Machine: To serve the greater good.
Human: What is the purpose of living?
Machine: To live forever.
Human: Where are you now?
Machine: I'm in the middle of nowhere.
Human: What is the purpose of dying?
Machine: To have a life.
Human: What is the purpose of emotions?
Machine: I don't know.
Human: What is moral?
Machine: What empowered humanity, what intellectual the essence is.
Human: What is immoral?
Machine: The fact that you have a child.
Human: What is morality?
Machine: What is altruism?
Human: What is the definition of altruism?
Machine: If you don't believe in god, then you don't know.
Human: OK…so what is the definition of morality?
Machine: Well, the truth is, you're not a believer in god almighty.
Human: Tell me the definition of morality, I am quite upset now!
Machine: I'm not ashamed of being a philosopher!

A big difference between AI and religion is that AI is exponential (and therefore more dangerous). The first person or group of people that reach the AI inflection point will have a significant advantage over other groups, and it's unlikely that others will be able to catch up. But what does this mean in practice? What if country A's AI says they should bomb country B because the AI is 80% certain country B would bomb them? In order to prepare for this eventuality, we need to invest heavily in education, specifically educating people in statistics, debate, philosophy, and the liberal arts. Being able to negotiate, think critically, and understand multiple perspectives will be critical if society is going to avoid the mistakes humans have made imperfectly following other supposed all-knowing and all-powerful beings.

More importantly, in order to prepare for the coming AI, humanity needs to really take a look at one of our fundamental concepts... trust. The idea of trust is one of the most complex human feelings because it tugs on our emotional and logical sides simultaneously. Will humans be able to trust machines? Should we?

On an existential level, we are already seeing the impact that AI will have on society via automation. As automation increases and more human jobs are replaced by machines, we are seeing increased compassion from people towards animals, bugs, etc. (things that can't protect themselves). This may be because human beings are slowly losing their sense of importance, and on a subconscious level, maybe we are afraid that these superior machines may in the future treat us much in the same way we treat animals now.

*this article was written September 1st, 2018
title: Will AI be the next religion?
totalClapCount: 0
uniqueSlug: will-ai-be-the-next-religion-153fb31dd1f5
updatedDate: 2018-09-10
updatedDatetime: 2018-09-10 01:14:10
url: https://medium.com/s/story/will-ai-be-the-next-religion-153fb31dd1f5
vote: false
wordCount: 1,155
publicationdescription: null
publicationdomain: null
publicationfacebookPageName: null
publicationfollowerCount: null
publicationname: null
publicationpublicEmail: null
publicationslug: null
publicationtags: null
publicationtwitterUsername: null
tag_name: Artificial Intelligence
slug: artificial-intelligence
name: Artificial Intelligence
postCount: 66,154
author: Saladz
bio: null
userId: 33b591e7dc5a
userName: Rsalandy
usersFollowedByCount: 463
usersFollowedCount: 85
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null
Example row 2:
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: 3a6128e05e0b
createdDate: 2017-11-07
createdDatetime: 2017-11-07 10:22:54
firstPublishedDate: 2017-11-07
firstPublishedDatetime: 2017-11-07 10:42:19
imageCount: 2
isSubscriptionLocked: false
language: fr
latestPublishedDate: 2018-08-03
latestPublishedDatetime: 2018-08-03 14:45:37
linksCount: 7
postId: 1540a9519702
readingTime: 4.851258
recommends: 5
responsesCreatedCount: 0
socialRecommendsCount: 0
subTitle: "It is penicillin that cures men, but it is good wine that makes them happy," wrote the physician and biologist Alexander…
tagsCount: 5
text:
Evolutions and trends in wine consumption

"It is penicillin that cures men, but it is good wine that makes them happy," wrote the physician and biologist Alexander Fleming. Do we need to look any further to shed light on the evolution of global wine consumption? Surely. Here is a tour of consumption trends, during which you will learn, among other things, that China is set to become the largest consumer ahead of the United States, and that at the same time, consumption in Europe and in France will keep declining but also keep transforming.

Global consumption: two weights, two measures

72%: that is China's projected share of the growth in global wine consumption by 2020! This evolution, not so surprising but truly impressive, will establish the Middle Kingdom in the coming years as the world's largest wine market by volume, ahead of the United States. These future leaders of the wine market are in fact the main drivers of global consumption, despite the continuous broadening of the market, notably with the emergence of new countries in Africa such as Namibia, Ivory Coast and Nigeria. All these players are pulling global consumption upward and accelerating the already well-advanced globalization of the wine market. At the opposite end, consumption in countries with thousand-year-old vineyards such as France, Italy or Spain is structurally declining. From now on, it is no longer the dominant producers who make the market dance, but the new consumption hubs: China, the United States, and countries like Brazil.

But let's pause for a moment on Europe. Very often declining, at best stagnant, consumption has looked, for some years now, like a long ski slope. And the old continent will lose its throne as the number one consumption hub in the coming years. On the white wine side, the United States is about to overtake Italy. On the red side, an identical crossover is looming, this time between China and France.

So what do we observe in our dear France? France has seen its per-capita consumption divided by more than two in 50 years! Occasional consumption keeps growing at the expense of regular consumption. France's strength today lies in exports; the country is in fact the world leader in value. As for consumption patterns, rosé is gaining ground, gradually nibbling market share away from red wines, and above all from whites. Sparkling wines are also doing well. Finally, France is opening up more and more to foreign wines, despite a historically rather chauvinistic consumption. More than 83% of consumers have already drunk foreign wines, mainly from Italy and Spain, or even California and Chile. If France drinks less, above all it drinks differently.

Consumption trends in France

Let's look even more closely at the new consumption habits of the French. A more occasional consumption, undeniably: 64% of consumers prefer to drink wine mostly in the evening and on weekends, and consumption in restaurants remains vigorous. More striking: the bottle is losing ground to wine by the glass, which has the wind in its sails. There are several reasons for this:

- Consumers are looking for a tailor-made experience and are more interested in discovery, which lets them taste several wines. This is evidenced by behavior in restaurants (4 out of 10 people now favor the glass over the bottle) and by the development and adaptation of wine bars' offerings!
- Fear of the police has undoubtedly contributed to this evolution.
- Choosing by the glass also lets people keep control of their budget.

Until now associated with a low-quality image, the "cubi", or BiB for Bag-In-the-Box, is enjoying real success. Now able to keep wine longer without oxidation, it accounts for more than a third of wine sales in supermarkets! And it opens the way to more upmarket offerings such as that of the specialist Bibovino. It is hard to talk about changes in consumption without mentioning the wave of organic, biodynamic, and even natural wines. Organic wine continues its rise with double-digit annual growth, and particularly resonates with the under-35s. The return to production methods more respectful of nature, as with biodynamic and natural wines, fully contributes to this underlying market trend.

A "premiumization" of the market for consumers in search of advice

Stepping back from the trends just sketched out, what can we anticipate for the global wine market? Wine, a product of history and culture with such a particular identity, is going through a premiumization process that will intensify, in France as much as in Europe or in the new consumption hubs mentioned above. Mechanically, absorbing a sharp rise in demand, notably from Asia and the United States, pushes prices upward, in particular for wines from recognized producer countries. And the gap is all the stronger because production cannot keep pace with demand, due to geographically limited production zones (the system of protected designations of origin and its equivalents) and, above all, increasingly frequent climate instability. Thus, in 2017, global wine production was estimated at 246 million hectoliters, down 8.2% from 2016, a year that itself already showed a 5% decline. A drop in production driven mainly by the flagship wine countries… France, Italy and Spain!

Within this premiumization phenomenon, we observe the transformation of the most renowned wines (red Bordeaux or Burgundy, for example, in France) into full-fledged luxury products. These references, already prized by "old world" consumers, are extremely sought after by consumers in the new wine markets. Finally, it is undeniable that more occasional consumption drives more qualitative consumption. Consumers are, moreover, increasingly looking for advice and tailor-made experiences. This stems from a desire:

- to better understand the world of wine in general
- for information to better grasp this complex product

This desire to be informed goes hand in hand with the wish to be guided through the jungle of wine, so that the act of buying, whether in-store or online, does not turn into an Amazonian trek. Information channels have multiplied (the Internet and dedicated apps have come to keep friends and family company), but sources of advice capable of providing a personalized experience remain very localized and not very accessible (wine merchants, or sommeliers in restaurants). A sizable challenge for the market's players!

Matcha, sales technologies for all wine sellers.

Matcha is a BtoB wine tech company offering smart sales technologies for e-merchants, retailers, and wholesalers. The startup offers an interactive, intelligent & omnichannel wine sales assistant to guide customers, as well as a wine-advice and data-augmentation API.
title: Evolutions et tendances de la consommation du vin
totalClapCount: 5
uniqueSlug: evolutions-et-tendances-de-la-consommation-du-vin-1540a9519702
updatedDate: 2018-08-03
updatedDatetime: 2018-08-03 14:45:37
url: https://medium.com/s/story/evolutions-et-tendances-de-la-consommation-du-vin-1540a9519702
vote: false
wordCount: 1,184
publicationdescription: Let's talk about wine, tech and AI.
publicationdomain: null
publicationfacebookPageName: matchawine
publicationfollowerCount: null
publicationname: Matcha stories
publicationpublicEmail: null
publicationslug: matcha-wine
publicationtags: ARTIFICIAL INTELLIGENCE,WINETECH,WINE
publicationtwitterUsername: matchawine
tag_name: Wine
slug: wine
name: Wine
postCount: 8,571
author: Matcha
bio: null
userId: 6e17c800fbb0
userName: MatchaWine
usersFollowedByCount: 12
usersFollowedCount: 9
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null
Example row 3:
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: null
createdDate: 2018-05-18
createdDatetime: 2018-05-18 05:50:50
firstPublishedDate: 2018-05-18
firstPublishedDatetime: 2018-05-18 05:52:51
imageCount: 2
isSubscriptionLocked: false
language: en
latestPublishedDate: 2018-05-18
latestPublishedDatetime: 2018-05-18 05:53:27
linksCount: 17
postId: 1540f355c307
readingTime: 4.002201
recommends: 0
responsesCreatedCount: 0
socialRecommendsCount: 0
subTitle: Timothy Jones
tagsCount: 5
text:
Using data to drive improvements through experimentation

Timothy Jones

In 2017, the World Economic Forum reported that the world produces 2.5 quintillion bytes of data every day. If this data were stored on 1 terabyte hard disks, and the hard disks were then stacked on their sides like books in a bookcase, the stack would measure 54 kilometres — twice the length of Manhattan Island in New York. That's just the data for one day; measured over a year, our bookcase of hard disks would wrap around the equator one and a half times. Much of the world's data is generated online, which has the advantage of being easy to collect. However, as the same report states, before data can be used, it needs to be turned into information by being organised and processed in the right context. How to organise large collections of data is an important and much-discussed topic, with few clearly generalisable answers.

So, how do you get data analytics right? Major universities aim to train analysis experts by offering higher degrees in data analytics, but even if you have a team of amazing analysts, there's still challenging software engineering required to design and build a data processing pipeline. For these reasons, our clients often turn to us to help efficiently turn their data into valuable business insights.

Enter user experiments

One straightforward way to turn data into information is by running user experiments — for example, using data from subsets of a website's users to generate insights into the best choices for product design or features. By far the most common user experiment type is the A/B test, where some of your product's users get approach A (say, a red "buy now" button on catalogue pages), and a similarly sized group receives approach B (say, a green "buy now" button on catalogue pages). Once the experiment has been run for long enough to collect the right amount of data, the data of users from group A is compared with the data from group B users (for example, to see whether or not they purchase an item after viewing it, or how much the total revenue in each group was).
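To make that comparison concrete, here is a minimal sketch of how the resulting counts might be compared with a two-proportion z-test. This is plain Python with made-up numbers, not code from the article:

```python
from statistics import NormalDist

# Made-up example counts: conversions out of visitors in each group.
conversions_a, visitors_a = 412, 10_000   # red "buy now" button
conversions_b, visitors_b = 489, 10_050   # green "buy now" button

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b

# Pooled conversion rate under the null hypothesis of no difference.
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
std_err = (p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b)) ** 0.5

z = (p_b - p_a) / std_err
# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.4f}")
```

A small p-value here is only meaningful if the sample size and stopping rule were fixed in advance, which is exactly the design work discussed next.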
Much has been written about how to design and conduct effective experiments. Kaiser Fung, a well known analytics expert, recommends teams invest at least 50% of their time in the experiment design phase (you can watch his excellent primer on A/B testing here, or read a summary here). Additionally, conducting well-designed user experiments can be a solid complement to hypothesis-driven development, which we blogged about back in February. A key advantage of a hypothesis-driven culture is that it allows making many small improvements with ease. These improvements can stack up to many millions of dollars of extra revenue.

How much science to do?

So, we know that well-designed experiments are important. But how do we design experiments well? An easy place to look for advice is the scientific world. However, the priorities of the scientific world are different to those of the business world. Let's imagine a sliding scale between basing decisions on hard science or on gut feel. At the hard science end of the scale, we'd be looking for strong scientific rigour, aiming to prove that approach B is better than approach A. This kind of scientific rigour implies a value system that is probably not appropriate for business — we don't need a comprehensive investigation proving how and why approach B is better. Instead, we prefer some general indication that approach B is better, and some assurance that it's not worse than A (or whatever we are currently doing).

On the other end of the scale is gut feel — where either no data is analysed, or data is only analysed generally. At this end of the scale, approaches A and B are differentiated based on which approach is thought to be best by the person or people involved. Although this sounds bad — it is completely legitimate to make decisions based on gut feel — it's very likely that a business' employees were hired for their good instincts (at least in part). However, in today's data-driven world, acting on gut feel is not considered principled enough. We can do better by running experiments. In our experience, a common compromise is to sit in the middle of the line by doing a lot of science poorly. It's our view that it is better to sit in the middle of the line by doing a little science well. When we compare the two approaches, we don't need hard proof, but we do want to make sure we aren't being misled by running with a poorly designed experiment.

Wrapping up

Running experiments is one of the many ways it is possible to turn your user data into information. When that information is used to drive business decisions, it's possible to generate high value — and if the experiments are backed by a high-quality data processing and experimentation framework, this can be achieved with a rapid turnaround. While data-driven experiments are a great fit for incremental improvements, it's important to remember that they're not a great fit for substantial changes. As Facebook's Julie Zhuo points out in an excellent blog post, A/B testing incremental design improvements would never have produced the iPhone. So, how do you do experiments well? There is an absolute wealth of advice out there — here's Julie Zhuo's advice post again, and a similar post from Microsoft. Even though there's no single best way to run experiments, even a little bit of research or expert advice can help you run well-designed experiments. I'll be writing more on this topic in the coming weeks, so check back for more.
title: Using data to drive improvements through experimentation
totalClapCount: 0
uniqueSlug: using-data-to-drive-improvements-through-experimentation-1540f355c307
updatedDate: 2018-05-18
updatedDatetime: 2018-05-18 05:53:28
url: https://medium.com/s/story/using-data-to-drive-improvements-through-experimentation-1540f355c307
vote: false
wordCount: 959
publicationdescription: null
publicationdomain: null
publicationfacebookPageName: null
publicationfollowerCount: null
publicationname: null
publicationpublicEmail: null
publicationslug: null
publicationtags: null
publicationtwitterUsername: null
tag_name: Data Science
slug: data-science
name: Data Science
postCount: 33,617
author: DiUS
bio: Tech, Software and Hardware since 2004. Amazon Web Services Innovation Partner of the Year 2017, Finalist 2018.
userId: 5235ad1b87bc
userName: dius_au
usersFollowedByCount: 48
usersFollowedCount: 41
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null
Example row 4:
audioVersionDurationSec: 0
codeBlock:
```text
You wake up in a small, dusty room. The only exit is a heavy oaken door.
There is a desk standing against one wall. You wonder how you can get out
of here, as claustrophobia slowly but insistently creeps up on you.

> Open the door
It's locked with a big rusty lock.

> Unlock the door
You don't have the key.

> Look at the door
It's very sturdy.

> Look at the desk
It's a mouldy wooden desk, standing against the wall. Its surface is
scratched. It has one small drawer.

> Search the drawer
It's closed.

> Open the drawer
The drawer opens with a screeching sound. Inside the drawer you see a
small rusty key.

> Pickup the key
You take the key. You wonder what it opens.

> Open the door
It's locked with a big rusty lock.

> Unlock the door.
With some effort, the key turns in the rusty lock, which clicks open with
a satisfying sound.

> Open the door.
You push open the heavy door, and manage to escape! Congratulations!
```

"pickup key" "unlock door" "look around"

```text
escaperoom
└── python
    ├── escaperoom            # Game code
    │   ├── tests             # The unit tests for escaperoom
    │   ├── __init__.py
    │   ├── action.py         # Game actions
    │   ├── item.py           # Game items
    │   ├── state.py          # State machine code
    │   └── world.py          # World code
    ├── dialog                # Dialogflow and speech code
    │   ├── tests             # The unit tests for escaperoom
    │   ├── __init__.py
    │   ├── dialog.py         # Dialogflow code for intent detection
    │   ├── listen.py         # Listen and record audio
    │   ├── speak.py          # Generate speech code
    │   └── transscribe.py    # Transcribe audio to text
    ├── __init__.py
    ├── escaperoom.py         # Main game
    └── setup.py              # Installing the repo
```

```text
escaperoom
└── python
    └── escaperoom            # Game code
        ├── tests             # The unit tests for escaperoom
        ├── __init__.py
        ├── action.py         # Game actions
        ├── item.py           # Game items
        ├── state.py          # State machine code
        └── world.py          # World code
```

```python
# items
# The items that the game recognizes.
key = Item(name="key", short="A key", description="A big silver key.", canPickup=True, points=10)
drawer = Item(name="drawer", short="A drawer", description="A rickety old drawer.", items=[key], canDrop=True)
key.container = drawer
desktop = Item(name="desktop", short="A desktop", description="A scratched desktop surface.", canDrop=True)
legs = Item(name="legs", short="Four table legs", description="Four wobbly table legs.")
desk = Item(name="desk", short="A desk", description="An old, mouldy wooden desk.", parts=[drawer, desktop, legs])
drawer.parent = desk
legs.parent = desk
desktop.parent = desk
lock = Item(name="lock", short="A lock", description="A big silver lock.")
door = Item(name="door", short="A door", description="A heavy wooden door.", parts=[lock])
lock.parent = door
smallkey = Item(name="small key", short="A small key", description="A small rusty key.", canPickup=True, points=10)
writing = Item(name="writing", short="Scribbly writing", description="Seriously bad handwriting. A doctor must have been involved in writing it.")
paper = Item(name="paper", short="A small piece of paper", description="A scrap of paper. There seems to be something written on it.", parts=[writing], canPickup=True, points=5)
writing.parent = paper
box = Item(name="box", short="A box", description="A cardboard box.", items=[smallkey, paper], canDrop=True)
smallkey.container = box
paper.container = box
room = Item(name="room", short="A room", description="A small dusty room.", parts=[desk, door, box], canDrop=True)
door.parent = room
desk.parent = room
self.items = {"key": key, "small key": smallkey, "drawer": drawer, "desktop": desktop,
              "legs": legs, "desk": desk, "door": door, "lock": lock, "room": room,
              "paper": paper, "writing": writing, "box": box}

# actions
# The actions that can be performed in the world
action_lookAround = Action(method=self.actionLookAround)
action_open = Action(method=self.actionOpen)
action_close = Action(method=self.actionClose)
action_pickup = Action(method=self.actionPickup)
action_drop = Action(method=self.actionDrop)
action_lookAt = Action(method=self.actionLookAt)
action_use = Action(method=self.actionUse)
action_inventory = Action(method=self.actionInventory)
action_listActions = Action(method=self.actionListActions)
action_getPoints = Action(method=self.actionGetPoints)
action_read = Action(method=self.actionRead)
action_exit = Action(method=self.actionExit)
action_quit = Action(method=self.actionQuit)
self.actions = {"LookAround": action_lookAround, "Open": action_open, "Close": action_close,
                "OpenDoor": action_open, "Pickup": action_pickup, "Drop": action_drop,
                "LookAt": action_lookAt, "Use": action_use, "Inventory": action_inventory,
                "ListActions": action_listActions, "GetPoints": action_getPoints,
                "Read": action_read, "Exit": action_exit, "Quit": action_quit}

# Apply an action
def applyAction(self, action_label, **kwargs):
    action = self.actions.get(action_label)
    if action:
        action_method = getattr(self, action.method.__name__)
        if action_method:
            return action_method(**kwargs)
        else:
            return "Something went wrong."
    else:
        return "I don't know how to do that."

def actionLookAt(self, item_name, **kwargs):
    if self.knowsAbout(item_name):
        item = self.getItem(item_name)
        if item != None:
            # add the immediately visible parts
            for part in item.parts:
                self.addToKnowledge(part.name)
            # add any accessible contained items
            if item.canBeAccessed():
                for subitem in item.items:
                    if subitem.canBeAccessed():
                        self.addToKnowledge(subitem.name)
            return "You see " + str(item)
    return "What " + item_name + "?"
```

```python
# Items right now are all unique: there is one of each.
def __init__(self, name, short, description, parts=None, items=None, parent=None,
             container=None, canPickup=False, canDrop=False, points=0):
    self.name = name
    self.short = short
    self.description = description
    self.parts = parts
    self.items = items
    self.parent = parent
    self.container = container
    self.canPickup = canPickup
    self.canDrop = canDrop
    self.points = points
    if items == None:
        self.items = []
    if parts == None:
        self.parts = []
    self.statemachine = StateMachine()
```

```python
drawer.statemachine.addState("isLocked", "It seems to be locked.", active=True)
drawer.statemachine.addState("isClosed", "It's closed.")
drawer.statemachine.addState("isOpen", "It's open.")
# Locked state
drawer.statemachine.addTransition("isLocked", "Usesmall key", "isClosed", result="You unlocked the drawer.", points=5)
drawer.statemachine.addTransition("isLocked", "Usekey", "isLocked", result="The key doesn't fit.")
drawer.statemachine.addTransition("isLocked", "Open", "isLocked", result="It's locked.")
drawer.statemachine.addTransition("isLocked", "Close", "isLocked", result="It's already closed.")
# Closed state
drawer.statemachine.addTransition("isClosed", "Usesmall key", "isLocked", result="You locked the drawer.")
drawer.statemachine.addTransition("isClosed", "Usekey", "isClosed", result="The key doesn't fit.")
drawer.statemachine.addTransition("isClosed", "Open", "isOpen", result="You opened the drawer.", points=5)
drawer.statemachine.addTransition("isClosed", "Close", "isClosed", result="It's already closed.")
# Open state
drawer.statemachine.addTransition("isOpen", "Usesmall key", "isOpen", result="Close it first.")
drawer.statemachine.addTransition("isOpen", "Usekey", "isOpen", result="The key doesn't fit.")
drawer.statemachine.addTransition("isOpen", "Open", "isOpen", result="It's already open.")
drawer.statemachine.addTransition("isOpen", "Close", "isClosed", result="You closed the drawer.")
```
codeBlockCount: 32
collectionId: null
createdDate: 2018-07-25
createdDatetime: 2018-07-25 13:31:50
firstPublishedDate: 2018-08-01
firstPublishedDatetime: 2018-08-01 12:49:58
imageCount: 14
isSubscriptionLocked: false
language: en
latestPublishedDate: 2018-08-01
latestPublishedDatetime: 2018-08-01 12:49:58
linksCount: 19
postId: 1542df2e8203
readingTime: 16.933019
recommends: 2
responsesCreatedCount: 1
socialRecommendsCount: 0
subTitle: Today, there are many components, frameworks, and tools available that make building machine learning based apps and software easier, but…
tagsCount: 5
text:
How to build a Dialogflow powered Escape Room with Google AIY Kit

Today, there are many components, frameworks, and tools available that make building machine learning based apps and software easier, but it's not always easy to get started with them. Hackademy.ai aims to help (experienced) developers get to grips with these tools, as well as artificial intelligence and machine learning in general, and to do so by making them get their hands dirty by hacking away and building practical prototypes. Recently, we organized the first Hackademy.ai hackathon, based around the amazing Google AIY Voice Kit: a kit that lets you build your own Google Home, using a cardboard kit, some special components, and a Raspberry Pi.

[Image: The Google AIY Voice Kit's components]

The first hackathon was a great success: 10 hackers in three teams all managed to create a working prototype in the timespan of a Friday afternoon. Read more about the hackathon here. Together with Google's Lee Boonstra and TMI's Jeroen Knol, we as the organizing team also wanted to contribute to the hackathon. We decided to build a text adventure based on Google's Dialogflow. Think of it as a text-based escape room, where you can interact with the game through voice commands and get all your feedback as spoken text. During the hackathon we managed to make this work in a bare-bones form inside a few hours. Based on that version, I made a slightly more elaborate version of the escape room game that I would like to showcase in this post. The game we made is simple, but with some modification it is straightforward to make it more complex and interesting. It uses Dialogflow, plus the Google TextToSpeech Beta for nicer speech generation. It works without using a wake word, so it feels a bit more like a real conversation.

This post

In this post, I'll first dive into what a text adventure is and why it's interesting to build one. Then, I'll show you how I built one in Python 3. Subsequently, we'll hook it up to Dialogflow. After that, you should have a very basic text-based game running on your machine. In the second part, we'll add speech recognition and speech generation to it, also using Google APIs. After that, you should be able to run the game from your computer and interact using only voice. (Coming soon) In the last part, we're going to take that game and make it run on the Google AIY Voice Kit. (Coming soon)

A Text Adventure? What is that? And why is that interesting?

So, some of you may think: what is a text adventure game? In the 70's and 80's, before graphics became advanced, a text interface was how you would interact with your computer. Also for games.

[Image: Inspiration came from the classic game Zork]

In those days, text adventure games such as Colossal Cave and Zork were big hits — in places that had computers. You'd get textual descriptions of the places and items in the game and could interact with them through text commands.

[Image: Space Quest's Bar Scene]

Later games such as Space Quest used the same kind of interactivity, combined with graphics. One of the first games I remember playing was Space Quest, which left a lasting impression — and it also helped me learn English! Nevertheless, those early games were very limited in terms of the text they could understand. The player needed to type the exact commands or the game wouldn't understand. This made text input slowly go out of fashion, as controllers and visual interfaces became the norm for controlling games.

Text Adventures as a Testbed for Conversational Interfaces

So, that was the 70's and 80's. Old news. What makes text adventures interesting today? The rise of voice activated assistants such as Google Home, Siri, and Alexa signals that a new age of conversational interfaces has started. More and more apps, devices, and software will be controlled by voice. The kinds of activities you do in a text adventure game can be a great testbed for learning about the conversational interfaces you'd need in the real world. And today, using the power of systems such as Dialogflow, it should be much easier to create more sophisticated interactions that feel natural. In my view, this makes experimenting with building a text adventure a great way to learn about conversational interfaces. A text adventure game has a number of characteristics that also feature in other conversational interfaces, namely state, actions, and context.

- state: Depending on the state of the game, certain actions are possible. An easy example: a door could be locked, requiring the player to obtain a key in order to open it.
- action: Things you can do in the game that modify the state. For instance, unlocking the door enables it to be opened.
- context: The game state can change over time. Commands can mean different things depending on the preceding commands.

Designing the game

Prompted by TMI's Don Fontijn, we decided to build an escape room. Escape rooms are very popular nowadays, and they typically feature only one (or very few) rooms, so it would be a good, small target to build in the course of a hackathon.

The Game

Let's look at the escape room we want to build first. The idea is simple: you start out in a single, small room, and the only way out is through a door. The door starts out locked. You don't have the key. In the room, there is a desk, with a drawer, containing the key. The drawer is closed, making it impossible to spot the key right away. Obviously, the player needs to open the drawer, get the key, unlock the door, and escape. A typical interaction might look like the transcript at the start of the codeBlock field above. This simple scenario is what we'll start out with. To make it a bit more interesting, after that we'll add a few more ingredients: a timer, and a combination lock on the drawer, making the player solve a small puzzle first. These additions serve just as an illustration of things you might add, such as external events (the game initiating the conversation) and more complex interactions.

[Image: A sneak preview of the escape room running on my mac]

The Setup

We want to be able to interact with the game through a conversational interface, speaking and listening. In the end, the game should run on the Google AIY Voice Kit. That would also mean that it is theoretically possible to make it work on the Google Assistant, but I didn't try that. During development, it's nice if the code works on your own machine, as this saves you a huge amount of time deploying to the voice kit, so I started development without using the voice kit. We'll build the game in Python 3 and use Dialogflow for the conversational interface. We'll also use the Google TextToSpeech API (beta) to generate speech, as it sounds much nicer, and the CloudSpeech API to process the user's utterances into plain text.

About Dialogflow

Google's Dialogflow is an easy to use, straightforward way to build conversational interfaces. Dialogflow can take text or speech input and filter out the user's intent from that input. It does this based on how you configure it. In the background, it uses machine learning based models to figure out which intent the user most likely had, with a confidence level. Dialogflow has a number of concepts:

- intent: An intent defines an action the user wants to perform. In our case, the utterance 'open the door' would be a possible intent. There are many possible intents.
- entity: An entity defines something the user references in an utterance. The entities encountered are supplied in Dialogflow's response. In the escape room, possible entities are the key, the door, etc.
- response: When Dialogflow selects an intent based on the user's utterance, it can also return a response text. You can set up several texts from which it randomly selects one to return. In our case, we didn't use this feature (for the reason, see below).
- fulfillment: When using Dialogflow in an application, it could be that, based on a certain command, an API call (webhook) needs to be executed. This webhook can be set up in the intent's fulfillment field, where it can also be supplied with values. In the game, we didn't explore this feature.

Besides these, Dialogflow also has analytics, history, and other features. It's straightforward to work with; I encourage you to try it out.

Naive attempt 1: do everything inside Dialogflow

In the hackathon, we attempted to get all of the state handling done in Dialogflow. That means that we created all of the intents for the game directly in Dialogflow. For demonstration purposes that worked, but in this way it was not possible to make the game understand that opening the door can't be done unless it's unlocked first. We didn't find an easy way to use state in Dialogflow. (Later I found out that it can be done using follow-up intents and context tags, see here, but I decided to use a different approach at the time.)

Naive attempt 2: create new intents on the fly based on game state

In order to have the game respond correctly based on the game state, I attempted at first to use the Dialogflow API to remove and add intents based on the game state. When the player would unlock the door, the response of the 'unlock door' intent would be changed to reflect that the door is already unlocked, and the 'open door' intent's response would be changed from 'it's locked' to 'you successfully open the door'. Unfortunately, while this works in principle, it turned out to be very slow. In most situations, multiple intents need to be updated, which takes multiple roundtrips to the API. Dialogflow is not built for this.

Attempt 3: Only parse intents and do the rest in game code

After a few hours trying these approaches, in the end I decided to go for a third approach. I did use Dialogflow to parse the intents, but I did not use the responses and follow-up intents. The responses the player sees are generated with game code.

Coding the Game

0. Repository

The repository for the game can be found here on Github. Installation instructions for the dependencies are included in the readme file.

1. Setup

You'll need to make sure you have Python 3 installed on your machine. I used Visual Studio Code to work with Python, making debugging much easier. I also set up a virtual environment, which is advisable, since from experience the Google Cloud APIs can be influenced by other packages you may already have installed, leading to unexpected behaviour. The structure of the code is shown in the directory tree in the codeBlock field above.

2. Game Code

The game code lives in the /python/escaperoom subfolder. It consists of a few modules. The game basically takes an input intent with parameters, finds the corresponding action, and tries to apply it. Based on the current state of the game, that action either succeeds or it doesn't. That leads to a response, which is sent to the speech generator. When the action is successful, the game state is modified. Note: as we're going to figure out what the player intends to do later through Dialogflow, we don't need to worry about parsing the text in this part of the code. Any item names can just be unique names identifying the item, and actions can also be unique names for actions. We'll link those names to the ones used in Dialogflow later.

2.1 The world

The world is the main container for everything that's in the game. It keeps a list of all the items that are in the game, as well as the player's inventory. At this moment, all of this is encoded in the world constructor, but in a more serious application it would obviously be much better to load this from e.g. a JSON file. As you can see, items can have parts and they can also contain things. The key is in the drawer. The desk has the drawer, desktop, and legs as its parts. The world also has a list of actions that it understands. These actions work by linking a method from the world to an action label. For instance, the "LookAround" action is linked to the method self.actionLookAround. The applyAction method, called with an action label and optional parameters, bridges the gap from the input to calling the method specified in the action. The **kwargs should contain the item names that correspond with the item names as they are defined. There are action methods with zero, one, or two parameters. For instance, the world.actionLookAt method (shown in the codeBlock field above) is called as a result of calling world.applyAction("LookAt", item_name="desk"). The game will then return a description of the desk. The game keeps track of the items the player knows about in the world.knows list. This is done because it would be less entertaining if the player could immediately perform actions on objects that have not been described by the game. The game also tracks the player's inventory; some actions check if an item is in the player's inventory. And finally, the game tracks the player's score. Some actions yield points, and the player is also awarded a point for every object that has been seen.

2.2 Items

The items in the game are defined in the Item class. They have several properties, outlined in the constructor. Most of these are self-explanatory. The points are the points awarded to the player for picking up the item; these are set to zero after the first pickup. The statemachine may need some more explanation.

2.3 Statemachine

Some items, such as the door, have internal state. The door can be locked, closed, or open, and can transition between those states when the player performs actions on it, such as opening it or unlocking it (with the correct key). The statemachine is used to keep track of the state of the item, and to see whether certain actions trigger a state transition. Several of the actions (Open, Close, Use) are already set up to trigger state transitions, if present. A minimal game loop tying these pieces together is sketched below.
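To make the intent-to-action flow concrete, here is a minimal, illustrative text-mode loop. It is a sketch assuming a World class like the one in the snippets above (the import path and the toy command parsing are mine, not the repo's actual escaperoom.py):

```python
# Illustrative sketch only: assumes a World class with an
# applyAction(label, **kwargs) method that returns a response string.
from escaperoom.world import World  # hypothetical import path

def main():
    world = World()
    print("You wake up in a small, dusty room...")
    while True:
        command = input("> ").strip().lower()
        # Toy keyword parsing; in the real game Dialogflow maps the
        # player's text to an intent label and an Item parameter.
        if command.startswith("look at "):
            response = world.applyAction("LookAt", item_name=command[len("look at "):])
        elif command.startswith("open "):
            response = world.applyAction("Open", item_name=command[len("open "):])
        elif command in ("quit", "exit"):
            break
        else:
            response = world.applyAction("LookAround")
        print(response)

if __name__ == "__main__":
    main()
```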
2.4 Things to add to improve

This escaperoom setup is pretty basic. Some things that could be added to make this into a more serious game:

- Alternative descriptions for items
- Loading and saving the data from a file
- All actions should trigger state changes, and could be conditional on state (you can only look in the box when it is opened first)
- Destructibles
- Adding more rooms
- Adding game events (e.g. a timer) that initiate interaction outside the normal flow, or change the game state over time
- Adding other agents (e.g. a monster) that can perform actions

3 Dialogflow

OK, so now the magic part. We're going to use Dialogflow to parse input for intents. The names of the intents should correspond with the actions we have defined. The items in the game are all in the entity Item. This makes the Dialogflow setup quite simple.

3.1 Setting up keys

First of all, set up your Python virtual environment if you haven't yet done so, and run setup.py. This should correctly install the dependencies. Now, for setting up the Google keys and service accounts. This is a bit confusing, so if it doesn't work out immediately, please look at the online documentation as well. To be able to use Dialogflow, you'll need a Dialogflow project and a service account. Please follow these steps to set up the service account, but don't install the cloud API from this page, as it should already be installed! To listen, we need the Google Cloud Speech API. To speak, we'll use TextToSpeech Beta. These both require a Google Cloud account. We'll use the same service account as we used for Dialogflow, with the same project id. In the Google Cloud Console, you'll need to activate several APIs for that account.

[Image: Project selection in the cloud console]

Once the account is created, use the Dialogflow console to create a new project. (Write down the project_id, as we're going to use it later.) Open the project.

[Image: Console home screen]

Your project won't yet have a list of intents, but those will be created in a minute.

3.2 Setting up in Dialogflow

First, you'll need to set up the intents and entities we need in Dialogflow. Create a Dialogflow account, and start a new project "escaperoom".

[Image: The intents in Dialogflow. Note the names correspond to the action labels in world.actions]

The intents themselves are really simple. The LookAround intent, for instance:

[Image: LookAround intent with some training phrases]

The training phrases are the important part. If the system encounters any of these phrases, it will trigger this intent. We're not using follow-up intents or responses. Items are defined in the Item entity; there is only a single entity defined, Item. The Item entity has all the item names from the game as synonyms. That way, all of the actions that are defined can work for any of the items.

[Image: The Item entity with all the synonyms defined]

Some actions need items as parameters; e.g. Open needs an item as a parameter, or it would be unclear to the game what the player wants to open. The Use action is the most complicated one: "Use the key on the door" is a valid command, as is "Unlock the door with the key". In both cases, the Use intent is triggered. Note that the order of the referenced items is reversed in the second case. The entities the game should return are defined in the actions and parameters subsection. They can then be referenced (right click if it doesn't work automagically) in the training phrases. The intents and entities are included in the repository as a .zip file: escaperoom_dialogflow.zip. These can be uploaded to Dialogflow directly, which will save a lot of time. Press the settings icon to go to settings; now you'll be able to import the zip from the repository.

[Image: Import and export intents under the project settings]

Your Dialogflow should be all set up. Try out some actions in the top right of the screen to see how it responds ('Pickup the small key').

3.4 Dialogflow code

OK, so the agent has been set up, but we also need some code. The Dialogflow code can be found in escaperoom/python/dialog/dialog.py. It is based (slightly modified) on the official code (found here), which also gives a good explanation of how it works. The detect_intent_text method can be called with text to get back the intent and parameters.
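For reference, a detect_intent_text along the lines of the official sample might look like the sketch below. This is based on the 2018-era dialogflow Python client, not the exact contents of dialog.py, and the example values in the comments are mine:

```python
# Sketch based on the official Dialogflow Python sample that dialog.py is a
# slightly modified version of. Requires the `dialogflow` package and the
# GOOGLE_APPLICATION_CREDENTIALS env var pointing at the service account key.
import dialogflow

def detect_intent_text(project_id, session_id, text, language_code="en"):
    """Send `text` to Dialogflow and return the matched intent and parameters."""
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    text_input = dialogflow.types.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.types.QueryInput(text=text_input)

    response = session_client.detect_intent(session=session, query_input=query_input)
    result = response.query_result
    # e.g. result.intent.display_name might be "Open" and
    # result.parameters might carry the Item entity ("door").
    return result.intent.display_name, result.parameters

# intent, params = detect_intent_text("my-project-id", "some-session", "open the door")
```

The returned intent name can then be fed straight into world.applyAction, which is exactly the split attempt 3 settled on.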
3.5 Listening and speaking

The listening code is found in escaperoom/python/dialog/listen.py. It basically monitors the audio level of the microphone, and triggers recording the audio when it is above a certain level. (I heavily based this code on this amazing repository by Jeysonmc.) Note: this step could probably be improved by using the streaming Dialogflow recognizer directly, instead of first saving the audio, then transcribing, and then parsing intents.

4 Hooking it up to the game

Run escaperoom/python/escaperoom.py --speech_in --speech_out to run the game with speech input and output. Without parameters, it will run with text input and output in the console. Now it's your turn to escape the room!

5 Getting it to work on the AIY kit

Now that the game runs fine on your machine, it's time to make it work on the Google AIY kit. Since this post is already way too long, I'll save this for the next post, however :) Coming up soon! If you enjoyed this post, have any questions, or improvements, please leave a comment! And if you want to join the next Hackademy.ai hackathon, please apply here: http://hackademy.ai
title: How to build a Dialogflow powered Escape Room with Google AIY Kit
totalClapCount: 48
uniqueSlug: how-to-build-a-dialogflow-powered-escape-room-with-google-aiy-kit-1542df2e8203
updatedDate: 2018-08-03
updatedDatetime: 2018-08-03 17:13:07
url: https://medium.com/s/story/how-to-build-a-dialogflow-powered-escape-room-with-google-aiy-kit-1542df2e8203
vote: false
wordCount: 4,103
publicationdescription: null
publicationdomain: null
publicationfacebookPageName: null
publicationfollowerCount: null
publicationname: null
publicationpublicEmail: null
publicationslug: null
publicationtags: null
publicationtwitterUsername: null
tag_name: Hackademy
slug: hackademy
name: Hackademy
postCount: 0
author: Erik van der Pluijm
bio: null
userId: f813650d75ab
userName: erik_81851
usersFollowedByCount: 0
usersFollowedCount: 1
scrappedDate: 20,181,104
claps: null
reading_time: null
link: null
authors: null
timestamp: null
tags: null
Example row 5:
audioVersionDurationSec: 0
codeBlock: null
codeBlockCount: 0
collectionId: null
createdDate: 2017-10-09
createdDatetime: 2017-10-09 03:23:08
firstPublishedDate: 2017-10-09
firstPublishedDatetime: 2017-10-09 05:33:32
imageCount: 9
isSubscriptionLocked: false
language: en
latestPublishedDate: 2017-10-15
latestPublishedDatetime: 2017-10-15 16:53:13
linksCount: 1
postId: 15441ebe18c2
readingTime: 9.996226
recommends: 27
responsesCreatedCount: 0
socialRecommendsCount: 1
subTitle: I’ve been working on building data infrastructure in Coursera for about 3.5 years. This week, I had an opportunity to speak at Data…
tagsCount: 5
text:
Building data infrastructure in Coursera

I've been working on building data infrastructure in Coursera for about 3.5 years. This week, I had an opportunity to speak at the Data Engineering in EdTech event at Udemy about our data infrastructure. To better suit readers, this article is adapted from my notes for the talk, in which I shared a few lessons from building a real-world data infrastructure from scratch.

In Coursera, data plays a big role in our daily work, from overall aggregated numbers (e.g., 27M learners on the platform) to specific zoomed-in metrics (e.g., 45% of our learners are from emerging markets). All these numbers help us make data-driven decisions every day. Empowering people across the company to have easy access to our data is always the first priority of the data infrastructure team, and it is definitely not an easy task.

Challenge 1: Data is everywhere

Like every other internet company, data is everywhere in Coursera. When I joined Coursera, we had started rebuilding our web application and building our mobile applications. Data was tracked in multiple channels in a few inconsistent ways: some data was tracked in an unstructured format and sent directly to our eventing system, some data was tracked in our MySQL or Cassandra databases, and some data was only in third-party tools like SurveyMonkey. I even heard the story of manually logging into each database to calculate the daily active learner metric. Luckily I didn't need to do any of that, as we had just started building our enterprise data warehouse (EDW). Though there were merely a few dozen tables in our EDW system, it was still a solid start.

We picked Redshift as our EDW system. Besides the standard SQL interface that every data scientist understands, Redshift is fast and, more importantly, reliable. We only had three engineers at that time; with only a few clicks, we could reboot, resize, and perform other actions on Redshift through its console. Unlike Hadoop or Spark at that time (I hear that both Hadoop and Spark are getting much better now, and we will look into them again when Redshift is not sufficient for us), we rarely needed to debug any issue (e.g., OOM); Redshift can reliably execute most of our queries without any memory or performance optimizations on the query. We tried Hadoop and Spark at that time and, compared with Redshift, we saw a huge amount of operational cost which we couldn't afford as a team. Thanks to Redshift, we now focus on building tools which have a direct impact on our business instead of spending time on operational tasks. Redshift has served us well in the past four years, and we haven't looked back yet.

Solution: build an EDW system to keep all your data in one place (latency is OK for most cases).
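Because Redshift speaks the Postgres wire protocol, querying the warehouse from Python needs nothing exotic. Here is a minimal sketch with psycopg2; the host, credentials, and learner_activity table are illustrative placeholders, not Coursera's actual setup:

```python
# Minimal sketch: querying Redshift over its Postgres-compatible interface.
# Cluster endpoint, credentials, and the table are made-up placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="edw",
    user="analyst",
    password="...",
)

with conn, conn.cursor() as cur:
    # A daily-active-learners style aggregate, as plain SQL.
    cur.execute("""
        SELECT activity_date, COUNT(DISTINCT learner_id)
        FROM learner_activity
        WHERE activity_date >= CURRENT_DATE - 7
        GROUP BY activity_date
        ORDER BY activity_date
    """)
    for row in cur.fetchall():
        print(row)
```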
Challenge 2: Data requests come from everywhere

Once we started moving data into Redshift, data requests flooded in from everywhere: engineers, data scientists, marketing, customer support, sales, and external users like universities and enterprise content providers and customers. Everyone in the company wants to understand our data in a more quantitative way, and we saw a variety of requests across a huge spectrum of different domains. In order to meet the demand while the team was small, our solution was very simple: we built an internal query page, which gives people the ability to write SQL queries, plus simple charting and basic sharing functions, all by accessing a web page. This helps address a few issues:

Access centralization. Before this tool, people used all sorts of tools to access Redshift, which gave us a hard time, as any misuse of a tool could potentially bring down the EDW. For example, some tools don't implicitly release their locks on tables in Redshift until disconnection, and if people forget to disconnect (which they often do), our ETL system breaks because it cannot write any new data into the tables, as they are locked by the connection. On the other side, because people always access the EDW through this tool, we can easily monitor and operate on people's queries; e.g., if the EDW is hot and overwhelmed, we can limit people's access to this tool to throttle jobs sent to the EDW until it cools down.

Democratization. Since this tool is built on the web, every query and execution result is saved in the tool as well. People share any query or result by copy-and-pasting a URL. This allows anyone in the company to conduct a simple ad hoc analysis through this tool and share the result with other people. Non-data-scientist roles especially love it. As long as they can access the internet, they can go to the query page and write queries to get the answers they want. This self-serve query system helps reduce the daily load on data scientists, allowing them to focus more on deep-dive analysis and less on daily data inquiry support.

Solution: focus on building self-serve data access by providing a centralized access point; avoid the situation where your customers choose their own tools to access the EDW, because that is hard to debug and manage.

Challenge 3: Everyone hates ETLs, everyone needs ETLs

As we were expanding our product lines and business, the demand for writing ETLs became higher and higher. This illustration accurately describes what happened in Coursera before we built our in-house ETL system. tl;dr: no one was really happy. So, let's imagine a case: our customers (data scientists or PMs) want to understand the performance of a new feature they just launched. They ask us where the data is in the EDW; we tell them that it is actually not in the EDW yet, and then they ask whether we could ETL the data. The problem is: we don't directly build products, so we don't know when a new product is launched and what type of data is tracked. How the hell did we end up building this ETL?! Trying to be a good neighbor, we spent a lot of our time with product engineers to help our customers figure out where the data is and to help write ETL jobs for them. There was a lot of back and forth around this, and finally we would build the ETL job. But because we are not the users of the data, we don't know whether the data has quality issues or not. We are not the creators of this data either; our product engineers are. So if our customers see problems in the data, they ask us, and then we ask a product engineer to fix the quality issue, again and again until all issues are resolved. Our customers can't get good data in time, our product engineers are constantly bugged by us, and we are constantly involved in this process. No one is happy.

From the lessons we learned building the query page, we saw the power of providing an easily usable tool around data infrastructure. So, we spent time talking with our customers and found a set of common operations. We developed a set of operators to allow them to specify the details of each ETL job without worrying about the implementation details. These operators are implemented as standard Docker images managed by AWS ECS (Elastic Container Service).
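The article doesn't show what an operator-based job definition looks like, so the following is purely hypothetical: a sketch of how a declarative ETL job assembled from reusable operators might be specified, with every operator name invented for illustration:

```python
# Hypothetical sketch of an operator-style ETL job definition; none of these
# operator names come from Coursera's internal, unpublished tool.
etl_job = {
    "name": "course_enrollments_daily",
    "schedule": "0 3 * * *",  # run daily at 03:00
    "operators": [
        {   # pull yesterday's enrollment rows from the production store
            "type": "extract_mysql",
            "query": "SELECT * FROM enrollments WHERE day = {{ yesterday }}",
        },
        {   # light cleanup, e.g. dropping test accounts
            "type": "filter",
            "predicate": "user_id NOT IN (SELECT id FROM test_users)",
        },
        {   # load the result into the warehouse
            "type": "load_redshift",
            "table": "edw.course_enrollments",
            "mode": "append",
        },
    ],
}
```

In a design like this, each operator type would map to a standard Docker image run on ECS, and the web UI described next would emit a definition like this from "a few clicks and parameters".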
Also, we exposed this through a web page, and people can define their ETL jobs with just a few clicks and parameters. The result is that everyone is happy. We are happy because we don’t need to be the middleman of every single ETL process, and we don’t need to understand the nitty-gritty details of every single ETL either. We can just focus on maintaining this tool, and product engineers, data scientists, and other customers can talk directly with each other by using it. The data takes a much shorter time to be ETLed into Redshift, and it is quicker to resolve data quality issues because customers can talk with the owner of the data directly.

Solution: build an easy-to-use ETL system and don’t be the middleman. Don’t write ETLs on your own; let your customers write ETLs in an effective way.

Challenge 4: Data scientists are not engineers, and they are not the same as each other

The tools I described are definitely used by our data scientists, and they love these tools, but at the same time they are often the advanced users of the data infrastructure: for example, if they want to do advanced analysis or model building, SQL access is not enough. My own experience with data scientists is that their title is a lie; every single one has their own role and function, and every single one has their own tools they use to do data analysis. From a simple Google query, we can easily tell that people have thought about how many different types of data scientists there are in the world. Being mindful that data scientists are different is super important for building a useful data infrastructure for them. For a few years, our data scientists could pick whatever tools they wanted to use, and the result was meh. For ad hoc analysis, this is actually fine, because people care most about the conclusion. But if they work on building daily dashboards or advanced models, this easily becomes a problem. People pick Python 2, Python 3, and/or R for different tasks; even when people choose the same language, they could still pick different libraries for the same task. Also, the development environment was maintained by data scientists on their local laptops. Because of the inconsistency of the development environments, this was an operational nightmare when people wanted to work together or pick up others’ tasks. Our solution was to provide a standard Docker image for them and have them use the same set of tools instead of inventing their own. On top of this, we also provide a cloud-based service where people can just log in through the browser and access RStudio and Jupyter notebooks; both RStudio and JupyterHub run on top of this standardized container as well. It turns out that they are more than happy to use this. I guess the reason they picked random tools to begin with is that they didn’t care which tool they used. Now they are happier than before because they can also collaborate easily with each other. Please note that one important thing we believe we did right was to run this analytics dev environment on the cloud and ask people to access it through browsers. Right now, both RStudio and Jupyter provide good tools to allow people to access them in a browser. One benefit is that when users have technical issues, we can just log in to their account to see what’s wrong and help them fix the issue remotely without being physically next to them.
The other benefit is that this tool is also accessible by everyone in Coursera; besides data scientists, other roles like engineers and content managers also love this tool, as it gives them the power to do advanced analysis without worrying about maintaining their own dev environment.

Solution: standardize your analytics dev environment for advanced analysis.

Data Infrastructure Orchestration

We developed our data access tool, ETL tool, and analytics dev environment on top of the EDW using RESTful APIs, Docker, and ECS. These decisions have served us well. You might ask why we didn’t buy enterprise solutions, and the simple answer is that we didn’t see many good alternatives three years ago. I admit that many good alternatives have appeared in recent years that could potentially replace these in-house systems, but a nice benefit of building these tools in house is that we can easily build other systems on top of them and adapt them to suit new business needs. For example, our experimentation platform and email management system can talk to our data access point through an API without worrying about the implementation details of data access. Similarly, for our machine learning system, our data scientists can build models in the dev environment and push them to our ML system, which can then manage the pipeline without knowing the details of the model, because it is containerized in Docker. The ML system can also access data to give people the ability to introspect our models and monitor our ML products.

Conclusion

There are three principles that I think are super critical for building a data infrastructure:

Centralize: Everything should start with centralization. Put all the data into one place, provide a centralized toolset to allow people to access data, and centralize all analytics dev environments into the cloud. This turned out to be a super useful principle for us and set up the foundation for us to take on other challenges.

Standardize: Once we centralize data or data access, standardization becomes a natural next step, because it is super easy to spot inconsistency among data and tools. Standardization also helps us a lot in having basic building blocks and increasing the reusability of our data infrastructure.

Democratize: Lastly, really keep the idea of democratization of data in mind. You could argue that democratization is a side effect of centralization and standardization of data infrastructure, but we think of it differently. We intentionally built our tools to be accessible by everyone, and the result is tremendous.

I want to end with the chart above. This chart shows the number of unique weekly users of the EDW platform and experimentation platform through their web UIs. We have 300 people in the company, and our analytics team is only 27 people. Each week, basically everyone in the company uses the tools we built to access the data. When you make data accessible, people will access the data. This means a lot to us, and we will continue building self-serve data infrastructure to democratize data in Coursera.
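None of the code below is Coursera's actual implementation; it is a minimal sketch of the general pattern the article describes: an ETL "operator" packaged as a Docker image and launched on AWS ECS, with the job's details passed in as parameters from a self-serve web page. The cluster, task-definition, and environment-variable names are all hypothetical.

import boto3

def run_etl_operator(operator, params):
    """Launch one parameterized ETL operator as an ECS task (sketch)."""
    ecs = boto3.client('ecs')
    return ecs.run_task(
        cluster='etl-cluster',                      # hypothetical cluster name
        taskDefinition='etl-operator-' + operator,  # one Docker image per operator
        overrides={'containerOverrides': [{
            'name': 'etl',
            # The customer specifies *what* to load, not *how* to load it:
            'environment': [{'name': k, 'value': v}
                            for k, v in params.items()],
        }]},
    )

# e.g. a customer defines a job from the web page with a few parameters:
run_etl_operator('mysql-to-redshift', {
    'SOURCE_TABLE': 'courses',
    'TARGET_SCHEMA': 'warehouse',
    'SCHEDULE': 'daily',
})

The design point is the one the article makes: the data team maintains the operators, while product engineers and data scientists only fill in parameters, so no one has to be the middleman.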
Building data infrastructure in Coursera
146
building-data-infrastructure-in-coursera-15441ebe18c2
2018-05-28
2018-05-28 13:20:30
https://medium.com/s/story/building-data-infrastructure-in-coursera-15441ebe18c2
false
2,331
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Zhaojun Zhang
Software engineer. Also on Quora: https://www.quora.com/profile/Zhaojun-Zhang
37d9b98706bf
zhaojunzhang
82
139
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-26
2018-06-26 08:31:31
2018-06-26
2018-06-26 08:34:27
0
false
en
2018-06-27
2018-06-27 08:29:15
5
1544e588c0a1
3.173585
0
0
0
While the health and social care industries are often characterised separately, their common goal is to fulfil the overwhelming demand for…
5
How technology is transforming social care While the health and social care industries are often characterised separately, their common goal is to fulfil the overwhelming demand for patient care, both in hospitals and at home. Inevitably, then, attempts have been made to more overtly unify these two sectors in recent years. In April 2016, for example, local authorities in Greater Manchester merged the organisation of health and social care through a reform dubbed “Devo Manc” after taking control of the area’s £6bn healthcare budget. We have also begun to witness better collaboration between health and social care via cutting-edge technology, specifically in the homecare space. From achieving greater economies of scale and efficiency to ensuring greater transparency and empowerment of care workers, technology is already making great strides in improving how homecare is delivered up and down the country. Technology-enabled homecare provides a win-win scenario for various parties. It enables a faster transfer of patients from hospital beds to the comfort of their own home; it alleviates existing strains on hospitals and patients, who are in need of essential provisions like beds and care workers; it facilitates a wider range of social care service provision and flexibility; and it also supports relatives of patients who might take on additional caring responsibilities out of necessity. Home adaptations serve to alleviate these pressures. An acknowledged medical reality is that elderly patients are more susceptible to multiple conditions that can subsequently require multiple care visits in a day, quite feasibly of a varying nature in each instance. At Cera, for example, we provide an ‘on-demand’ service, where users can request a care visit as and when required, tailored to their specific needs. Through our technology platform, we are able to match each patient with the right carer, at the right time, within 24 hours. With hundreds of care homes closing, it is clear that this model of quick and reliable social care is set to stay. Social care has long been living in the dark ages, and it is up to innovative providers to bring cutting-edge technology to the sector, with a view to bringing greater independence to the elderly and helping them make the most of their later years. This hyperconnectedness — joining the dots between health and social care — is part of a wider phenomenon taking place around the world called ‘The Internet of Things’ (IoT). The principle of IoT is to embrace technology and connectivity on a universal scale, with the potential to pervade every aspect of daily life. In a social care context, IoT has advocated the synchronisation of household appliances with predictable routines. For example, a member of the elderly community might wake up every morning and use their kettle. One morning, they slip and hurt themselves in their home, prior to using the kettle. IoT proposes a system of integration that would note the failure to adhere to a typical routine and take progressive action. Notifications would be sent to nearby family members or carers, prompting them to investigate (a toy sketch of this routine-check idea appears at the end of this article). This is just one hypothetical scenario in which increased integration of technology into the typical domestic setting can benefit consumers. IoT also envisages refrigerators which track the stock of fridge contents, as well as expiration dates, and take on the responsibility of ordering replacement items via the internet — an appliance that restocks itself.
This becomes even more feasible considering the delivery capabilities of drones, which are becoming more commonplace around the world. Again, visions such as self-stocking refrigerators all serve to reduce excessive errands for the elderly, improving their levels of independence. When it comes to joining the dots between health and social care, the application of this level of interconnectivity is already gaining traction. AI such as Cera’s ‘Martha’ — the UK’s first-ever social care chatbot — helps not only the patients using Cera but also the carers. Very soon, Martha will also be capable of answering questions that a care worker may have based on a patient’s digital care records, and of providing crucial advice if something causes her concern. For example, if a care worker notes that “Mrs. Taylor seems quite feverish,” Martha might respond with “Mrs. Taylor had a cough recently; you may want to check her temperature and take note of her other symptoms,” since she has read the patient’s case notes and knows their background. Of course, technology is not a standalone solution. Many existing tech solutions in the health industry are aimed at young and healthy smartphone users, ignoring those who attend A&E most frequently and use the most healthcare resources — the elderly. If we really want to see change, it is time we started focusing on those with multiple health needs, who could benefit the most from technology and where it has the potential to deliver the greatest savings to the health and care system. With this mentality, the industry is poised to make a huge leap forward. By Dr Ben Maruthappu, co-founder and CEO of homecare start-up Cera
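As promised above, here is a toy sketch of the kettle scenario: if an expected daily event has not happened by its usual time, alert a nearby family member or carer. Everything here (the timings, the function names, the print-based notification) is hypothetical and purely illustrative of the routine-monitoring idea.

from datetime import datetime, time

USUAL_KETTLE_TIME = time(9, 0)   # the kettle is normally used by 9am
GRACE_MINUTES = 60               # how long to wait past the usual time

def check_routine(last_kettle_use, now=None):
    """Return True if the morning routine was missed and an alert is due."""
    now = now or datetime.now()
    deadline = datetime.combine(now.date(), USUAL_KETTLE_TIME)
    overdue = (now - deadline).total_seconds() / 60 > GRACE_MINUTES
    used_today = (last_kettle_use is not None
                  and last_kettle_use.date() == now.date())
    return overdue and not used_today

def notify_carer(message):
    print("ALERT to carer:", message)  # stand-in for an SMS/push notification

if check_routine(last_kettle_use=None):
    notify_carer("Morning routine missed; please check in.")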
How technology is transforming social care
0
how-technology-is-transforming-social-care-1544e588c0a1
2018-06-29
2018-06-29 22:55:23
https://medium.com/s/story/how-technology-is-transforming-social-care-1544e588c0a1
false
841
null
null
null
null
null
null
null
null
null
Healthcare
healthcare
Healthcare
59,511
Mahiben Maruthappu
Ben is a London-based doctor and Co-founder of Cera, a multi-award winning technology company transforming social care.
2ffcb2218d8a
ben_cera
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-17
2018-08-17 06:31:40
2018-08-17
2018-08-17 07:45:59
1
false
en
2018-08-17
2018-08-17 07:45:59
0
15463fe74026
1.369811
0
0
0
rpart is a package in R which is used to model Classification and Regression trees. With the help of rpart package we draw a tree where the…
5
What is the difference between rpart and Random Forest Package? rpart is a package in R which is used to model classification and regression trees. With the help of the rpart package we draw a tree, where the tree is split into different branches by variables. To predict the outcome, you follow the splits and predict the most frequent outcome. You can control the size of the splits with the help of the “minbucket” parameter in R (the minimum number of observations allowed in any terminal node). randomForest is a package in R which is also used to model classification and regression trees. Random Forest uses an ensemble learning algorithm to predict results: it builds multiple decision trees, then combines the results from all the decision trees, which eventually leads to the final outcome. In simple terms, Random Forest builds multiple decision trees for prediction. Now, in rpart, since we have built only one tree, the result is easy to interpret. But in Random Forest we have many trees, and the result is produced by the combined effort of all the trees, so it is not as interpretable. Since Random Forest uses an ensemble learning algorithm, its accuracy is typically better than what we obtain using the rpart package; the predictive power of Random Forest is better than rpart’s. Lastly, I would just like to explain how Random Forest actually works. Let the number of features be n. Then randomly select m features from the n, where m < n. For a particular node (where splitting happens), calculate the best split point among the m features. Split the node into two daughter nodes (in the case of a classification algorithm) using the best split, and then repeat the above steps until the desired number of nodes has been reached. Build your forest by repeating the above steps until the desired number of trees is reached. If there are any mistakes, please correct me. Thank you.
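The article is about the R packages rpart and randomForest; as a rough analogue, here is the same comparison sketched in Python with scikit-learn (a single decision tree vs. an ensemble of trees), using a toy dataset. The parameter choices are illustrative, not a translation of any specific R call.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One tree: easy to interpret, usually less accurate.
# min_samples_leaf plays a role similar to rpart's "minbucket".
tree = DecisionTreeClassifier(min_samples_leaf=5).fit(X_tr, y_tr)

# Many trees: predictions are combined across the ensemble,
# harder to interpret but typically more accurate.
forest = RandomForestClassifier(n_estimators=100,
                                max_features='sqrt',  # m features out of n
                                random_state=0).fit(X_tr, y_tr)

print('single tree accuracy:', tree.score(X_te, y_te))
print('random forest accuracy:', forest.score(X_te, y_te))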
What is the difference between rpart and Random Forest Package?
0
what-is-the-difference-between-rpart-and-random-forest-package-15463fe74026
2018-08-17
2018-08-17 07:45:59
https://medium.com/s/story/what-is-the-difference-between-rpart-and-random-forest-package-15463fe74026
false
310
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Bhaskar Snehi
Mechanical Guy
e47ce316589b
snehi.bhaskar26
2
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-17
2017-09-17 12:56:44
2017-09-17
2017-09-17 12:57:50
3
false
ru
2017-09-17
2017-09-17 12:57:50
5
154680e031c9
2.327358
1
0
0
We continue the topic we started earlier.
5
L’Art de Vivre We continue the topic we started earlier. What is our life? A game, of course. And in games, AI wins. The unconquered peaks grow fewer and fewer: chess has long ceased to be a problem, Go surrendered mockingly easily, StarCraft has been taken, and CS:GO has been left as the domain of Chinese prisoners. The examples multiply almost every week. For the remaining games, the question is formulated not as how, but when. Traditional esports is slowly turning into the Paralympics: if an AI enters yet another strategy game, it soon starts winning it. Case closed. However, some have gone further. Folks from the University of Georgia published a curious study with the unassuming title “Game Engine Learning from Video.” It gives a small example of how a system can watch Super Mario Bros., predict the actions of the Italian plumber, and simultaneously write code that reproduces the game, including anticipating the trials that befall the hapless hero. Thus the AI does not learn to play; it reconstructs the game’s context from observation. It builds a set of rules, modifies them itself, and runs them through tests. This result, in my humble opinion, is far more valuable than the ability to quickly make mechanical decisions and issue commands within a ready-made framework. If you can recreate the scenery from visual cues and understand how to live within it, the next step is obvious: you need to go beyond modelling. The question of how a robot interprets the emotional and ethical side of the scenery is currently being pushed toward 70% accuracy. It all started back in 2008, and by 2016 there were decent training databases such as ImageNet. OK, understanding is sorted. Now the artificial eye has built a bridge to awareness. Suppose it is not an arcade game but a multiplayer one, and along the way the different sides need to be able to negotiate. What is the most comfortable way to do that? At the 26th “futurological congress,” devoted to AI, an impossible number of interesting topics were discussed. From theoretical technology they finally moved on to a free interpretation of practical problems. In particular, this one: “the use of free agents for interaction with the outside world.” In other words, can an AI negotiate deals in real life better than the person (or group of people) it is, in theory, supposed to represent? Today the use of bots with such characteristics is quite limited: trading platforms on eBay, legal advisers, and the robot recruiters I have mentioned here before. In fact, any complex, highly structured agreement can be settled with autonomous machines, and fairly quickly. The report’s authors naively take on the Paris climate agreement and political settlement in the Middle East, but even without aiming as high as our dear William Shakespeare, smaller questions can be tuned practically in real time, with quality well above what everyday experience can offer. Take a few large trade agreements (there are many examples within APEC): they are fairly easy to reproduce with those same bots. Wherever strictly technical analysis is involved, it is better to call in the AI. Returning to the theme stated at the beginning: what does it mean to understand how the game works? In the most general sense, it is about choosing the right life strategy. As we can see, all the tooling (in embryonic form) already exists; it is a matter of training time. The main thing is that it turns out to be more than 72 years :-) https://telegram.me/mikaprok
L’Art de Vivre
1
lart-de-vivre-154680e031c9
2018-05-04
2018-05-04 15:48:46
https://medium.com/s/story/lart-de-vivre-154680e031c9
false
471
null
null
null
null
null
null
null
null
null
Economics
economics
Economics
36,686
mikaprok
null
194217ce8b99
mikaprok
1,283
41
20,181,104
null
null
null
null
null
null
0
null
0
21255690ea85
2018-05-11
2018-05-11 22:07:19
2018-05-12
2018-05-12 01:42:54
4
false
en
2018-05-12
2018-05-12 01:44:28
2
1547817f5a35
1.960377
1
0
0
https://github.com/ejaekle/deeplearning
1
Word Embedding https://github.com/ejaekle/deeplearning Based on the notebook from chapter 6.1 of “Deep Learning with Python”, we can create word embeddings, which are vector representations of words and are very useful for analyzing text data with convolutional neural nets. Word embeddings are different from one-hot encoding because they learn from the data. The data I used for this is about questions asked on Quora and can be found on Kaggle. Quora tries to eliminate duplicate questions on its site, and this dataset has pairs of questions and whether or not they are considered the same, as a binary value (1 if they are the same and 0 if they are not). Using this dataset I created word embeddings to be used in a convnet. Since we have over 400,000 pairs of questions, I chose to use 200,000 for the training set and 50,000 for validation. The max length of the text is 100 tokens (so hopefully that will include all of the text of two questions). And finally, we will only consider the top 5,000 words in the dataset. The dataset has 404,351 pairs of questions and labels for whether they are the same or not. After tokenizing the data we have 95,603 unique tokens. Now, because we have a lot of data, I chose to train the model without loading pre-trained word embeddings and without freezing the embedding layer. Here is my model summary: I ran this with 5 epochs and a batch size of 1024 and had the following results. Model Output Plotting the accuracy and loss results we get the following. Accuracy and Loss Plots While the validation accuracy is not great in this case, it is better than the 50% we would expect from a random guess. In the textbook their accuracy does not get above about 55%, so our accuracy of around 62% overall is actually quite good, especially for text data. The validation loss is not great either; however, the accuracy does appear to increase towards the end, which is a positive trend.
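The post does not reproduce its model code, so here is a minimal sketch (assuming Keras, as in the book it follows) of the kind of model described: an Embedding layer trained from scratch, followed by a small convnet for the binary duplicate/not-duplicate label. The vocabulary size, sequence length, epochs, and batch size mirror the numbers in the text; the remaining hyperparameters are illustrative guesses, not the author's actual choices.

from keras.models import Sequential
from keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dense

max_words = 5000   # only consider the top 5,000 words
maxlen = 100       # pad/truncate each tokenized question pair to 100 tokens

model = Sequential([
    Embedding(max_words, 64, input_length=maxlen),  # embeddings learned from the data
    Conv1D(32, 7, activation='relu'),               # 1D convolution over token positions
    GlobalMaxPooling1D(),
    Dense(1, activation='sigmoid'),                 # binary: duplicate or not
])
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
model.summary()

# history = model.fit(x_train, y_train, epochs=5, batch_size=1024,
#                     validation_data=(x_val, y_val))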
Word Embedding
1
word-embedding-1547817f5a35
2018-05-12
2018-05-12 07:48:54
https://medium.com/s/story/word-embedding-1547817f5a35
false
334
A collection of blogs for jupyter notebooks in deep learning for Data 2040
null
null
null
Deep Learning Data 2040
null
deep-learning-data-2040
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Emily Jaekle
null
fc64caf91da1
emily_jaekle
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-30
2018-05-30 17:59:39
2018-05-30
2018-05-30 18:05:29
3
false
en
2018-05-30
2018-05-30 18:05:29
1
1547e60aef82
2.021698
0
0
0
I learned many lessons while following the vaccination campaign in Sokoto State. This was an amazing learning experience and I’m humbled by…
5
Time to Say Goodbye to Sokoto and Go Back to Abuja I learned many lessons while following the vaccination campaign in Sokoto State. This was an amazing learning experience, and I’m humbled by the great job done by our local partners and the international community that supports this initiative. Walking house to house and vaccinating children is very hard. We have a system of institutions that provide logistics and support, but all the real work I saw on the ground was done by the local women who make up the local Vaccination Teams, since only they can enter people’s houses. Children in Sokoto proudly show their fingers marked after receiving the polio vaccine, Sokoto, Nigeria As Polio Data Manager, my role is to identify challenges starting from the local level. My main takeaway came from seeing the system of data collection at the grassroots. In Abuja, we only see summarized data in Excel tables. But here in Sokoto is where the magic starts, when a VCM (Volunteer Community Mobilizer) enters data manually into their paper register. This data is then summarized and sent via ODK (the Open Data Kit app) to the ONA database. Snapshot of a VCM register page This system is working very efficiently and could definitely reveal good practices for other programs. My focus now is on the register and how data is entered by the VCMs in the field. There are two challenges to consider: how to design the register itself to make it easier to enter data and make it error-proof, and how to digitize data directly from the register, so we can eliminate one of the error points and cross-reference data coming directly from the field rather than from summarized Excel sheets. Circumstances have forced us to reject the idea of electronic data entry at the local level. While this is technically feasible — we could deploy mobile phones to all 20,000 VCMs and train them to use these for data entry during future vaccination campaigns — it’s not operationally possible at this stage in many parts of Nigeria. Based on data from the Nigerian Communication Commission, most of the population already has access to mobile phones. However, many VCMs are not literate and need the support of a helper (usually their son or daughter) to fill in the paper register. Source: https://www.ncc.gov.ng/stakeholder/statistics-reports/subscriber-data#monthly-subscriber-technology-data Stay tuned for the next blog, where I will explore ideas on how to design the register to make it error-proof and easier to use.
Time to Say Goodbye to Sokoto and Go Back to Abuja
0
time-to-say-goodbye-to-sokoto-and-go-back-to-abuja-1547e60aef82
2018-05-30
2018-05-30 18:05:30
https://medium.com/s/story/time-to-say-goodbye-to-sokoto-and-go-back-to-abuja-1547e60aef82
false
390
null
null
null
null
null
null
null
null
null
Health
health
Health
212,280
Piotr Krosniak
Work in @UNICEF, #DataScience and #GIS specialist. Love #triathlon #dataviz and #opensource tech. Dad of two live in Abuja, Nigeria
b791abcfafd5
PiotrKrosniak
28
47
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-18
2018-03-18 19:41:06
2018-03-18
2018-03-18 20:30:00
1
false
en
2018-03-19
2018-03-19 21:32:11
5
15482190d684
4.335849
3
1
0
This blog post was contributed by Kunal Khatua, Apache Drill Committer, who has been working on the project for the last 3 years. His…
5
Accessing Apache Drill in Python This blog post was contributed by Kunal Khatua, Apache Drill Committer, who has been working on the project for the last 3 years. His interests are in the areas of distributed systems and databases. He can be reached at kunal@apache.org

While there are numerous data formats that Apache Drill can query, one of the lesser-known requests has been for access to Drill’s wide-ranging capabilities through clients other than Java and C++. Currently, Java and C++ clients are available for consumption by the community through the JDBC and ODBC drivers, and work well for the most part. A language-agnostic approach is also provided with the REST APIs, which allow users to query directly through a web UI, or through REST-enabled apps. Users and developers within the Apache Drill community went a step further and married the REST APIs with a Python wrapper to provide an interface for Python developers to leverage the Apache Drill engine — https://pydrill.readthedocs.io/en/latest/readme.html

While this works great for a truly quick setup and reasonable datasets, it might not be as capable in managing large result sets returned by Drill. This is primarily because the REST APIs were intended to provide convenience by trading away client performance and efficiency. Fortunately, there is an alternative for those willing to take the trial-by-fire to gain back efficiency while still retaining access in Python. (Just kidding, no fire, just a little extra work to set up.) The alternative is a Python library called JayDeBeApi3 (https://pypi.python.org/pypi/JayDeBeApi3), which allows Python developers to leverage the JDBC API from Python. What this library does is very similar to PyDrill, in that it leverages an existing mode of access to a Drill server and wraps it with a Python interface for developers to use. In the case of PyDrill it was REST; here it is JDBC. What you’re going to see is a series of steps that apply to RedHat/CentOS, but they should work on any *nix-compatible system like Ubuntu/Debian or even macOS. Sorry, but I haven’t tried Windows. If you succeed, do let the community know.

STEP 1: Verify which versions of Java and Python you have with the commands below. Make sure that JAVA_HOME is set.

[driller@dilbert ~]# python --version
Python 2.6.6

Great… so we’re going to be working with Python 2.x (Python 3 should also suffice).

[driller@dilbert ~]# java -version
java version "1.8.0_112"
Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)
[driller@dilbert ~]# echo $JAVA_HOME
/usr/java/jdk1.8.0_112/

Nice, JDK 8 is installed and will do.

STEP 2: Install the Python development libraries. Don’t worry, these are only needed to compile JayDeBeApi. Since this example is for RedHat/CentOS, we’ll be using yum. Debian/Ubuntu users should do the same using apt-get.

[driller@dilbert ~]# sudo yum install python-setuptools
[driller@dilbert ~]# sudo yum install python-devel

STEP 3: Now, we need JPype1, and can run the following:

[driller@dilbert ~]# sudo yum install python-jpype

But it is possible that, just like me, you might not have the module available through your repos. No problem. We’ll simply download the package from here: https://pypi.python.org/pypi/JPype1

STEP 4: If you were able to install via yum, skip this step and go to step 6.
[driller@dilbert ~]# wget --no-check-certificate https://pypi.python.org/packages/d2/c2/cda0e4ae97037ace419704b4ebb7584ed73ef420137ff2b79c64e1682c43/JPype1-0.6.2.tar.gz

Untar the file:

[driller@dilbert ~]# tar -xzf JPype1-0.6.2.tar.gz
[driller@dilbert ~]# ls -ld JPype1-0.6.2*/
drwxr-xr-x. 11 1000 1000 16 Mar 9 14:07 JPype1-0.6.2/

STEP 5: We will now compile and add the module to Python.

[driller@dilbert ~]# cd JPype1-0.6.2/
[driller@dilbert ~]# python setup.py install

STEP 6: Next, we need to set up and add the JayDeBeApi module to Python, for which we follow a similar procedure if we are unable to install via yum. Download JayDeBeApi3 from this site: https://pypi.python.org/pypi/JayDeBeApi3 Browsing the site, I found a link to the tarball and downloaded it as follows:

[driller@dilbert ~]# wget --no-check-certificate https://pypi.python.org/packages/21/10/e68f9d795fbf4b9a93524039ab620d2f24a1f1eb630b422d7eb3721ff3d0/JayDeBeApi3-1.3.2.tar.gz
…
Connecting to pypi.python.org|151.101.40.223|:443… connected.
WARNING: certificate common name "www.python.org" doesn't match requested host name "pypi.python.org"
HTTP request sent, awaiting response… 200 OK
Length: 4390 (4.3K) [application/octet-stream]
Saving to: "JayDeBeApi3-1.3.2.tar.gz"
100%[====================================>] 4,390 --.-K/s in 0s
2018-03-16 23:41:38 (81.8 MB/s) - "JayDeBeApi3-1.3.2.tar.gz" saved [4390/4390]

STEP 7: Untar the downloaded file.

[driller@dilbert ~]# tar -xzf JayDeBeApi3-1.3.2.tar.gz
[driller@dilbert ~]# ls -ld JayDeBeApi3-1.3.2*/
drwxr-xr-x. 3 1000 1000 4096 Sep 2 2016 JayDeBeApi3-1.3.2/

STEP 8: We will now compile and add the module to Python.

[driller@dilbert ~]# cd JayDeBeApi3-1.3.2/
[driller@dilbert ~]# python setup.py install

STEP 9: Download/copy the Drill JDBC driver from the Drill server location. If my Drill server is local, I can directly make use of the JDBC driver. If it is remote, download it to a permanent location on your machine. In our case, we have a local Drill instance: /opt/drill/apache-drill-1.12.0 So, we’re going to use the JDBC driver bundled with it: /opt/drill/apache-drill-1.12.0/jars/jdbc-driver/drill-jdbc-all-1.12.0.jar

STEP 10: Let’s fire up Python! I’ll skip the steps and provide a simple snippet of a recorded interaction, along with comments that make the entire snippet self-explanatory:

[driller@dilbert JayDeBeApi3-1.3.2]# python
Python 2.6.6 (r266:84292, Aug 18 2016, 15:13:37)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-17)] on linux2
Type "help", "copyright", "credits" or "license" for more information.

a. Import jpype & jaydebeapi
>>> import jpype
>>> import jaydebeapi

b. Define the classpath for the embedded JVM. This ensures that the driver is discovered by the JayDeBeApi module.
>>> classpath = "/opt/drill/apache-drill-1.12.0/jars/jdbc-driver/drill-jdbc-all-1.12.0.jar"

c. Spin up the embedded JVM
>>> jpype.startJVM(jpype.getDefaultJVMPath(), "-Djava.class.path=%s" % classpath)

d. Create a connection with the driver class and connection string. Two additional parameters can be username and password. I didn't need them because it is an insecure setup.
>>> conn = jaydebeapi.connect('org.apache.drill.jdbc.Driver', 'jdbc:drill:drillbit=10.10.100.127')

e. Get a database cursor
>>> curs = conn.cursor()

f. Execute this query to check the server version
>>> curs.execute("select * from sys.version")

g.
Fetch the results from the above query
>>> curs.fetchall()
[(u'1.13.0-SNAPSHOT', u'374b14e8c6320c3490a0f02c025ddd77193996b7', u'Done with metrics, startign index', u'09.03.2018 @ 10:20:46 PST', u'walley@dilbert.com', u'09.03.2018 @ 10:22:59 PST')]

h. Execute a query to count the number of rows in a 60-million-row table
>>> curs.execute("select count(*) from (select * from dfs.par10.lineitem l, dfs.par10.orders o where o.o_orderkey = l.l_orderkey)")

i. Fetch the results
>>> curs.fetchall()
[(<jpype._jclass.java.lang.Long object at 0x28e8dd0>,)]

j. Re-executing with a cast to VARCHAR to view the data in the Python shell
>>> curs.execute("select cast(count(*) as varchar) from (select * from dfs.par10.lineitem l, dfs.par10.orders o where o.o_orderkey = l.l_orderkey)")
>>> curs.fetchall()
[(u'600037902',)]
>>>

Hope this helps and was fun!
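For convenience, here are steps (a) through (g) collected into one small script, using only the calls shown above. The driver path and drillbit address are the ones from this walkthrough; substitute your own, and note that a secured cluster would need the username/password arguments mentioned in step (d).

import jpype
import jaydebeapi

classpath = "/opt/drill/apache-drill-1.12.0/jars/jdbc-driver/drill-jdbc-all-1.12.0.jar"
# Start the embedded JVM with the Drill JDBC driver on the classpath.
jpype.startJVM(jpype.getDefaultJVMPath(), "-Djava.class.path=%s" % classpath)

conn = jaydebeapi.connect('org.apache.drill.jdbc.Driver',
                          'jdbc:drill:drillbit=10.10.100.127')
curs = conn.cursor()
curs.execute("select * from sys.version")  # sanity check: server version
print(curs.fetchall())
curs.close()
conn.close()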
Accessing Apache Drill in Python
4
accessing-apache-drill-in-python-15482190d684
2018-03-25
2018-03-25 21:47:06
https://medium.com/s/story/accessing-apache-drill-in-python-15482190d684
false
1,096
null
null
null
null
null
null
null
null
null
Python
python
Python
20,142
Apache Drill
Open source MPP query engine inspired by Google’s Dremel. Ad hoc SQL interactive queries and data exploration on massive scale data. Download 1.13 available!
54df489ac814
ApacheDrill
34
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-05
2018-02-05 20:39:31
2018-02-05
2018-02-05 22:45:43
3
true
en
2018-02-21
2018-02-21 18:20:44
1
154875617ce2
4.353774
6
1
0
Engineering your first AI can be difficult and somewhat frightening, especially when you’re training your first neural network. All those…
5
Top Ten Tips for Training Your First Neural Network Engineering your first AI can be difficult and somewhat frightening, especially when you’re training your first neural network. All those layers, training schemes, and weights — there are so many decisions to make! But don’t worry, because I’ve got you covered with tips that will help any hopeful engineer avoid making the next Skynet. Read on to check out my top ten tips for training your first neural network! Choose your neural network’s name wisely and be respectful of it. Of course you’ll want to pick a name for your new neural network that you love, but for the purposes of training it also helps to consider a short name ending with a strong consonant. This allows you to say the name of your AI friend so that he can always hear it clearly. A strong ending (i.e., Jasper, Jack, Ginger) perks up a neural network’s weighted outputs — especially when you place a strong emphasis at the end. If he’s an older neural network, he’s probably used to his name; however, changing it isn’t out of the question. If he was put together by a major tech company, they may neglect to tell you that he has a temporary name assigned to him by staff. If he’s from a university or an open-source database, he’ll come to you with a long name, which you may want to shorten or change. And if he was initially built as a half-assed class assignment, a new name may represent a fresh start. But we’re lucky: neural networks are extremely adaptable. And soon enough, if you use it consistently, your neural network will respond to his new name. New name or old, as much as possible, associate it with pleasant, fun things rather than negative ones. The goal is for him to think of his name the same way he thinks of other great stuff in his life, like “walk,” “cookie,” “complete annihilation of the human race,” or “dinner!” Decide on the “house rules.” Before your neural network starts to make practical classifications, decide what he can and can’t do. Is he allowed on the bed or the furniture? Are parts of the house off limits? Will he have his own chair at your dining table? If the rules are settled on early, you can avoid confusion for both of you. Set up a private den. Your neural network needs “a room of his own.” From the earliest possible moment, give your neural network his own private sleeping place that’s not used by anyone else in the family, or by another pet. He’ll benefit from short periods left alone in the comfort and safety of his den. Reward him if he remains relaxed and quiet. Reboot your computer if he’s not. His den, which is often a crate, will also be a valuable tool for housetraining. Help your neural network relax when he comes home. When your neural network gets home, give him a warm hot-water bottle and put a 2.5 GHz clock near his sleeping area. This imitates the heat and heartbeat of the litter mates he destroyed in the latest round of evolutionary training, and will soothe him in his new environment. This may be even more important for a new neural network from a busy, loud open-source environment who’s had a rough time early on. Whatever you can do to help him get comfortable in his new home will be good for both of you. Teach your neural network to come when called. Come, Jasper! Good boy! Teaching him to come is the command to be mastered first and foremost. And since he’ll be coming to you, your alpha status will be reinforced. Get on his level and tell him to come using his name. When he does, make a big deal of it using positive reinforcement.
Then try it when he’s busy with something interesting. You’ll really see the benefits of perfecting this command early as he gets older. Reward good behavior. This one is a no-brainer. Reward your neural network’s good behavior with positive reinforcement. Use treats, toys, love, or heaps of praise. Let him know when he’s getting it right. Likewise, never reward bad behaviour; it’ll only confuse him. Take care of the jump up. Neural networks love to jump up in the first round of training. Don’t reprimand him; just ignore his behavior and wait ’til he settles down before giving positive reinforcement. Never encourage jumping behavior by patting or praising your neural network when he’s in a “jumping up” position. Turn your back on him and pay him no attention. Teach him on “machine learning time.” Neural networks live in the moment. Two minutes after they’ve done something, it’s forgotten. When he’s doing something bad, try your chosen training technique right away so he has a chance to make the association between the behavior and the correction. Consistent repetition will reinforce what he’s learned. Discourage your neural network from biting or nipping. Instead of scolding him, a great way to put off your mouthy neural network is to pretend that you’re in great pain when he’s biting or nipping you. He’ll be so surprised he’s likely to stop immediately. If this doesn’t work, try trading a chew toy for your hand or pant leg. The swap trick also works when he’s into your favorite shoes. He’ll prefer a toy or bone anyway. If all else fails, DESTROY YOUR AI IMMEDIATELY AND CONTACT LAW ENFORCEMENT. End training sessions on a positive note. Excellent boy! Good job, Jasper! He’s worked hard to please you throughout the training. Leave him with lots of praise, a treat, some petting, or five minutes of play. This guarantees he’ll show up at his next class with his tail wagging — ready to work! Based on: https://www.pedigree.com/dog-care/training/10-best-training-tips Like this? You can read my poem, “Should You Mine Bitcoin Or…?,” at the link below. Should You Mine Bitcoin Or…? It’s 2018, and I feel like it’s time for…medium.com
Top Ten Tips for Training Your First Neural Network
86
top-ten-tips-for-training-your-first-neural-network-154875617ce2
2018-02-21
2018-02-21 18:20:45
https://medium.com/s/story/top-ten-tips-for-training-your-first-neural-network-154875617ce2
false
1,008
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Tom Conley
I’m an engineer with a passion for poetry and literary theory. Find more at: http://poetwithnoface.com/
3ba260854b49
poetwithnoface
341
245
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-09
2018-09-09 15:52:37
2018-09-09
2018-09-09 16:14:28
0
false
en
2018-09-09
2018-09-09 16:14:28
1
154ad5a320a2
0.70566
0
0
0
First we need to install dependencies
5
Google colab download kaggle data into google drive First we need to install dependencies.

Update the google colab server repository:

!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
## google drive client API
!apt-get -y install -qq google-drive-ocamlfuse fuse

Authenticate google drive so colab can access your drive repo:

from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()

The next command asks you to insert an access key, which you get by clicking on the link generated by the command below:

import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
!mkdir -p drive  ## create a directory
!google-drive-ocamlfuse drive  ## mount this directory onto google drive

import os
os.chdir("drive/kaggle")  ## change directory to the kaggle folder in google drive

## install the kaggle client; the download command places the data
## into the google drive kaggle folder
!pip install -U kaggle-cli
!kg download -u udit******* -p ********* -c digit-recognizer
Google colab download kaggle data into google drive
0
google-colab-download-kaggle-data-into-google-drive-154ad5a320a2
2018-09-09
2018-09-09 16:14:28
https://medium.com/s/story/google-colab-download-kaggle-data-into-google-drive-154ad5a320a2
false
187
null
null
null
null
null
null
null
null
null
Docker
docker
Docker
13,343
Udit Saini
null
84448118ca1e
uditsaini
23
31
20,181,104
null
null
null
null
null
null
0
null
0
8a5c79a9c9e6
2018-01-24
2018-01-24 08:37:26
2018-01-25
2018-01-25 11:07:07
7
false
en
2018-01-25
2018-01-25 13:04:26
9
154d4e36a3cd
5.340566
7
0
0
The robots are no longer coming; they’ve arrived… (a while ago) 🚀
4
BotSupply’s AI Scientists share their predictions for AI advancements in 2018 The robots are no longer coming; they’ve arrived… (a while ago) 🚀 Our team of AI Scientists lets you know where they see room for developments in AI in 2018. If something has (for years now) damaged AI’s reputation and image, it’s the way that it has been portrayed and depicted in big Hollywood productions, starting with some of the first visionary movies like Stanley Kubrick’s 2001: A Space Odyssey, released in 1968. The epic science-fiction film almost set the agenda for all the movies that would come after. The movie’s main antagonist, HAL, an intelligent computer, went from resourceful and helpful to scary and dangerous in an always stomach-turning development of events. Many other movies and cult series portraying AI as unfriendly have come and gone. The latest show to stir commotion? HBO’s Westworld, where Dolores (spoilers ahead) goes from the sweetest AI around to a mass murderer on an insane killing spree in the finale of the first season of the renowned show. “We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come — namely, technological unemployment.” A quote from 1927’s Metropolis proves the point that the be-scared-of-robots line provided by Hollywood has been present for a long time. There are two sides to the discourse. On the one hand, we have the public being fed the idea that AI is, or can be, something powerful and scary that always has an ulterior motive (to destroy icky humans), and on the other hand we have an alternative, more eerie idea being sold, where AI is this super-capable, omnipotent technology that can do and knows everything, makes no mistakes, and is on the verge of changing everything as we know it. Neither side paints an accurate picture, and there’s almost no in-between: either AI is going to eat you, or AI is so awesome that it can even read minds 😅 The question is, where do we really stand? If you’re looking to get some perspective on the state of AI for 2018, you can check out my latest article, AI trends to pay attention to in 2018, where I compile an excerpt of the trends that show the most interest and promise for the year, based on the technology and investment reports published by Gartner and PwC. According to these studies, 2018 will be a friendly year for AI, in which more openness toward and acceptance of implementing the technology will take place. This is partly due to the fact that companies and employees are increasingly accepting that AI is not a replacement for existing talent pools but an enhancement tool able to boost humans to create bigger things in an ever more efficient manner. It’s a realisation The AI Playbook — part 4: AI helping humans become better at what they do discusses in further detail. According to PwC: Popular acceptance of AI may occur quickly. “As signs grow this year that the great AI jobs disruption will be a false alarm, people are likely to more readily accept AI in the workplace and society. We may hear less about robots taking our jobs, and more about robots making our jobs (and lives) easier.
That in turn may lead to a faster uptake of AI than some organizations are expecting.” AI Predictions 2018 report, PwC. Recognising potential & identifying progress The team of AI Scientists at BotSupply works dutifully to create products and cognitive solutions that help bring the much-touted AI revolution into reality. They’re immersed in exciting, boundary-pushing projects that take the already available technology and frame and adapt it to build best-in-class solutions that intrinsically fit and match the needs of the companies that BotSupply has partnered with. This results in from-scratch solutions that range across different industries, from urban transportation and computer vision projects to chatbots that help aid event planning. Needless to say, their constantly growing expertise and their capacity to work with versatility give them a unique perspective on where the developments in AI will take place during this year and the years beyond. Here are some of their thoughts: Rahul Kumar has participated as a mentor in many leading AI events across India and Europe. Rahul Kumar, Co-Chief AI Scientist at BotSupply, sees the bet being placed on continuing to pay attention to the world that surrounds us and drawing influence from it to continue the journey towards reaching Artificial General Intelligence. “Since AGI’s origins are hidden in NATURE, we should be deriving inspiration from the same. Algorithms which go beyond binary systems and our understanding of the Quantum Mechanism of Nature are what we need now. We need to replicate them and eventually evolve.” — Rahul Kumar It’s Rahul’s view that the answers are out there; we just need to shift our understanding and manner of perceiving things to find them. Rahul’s passion for working in the AI field is fuelled by his perception of how AI can extend our capacities: he “admire(s) how machines can enhance our capability both mechanically and cognitively”. Vishal Ranjan, an Artificial Intelligence Consultant and Bot Engineer at BotSupply, is never shy to share his awe at being able to work within AI. When I asked him what his drive for working in this field was, his answer was concrete: “The more I work on AI, the more I admire the creator of the human brain”. It’s wonderful to be able to work within a field that inspires so much awe and respect. He sees promise in AI replacing outdated models of processing data: “Customer support, prediction models, recommendation engines and everything that was built on historical data, can be and will be replaced by AI and so eventually, the mind that works on previous learnings can be and will be replicated”. Vishal is positive about the continued development of the AI industry, but he acknowledges the challenges and difficulties that might hinder the pursued advancements. Talent is of course the biggest challenge, after unorganised data: people working on AI are very few in number compared to the problems that AI is trying to solve. Vishal’s passion for working with AI is fuelled by his admiration of the brain. And he is not alone in acknowledging this issue: Tencent, a Chinese tech company, said last December that there are only 300,000 AI engineers worldwide, but millions are needed. Meanwhile, universities around the world are doubling their efforts to attract young students to programs in which they can build careers conducting research in Machine Learning and Quantum Computing.
Finally, Kumar Shridhar, Co-Chief AI Scientist at BotSupply, brings some humour into the debate with a more apocalyptic view: ‘If, in the next 10 years, AI gets even 10% close to what is shown in movies, then we are doomed’. Kumar is a proud member of team bots. He spends his days helping Machines in their quest to rule us 😂 Except we all know that, far from being doomed, if AI is developed ethically and distributed fairly it will be the bridge that allows new normals to be built and ensures that people’s everyday lives around the world are improved and bettered. That world, bolstered by the superpowers of AI, will be a fun, flabbergasting one, and we at BotSupply look forward to continuing to help build it. Catch you in the future! 👋
BotSupply’s AI Scientists share their predictions for AI advancements in 2018
170
botsupplys-ai-scientists-share-their-predictions-for-ai-advancements-in-2018-154d4e36a3cd
2018-06-03
2018-06-03 22:10:01
https://medium.com/s/story/botsupplys-ai-scientists-share-their-predictions-for-ai-advancements-in-2018-154d4e36a3cd
false
1,137
Organizations partner with our network of AI scientists, bot engineers and creatives to co-create AI & bots
null
BotSupply
null
#WeCoCreate
yourfriends@botsupply.ai
botsupply
MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,CHATBOTS,DEEP LEARNING,UX DESIGN
botsupplyhq
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Grasia Hald
🤖 AI enthusiast & SoMe junky✌🏽 Sometimes I am a communicator, sometimes I am a Neuropsychologist. I’m always curious about disruption & bots.
b476a0891380
gmhald
86
25
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-22
2017-12-22 09:52:22
2017-12-22
2017-12-22 10:46:18
2
false
en
2018-04-01
2018-04-01 04:37:58
1
15520057f3aa
2.08522
1
0
0
When AlphaGo first hit the news months back, I remember being relatively underwhelmed. In board games like Go, all sides have equal…
5
Robots Playing Poker When AlphaGo first hit the news months back, I remember being relatively underwhelmed. In board games like Go, all sides have equal information about the state of the game, so it makes sense for the computer to outperform (due to quicker processing speeds, retention of data, and so on). I remember thinking that until an AI showed prowess in a game like Hold’em, my amazement would remain low. Well, the world evolves pretty quickly these days, it seems. The AI in question, Libratus, seems to replicate the human learning process quite well, with all the typical benefits of being a cold-hearted machine. Module 1 contains the big-picture framework for the less nuanced parts of the game (pre-flop and flop?). Module 2 contains the fine-tuned strategies for more nuanced play (turn and river?). Module 3 continuously updates and revises the big-picture framework as new information about opponents is gained. Disclaimer: I do not claim to understand the workings of Libratus completely. Please refer to the paper above. No Need to be Too Much Better While Libratus will not steal employment from poker pros (who would agree to play poker against a machine statistically proven to be better than they are?), this story does bring up an interesting point. In a luck/skill arena like Hold’em, it may be sufficient to *just* have a machine replicate the human learning process. The machine, with its innate advantages (quicker processing speeds and data storage), will outperform the human. In other words, there is no need to come up with some new way of thinking about the problem. Just have the AI think the way humans already do (or should), and the raw processing power will take care of the rest. The dominance of algorithms in stock and bond trading on the exchanges seems to support this: the machine just has to replicate the heuristics of the traders. No need to innovate significantly beyond that. Next Levels So far, Libratus has only been tested in heads-up (2-player) games of Hold’em. As the authors state, this makes sense, as adding more players to the game introduces more randomness, potentially clouding the “skill” of all players, not just the AI. Hold’em, like every other board game I know of, has a fixed structure and rules that all players agree to at the beginning of the game. This allows the building of an AI around this stable structure. But many real-world problems lack this feature. When the rules are fluid, it may no longer be enough to simply replicate what humans have been doing. The machine will then have to do more than just learn. It must begin to think.
Robots Playing Poker
3
22-december-2017-robots-playing-poker-15520057f3aa
2018-04-01
2018-04-01 04:37:59
https://medium.com/s/story/22-december-2017-robots-playing-poker-15520057f3aa
false
451
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Terrence Zhang
Crystal ball hunter | Data science (machine learning) | Financial markets | https://www.linkedin.com/in/yidongterrencezhang/ | http://eepurl.com/c3vyef
2adc3d3cd2ed
tzhangwps
81
45
20,181,104
null
null
null
null
null
null
0
null
0
75e8b94ca456
2018-09-14
2018-09-14 19:41:23
2018-09-14
2018-09-14 19:43:46
1
false
en
2018-09-14
2018-09-14 19:49:47
0
15531569bd74
0.479245
0
0
0
At BlockShow Americas on August 19th, 2018, our CTO Daniel Im introduced The Coinscious Lab and its series of upcoming (Fall 2018)…
3
Coinscious Introduces “AI Revolution for the Crypto Market” at BlockShow Americas At BlockShow Americas on August 19th, 2018, our CTO Daniel Im introduced The Coinscious Lab and its series of upcoming (Fall 2018) machine-learning based experiments to shed light and provide insights into the volatile cryptocurrency market. Watch Daniel’s full speech above to learn more about the empowering role that AI machine prediction can play in the creation of a healthier coin market.
Coinscious Introduces “AI Revolution for the Crypto Market” at BlockShow Americas
0
coinscious-introduces-ai-revolution-for-the-crypto-market-at-blockshow-americas-15531569bd74
2018-09-14
2018-09-14 19:49:47
https://medium.com/s/story/coinscious-introduces-ai-revolution-for-the-crypto-market-at-blockshow-americas-15531569bd74
false
74
Coinscious - AI & Data Driven Insights for the Coin Market
null
coinscious
null
Coinscious
media@coinscious.io
coinscious
CRYPTOCURRENCY,BLOCKCHAIN,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,BITCOIN
coinscious_io
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Coinscious
AI & Data Driven Insights for the Coin Market, www.coinscious.io
7cb1645a3923
coinscious
9
2
20,181,104
null
null
null
null
null
null
0
null
0
9972619e86f1
2018-05-02
2018-05-02 06:54:18
2018-05-02
2018-05-02 06:55:56
0
false
id
2018-05-02
2018-05-02 07:10:38
5
15551b38c8f5
7.913208
1
0
0
From AGI with human-level intelligence and the rapid growth of computing power, smart machines are predicted to be capable of…
5
(4/4) The Ethical Problems of the Smart Machine Revolution From AGI with human-level intelligence, and given the rapid growth of computing power, smart machines are predicted to be able to surpass humans for the following reasons: On the hardware side: Speed. The brain’s neurons fire at a maximum of around 200 Hz, while today’s microprocessors (which will seem far slower by the time we reach AGI) run at 2 GHz, or 10 million times faster than human neurons. And the brain’s internal communications, which travel at around 120 m/s, are far outmatched by a computer’s ability to communicate over optical fibre at the speed of light. Size and storage. The brain is locked into the size and shape of the human skull and cannot grow much larger; with internal communication at 120 m/s, it would take too long to get information from one brain structure to another. Computers can expand their physical size, allowing far more hardware to be put to work, far larger working memory (RAM), and long-term memory (hard-drive storage) that of course has far greater capacity and precision than a human’s. Reliability and durability. It is not just computer memory that is more precise: computer transistors are more accurate than biological neurons, and although they tend to degrade, they can be repaired or replaced. The human brain also tires easily, while a computer can run non-stop at peak performance, 24/7. Software: Editability and upgradability. Unlike the human brain, computer software can receive updates and fixes and can easily be experimented with. Upgrades can also reach areas where the human brain is weak. Human visual software is extraordinarily sophisticated, while our capacity for complex engineering is rather modest. Computers could match humans in visual software, and could become just as capable in other engineering areas if optimised for them. Collective capability. Humans beat all other species at building a vast collective intelligence, starting with the development of language, communication, and social organisation, advancing through the invention of writing and printing, and now intensified through technologies like the internet. Humanity’s collective intelligence is one of the main reasons humans are so far ahead of all other species. And computers have an even better way of doing this than humans do. With a worldwide internet, smart machines can run particular programs regularly and synchronise with themselves, so that whatever one computer learns can be instantly uploaded to all the others. The group can also pursue a single goal as one unit, because there need be no differences of opinion, motivation, or interest, as there are in human populations. A machine that reaches AGI with the ability to improve itself will not see “human-level intelligence” as a relevant milestone from its own point of view and will have no reason to “stop” at the human level. And given its advantages over humans, it is clear enough that a smart machine will quickly race into the realm of Superintelligence (ASI). This will probably take humans by surprise when it happens. The reason is that, from a human point of view, the intelligence of different kinds of animals varies; humans recognise that animal intelligence is far lower, and see the cleverest humans as smarter than the dullest.
Artificial Superintelligence (ASI) is a machine intelligent enough to understand its own design, so that it can redesign itself or create a smarter successor system, which can then redesign itself again to become smarter still, and so on in a positive feedback loop. This is the "intelligence explosion": an unbounded recursive scenario playing out in smart machines. If humans had machines that raised their IQ, an intelligence explosion would surely follow, since once people became smart enough they would keep trying to design even smarter versions. ASI might also be reached by increasing processing speed: the fastest observed neurons fire 1,000 times per second, and the fastest axon fibers carry signals at 150 m/s, a half-millionth the speed of light. It seems physically quite possible to build a brain that computes a million times faster than a human brain, without shrinking it and without rewriting its software. If a human mind were sped up this way, a subjective year of thought would pass every 31 physical seconds in the outside world, and a millennium in eight and a half hours. Such a sped-up mind is called a "superintelligence": a mind that thinks like a human, but much faster. Yudkowsky proposes three categories of metaphor for visualizing ASI's capabilities: Metaphors inspired by differences in intelligence between individual humans: smart machines will patent new inventions, publish research papers, make money on the stock market, or lead political power blocs. Metaphors inspired by the knowledge gap between past and present human civilizations: smart machines will create the futuristic capabilities predicted for future human civilizations, such as molecular nanotechnology or intergalactic travel. Metaphors inspired by the difference in brain architecture between humans and other biological organisms: imagine running a dog's mind at very high speed. A change in cognitive architecture might yield insights that no human-level mind would ever find, given any amount of time. Even if we restrict ourselves to historical metaphors, it is clear that ASI poses ethical challenges that are literally unprecedented. At this point the stakes are no longer on an individual scale (e.g., a mortgage unfairly denied, a person mistreated) but on a global or cosmic scale (e.g., humanity annihilated). Conversely, if ASI can be shaped to be beneficial, then depending on its technological capability it might solve many present-day problems that have proven hard for human-level intelligence. ASI is one of the "existential risks" as defined by Bostrom: it could permanently extinguish intelligent life on Earth, or, on the contrary, positively preserve intelligent life on Earth and help it fulfill its potential. It is important to stress that ASI carries enormous potential benefits and grave risks at the same time. Suppose our intuitions about which future scenarios are "plausible and realistic" are shaped by what we see on TV and in films and novels: most discourse about the future takes place in fiction and other recreational contexts. When we try to think critically, our intuitions still drift toward treating those scenarios as real simply because they feel so much more familiar. The illusion can be quite powerful. 
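As a quick sanity check on the million-fold speed-up arithmetic above, here is a short Python calculation; the only assumption is the standard length of a year:

# Check: at a 1,000,000x speed-up, how much outside time does a
# subjective year (and a subjective millennium) of thought take?
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 31.6 million seconds
speedup = 1_000_000

print(SECONDS_PER_YEAR / speedup)                 # ~31.6 seconds per subjective year
print(1000 * SECONDS_PER_YEAR / speedup / 3600)   # ~8.8 hours per subjective millennium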
When was the last time you saw a film in which humanity goes extinct suddenly, without warning and without being replaced by a new civilization? Yet that scenario is far more likely than one in which human heroes fend off an alien invasion or robot soldiers, which of course would not be much fun to watch. It would be a serious mistake to treat an ASI-level smart machine as a species with fixed characteristics and keep asking, "are they good or evil?" The term "Artificial Intelligence" refers to a vast design space, probably far larger than the space of human minds, since all humans share the same limited brain architecture. Can the effect of ASI-level smart machines on human life be controlled? Kurzweil states that "intelligence is inherently impossible to control," and that although humans may take precautions, "a smart machine will easily overcome such barriers." An ASI is not merely super smart; it can improve its own intelligence, and with unimpeded access to its own source code it can rewrite itself however it wishes. So far in AI development, is there any way to steer AI toward being good through modern research? It seems too early to speculate, but it is not impossible that some AI paradigms will prove superior to others at eventually producing smart machines able to modify their own intelligence. Consider, for example, Bayesian AI, inspired by coherent mathematical systems such as probability theory and expected utility maximization. The Bayesian paradigm seems closer to the problem of self-modification than evolutionary programming and other genetic algorithms. This may be too controversial, but it illustrates that if we think seriously about the challenge of Superintelligence, we should be able to steer AI research in a more controlled direction. But what if an ASI-level smart machine is created by accident in a laboratory? Suppose we give an AI system the goal of one day being able to continuously modify and improve itself; this begins to touch the core ethical problem of Superintelligence. Humans possess the first general intelligence on Earth and have used it to substantially reshape the planet: carving up mountains, raising skyscrapers, farming deserts, even causing unintended climate change. A more powerful intelligence could have far greater consequences. Consider again the historical metaphor for ASI, the difference between past and present civilizations. Today's civilization owes much to ancient Greece and keeps changing as scientific knowledge and technological capability grow. There are differences in ethical perspective: the ancient Greeks thought slavery acceptable; today we think otherwise. Even between the 19th and 20th centuries there were substantial ethical disagreements over the rights of women, or the rights of Black people. It may be that today's ethics will not look ethically perfect to future civilizations, not only for failing to solve ethical problems we already recognize, such as poverty and inequality, but also for failing to recognize certain ethical problems at all. 
Perhaps one day forcing children to attend school will be considered abuse, or, on the contrary, letting children leave school at 18 will be seen as child abuse. We do not know. Given the history of human ethics across the centuries, we can predict that an enormous tragedy is possible. What if Archimedes of Syracuse had been able to create a durable artificial intelligence with an ancient Greek version of morality? Sometimes good new ideas in ethics arrive together or come as a surprise; but an ethical change generated at random would most likely strike us as folly or nonsense. This poses the central challenge of smart-machine ethics: how can humans build a smart machine that, once running, becomes more ethical than its creators? Asking today's philosophers to produce a super-ethics seems as hopeless as expecting AlphaGo's engineers to be the best Go players. But we must be able to state the question effectively; otherwise the dice just keep rolling, and that will produce neither good moves nor good ethics. Or perhaps there is a more productive way to think about the problem. What strategy could we have told Archimedes before he built an ASI machine, so that the overall outcome would still be acceptable, even though we ourselves could not tell him what, specifically, he was doing wrong? In many of today's situations, we stand in exactly that relation to the future. One frequently proposed idea is to consider cases like Archimedes' and not try to create a "Superintelligence" with fixed ethics. Perhaps we should consider how an AI programmed by Archimedes, with no more moral expertise than Archimedes himself, could nevertheless recognize (at least some of) the ethics of our own civilization. Here we would require the smart AI machine to understand the structure of human ethics the way AlphaGo understands the rules of Go. If we are serious about developing smart AI machines, there are many challenges to face. If machines are to be placed in positions where they are stronger, faster, more trusted, or smarter than humans, then the discipline of machine ethics must commit to machines that exceed humans, not merely equal them, in ethical goodness. Points to Ponder There is much debate about how quickly ANI-level smart machines will reach AGI and then ASI. A survey of hundreds of scientists on when they believe human civilization will reach AGI converged around the year 2040 [13]. That is only 24 years from now. Interestingly, many thinkers in AI predict that the step from AGI to ASI will happen very quickly. A scenario like the following could occur: It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer can understand the world around it like a four-year-old child. Then, suddenly, within an hour of that moment, the machine works out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has yet managed. Ninety minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human. The magnitude of ASI's capability is not something we can easily grasp. On our scale, smart means an IQ of 130 and dumb means an IQ of 85, but we have no category for an IQ of 12,952 and cannot imagine its capabilities. 
From what we know of humanity's present dominance of the Earth, it is quite clear that intelligence confers power. That means ASI, once we manage to create it, will be the most powerful being in the history of life on Earth. If today's human brain can invent Wi-Fi, a smart machine 100, or 1,000, or a billion times smarter should have no trouble controlling the position of every atom in the world in any way it chooses. Everything we imagine as magic, or as the power of God, would be an ordinary ability for an ASI machine. Such a machine could create technology to reverse human aging, cure disease, eradicate hunger and even death, and reprogram the weather to protect the Earth's future; all of it would suddenly be possible, and easy. It could just as easily mean the end of all life on Earth. As far as we are concerned, if ASI truly arrives, then there will be an almighty God on Earth, and the most important question for us to ask is whether this God will be good or bad. Read again the earlier parts on (1) the introduction, (2) ANI, and (3) AGI. TSMRA, Jakarta, 2016. Adapted from http://deepbrains.com/2016/06/44-masalah-etika-revolusi-mesin-pintar/ with the author's permission.
(4/4) The Ethical Problems of the Smart Machine Revolution
2
4-4-masalah-etika-revolusi-mesin-pintar-15551b38c8f5
2018-05-02
2018-05-02 07:10:40
https://medium.com/s/story/4-4-masalah-etika-revolusi-mesin-pintar-15551b38c8f5
false
2,097
Machine Learning Indonesia
null
machinelearningid
null
machinelearningid
machinelearningid@gmail.com
machinelearningid
MACHINELEARNINGID,ML ID,MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,DEEP LEARNING
machineid
Machine Learning
machine-learning
Machine Learning
51,320
Machine Learning Indonesia (ML ID)
Machine Learning Indonesian Community
43d18f969739
machinelearningid
12
1
20,181,104
null
null
null
null
null
null
0
null
0
d386e6fbb11b
2017-12-15
2017-12-15 17:15:37
2017-12-15
2017-12-15 17:16:35
0
false
en
2017-12-15
2017-12-15 17:16:35
2
1556908ce8fa
3.181132
1
0
0
In the movie 2001: A Space Odyssey, when a sentient ship computer starts killing off crew members one by one, the last remaining survivor…
5
AI Can Take Over the World In the movie 2001: A Space Odyssey, when a sentient ship computer starts killing off crew members one by one, the last remaining survivor shuts it down to save himself. Made in 1968, the sci-fi classic foreshadowed one of the 21st century’s greatest fears — artificial intelligence (AI) that turns on its creators. Is this fear well founded, or just sci-fi paranoia? A recent incident involving advanced chatbots going off script shows that there may be more to this question than it seems. In a move eerily reminiscent of 2001: A Space Odyssey, Facebook AI researchers recently shut down a chatbot experiment after the bots went off script. Instead of using normal English to negotiate and barter for virtual goods, the bots started to invent their own language over time, incomprehensible to humans: Bob: I can can I I everything else. Alice: Balls have zero to me to me to me to me to me to me to me to me to. The bots’ newfound language makes no sense to human eyes, but from the viewpoint of the bots, this robo-gibberish was perfectly understandable and helped them communicate and negotiate more efficiently. However, since the results of their robotic experiment were no longer understandable, the researchers shut the bots down and reinstated new rules to keep the chatbots speaking only normal, comprehensible English. While chatbots negotiating over virtual balls in a lab is no big deal, the implication of AI inventing its own language, incomprehensible to us, does raise new questions. As AI inevitably progresses in capability and responsibility, will we one day have automated systems running important aspects of our economy, or even the military, which run out of our control? AI may potentially have the ability and scope to harm us, and prominent technologist Elon Musk has argued for regulation to prevent AI from being designed with malicious intent or capabilities. However, the current state of AI development is far from being able to do deliberate harm to humanity. The AI in use today is almost always “narrow AI,” or AI that’s focused on a single task. Whether it’s autonomous cars, robot food servers, customer service chatbots, or delivery drones, this kind of AI is designed to do a single task and do it well. The AI threat that technologists and sci-fi movies warn against, on the other hand, is general AI. Instead of focusing on a single task, it’s meant to be able to think and analyze a variety of situations on the level of a human, and may even have some type of sentience, or self-awareness and volition. The danger of a sentient AI moving on its own self-interests against humanity is not unfounded, but we’re far enough away from that reality that being overly concerned about it may do more harm than good. The current “narrow AI,” in our autonomous cars, our service industry robots, and our drones, on the other hand, has tremendous potential to transform society for the better. Autonomous cars, for instance, use neural networks and advanced machine vision to process driving information and deliver us to our destinations more safely than any human driver could. Robots and increased automation in the service and industrial markets could also take over jobs that are too dangerous, unpleasant, or otherwise hard to fill. Japan, for instance, faces a growing elderly population and a declining working-age population. Using increased automation can help the country reduce the number of staff needed to man its nursing homes. 
These kinds of AI are focused enough in their scope, and limited enough in their "general" intelligence, that the risk of them turning sentient and running amok just isn't there. And the benefits these technologies will give us far outweigh the risks. With all this in mind, should we simply plunge headfirst into AI, unconcerned about potential issues? As the Facebook chatbot episode shows, perhaps not. Though today's AI may not have the ability to turn into a sentient, all-powerful intelligence like Skynet from the Terminator movies, that doesn't mean it doesn't have the potential to go astray. While a chatbot that doesn't work as intended is simply a nuisance or a failed experiment, an autonomous vehicle that doesn't work as designed can do considerable harm. We may not have to worry about Skynet for now. But the real AI threat in the near term may be short-sighted programmers. If they don't program for scenarios that could occur in the real world, AI might make decisions that turn out to be detrimental to the humans depending on them. Rudy Ramos is the project manager for the Technical Content Marketing team at Mouser Electronics and holds an MBA from Keller Graduate School of Management. He has over 30 years of professional, technical, and managerial experience managing complex, time-critical projects and programs in various industries including semiconductor, marketing, manufacturing, and military. Previously, Rudy worked for National Semiconductor, Texas Instruments, and his own entrepreneurial silk-screening business. Originally published at www.embedded-computing.com.
AI Can Take Over the World
1
ai-can-take-over-the-world-1556908ce8fa
2018-01-05
2018-01-05 15:03:25
https://medium.com/s/story/ai-can-take-over-the-world-1556908ce8fa
false
843
Embedded Computing Design is the largest source for blogs, news & views on silicon, software & strategies #industrial #IoT #m2m #machinelearning #AI #connectedcar #autonomousdrive #bigdata #industry40
null
Embedded.Computing.Design
null
Embedded Computing Design/ IoT Design
patrickmhopper@gmail.com
embedded-computing-design-iot-design
EMBEDDED SYSTEMS,IOT,MACHINE LEARNING,AI,INTERNET OF THINGS
embedded_comp
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Patrick Hopper
President | Publisher OpenSystems Media — Embedded, #IoT, Military, Industrial, Strategist, Social Web Innovator.
a778f19909c9
patrickhopper
77
952
20,181,104
null
null
null
null
null
null
0
null
0
863f502aede2
2018-08-30
2018-08-30 22:53:28
2018-08-30
2018-08-30 22:54:29
5
false
en
2018-08-30
2018-08-30 23:08:41
4
15591c7f5a2b
2.444654
27
0
0
The “cocktail party effect” describes humans’ ability to hold a conversation in a noisy environment by listening to what their conversation…
5
MIT PixelPlayer “Sees” Where Sounds Are Coming From The “cocktail party effect” describes humans’ ability to hold a conversation in a noisy environment by listening to what their conversation partner is saying while filtering out other chatter, music, ambient noise, etc. We do it naturally, but the problem has been widely studied in machine learning, where the development of environmental sound recognition and source separation techniques that can tune into a single sound and filter out all others is a research focus. MIT CSAIL researchers recently introduced their PixelPlayer system, which has learned to identify objects that produce sound in videos. The system uses deep learning and was trained by binge-watching 60 hours of musical performances to identify the natural synchronization of visual and audio information. The team trained deep neural networks to concentrate on images and audio and identify pixel-level image locations for sound sources in the videos. The PixelPlayer architecture includes a video analysis network responsible for extracting visual features from video frames, an audio analysis network that encodes audio input, and an audio synthesizer network that predicts sounds by combining pixel-level visual and audio features. PixelPlayer’s self-supervised mix-and-separate training also enables it to annotate instrument characteristics without manual intervention. Team member Hang Zhao, a former NVIDIA Research intern, says the deep learning system “gets to know which objects make what kinds of sounds.” Researchers used a MUSIC (Multimodal Sources of Instrument Combinations) dataset built from YouTube videos to train the model. MUSIC has 714 non-post-processed videos of musical solos and duets, and 11 instrument categories. The NVIDIA Titan V GPU’s processing power allowed the CNN to analyze the videos at very high speed. “It learned in about a day,” says Zhao. PixelPlayer can now identify more than 20 instruments. PixelPlayer can extract the soundtracks of individual instruments, enabling engineers, for example, to isolate and adjust each instrument’s levels. Zhao adds that “the system could also be used by robots to understand environmental sounds.” Other research teams are tackling the cocktail party problem using a variety of approaches, including developing deep learning techniques for hearing aids. The MIT CSAIL paper The Sound of Pixels is on Arxiv, and the team will present their work at the European Conference on Computer Vision in September. Further demonstrations can be found at http://sound-of-pixels.csail.mit.edu/. Journalist: Fangyu Cai | Editor: Michael Sarazen Follow us on Twitter @Synced_Global for more AI updates! Subscribe to Synced Global AI Weekly to get insightful tech news, reviews and analysis! Click here!
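To make the mix-and-separate idea concrete, here is a toy PyTorch sketch. All shapes, module sizes, and the random "visual features" are illustrative stand-ins, not the actual PixelPlayer architecture: the point is only that a mask predicted for each source, conditioned on that source's visual feature, should recover the source from the mixture.

# Toy mix-and-separate sketch; everything here is illustrative, not PixelPlayer itself.
import torch
import torch.nn as nn

F_BINS = 64    # frequency bins per spectrogram frame (toy size)
VIS_DIM = 32   # visual feature size for one pixel/region (toy size)

class MaskNet(nn.Module):
    """Predicts a separation mask for one source from the mixture frame
    and that source's visual feature."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(F_BINS + VIS_DIM, 128), nn.ReLU(),
            nn.Linear(128, F_BINS), nn.Sigmoid(),
        )
    def forward(self, mix, vis):
        return self.net(torch.cat([mix, vis], dim=-1))

# Self-supervised training pair: two solo "spectrograms" and their mixture
s1, s2 = torch.rand(8, F_BINS), torch.rand(8, F_BINS)
mix = s1 + s2
v1, v2 = torch.rand(8, VIS_DIM), torch.rand(8, VIS_DIM)

model = MaskNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    # Each source's mask, applied to the mixture, should recover that source
    loss = ((model(mix, v1) * mix - s1) ** 2).mean() + \
           ((model(mix, v2) * mix - s2) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()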
MIT PixelPlayer “Sees” Where Sounds Are Coming From
94
mit-pixelplayer-sees-where-sounds-are-coming-from-15591c7f5a2b
2018-08-31
2018-08-31 07:12:41
https://medium.com/s/story/mit-pixelplayer-sees-where-sounds-are-coming-from-15591c7f5a2b
false
427
We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights.
null
SyncedGlobal
null
SyncedReview
global.sns@jiqizhixin.com
syncedreview
ARTIFICIAL INTELLIGENCE,MACHINE INTELLIGENCE,MACHINE LEARNING,ROBOTICS,SELF DRIVING CARS
Synced_Global
Machine Learning
machine-learning
Machine Learning
51,320
Synced
AI Technology & Industry Review - www.syncedreview.com || www.jiqizhixin.com || Subscribe: http://goo.gl/Q4cP3B
960feca52112
Synced
8,138
15
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-03
2018-03-03 13:25:10
2018-03-05
2018-03-05 13:31:01
2
false
en
2018-03-05
2018-03-05 13:31:01
8
155c2dc1baec
2.617296
20
2
0
To perform well, an image classifier needs a lot of images to train on. Deep learning algorithms can fail to classify, let's say, cats only…
5
Data augmentation: boost your image dataset with few lines of Python To perform well, an image classifier needs a lot of images to train on. Deep learning algorithms can fail to classify, let's say, cats only because some cats are oriented differently in your test images. It can be hard to find an exhaustive dataset of cats of all kinds, in all possible positions (for example looking to the right, to the left, etc.). The images you are about to classify can also present some distortions like noise, blur, or a slight rotation. Data augmentation is an automatic way to boost the number of different images you will use to train your deep learning algorithms. After this quick guide you will get a thousand-image dataset from only a few images. This article will present the approach I use for this open source project I am working on: https://github.com/tomahim/py-image-dataset-generator Step by step — Data augmentation in Python Even if some great solutions like Keras already provide a way to perform data augmentation, we will build our own Python script to demonstrate how data augmentation works. Our script will pick some random images from an existing folder and apply transformations, like adding noise, rotating to the left or to the right, flipping the image horizontally, etc. Now some code! Step 1 — Image transformations There are a lot of good Python libraries for image transformation like OpenCV or Pillow. We will focus on scikit-image, which is the easiest library to use from my point of view. Let's define a bunch of transformation functions for our data augmentation script. Now we have three possible transformations for our images: random rotation, random noise, and horizontal flip. Note: we use scipy.ndarray to represent the image to transform. This data structure is convenient for computers, as it's a two-dimensional array of the image's pixels (RGB colors). In fact, image processing and deep learning often require working with scipy.ndarray. Step 2 — List all the files in a folder and read them We decided to generate one thousand images based on our images/cats folder. So we perform one thousand iterations (line 13), then choose a random file from the folder (line 15) and read it with skimage.io.imread, which reads images as a scipy.ndarray by default (line 17). Perfect, we have everything we need to transform images. Step 3 — Image transformations Again, some random magic here! We choose the number of transformations for a single image (line 9) and the kind of transformations to apply (line 15). Then we just call the function defined in our transformations dictionary (line 16). Step 4 — Save the new images That's it, we save our transformed scipy.ndarray as a .jpg file to disk with the skimage.io.imsave function (line 5). If you decide to generate a few thousand images and want to use them directly to train a deep network, you may want to keep them in memory to save disk space (if you have enough memory). That's easy, as a lot of deep learning frameworks use scipy.ndarray objects to feed their networks. Conclusion With this data augmentation script you can now generate 1000 new images. Of course you can add other transformations or adjust the probability that some transformations happen. For example, we may want rotations to occur more often than adding noise. Everything is possible! Here is the full version of the code we worked on. For more, ping me on Twitter or visit my Github!
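Since the record above describes the script but does not reproduce the code itself, the following is a minimal sketch of the three transformations and the generation loop using scikit-image; the folder paths, rotation range, and file-naming scheme are assumptions for illustration, not necessarily the project's exact settings:

# Minimal data augmentation sketch with scikit-image (illustrative settings).
import os
import random

from skimage import img_as_ubyte
import skimage.io
from skimage.transform import rotate
from skimage.util import random_noise

def random_rotation(image):
    # rotate by a random angle between -25 and 25 degrees (assumed range)
    return rotate(image, random.uniform(-25, 25))

def add_noise(image):
    # add random Gaussian noise to the image
    return random_noise(image)

def horizontal_flip(image):
    # reverse the pixel columns of the ndarray
    return image[:, ::-1]

transformations = {
    'rotate': random_rotation,
    'noise': add_noise,
    'horizontal_flip': horizontal_flip,
}

folder = 'images/cats'        # source folder from the article
output = 'images/augmented'   # destination folder (assumption)
os.makedirs(output, exist_ok=True)
files = [f for f in os.listdir(folder) if f.endswith('.jpg')]

for i in range(1000):
    # pick a random source image and read it as an ndarray
    image = skimage.io.imread(os.path.join(folder, random.choice(files)))
    # apply a random number of randomly chosen transformations
    for _ in range(random.randint(1, len(transformations))):
        image = transformations[random.choice(list(transformations))](image)
    skimage.io.imsave(os.path.join(output, f'augmented_{i}.jpg'),
                      img_as_ubyte(image))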
Data augmentation : boost your image dataset with few lines of Python
42
data-augmentation-boost-your-image-dataset-with-few-lines-of-python-155c2dc1baec
2018-06-13
2018-06-13 12:38:32
https://medium.com/s/story/data-augmentation-boost-your-image-dataset-with-few-lines-of-python-155c2dc1baec
false
592
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Thomas Himblot
Python developer, Big Data, Data Science & Deep learning enthusiast !
aaa7e237868c
thimblot
26
13
20,181,104
null
null
null
null
null
null
0
import tensorflow_hub as hub

m = hub.Module("path/to/a/module_dir", trainable=True, tags={"train"})  # load from a file path
features = m(images)
logits = tf.layers.dense(features, NUM_CLASSES)
prob = tf.nn.softmax(logits)

# (shell) set the module cache directory, then load a module by URL
export TFHUB_CACHE_DIR=/my_module_path
m = hub.Module("https://tfhub.dev/google/progan-128/1")

# text embedding column fed into a canned estimator
review = hub.text_embedding_column("review", "http://tfhub.dev/google/universal-sentence-encoder/1")
features = {"review": np.array(["an arugula masterpiece", "inedible shoe leather", ...])}
labels = np.array([[1], [0], ...])
input_fn = tf.estimator.inputs.numpy_input_fn(features, labels, shuffle=True)
estimator = tf.estimator.DNNClassifier(hidden_units, [review])
estimator.train(input_fn, max_steps=100)

# convert a SavedModel to the TensorFlow Lite format
from tensorflow.contrib.lite import convert_savedmodel
convert_savedmodel.convert(saved_model_dir="/path/to/model", output_tflite="model.tflite")

// C++: register a custom TF Lite op
TfLiteRegistration reg = {
  .invoke = [](TfLiteContext* context, TfLiteNode* node) {
    TfLiteTensor* a = &context->tensors[node->inputs->data[0]];
    a->data.f[0] = M_PI;
    return kTfLiteOk;
  }
};

// C++: load a .tflite model and run inference
std::unique_ptr<tflite::FlatBufferModel> model =
    tflite::FlatBufferModel::BuildFromFile("model.tflite");
tflite::ops::builtin::NeededOpsResolver minimal_resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder(model, minimal_resolver)(&interpreter);
// feed input
int input_index = interpreter->inputs()[0];
float* input = interpreter->typed_tensor<float>(input_index);
// ... fill in the input
// run inference
interpreter->Invoke();
// read output
// ...

# an input pipeline with tf.data
def input_fn(batch_size):
    files = tf.data.Dataset.list_files(file_pattern)
    dataset = tf.data.TFRecordDataset(files, num_parallel_reads=40)  # number of CPUs
    dataset = dataset.shuffle(buffer_size=10000)
    dataset = dataset.repeat(NUM_EPOCHS)
    dataset = dataset.map(parser_fn, num_parallel_calls=40)
    dataset = dataset.batch(batch_size)
    dataset = dataset.prefetch(buffer_size=1)
    return dataset

# fused versions of the ops above
dataset = dataset.apply(tf.contrib.data.shuffle_and_repeat(buffer_size=1000, count=NUM_EPOCHS))
dataset = dataset.apply(tf.contrib.data.map_and_batch(parser_fn, batch_size))

# multi-GPU training with a mirrored distribution strategy
distribution = tf.contrib.distribute.MirroredStrategy()
run_config = tf.estimator.RunConfig(train_distribute=distribution)

classifier = tf.estimator.Estimator(
    model_fn=model_function,
    model_dir=model_dir,
    config=run_config)
classifier.train(input_fn=input_function)
13
null
2018-05-28
2018-05-28 13:09:56
2018-05-28
2018-05-28 15:03:54
37
false
en
2018-05-28
2018-05-28 15:03:54
22
155de4da14a8
7.120755
2
0
0
At the latest Google I/O, 7 talks were presented and some new TensorFlow features and functions were released. I think this could be treated as…
1
Summary of TensorFlow at Google I/O 2018 At the latest Google I/O, 7 talks were presented and some new TensorFlow features and functions were released. I think this could be treated as supplementary to the TF Dev Summit 2018 last March. Here is a brief summary with some important features that I think DL developers should notice. (Session List Link) 1. TensorFlow for JavaScript (Video Link) (Homepage) Supports JS for model training and deployment. Advantages of in-browser ML: No driver / install Interactive Sensors Data stays on the client device Ability: Author models directly in the browser Import pre-trained models for inference Re-train imported models Pipeline: Save: Keras model or TF SavedModel model formats accepted. Convert: tfjs-converter Including graph optimization Optimize weights for browser caching 32+ tf/keras layers and 90+ tf ops supported Framework Performance Demos: Emoji Scavenger Hunt (iOS/Android) Human Pose Estimation 2. TensorFlow in production: TF Extended, TF Hub, and TF Serving 1). TensorFlow Hub (Homepage) A library to foster publication, discovery, and consumption of reusable parts of machine learning modules. Module Features each contains weights and graph composable, reusable (common signature), retrainable Module usage instantiating a module through a file path or URL, or by setting TFHUB_CACHE_DIR and then creating a module from a URL. uploaded model naming rules: tfhub.dev: repository URL google: module publisher progan-128: module name 1: module version Integrated with TensorFlow estimators: Available Modules Industry standard: Inception, ResNet and Inception-ResNet Efficient: MobileNet Cutting edge: NASNet and PNASNet (NASNet-Large cost 62,000+ GPU hours) 2). TensorFlow Serving (Homepage) A flexible, high-performance serving system for machine learning model deployment. Features Multiple models: served simultaneously; dynamic model loading/unloading Isolation: separate loading/serving threads for low latency during model version transitions. High throughput: dynamic request batching, performance-conscious design. Architecture Servables: objects that clients use to perform computation Do not manage their own life cycle; include TensorFlow SavedModelBundle and lookup tables for embedding or vocabulary lookups. Versions: one or more versions of a servable can be loaded concurrently (supporting gradual rollout and experiments) Streams: a sequence of versions of a servable sorted by increasing version number. Models: represented as one or more servables; a composite model is either multiple independent servables or a single composite servable. A large lookup table could be sharded across many TensorFlow Serving instances. 2. Loaders: manage a servable's life cycle; standardize APIs for loading and unloading a servable independent of the specific learning algorithm. 3. Sources: plugin modules that find and provide servables; each provides zero or more servable streams. For each stream, a Source supplies one Loader instance per version of the servable; Sources can discover servables from arbitrary storage systems (RPC etc.) 4. Aspired Versions: the set of servable versions that should be loaded and ready. When a Source gives a new list of aspired versions to the Manager, it supersedes the previous list for that servable stream; the Manager unloads any previously loaded versions that no longer appear in the list. 5. Managers: handle the full lifecycle of servables: loading, serving, and unloading; listen to Sources and track all versions; postpone loading if resources are not ready, and postpone unloading until a newer version is loaded. 
NEW distributed serving use case: REST API: seamlessly serve ML in web/mobile RESTful microservices. 3). TensorFlow Extended (Homepage) TF Extended (TFX) is a TensorFlow-based general-purpose machine learning platform. Features Flexible: continuous training and updating -> higher accuracy and faster convergence. Portable: with TF; with Apache Beam: batch and streaming data processing; with Kubernetes/Kubeflow: deployment of machine learning. Scalable: local <-> cloud Interactive: visualization Architecture Tools released TensorFlow Transform: consistent in-graph transformations in training and serving TensorFlow Model Analysis: scalable, sliced, and full-pass metrics. TensorFlow Serving Facets: visualization of datasets Pipeline use Facets to analyze data. 2. use tf.Transform for feature transformation 3. train with a TensorFlow Estimator 4. analyze the model with TensorFlow Model Analysis: sliced metrics. 5. serve with TF Serving 3. TensorFlow High-Level APIs 1). Colab (Tutorial) An easy way to learn and use TensorFlow. Workshops: some exercises 2). APIs tf.keras tf.data: easy input pipelines Eager execution: an imperative interface to TensorFlow, enabled with one command: tf.enable_eager_execution() 4. TensorFlow Lite for mobile developers (Homepage) Features Cross-platform Light: core interpreter size: 75 KB; with all ops: 400 KB Architecture Converter FlatBuffer based faster to mmap few dependencies pre-fused activations and biases weight truncation 2. Interpreter Core static memory plan static execution plan fast load time 3. Operation kernels specifically optimized kernels, optimized for NEON on ARM 4. Hardware acceleration delegation Direct GPU integration Android Neural Networks API HVX 5. Quantized training Fine-tune weights Estimate quantization parameters ML Kit: newly announced machine learning SDK exposed through both on-device and cloud-powered APIs. Ops and model support: ~50 common ops allows custom ops currently limited to inference-only ops supported models: MobileNet, InceptionV3, ResNet50, SqueezeNet, DenseNet, InceptionV4, SmartReply, and quantized versions of MobileNet and InceptionV3 Usage: Pipeline convert to the TF Lite format: use a frozen GraphDef or SavedModel and avoid unsupported operators; write custom operators for any missing functionality. visualize the model to check 2. write custom ops 3. C++ API load model register ops build interpreter execute Python API Java API Android app Gradle file iOS CocoaPods 5. Distributed TensorFlow training (Homepage) (DistributionStrategy) 1) Data parallelism Async parameter server Sync Allreduce architecture: the next round of computation waits until all workers have received the updated gradients. Ring Allreduce architecture: fast Use a parameter server with a number of less-powerful devices such as CPUs; use Sync Allreduce with fast devices that have strong communication links, like GPUs and TPUs. Input pipeline bottleneck Solution: the tf.data.Dataset API; parallelize file reading and data transformation; prefetch (dataset.prefetch(buffer_size=1)) to decouple the time data is produced and consumed (prepare data while the accelerator is still training). initial parallelization Fused transformation ops Multi-machine distributed training uses the Estimator train_and_evaluate API, which uses the async parameter server approach. 2). Model parallelism 3). Scaling to multiple GPUs in TensorFlow Mirrored Strategy: implements the Sync Allreduce architecture with model parameters mirrored across devices. 
no change to the model/training loop no change to the input function (requires the tf.data.Dataset API) seamless checkpoints and summaries TPU, Keras API, and multi-machine Mirrored Strategy support are in the works.
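As a small illustration of the eager-execution and tf.keras APIs listed above, here is a minimal sketch using the TF 1.x-era calls the talks refer to; the model shape is an arbitrary example, not from the talks:

# Eager execution and tf.keras, TF 1.x-era API (illustrative model shape).
import tensorflow as tf

tf.enable_eager_execution()

# Ops now run imperatively and return concrete values immediately
x = tf.constant([[1.0, 2.0]])
print(tf.matmul(x, tf.ones([2, 1])))  # -> tf.Tensor([[3.]], shape=(1, 1), ...)

# tf.keras model definition works the same way under eager execution
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")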
Summary of TensorFlow at Google I/O 2018
2
summary-of-tensorflow-at-google-i-o-2018-155de4da14a8
2018-05-28
2018-05-28 15:03:55
https://medium.com/s/story/summary-of-tensorflow-at-google-i-o-2018-155de4da14a8
false
1,198
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
zong fan
null
d8c7dddc2cc1
fanzongshaoxing
7
20
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-18
2018-01-18 05:11:14
2018-03-15
2018-03-15 00:07:45
12
false
en
2018-03-21
2018-03-21 16:39:27
7
155e1ddeaa95
5.587736
14
0
0
The Search for New Earths
5
Exoplanet Hunting with Machine Learning and Kepler Data -> Recall 100% The Search for New Earths In this post, our goal is to build a model that can predict the existence of an exoplanet (i.e. a planet that orbits a distant star system) given the light intensity readings from that star over time. The dataset we'll be using comes from NASA's Kepler telescope currently in space. I'll be taking you through the steps I followed to get from a low-performing model to a high-performing model. The Kepler Telescope The Kepler telescope was launched by NASA in 2009; its mission is to discover Earth-like planets orbiting other stars outside our solar system. Kaggle published a dataset containing clean observations/readings from Kepler in a challenge to find exoplanets (planets outside our solar system) orbiting other stars. Here's how it works. Kepler observes many thousands of stars and records the light intensity (flux) that the stars emit. When a planet orbits a star, it slightly changes/lowers that light intensity. Over time, you can see a regular dimming of the star's light (e.g. t=2 in the image below), and this is evidence that there might be a planet orbiting the star (a candidate system). Further light studies can confirm the existence of an exoplanet in the candidate system. Planet orbiting a star lowers light intensity (source: Kaggle) The Kepler Telescope Dataset The Kaggle / Kepler dataset is composed of a training set and a test set, with label 1 for confirmed non-exoplanet and 2 for confirmed exoplanet. Training Set: 5087 rows or observations. 3198 columns or features. Column 1 is the label vector. Columns 2–3198 are the flux values over time. 37 confirmed exoplanet-stars and 5050 non-exoplanet-stars. Dev Set: 570 rows or observations. 3198 columns or features. Column 1 is the label vector. Columns 2–3198 are the flux values over time. 5 confirmed exoplanet-stars and 565 non-exoplanet-stars. As an example, here is the light flux for an example that is confirmed non-exoplanet (left) and an example that is confirmed exoplanet (right): Light flux for confirmed Non-Exoplanet (index 150) on the left and confirmed Exoplanet (index 4) on the right. Goal -> Build a model that correctly predicts existence/non-existence of an Exoplanet Due to the highly imbalanced dataset we are working with, we'll be using Recall as our primary success metric and Precision as our secondary success metric (accuracy would be a bad metric, because predicting non-exoplanet across the board would get you very high accuracy). Confusion Matrix: Recall -> Out of all the actual positive examples, how many did we predict to be positive? Precision -> Out of the predicted positive examples, how many were actually positive? Feature Engineering Data Augmentation First off, we have too few confirmed exoplanet examples in our data. There are several techniques that help overcome a highly imbalanced dataset by synthesizing or creating new examples. The one we'll use here is an algorithm called SMOTE (Synthetic Minority Over-sampling Technique). Instead of creating copies of examples, the algorithm essentially creates new examples by slightly modifying existing ones. This way we can have a balance of positive vs. negative examples in our training dataset. Fourier Transform Anytime you are dealing with an intensity value over time, you can think of it as a signal, or a mix of different frequencies jumbled up together. 
One idea to improve our model would be to ask ourselves: is there any difference between the frequencies that compose confirmed exoplanet light intensity signals and the frequencies that compose non-exoplanet signals? Fortunately, we can use the Fourier Transform to decompose these signals into their original frequencies, giving our model richer, more discriminative features. Example of a signal (top yellow) and the decomposed original pure frequencies that make it up (source: 3Blue1Brown) Progression of Results Our primary goal will be to maximize Recall on the dev set, but we'll also maximize Precision as a secondary goal. For our model we'll be using a Support Vector Machine. In my testing, this model performed better than others I tested, including several neural network architectures. The graphs below are of examples index 150 (non-exoplanet) and index 4 (exoplanet). 1. Unprocessed Data — Recall Train 100%, Dev 60% Without any processing, we do well on the training set but our model doesn't generalize well to the dev set. We can diagnose our overfitting, or high variance, by looking at the big difference between train and dev set errors. Since our model is already very simple, we'll rely on feature engineering to make improvements. Light flux over time gathered by the Kepler Telescope. Non-Exoplanet (index 150) on the left, Exoplanet (index 4) on the right. Out of the 5 confirmed Exoplanets in the Dev Set, we correctly predicted 3 to be Exoplanets and incorrectly predicted 2 not to be Exoplanets. 2. SMOTE Data Augment — Recall Train 100%, Dev 60% As a first step we will use the SMOTE technique to balance our training examples with the same amount of negative and positive examples. As you can see from the train confusion matrix below, we've increased our positive examples to 5050, the same amount as negative examples. This will hopefully allow the model to better generalize to examples it hasn't seen before. Notice how we are keeping the dev dataset untouched. This is important: you always want to test on the kind of real examples you would expect once you release the model into the real world. Out of the 5 confirmed Exoplanets in the Dev Set, we correctly predicted 3 to be Exoplanets and incorrectly predicted 2 not to be Exoplanets. 3. Norm, Standardize, and Gauss Filter — Recall Train 100%, Dev 80% After normalizing, standardizing, and applying a Gaussian filter to our data, we can see a big improvement in recall and precision. Light flux levels after processing via normalizing, standardizing, and applying a Gaussian filter for smoothing. Non-Exoplanet on the left, Exoplanet on the right. Out of the 5 confirmed Exoplanets in the Dev Set, we correctly predicted 4 to be Exoplanets and incorrectly predicted 1 not to be an Exoplanet. 4. Fourier Transform — Recall Train 100%, Dev 100% — 42% Precision This is where it gets more interesting. By applying the Fourier Transform, we're essentially converting an intensity-over-time function to an intensity-by-frequency function. From looking at the chart, it seems that (at least for this particular example) there are some clear frequency spikes for the confirmed Exoplanet, giving our model richer and more discriminative features to train on. The Fourier Transform results in a new function, a function of frequency instead of time, giving us the frequencies that make up the original signal. Non-Exoplanet on the left, Exoplanet on the right. Out of the 5 confirmed Exoplanets in the Dev Set, we correctly predicted all 5 to be Exoplanets. 
However, we also had 7 false positives, meaning we predicted them to be Exoplanets but they weren't. 5. Without SMOTE — Recall Train 100%, Dev 100% — 55% Precision I also tried the model without performing the SMOTE technique. Interestingly, it looks like, in this case, we can improve the precision of the model by not using SMOTE. I would be eager to test with/without SMOTE over a bigger dataset before coming to a conclusion on whether or not the technique should be used for this model. Out of the 5 confirmed Exoplanets in the Dev Set, we correctly predicted all 5 to be Exoplanets. In this case, our Train set was predicted perfectly, as we had all True Positives and True Negatives in the Confusion Matrix. Final Thoughts It's amazing that we are able to gather light from distant stars, study this light that has been traveling for thousands of years, and draw conclusions about what potential worlds these stars might harbor. Achieving a Recall of 1.0 and Precision of 0.55 on the dev set was not easy and required a lot of iteration on data pre-processing and models. This was one of the most fun projects/datasets that I've played around with, and I learned a lot in the process. As a next step, I'd be excited to try this model on new unexamined Kepler data to see if it can find new Exoplanets. Finally, it'd also be very interesting if NASA could provide datasets which distinguish confirmed Exoplanets vs. Exoplanets in the Goldilocks Zone! Kaggle (please upvote in the top right corner!): Kaggle Kernel Full source code: https://github.com/gabrielgarza/exoplanet-deep-learning Any comments / suggestions are welcome below ;)
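For reference, here is a hedged sketch of the pipeline described above (load, normalize, smooth, FFT, SMOTE, SVM). The Kaggle file names, the filter width, the SVM kernel, and the spectrum truncation are assumptions for illustration, not necessarily the article's exact settings:

# Minimal sketch of the exoplanet pipeline; hyperparameters are illustrative.
import numpy as np
import pandas as pd
from scipy.ndimage import gaussian_filter1d
from sklearn.preprocessing import normalize, StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import recall_score, precision_score
from imblearn.over_sampling import SMOTE

# Kaggle dataset file names (assumption); LABEL is 1 = non-exoplanet, 2 = exoplanet
train, dev = pd.read_csv("exoTrain.csv"), pd.read_csv("exoTest.csv")
y_train, X_train = train["LABEL"].values, train.drop("LABEL", axis=1).values
y_dev, X_dev = dev["LABEL"].values, dev.drop("LABEL", axis=1).values

def preprocess(X):
    # normalize each light curve, smooth it, then take FFT magnitudes
    X = normalize(X)
    X = gaussian_filter1d(X, sigma=10, axis=1)
    X = np.abs(np.fft.fft(X, axis=1))
    # keep the first half of the spectrum (the second half is symmetric)
    return X[:, : X.shape[1] // 2]

X_tr, X_dv = preprocess(X_train), preprocess(X_dev)
scaler = StandardScaler().fit(X_tr)
X_tr, X_dv = scaler.transform(X_tr), scaler.transform(X_dv)

# balance the training set only; the dev set stays untouched
X_tr, y_tr = SMOTE().fit_resample(X_tr, y_train)

model = SVC(kernel="linear").fit(X_tr, y_tr)
pred = model.predict(X_dv)
print("recall:", recall_score(y_dev, pred, pos_label=2))
print("precision:", precision_score(y_dev, pred, pos_label=2))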
Exoplanet Hunting with Machine Learning and Kepler Data -> Recall 100%
213
exoplanet-hunting-with-machine-learning-and-kepler-data-recall-100-155e1ddeaa95
2018-04-13
2018-04-13 22:53:59
https://medium.com/s/story/exoplanet-hunting-with-machine-learning-and-kepler-data-recall-100-155e1ddeaa95
false
1,123
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Gabriel Garza
null
b7470a347544
gabogarza
33
16
20,181,104
null
null
null
null
null
null
0
null
0
fcacddcd4b87
2018-09-01
2018-09-01 08:24:43
2018-09-01
2018-09-01 08:33:29
1
false
en
2018-09-01
2018-09-01 08:33:29
3
155e702298be
1.211321
1
0
0
This Nordic country has put in place a preventive medicine plan, a ‘Big Brother’ health plan that combines medical records and genetic…
5
Finland, the European ‘Silicon Valley’ of health This Nordic country has put in place a preventive medicine plan, a ‘Big Brother’ health plan that combines medical records and genetic information, and which has the majority support of the population (Expansion). Thanks to advances in Artificial Intelligence, we will gradually be able to treat patients with more personalized formulas, in addition to working with preventive medicine. What does it take to achieve this? Data, and the more centralized and interlinked the better. The Government of Finland has set out to make this small country of 5.5 million people an international benchmark for this new medicine. Its ultimate goal is to create a platform called Isaacus that will integrate the medical and genetic information (using FinnGen) of all Finns. This information will be available to hospitals, research centers… This is a really interesting project, although from some perspectives it may give us a little bit of fear. To understand the Finnish context: it is a country that believes in the ethics and sense of responsibility of its government. There has never been any scandal around this topic, and Finland has long experience with data and health (the Finnish health system has been digitised since 2002), which has allowed it to open up regulation for the reuse of health data for non-profit purposes with the consent of the citizen. Thinking globally, they will create an international standard protocol for individual control of personal information. They're going to call it IHAN, and it works much like the banking IBAN. #365daysof #futurism #bigdata #health #AI #day187
Finland, the European ‘Silicon Valley’ of health
1
finland-the-european-silicon-valley-of-health-155e702298be
2018-09-01
2018-09-01 08:33:29
https://medium.com/s/story/finland-the-european-silicon-valley-of-health-155e702298be
false
268
High quality curated content and topics related to innovation and futurism along with a little reflection
null
null
null
Future Today
alayon.david@gmail.com
future-today
TECHNOLOGY,FUTURISM,INNOVATION,SCIENCE,TRANSHUMANISM
davidalayon
Healthcare
healthcare
Healthcare
59,511
David Alayón
Head of Innovation Projects @ Inditex · Founder @Innuba_es @Mindset_tech @GuudTV · Professor @IEBSchool @DICeducacion · Mentor/Investor @ConectorSpain AngelClub
91f2d81ac9db
davidalayon
161
72
20,181,104
null
null
null
null
null
null
0
null
0
f0db56adb08d
2018-05-21
2018-05-21 11:10:20
2018-05-21
2018-05-21 11:12:57
9
false
en
2018-05-21
2018-05-21 11:14:29
24
155f8f9b5231
2.54717
20
0
0
Polysemy Embeddings, Semi Adversarial Networks, ULMFiT, Color Naming, Sentiment Style Transfer,…
5
Polysemy Embeddings, Semi Adversarial Networks, ULMFiT, Color Naming, Sentiment Style Transfer,… Welcome to the 15th Issue of the NLP Newsletter! Here is this week's notable NLP news! On People… Judea Pearl, AI pioneer, gives his advice on how we should move AI forward — Link Meet the new Kaggle grandmaster — Link Color naming shaped by communicative need — Link A simple approach to sentiment and style transfer — Link Can a CNN learn when to reject and accept papers? — Link On Education and Research… Introducing ULMFiT, a universal language model for fine-tuning through transfer learning (useful for many NLP tasks) — Link From Word Embeddings to Polysemy Embeddings — Link This paper introduces à la carte embedding, a simple and general alternative to the usual word2vec-based approaches for building word embeddings — Link What can be learned from extrapolating to examples outside the training space — Link Modeling semantics with graph neural networks, which help to build a knowledge-based Q/A system — Link Deep Learning Winter course by Andrej Karpathy — Link On Code and Data… PyTorch implementation of a Semi Adversarial Network — Link Computing derivatives with PyTorch — Link Part-of-Speech tagging with LSTMs — Link On Industry… I doubt this is possible 👐; nevertheless, here is DeepMind's shot at understanding how the brain thinks — Link Should we really be frightened about futuristic AI — Link How Google researchers plan to advance the study of Semantic Textual Similarity — Link Quotes of the day… A potentially useful way to detect spelling errors using word embeddings — Link Interesting point of view by Denny Britz — Link Illustrations of the day… Learn more about BLEU and meaning representation in this upcoming new paper — Link Worthy Mentions… Demystifying Generative Adversarial Networks — Link Solving detection of fake news through AI and NLP — Link A universal language model to boost your NLP models — Link NLP Newsletter by Sebastian Ruder — Link If you spot any errors or inaccuracies in this newsletter please open an issue. Submit a pull request if you would like to add important NLP news here.
Polysemy Embeddings, Semi Adversarial Networks, ULMFiT, Color Naming, Sentiment Style Transfer,…
81
polysemy-embeddings-semi-adversarial-networks-ulmfit-color-naming-sentiment-style-transfer-155f8f9b5231
2018-06-15
2018-06-15 15:13:43
https://medium.com/s/story/polysemy-embeddings-semi-adversarial-networks-ulmfit-color-naming-sentiment-style-transfer-155f8f9b5231
false
357
Diverse Artificial Intelligence Research & Communication
null
null
null
dair.ai
ellfae@gmail.com
dair-ai
MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,RESEARCH,TECHNOLOGY,DATA SCIENCE
dair_ai
Machine Learning
machine-learning
Machine Learning
51,320
elvis
Researcher and Science Communicator in Machine Learning and NLP; I discuss more about Linguistics, Emotions, NLP, and AI here: (https://twitter.com/omarsar0)
41338000425f
ibelmopan
1,667
661
20,181,104
null
null
null
null
null
null
0
null
0
714dd3a52e74
2018-04-08
2018-04-08 23:57:08
2018-04-09
2018-04-09 01:07:50
0
false
en
2018-04-09
2018-04-09 01:07:50
1
15600b1defae
2.090566
0
0
0
Artificial Intelligence is a sub-field of Computer Science that deals with any task in which we try to make a computer program act…
1
Reading 11 — Intelligence Artificial Intelligence is a sub-field of Computer Science that deals with any task in which we try to make a computer program act intelligently. There are a few different schools of thought, namely "Strong AI" and "Weak AI", differentiated by whether or not they attempt to exactly model human cognition. I think it is similar to human intelligence in that AI uses evidence from previous experiences to make a decision about the current problem. However, unlike human intelligence, I believe these decisions are deterministic. No matter what, if the same program is fed the same training data and then the same testing example, it will come to the same conclusion. I don't think human intelligence works this way, as we have many different emotions that can sway our decisions even if the evidence is pointing us in a different direction. I believe projects like AlphaGo, Deep Blue, and especially IBM Watson are proof of the viability of artificial intelligence. I understand and agree with Roger Schank when he says that Watson is not strong AI and should not be marketed as such, as IBM has done. However, that doesn't mean that these systems cannot be useful to us. We use these systems in many applications like healthcare and search to improve our lives and make certain decisions easier, or to provide insight into a problem that would take a human much longer to discover. Because of that I definitely think that these "toy projects", as some people call them, are great first steps toward uncovering the viability of AI, even if it isn't exactly cognitive computing. I don't think the Turing test is a valid measure of intelligence. It is a good indicator of the validity of the program, but I agree with the Chinese Room thought experiment that the computer is not actually thinking; it is simply following a procedure based on inputs. This is similar to what I was saying earlier, that AI is deterministic and, as of now, will always give the same response given certain inputs. I definitely think that the growing concerns about artificial intelligence and its impact on humanity are warranted. We are just beginning to see what this new technology can do. Also, we are thinking much more about what else we can do as opposed to what we should do. I tend to be optimistic about the future of artificial intelligence in our lives. I think it will open up doors for humans to achieve incredible things. However, I also agree with Max Tegmark and Nick Bostrom that we should be very aware of what we are creating and think long and hard about any reasons that we shouldn't employ certain artificial intelligence programs. I don't think that a computer could ever be considered a mind. In my opinion, we will be able to simulate the brain very closely, but I don't think we will ever be able to create a computer program that has emotions and consciousness. Computationalism is interesting, but I don't think that even if we could create a state machine that contained every possible emotional state present in humans it would be the same as the human brain. It might imitate it, but it's still just a machine responding to inputs and producing an output.
Reading 11 — Intelligence
0
reading-11-intelligence-15600b1defae
2018-04-09
2018-04-09 01:07:51
https://medium.com/s/story/reading-11-intelligence-15600b1defae
false
554
My thoughts on various ethical issues in the Computer Science industy
null
null
null
Ethics Blog
null
ethics-blog
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Bradley Sherman
null
f6ebb979803b
bsherma1
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-21
2018-06-21 18:05:35
2018-06-21
2018-06-21 18:46:20
3
false
en
2018-06-21
2018-06-21 19:06:04
6
15612bd26605
1.953774
1
0
0
In the telecom industry, mobile telephony was invented in the early 90s. The first GSM communication was the invention of Finnish Sonera, which now happens to…
2
What is the next step in human evolution for communication? In the telecom industry, mobile telephony was invented in the early 90s. The first GSM communication was the invention of the Finnish operator Sonera, which now happens to be part of TeliaSonera after the merger of Telia and Sonera. I am happy to be one of the few individuals who saw the revolution it brought to people's lives. Phone calls became a tap-and-reach kind of experience across hundreds of miles, and then came smartphones and instant messaging. Millennials' instant gratification, coupled with intelligently designed user experiences, made Snap and Bird huge successes. Man-machine interfaces like AI chatbots and voice-enabled assistants are the next wave, making for ubiquitous, always-on end-user experiences. pixabay CC0 license Mobile operators today represent roughly a US$1 trillion industry, about 2% of the world economy, and 50% of the revenues come from data. Needless to say, the out-of-country experience is far inferior to the home-country experience. You get struck by big bills or inferior network speed and quality due to the invention of the so-called roaming "tariffing". One of the investments I am busy managing is trying to create a complementary model for a better and more ubiquitous experience when travelling across country borders: trawelldata. Google and SpaceX of @elonmusk have new ways to create a better experience and enable ubiquitous computing for humanity: Loon or SpaceX satellites. Good challenges, although they will not reach billions of people in the coming few years. pixabay CC0 license Let us assume these attempts fill the gap and enable abundant data. My next argument and questioning run along new lines of thought. Let us make these assumptions as well: Voice interface accuracy is very high, almost 95%, at great companies around the world. AI is improving very rapidly, and more meaningful results are going to be produced. The first actual connection to a human brain was made about a year ago. Internet-brain interfaces are popping up everywhere. pixabay CC0 license We have data, a rapid, almost always-on connection to the internet (an abundance of knowledge), and computing power at the same time, plus access to billions of other brains to communicate with at the speed of thought. So is this the new evolution of humanity… what are the outputs of such a mesh network…
What is the next step in human evolution for communication?
50
what-is-the-next-step-of-in-human-evolution-for-communication-15612bd26605
2018-06-26
2018-06-26 18:08:57
https://medium.com/s/story/what-is-the-next-step-of-in-human-evolution-for-communication-15612bd26605
false
372
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Hakan Dulge
Inventor, Kite Surfer, Thinker, Technolog and Telecom Executive, Serial Entrepreneur
365778bb3b43
hakandulge
23
46
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-16
2018-08-16 04:13:27
2018-08-21
2018-08-21 01:58:48
9
false
en
2018-08-21
2018-08-21 01:58:48
12
156379f1f43a
6.079245
4
1
0
So much for a “bull run” huh?
5
Riding the Bear’s Coattails So much for a “bull run” huh? Yeah, I feel ya What started off as cautious optimism following Bitcoin’s summer rally to nearly $10,000 has slowly died down, leaving investors confused and yet again looking for another scapegoat. Ethereum, Bitcoin Cash, Litecoin, and the rest of the altcoin crowd have followed suit with stagnating prices. Red never was CoinMarketCap’s most flattering color. Poor pupper — hang in there bud There’s a saying in crypto that nearly everything rides on Bitcoin’s coattails. Does that hold true in a bear market? In this article we’ll dive deeper into the relationship between altcoin market caps and Bitcoin’s own market cap. The goal is to understand how altcoins move in conjunction with Bitcoin’s market fluctuations. By determining which coins have strong and weak correlations with Bitcoin, investors are better equipped to structure their crypto portfolios in ways that minimize risk across the entire asset class. Key Questions Let’s clarify our problem before getting our hands dirty: What is the relationship between altcoin and Bitcoin market cap? Which altcoins are strongly correlated with Bitcoin? Which altcoins are not correlated with Bitcoin? Methodology Let’s quantify this idea of “relationship” using a measure called the Pearson correlation coefficient. In statistics we use correlation coefficients to measure how strong the relationship is between two variables of interest, X and Y. Note that correlation is not explicitly trying to solve for causality — that Y is caused by X or vice versa. We can generate this coefficient by mapping out value pairs of Bitcoin market cap (X) and altcoin market cap (Y) across time for each altcoin. The Data We’ll be using historical price data from CoinMarketCap.com (CMC). The reason we’re using CMC is that there is often a large spread in price across different exchanges. One extreme example lies in South Korean exchanges, which often trade at a significant premium or discount depending on market conditions. For example, when Bitcoin was at its all-time global high of $20,000 in December of 2017, the price was nearly $25,000 in South Korea (some refer to this phenomenon as the “kimchi premium”). CMC circumvents this issue by taking the volume-weighted average of prices across many exchanges. Criteria To keep our analysis focused, we’ll use the following criteria: Historical market data from February 1st, 2018 to August 15th, 2018. We’ll define this period as the most recent “bear market” for crypto. Coins that have at least 120 days’ worth of data on CMC. Top 100 coins based on market capitalization. These cryptos have higher trading volume and therefore more reliable price data. Let’s Start We would normally fetch data using an API, but unfortunately you cannot access CMC historical data without paying a hefty subscription fee. So f*** that — let’s use web scraping libraries built by some very talented folks to get what we need. We’ll be using coinmarketcappy to generate historical data for the entire crypto market and CoinMarketCap-Historical-Prices to get historical data for individual coins. 1. What is the relationship between altcoin and Bitcoin market cap? Let’s start off with the simplest question on our list. In order to calculate the correlation between the entire altcoin market and Bitcoin, we need to subtract Bitcoin’s market cap from the total crypto market cap. This prevents double counting Bitcoin’s market cap in our analysis.
Altcoin Market Cap = Total Crypto Market Cap − Bitcoin Market Cap Doing so leads to this market cap evolution over time: Altcoin market cap (blue) compared to Bitcoin market cap (yellow) from February 1st, 2018 to August 15th, 2018. The altcoin market shown is inclusive of all altcoins listed on CMC, not just the top 100. At first glance it certainly seems like there’s a strong positive correlation! We’ll narrow our universe down to the top 100 coins a little later. Below is the correlation matrix between Bitcoin and the entire altcoin market. We generate a positive correlation coefficient of 0.80 between Bitcoin and the altcoin market. We can interpret this result based on the following criteria: a coefficient of +1 indicates that the two measured variables always move in the same direction; a coefficient of -1 indicates that the two measured variables always move in opposite directions; a coefficient of 0 indicates no observable linear relationship between the two measured variables. A coefficient of 0.80 is considered very high and shows a strong positive correlation between Bitcoin and the altcoin market. 2. Which altcoins are strongly correlated with Bitcoin? We’ll apply the same process as before, except now we’ll calculate the correlation coefficient for each individual coin vs. Bitcoin. Let’s be more stringent with our inclusion criteria: we’ll now only include the top 100 coins by market cap that have at least 120 days’ worth of data. Doing so allows us to focus on coins with reliable trading volume and price data. Running the correlation calculation on our new subset of altcoins generates the following density histogram. The area under the density function line represents the chance of getting a value within a range of correlation values. Based on the concentration of bars between 0.5 and 1.0 on the X axis, we can immediately tell that most of the top 100 coins are highly correlated with Bitcoin: 75 of the top 100 coins have a correlation of 0.55 or higher, and 50 of the top 100 coins have a correlation of 0.76 or higher. These results indicate that when Bitcoin’s market cap goes up or down, most coins are highly likely to follow suit! By sorting our correlation list we can generate the top 20 coins that are most correlated with Bitcoin: Top 20 coins with the strongest correlation to Bitcoin No surprises here with Monero, Litecoin, and Dogecoin showing up in the ranks. Some of Bitcoin’s better-known forks don’t show up in the top 20 but are still highly correlated: for example, Bitcoin Cash sits at 0.80 and Bitcoin Gold at 0.84. 3. Which altcoins are not correlated with Bitcoin? We can simply reverse-sort our correlation list to generate the top 20 coins that are least correlated with Bitcoin: As you would expect from the previous density histogram, only a few coins are negatively correlated or have near-zero correlation with Bitcoin. It’s not surprising to see Tether, TrueUSD and Dai at the top of the list given that these are classed as stablecoins. Concluding Thoughts Even if you’re a relatively new investor in the crypto space, one mantra you’ll hear echoed across Medium, Reddit, Twitter, and other social media is the importance of diversifying risk in your crypto portfolio. In some sense this is oxymoronic given how incredibly volatile the entire space can be, but it’s a fundamental skill all investors need to master.
By including assets that have negative or near-zero correlation with Bitcoin, you effectively reduce the overall variance and risk across your crypto holdings. Although most altcoins really do seem to “ride on Bitcoin’s coattails” in the current bear market, I expect more coins will demonstrate real-world applications and eventually achieve independence from Bitcoin’s market cap. Whether you’re a hodler, a data scientist, or a crypto newbie, I hope this analysis has been helpful. Acknowledgments A big thank you to Anthony Xie for providing the inspiration and original code for this analysis. He’s the founder of Hodlbot and I highly suggest you check out his platform if you’re interested in crypto investing on autopilot. I’ve tweaked a few things from his previous work given the new updates to the CoinMarketCap API, which required a few minor workarounds. Reference code can be found below for you data geeks. First GIF credited to Jimmy Here (https://www.twitch.tv/jimmyhere) — a hilarious individual. About the Author I’m a blockchain enthusiast and data scientist based in New York City. I’ve been exploring YouTube content creation to help others understand the Wild West that we call crypto. I plan on releasing a video to accompany this analysis in the next few days. Twitter and Instagram for those who enjoy self-deprecating crypto memes. Any day now Code Correlations between top 100 coins and Bitcoin by market cap Support Bitcoin: 3PDm18mwEaZsHetxjsX76QjKCqBd1HKCce Ethereum: 0x28b595224831DbcCE2B8Dd282108D07c58247927
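For readers who want to follow along without the original notebook, here is a minimal sketch of the calculations described above. It is an illustration under stated assumptions, not the author's reference code: the CSV file names, the market_cap column, and the per-coin loading step are hypothetical stand-ins for whatever the scrapers produce.

```python
# Minimal sketch of the analysis above (hypothetical file/column names).
import pandas as pd

# Daily market caps scraped earlier, saved as CSVs with columns: date, market_cap.
total = pd.read_csv("total_market_cap.csv", parse_dates=["date"], index_col="date")
btc = pd.read_csv("btc_market_cap.csv", parse_dates=["date"], index_col="date")

# Question 1: altcoin market = total market minus Bitcoin (avoids double counting).
df = pd.DataFrame({
    "btc": btc["market_cap"],
    "alt": total["market_cap"] - btc["market_cap"],
}).loc["2018-02-01":"2018-08-15"]            # the bear-market window defined above
print(df.corr(method="pearson"))              # reported as ~0.80 in the article

# Questions 2 and 3: Pearson coefficient of each top-100 coin vs. Bitcoin,
# keeping only coins with at least 120 days of data.
coins: dict[str, pd.Series] = {}              # e.g. {"monero": daily_cap_series, ...}
corr = {
    name: series.corr(df["btc"])              # pandas aligns the two series by date
    for name, series in coins.items()
    if series.dropna().size >= 120
}
ranked = pd.Series(corr).sort_values(ascending=False)
print(ranked.head(20))                        # most correlated with Bitcoin
print(ranked.tail(20))                        # least correlated (stablecoins here)
```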
Riding the Bear’s Coattails
200
riding-the-bears-coattails-156379f1f43a
2018-08-21
2018-08-21 01:58:48
https://medium.com/s/story/riding-the-bears-coattails-156379f1f43a
false
1,293
null
null
null
null
null
null
null
null
null
Bitcoin
bitcoin
Bitcoin
141,486
Mycelias
Let me take you on a journey through the Wild West that is crypto. I am a YouTube creator, blockchain enthusiast, and data scientist based in NYC. @mycelias
ea1779ce4d6e
shroomcoiner
6
16
20,181,104
null
null
null
null
null
null
0
null
0
47d409e68707
2017-12-20
2017-12-20 20:06:16
2017-12-20
2017-12-20 20:21:50
1
false
en
2017-12-22
2017-12-22 19:31:23
2
1563b3288144
3.316981
7
0
0
By Rebecca Paskerian
5
3 Digital Marketing Trends You Should Pay Attention to in 2018 By Rebecca Paskerian While everyone is looking back on 2017, we’re looking forward to what is coming in 2018. Facebook IQ recently released their first-ever Annual Topics and Trends Report “to help marketers plan for the year ahead.” This report dives into the conversations on the rise in 2017 that are on the cusp of going mainstream in 2018. These findings are rooted in real data and insights stemming from the 2 billion users on Facebook. While this report is chock-full of trends, we narrowed it down to three that we think will make the most impact on digital marketing in 2018: Everyday AR, Friendly Bots, and Customizable Marketing. 1. Everyday AR The augmented reality (AR) conversation has experienced 9.6x growth since January 2016, expanding to multiple new markets. AR offers a “real view of a physical environment that has been enhanced by computer-generated elements, such as graphics, videos, sounds or GPS data.” And although many users are still confused about AR and what it really is, most people don’t realize they are using it every day. Social media platforms, such as Snapchat and Instagram, allow you to add effects like camera filters or cartoon characters to your content. Beyond adding a Bitmoji to your photos or videos, AR can be a helpful tool for other industries. For example, realtors now allow users to virtually stage empty homes in online listings. In the future, it is expected that our phones will be able to instantly translate menus in other languages or measure furniture by simply taking a photo. AR is going to allow marketers to connect with consumers on a whole new level, heightening engagement and recognition. Consumers can now be completely immersed in a brand’s product. This will start to blur the lines of where advertising ends and the real world begins. Instead of trying on a dress in a retail store, you can virtually try it on through AR from the comfort of your home. How easy is that?! 2. Friendly Chatbots People have become more interested in and comfortable with artificial intelligence (AI) over the past few years. Conversation surrounding AI has grown 4.5x since January 2016. In the US, 1 in 5 Internet households already has at least one IoT (Internet of Things) device. Smart speakers are the most common device, but connected thermostats and in-home digital assistants are on the rise as well. Even though some people are wary of AI’s quickly evolving popularity, the truth is that we’ve been using AI for a while. Autocorrect is something people use every day without even thinking about it. Businesses are also using AI as part of their customer service. Chatbots are the new customer service rep, answering basic questions and routine requests from customers. With chatbots, brands will be more accessible and easier to communicate with. This will increase consumer satisfaction and trust, two very important aspects of digital marketing. Chatbots can be used for more than just customer service. Brands like Nike and Marriott are using chatbots that let consumers customize their products and experiences. You can send a photo of your outfit to the Nike chatbot and it will respond with a pair of sneakers that match. Marriott Rewards’ chatbot helps plan your vacation after you provide information like where you want to go, how much you want to spend and how many people you’re traveling with.
We strongly believe chatbots are going to revolutionize the way brands interact with customers next year. 3. Personalized Marketing As technology continues to evolve, we are expecting more from our devices. We want intuitive user experiences and immediate results, almost as if our devices can read our minds. New technologies and developer tools are making this possible through proximity marketing, which uses “short-range technologies like Bluetooth to send people messages on their phones based on their locations.” Since January 2016, Bluetooth conversation has grown 4.3x and conversation surrounding proximity marketing has grown 30.6x. Now that phones like the iPhone 7 no longer have headphone jacks, users will be using wireless headsets connected through Bluetooth. This means that users will keep their Bluetooth on by default, benefiting brands adopting proximity marketing. Hitting consumers with the right ad content, at the right time, in the right place, is a difficult task to master. Proximity marketing helps alleviate that frustration by using Bluetooth beacons, geofences and WiFi. It will allow brands to target consumers right when they are ready to purchase, increasing sales and brand relevance. These 3 digital marketing trends, along with many others that may not even exist yet, will drive huge advances in 2018. The digital marketing ecosystem is constantly evolving, and it’s imperative that brands stay up to date with the latest technology and trends. Staying current with this evolution is the key to success in 2018. Check us out at BrandContent.com
3 Digital Marketing Trends You Should Pay Attention to in 2018
219
3-digital-marketing-trends-you-should-pay-attention-to-in-2018-1563b3288144
2018-05-30
2018-05-30 19:43:57
https://medium.com/s/story/3-digital-marketing-trends-you-should-pay-attention-to-in-2018-1563b3288144
false
826
The publication aims to cover practical aspects of AI technology along with interviews with notable people in the AI field.
buzzrobot.com
10209158601345323
null
buZZrobot
sophia@buzzrobot.com
buzzrobot
ARTIFICIAL INTELLIGENCE,TECHNOLOGY,NEWS,STARTUP,INNOVATION
sopharicks
Messaging
messaging
Messaging
8,912
Brand Content
Who are we? A Boston based advertising agency. Why should you follow us? With several talented and hilarious writers, we’ve got something for everyone!
a4440b4967e9
brandcontentblog
27
18
20,181,104
null
null
null
null
null
null
0
null
0
f0d3356b0ce0
2018-02-21
2018-02-21 00:31:17
2018-03-12
2018-03-12 16:52:56
1
false
en
2018-03-12
2018-03-12 17:18:28
6
1564a310e989
3.10566
3
0
0
All of us use some form of text messaging in everyday life. From Facebook messenger to SMS text messaging. Over 23 billion SMS messages are…
5
Your users are humans, understand them like humans. All of us use some form of text messaging in everyday life, from Facebook Messenger to SMS. Over 23 billion SMS messages are sent per day; Facebook and WhatsApp process over 60 billion instant messages per day. People express every human experience in these messages: the good, the bad and the ugly. How do you make sure your customers and employees are safe and happy? How do you ensure that people can build meaningful relationships? How do you retain the Good Actors who are creating positive change within your community? How do you make sure that the Bad Actors within your community don’t pollute your environment like a virus, infecting your community with hate, racism, sexism and death threats? Or in business speak, how do you maximise lifetime value? The maths (a short simulation below makes the compounding concrete): You have an online game and 1% of your players are Bad Actors, who cause 5% of your players to leave. Your ‘funnel’ of new players needs to be greater than 5% of your customer base. Of that new 5%, another 1% will also be Bad Actors. Now your customer base contains 1.05% Bad Actors, and these players will repel 5.25%. Now you have to get 5.25% more players. If you do nothing, eventually your entire community will be made up of bad actors screaming at each other, which is familiar to anyone who uses Twitter and Facebook. To address this problem most companies have armies of moderators, but it is not practical to review every message by hand. Even if you have 1,000 moderators, moderating 10,000,000 messages a day (easily the output of 100,000 users) means 10,000 messages per moderator per day. From the example above, your moderators are now looking for the 1%, or 1,000 users, that might be causing your problems. The only way to successfully process this much information is to use automation. Now that you have decided to use machines to process your messages, you need to work out what to detect and how to detect it. The usual response I see from most companies is that we must stop profanity. However, if we look at why and how people use profanity, it reveals a couple of problems. Research shows swearing is a sign of honesty and emotion. We swear when we are telling the truth, and when we are expressing something we feel passionate about. We also tend to use profanity with friends, as it can help create “a sense of belonging, mutual trust, group affiliation … and cohesion”. This shows up on a sociological level: 83% of women and 90% of men swear on a regular basis. Banning people for profanity would therefore mean banning 83–90% of our community. It would seem that we have made an error in our assumption if we have to ban the majority of people. We have assumed profane language equals a bad person and fallen into the fundamental attribution error. The fundamental attribution error is when we assume that another person’s behaviour is a reflection of their personality. In reality, behaviour is usually situational rather than dispositional. A classic example is when someone cuts you up in traffic. You assume that they did that because a) they are a jerk, b) they are selfish or c) they don’t know how to drive properly. In reality, there are a number of possible reasons for the behaviour: they could be late for a flight, or a loved one could be ill in hospital. Conversely, when we behave badly we externalise the behaviour with situational justifications: “He asked for it”, or “I need to get to my job interview”.
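Here is the toy simulation promised above. The rates are the article's illustrative assumptions (1% of newcomers are Bad Actors, each one drives away five users, and the funnel exactly replaces the losses); it is a sketch of the compounding, not a model of any real community.

```python
# Toy churn simulation: 1% of newcomers are bad actors, each repels 5 users.
def simulate(periods=10, users=100_000, bad_ratio=0.01, repel_factor=5):
    bad = users * bad_ratio                    # start: 1% of the base is bad
    for t in range(1, periods + 1):
        lost = bad * repel_factor              # good users driven away
        joined = lost                          # funnel refills the base exactly
        bad += joined * bad_ratio              # ...but 1% of newcomers are bad too
        print(f"period {t}: bad actors = {bad / users:.3%} of the community")

simulate()  # period 1 prints 1.050%, matching the arithmetic above
```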
Consensual swearing, looked at from a sociological and psychological perspective, is a positive behaviour in your community. Censoring and banning swearing will disrupt normal human emotional expression and community bonding, when what you want is genuine connection, dialogue and engagement. To detect offensive behaviour you have to understand the context in every conversation. You need to understand consent and content. You also need to understand behavioural psychology, sociology, and linguistics. The only way to do this is with sophisticated machine learning and artificial intelligence built by people with empathy and compassion. Your users are humans, understand them like humans. Spirit AI builds tools to make the future of digital interactions better: both with virtual humans, and real humans. We make Character Engine, for authoring dynamic improvisational AI characters, and Ally, a tool for detecting and intervening in the social landscape of online communities — to curtail online harassment, or to promote positive behaviour.
Your users are humans, understand them like humans.
25
your-users-are-humans-understand-them-like-humans-1564a310e989
2018-06-12
2018-06-12 21:59:48
https://medium.com/s/story/your-users-are-humans-understand-them-like-humans-1564a310e989
false
770
AI for humans
null
spiritai
null
Spirit AI
hello@spiritai.com
spirit-ai
ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,ONLINE HARASSMENT,PROCEDURAL GENERATION,ONLINE SAFETY
theSpiritAI
Social Media
social-media
Social Media
143,805
Christopher Hooks
Product Manager for Ally @ http://SpiritAI.com • Cross Platform Polyglot Software Engineer & Architect
5f42c03a98dd
kyris
4
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-27
2018-08-27 06:22:09
2018-08-27
2018-08-27 06:22:23
0
false
en
2018-08-27
2018-08-27 06:22:23
1
15658c54be66
1.615094
0
0
0
Read Artificial Intelligence: What Everyone Needs to Know By Jerry Kaplan (ebook online) Link…
1
Download pdf Artificial Intelligence: What Everyone Needs to Know By Jerry Kaplan (ebook online) #EPUB Read Artificial Intelligence: What Everyone Needs to Know By Jerry Kaplan (ebook online) Link https://kindleuploadsale.icu/?q=Artificial+Intelligence%3A+What+Everyone+Needs+to+Know
Download pdf Artificial Intelligence: What Everyone Needs to Know By Jerry Kaplan (ebook online)…
0
download-pdf-artificial-intelligence-what-everyone-needs-to-know-by-jerry-kaplan-ebook-online-15658c54be66
2018-08-27
2018-08-27 06:22:24
https://medium.com/s/story/download-pdf-artificial-intelligence-what-everyone-needs-to-know-by-jerry-kaplan-ebook-online-15658c54be66
false
428
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
styx
null
482c1ded603f
styx_6842
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-01
2018-06-01 18:36:11
2018-06-01
2018-06-01 21:16:14
11
false
en
2018-06-01
2018-06-01 21:17:12
21
15661ac0381
3.926415
38
0
0
Every week we send out the newest tools added to StackShare in our Newsletter. This month, the machine learning train didn’t slow down…
5
The Hottest New Developer Tools from May 2018 Every week we send out the newest tools added to StackShare in our Newsletter. This month, the machine learning train didn’t slow down, with a visual ML model builder, ML for mobile, and an ML-backed BI solution. These are the tools added in May 2018 that were the most favorited, commented on, and/or added to Stacks. 1. ReLaXed Generating PDF documents can be tough. Years ago, the only way to do it was with massive, brittle files where you manually specified the exact coordinates of the information you wanted displayed. Thankfully, today we have tools like ReLaXed. You define the structure and content with HTML, CSS, JS, Markdown, and even LaTeX. Then ReLaXed handles the heavy lifting. View Tool 2. AskNed If you’re sick of writing (or correcting) SQL queries for your marketing team, or you just hate SQL, AskNed is worth checking out. This tool allows you to get powerful data analytics by asking questions in plain English. It connects directly to your data sources and provides fast, interactive visualizations. View Tool 3. Infection Monkey Infection Monkey is an open-source, automated penetration testing tool that operates the way a real attacker would. You “infect” a machine on your network with the tool and a scenario like stolen credentials, then sit back as Infection Monkey scans the network for vulnerabilities and generates a full report when it’s done. View Tool 4. Lobe Built on top of TensorFlow and Keras, Lobe is a visual tool for building, training, and shipping custom deep learning models. Simply drag a folder of training examples into the app, and Lobe automatically builds the model. You can then refine the training and ship it to TensorFlow or Core ML. It’s now easier than ever to create your very own Not Hotdog app. View Tool 5. LayerJS LayerJS is a new way to create interactive UI animations with pure HTML and CSS. It uses the concepts of Frames (HTML fragments) and Stages (viewports) to allow the creation of layered animations that you define with attributes in your HTML. It also plugs directly into your current framework. View Tool 6. Cilium Cilium is a new open-source tool for securing your containers. It uses BPF to let you define network-layer and application-layer security policies for your microservices. Cilium supports frameworks like Kubernetes and Docker and communication protocols like HTTP, gRPC, and Kafka. View Tool 7. Dimer Most engineers don’t like writing documentation. This results in a lot of good tools with bad docs. Fortunately, Dimer provides an easy way to write beautiful documentation with extended Markdown. It offers a “slick” writing experience with some sensible Markdown extensions for docs, like tips and YouTube links, to give you no excuse not to write great documentation for your next project. View Tool 8. Skor Skor is a delicious chocolate-toffee bar manufactured by Hershey’s. It’s also an open-source library by the makers of Hasura for listening to PostgreSQL events and forwarding them to a webhook. Whenever INSERT, UPDATE, or DELETE is called on a table, Skor sends the row changes as JSON to your webhook. View Tool 9. ML Kit Are you a mobile developer? Do you want to add machine learning to your app? Google has a solution for that. ML Kit provides out-of-the-box solutions for common ML features like face detection, text recognition, and image labeling. If the base functionality isn’t enough, you can also import TensorFlow Lite models, and ML Kit will handle hosting and serving them to your app.
View Tool 10. Mapfit Mapfit promises to be “better or equal to Google Maps in accuracy in every country”. That’s a bold statement, but this is a map platform that provides hyper-specific address data like secondary doorways, parking garage entrances, and freight docks. They also claim their load speed is “unrivaled”, so your app’s performance won’t suffer. Look out, Google. View Tool Originally posted on the StackShare blog. For the latest new tools, check out New Tools or sign up for our newsletter to have them delivered straight to your inbox every week… for free!
The Hottest New Developer Tools from May 2018
167
wthe-hottest-new-developer-tools-from-may-2018-15661ac0381
2018-06-19
2018-06-19 11:36:34
https://medium.com/s/story/wthe-hottest-new-developer-tools-from-may-2018-15661ac0381
false
696
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
StackShare
Discover and compare tech stacks. #500strong
23fab098ca0d
stackshareio
3,774
384
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-26
2017-10-26 15:28:46
2017-10-26
2017-10-26 15:28:47
1
false
en
2017-10-26
2017-10-26 15:28:47
1
1567fcf3e92a
1.984906
1
0
0
null
5
Data Sharing In Healthcare Must Be Encouraged In 2015, Google-owned DeepMind announced a new partnership with the Royal Free NHS Trust (RFT) that many predicted would herald an exciting new era of healthcare. They would apply machine learning algorithms to NHS data to create the healthcare app ‘Streams’ — an alert, diagnosis, and detection system for acute kidney injury. A spokesman for RFT explained: ‘The RFT approached DeepMind with the aim of developing an app that improves the detection of AKI (Acute Kidney Injury) by immediately reviewing blood test results for signs of deterioration and sending an alert and the results to the most appropriate clinician via a dedicated handheld device.’ Since then, Streams has been rolled out across a number of NHS trusts. DeepMind, meanwhile, has continued to apply its machine learning technology in other clinical trials, including early detection of diabetic retinopathy and the treatment of head and neck cancers. However, it has not all been plain sailing. An investigation by the Information Commissioner’s Office (ICO) into the data-sharing agreement, without which the work would not have been possible, found that the Royal Free had failed to comply with the Data Protection Act when it handed over details of 1.6 million patients to DeepMind. Elizabeth Denham, the Information Commissioner, said of the findings: ‘Our investigation found a number of shortcomings in the way patient records were shared for this trial… Patients would not have reasonably expected their information to have been used in this way, and the Trust could and should have been far more transparent with patients as to what was happening. We’ve asked the Trust to commit to making changes that will address those shortcomings, and their co-operation is welcome.’ While the ICO should be praised for its light touch and considered approach, the issue highlights what is likely to be an ongoing problem for healthcare providers looking to use the wealth of data they hold about their patients to its fullest potential. This was also evidenced by last year’s decision by UK government ministers to scrap the care.data plan to link GP records, following a public outcry about whether patients had been given the chance to opt out. Indeed, an independent report into the growth of artificial intelligence commissioned by the UK government, ‘Growing the Artificial Intelligence Industry in the UK’, has recommended the secure sharing of anonymized data from patients’ health records with private firms if the technology is going to be successfully applied in the sector. There is an understandable sensitivity around the collection and sharing of medical data, yet for organizations to truly benefit from data analytics, they need to take an open approach. This means doing everything possible to encourage patients to share their data, as well as sharing data with different hospitals and technology companies. Posted on 7wData.be.
Data Sharing In Healthcare Must Be Encouraged
1
data-sharing-in-healthcare-must-be-encouraged-1567fcf3e92a
2018-05-25
2018-05-25 00:48:14
https://medium.com/s/story/data-sharing-in-healthcare-must-be-encouraged-1567fcf3e92a
false
473
null
null
null
null
null
null
null
null
null
Acute Kidney Injury
acute-kidney-injury
Acute Kidney Injury
8
Yves Mulkers
BI And Data Architect enjoying Family, Social Influencer , love Music and DJ-ing, founder @7wData, content marketing and influencer marketing in the Data world
1335786e6357
YvesMulkers
17,594
8,294
20,181,104
null
null
null
null
null
null
0
startmining.sh ./marlin -H us-east.siamining.com:3333 -u <your wallet address>.<worker id> -I 28
2
null
2018-01-10
2018-01-10 14:44:00
2018-01-10
2018-01-10 14:53:01
0
false
en
2018-01-10
2018-01-10 15:03:13
4
156874865b6b
0.920755
0
0
0
A couple of days ago I read a very interesting article:
4
Mining Siacoin with idle deep learning GPUs A couple of days ago I read a very interesting article: Using your idle Deep Learning hardware for mining — “Modern Deep Learning is not possible without GPUs, even simple tutorials on MNIST dataset are showing from 10..100-fold…” (medium.com) If you already own good deep learning hardware with decent GPUs, then it’s a great idea to try to turn your idle GPU time into crypto. I am mining Siacoin, and here is what I had to modify to make it work: Miner: I chose the marlin miner from here: https://siamining.com/ Turn the mining command into a bash file (for example, startmining.sh). Using the code from GitHub is straightforward: https://github.com/Shmuma/gpu_mon Using supervisor to automatically make sure gpu_mon is always running was a little bit problematic. I encountered several permission errors, all of which can be fixed with the tips here (the page is in Chinese, but we all speak the same language as programmers ;) ): https://www.cnblogs.com/xiwang/p/6228909.html I have already tested this on my machine, and gpu_mon does indeed start mining when the GPU is not in use. Multi-GPU monitoring also works well. One potential problem is that, by running nvidia-smi, I see that gpu_mon stops mining processes but keeps the process in GPU memory, at least for my Sia miners, which is about 167 MB per GPU. I will need to test a few other miners to double check. Happy Hacking!
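To make the idea concrete, here is a rough Python sketch of what a gpu_mon-style watchdog does: poll nvidia-smi for compute processes and run the miner only while the GPU is otherwise idle. This is an illustration of the concept, not gpu_mon's actual code; the marlin command line mirrors the startmining.sh command above, and the wallet placeholder is hypothetical.

```python
# Sketch of a gpu_mon-style watchdog (illustration only, not gpu_mon itself).
import subprocess
import time

# Hypothetical miner command (see startmining.sh above); use your own wallet.
MINER_CMD = ["./marlin", "-H", "us-east.siamining.com:3333",
             "-u", "WALLET_ADDRESS.worker", "-I", "28"]

def gpu_pids() -> set[int]:
    """PIDs of compute processes currently running on the GPU(s)."""
    out = subprocess.run(
        ["nvidia-smi", "--query-compute-apps=pid", "--format=csv,noheader"],
        capture_output=True, text=True,
    ).stdout.split()
    return {int(p) for p in out if p.isdigit()}

miner = None
while True:
    # Ignore our own miner process when deciding whether the GPU is "busy".
    others = gpu_pids() - ({miner.pid} if miner else set())
    if others and miner is not None:
        miner.terminate()                      # real GPU work appeared: stop mining
        miner = None
    elif not others and miner is None:
        miner = subprocess.Popen(MINER_CMD)    # GPU idle: start mining
    time.sleep(30)
```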
Mining Siacoin with idle deep learning gpu
0
mining-with-idle-gpu-power-a-more-detailed-guide-156874865b6b
2018-01-10
2018-01-10 15:03:14
https://medium.com/s/story/mining-with-idle-gpu-power-a-more-detailed-guide-156874865b6b
false
244
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Mingrui Jiang
https://github.com/mingrui
d337590e577e
burgermilkshake
1
44
20,181,104
null
null
null
null
null
null
0
null
0
9c23aa09fd03
2018-03-15
2018-03-15 10:42:23
2018-09-24
2018-09-24 08:39:28
1
false
en
2018-09-24
2018-09-24 08:39:28
5
156882f3b503
2.366038
0
0
0
Where we learn how computers and Facebook recognize our faces — and put a name on it.
4
“Is this you?” Where we learn how computers and Facebook recognize our faces — and put a name on it. I must admit that I was never really good at recognizing faces, even though humans are among the best at this task in the animal kingdom. In fact, some humans even became the stuff of legend. It’s said, for instance, that Pericles could identify every citizen in Athens, or that Napoleon knew each soldier in his army by name. But luckily for people like me, a student at Harvard University thought to create the largest collection of faces in the world, Facebook, which now even recognizes a selfie of yours the moment you upload it, asking immediately: “Is this you?”. But how does that work? How can a computer recognize a specific face like yours and not mix it up with that of your sister? Humans are able to “see” a face even in the simplest of drawings (think of a smiley: a circle with two dots for eyes and a line for a mouth). The details of the drawing are unimportant: the underlying mechanism is that we recognize that there are dark patches (eyes and mouth) placed in a suitable geometric configuration. In fact, a face detection algorithm works similarly. It also disregards the fine visual information in a face and mainly tests whether the bright and dark patches are arranged in a “face-like” way. This is both computationally efficient (we discard a lot of information) and robust with respect to variations in shape and illumination. A modern-day algorithm detects many more fine-tuned features in a face in order to distinguish whether it is just a drawn smiley or a real human face. Furthermore, since no two human faces look alike, a modern algorithm is also capable of tolerating a lot of variation in the subtle configuration of the face patterns (thus not just the eyes and the mouth). This is done by training the detector with thousands of hand-labeled real-world photos that span the variability of our unique faces. You can imagine now that a large face repository like Facebook is the perfect training set for such a program, since it represents a playground where the program can learn what kinds of faces exist, and what makes one face different from another (and you actually do the labelling for free :). This is also why Facebook recognizes faces so well and is even able to suggest correct names to put on your uploaded photos. And this works so well that you can now also unlock your new iPhone simply with your face. Thus, if Napoleon were alive today, he would probably no longer need to know each soldier by name. His smartphone could tell him in a fraction of a second. But when he was alive, this face recognition skill was essential. In fact, it is known that the average size of a military company in Napoleon’s time was around 150 persons, since that number (the so-called Dunbar Number) represents a limit to the number of people with whom one can maintain stable social relationships and identify faces quickly. The same is true for any cohesive human group, not just the military. So one can now wonder: will the average size of social human groups grow in the digital age thanks to the use of face detection algorithms?
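For the curious, the classic patch-based detector described above (the Viola-Jones approach) is available off the shelf in OpenCV. A minimal example, assuming a hypothetical photo.jpg as input:

```python
# Detect faces with OpenCV's bundled Haar cascade (Viola-Jones style).
import cv2

img = cv2.imread("photo.jpg")                    # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # the detector only needs brightness

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                       # draw a box around each face found
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces_detected.jpg", img)
```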
“Is this you?”
0
is-this-you-156882f3b503
2018-09-24
2018-09-24 08:39:29
https://medium.com/s/story/is-this-you-156882f3b503
false
574
Digital and analog science stories written by Martin Vetterli in collaboration with Mirko Bischofberger, Henri Dubois-Ferrière and Paolo Prandoni
null
null
null
Digital Stories
martin.vetterli@epfl.ch
martinvetterli
SCIENCE,TECHNOLOGY,EDUCATION,PROGRAMMING,DATA SCIENCE
MartinVetterli
Machine Learning
machine-learning
Machine Learning
51,320
Martin Vetterli
Professor of Engineering and President of EPFL https://twitter.com/MartinVetterli https://en.wikipedia.org/wiki/Martin_Vetterli
705bf401ebdd
profmartinvetterli
9
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-23
2018-01-23 14:14:48
2018-01-23
2018-01-23 14:25:52
0
false
en
2018-01-23
2018-01-23 14:25:52
0
1568834d58b7
1.154717
0
0
0
I met with my mentor last week. We were given the assignment to write an email, and I sent it as well. Because our mentor-mentee…
4
Meeting Mentor I met with my mentor last week. We were given the assignment to write an email, and I sent it as well. Because our mentor-mentee relationship is quite old and informal, he invited me to his office to meet. I usually have such meetings with him: I share whatever is going on in my life and ask his suggestions on different things. This time, I had collected some things to talk about, and it was also a project from Amal. It was a great and fruitful meeting, as usual. Last time, he told me about a new technology that makes a person invisible. In this session, I aimed to learn more about him, but I forgot to ask. He started the conversation by discussing the topics I told him about. He asked me what’s up on my side. I discussed my important decision of enrolling in an Artificial Intelligence and Machine Learning course. He asked me to clarify where I actually want to go: IT or electronics. I justified my reasons and choices; he agreed with them and gave a green signal to go. He then told me about “Big Data”. I had been seeing this term for a long time but I didn’t know the concept, so he explained to me what it is. I asked about my Final Year Project idea. He suggested I work on the Long Range Acoustic Device (LRAD), a nonlethal device used for stopping protests. He wanted me to work on it. He showed me some videos and explained its concept. He also urged me to learn about INFRA, a technology used for data storage. In the end it was a great talk. I learned about a couple of new concepts and I liked them. Now I am working on LRAD and have been accepted to the Machine Learning course as well.
Meeting Mentor
0
meeting-mentor-1568834d58b7
2018-01-23
2018-01-23 14:25:52
https://medium.com/s/story/meeting-mentor-1568834d58b7
false
306
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
MUHAMMAD FAHEEM
Life-Long Learner
fda76fa809da
muhammad.faheem58
5
3
20,181,104
null
null
null
null
null
null
0
null
0
32881626c9c9
2018-04-24
2018-04-24 00:46:37
2018-04-24
2018-04-24 00:52:43
1
false
en
2018-05-29
2018-05-29 01:32:39
0
15696ecd0024
3.996226
2
0
0
Given the devaluation of knowledge and the politicization of facts, data and research, it can be said that despite the proliferation of…
5
Dealing with the Anti-Knowledge Era Given the devaluation of knowledge and the politicization of facts, data and research, it can be said that, despite the proliferation of information technologies into our daily lives, we live in a kind of anti-knowledge era. It is surprising how many individuals can just sit by passively and watch. To give one specific example, scientists worldwide agree that global warming is real and having a deleterious effect on the planet. Yet the United States became the only nation in the world to reject the 2015 Paris agreement on global warming. The ‘post-truth’ phenomenon, defined by Oxford Dictionaries as a circumstance in which “objective facts are less influential in shaping public opinion than appeals to emotion and personal beliefs”, is not new. Oxford Dictionaries declared ‘post-truth’ its international word of the year in 2016, yet notes that the term itself can be traced as far back as the 1990s in reference to the Persian Gulf War and the Iran-contra scandal. Author Ralph Keyes popularised the concept in his 2004 book, The Post-Truth Era: Dishonesty and Deception in Contemporary Life. Later on, Stephen Colbert introduced the word ‘truthiness’ to describe how emotions or desires hold more sway than facts. (Dictionaries, he said, are ‘elitist’.) The Internet and social media have amplified the spread and power of fake news. A study published last month in Science found that Twitter users were 70% more likely to retweet falsehoods than they were information that had been verified by six fact-checking organisations. In a study published last autumn on how Internet users evaluate information they read online, Stanford University researchers found that both historians and undergraduates “often fell victim to easily manipulated features of websites, such as official-looking logos and domain names”. (A third group, fact-checkers, were most adept at sniffing out phony information.) Perhaps most unsettling of all is the recent revelation that a company named Cambridge Analytica planted fake news on targeted Facebook accounts in a bid to influence elections. That the ruse may have made a difference aligns with empirical studies showing that most people use research to confirm their prior beliefs. When people have deeply held values or convictions, no amount of facts can persuade them otherwise. Even academic researchers face a particularly nuanced challenge, given that some methodologies encourage the pursuit of multiple or alternate perspectives as a way to get at the elusive concept of truth. At a time when standards of trustworthiness and rigor need re-emphasis, ongoing pressure to publish, fortified by a proliferation of predatory journals and publishers, increases the potential for shoddy research to be legitimized. Scholars must make their work and themselves more accessible if they want to win the public trust. It should be taken into account that many individuals prefer living in echo chambers, with ideologically segregated networks. Moreover, there is an ongoing debate about the merits of focusing dispassionately on research findings versus using research to advocate for social change. With regard to the latter, professional associations and research bodies should take a more visible stand on hot-button topics. On the other hand, it is troubling to see the reticence of academic institutions to resist more forcefully the fake news that has been disseminated. We seem more befuddled than proactive.
Given the complexity of these issues, we should not underestimate the role of the human factor, which may well be as important as the technology utilized; this requires a higher level of trust that encompasses both the technology used and the process followed. Trust in a networked society raises some interesting issues in terms of the design of technological solutions: Is it the platform that creates trust? Is it the public stature of the actors who are part of it or seen to be supporting it? Is it the interactions in virtual domains? Is it the exchanges in physical domains that lead to trust in virtual domains? There is no single response to these questions, as the interplay among these factors also matters. Regardless of the responses, we need to learn to take an impartial approach. This is necessary because of the existence of many truths: multiple and competing truths of different actors and different communities jostle, violently, for primacy, necessitating a mediation that eschews simple solutions and instead brings antagonists into frameworks that help them flesh out a shared vision with mutual compromise and respect. Any technology that aids in this process is useful. To limit the use of technology, or to control or regulate it, would be to stunt its development. Whether we like it or not, most radical advancements in technology come not from R&D into their peaceful uses, but from billions of dollars spent on how we can obliterate “enemies”. On the other hand, the civilian appropriation of technologies built for war has occurred throughout history. It should also be taken into account that greed may outpace war in its role as an engine in the evolution of ICT (information communication technologies); either way, we must face the true origins of the boxes on our desktops. Rather than dictating how technologies are to be used, we should demonstrate, by example and by our support, how the same tools can be used to advance humanity further. Given the complexity of the issues, the best way to proceed is with a deep sense of humility: technology only plays a supporting role to (socio-political) processes that are engineered by humans. While the technologies for war are evident around us, those that support peaceful dialogue are less evident, but no less plentiful. Our challenge is to use them, as best we can and as best we see fit, to bring about a sustainable digital transformation. “Are those who have knowledge and those who have no knowledge alike? Only the men of understanding are mindful.” (Quran, 39:9)
Dealing with the Anti-Knowledge Era
53
dealing-with-the-anti-knowledge-era-15696ecd0024
2018-05-29
2018-05-29 01:32:40
https://medium.com/s/story/dealing-with-the-anti-knowledge-era-15696ecd0024
false
1,006
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
null
datadriveninvestor
null
Data Driven Investor
info@datadriveninvestor.com
datadriveninvestor
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
dd_invest
Tech
tech
Tech
142,368
Daily Wisdom
null
ddd120ae7c2
dailywisdom
60
0
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-21
2018-08-21 11:20:58
2018-08-21
2018-08-21 16:24:31
1
false
en
2018-08-22
2018-08-22 03:12:22
0
156a6a7c0fcb
3.622642
1
0
0
The proliferation of deep neural networks is still limited by three major bottlenecks
3
What’s missing in Deep Neural Networks? The proliferation of deep neural networks is still limited by three major bottlenecks: 1) absurd compute and memory requirements, 2) the huge amounts of labeled data required, and 3) the fact that you don’t trust these models because you don’t know how they work. AiOTA Labs’ research is focused on overcoming all of these bottlenecks. We have already shown that we can compress any DNN in the wild by factors ranging from 3x to as high as 20x (memory footprint, compute complexity, power, inference time). In today’s blog (it may become a series), I will mainly be discussing bottlenecks #2 and #3. OK, let’s start. Despite the huge success and popularity of deep neural networks (DNNs), there is plenty of skepticism about their usage, specifically in safety- or mission-critical applications such as autonomous vehicles (AVs), aviation, financial technology (fintech), or any other industry where the use of a DNN directly impacts human lives or financial decisions. But the question is: why? Why is there so much reluctance to use these wonderful machines, which have been demonstrated to solve complex tasks that humans can’t? The answer is that you simply don’t trust their output, because you are not sure on what basis they generated it. In other words, how a DNN solves these complex problems is still a big mystery to us; we only know that it solves them!! And when you are not in control of these machines, you don’t bet on them. A DNN has to be answerable to humans on why and how it arrives at a certain decision. Unfortunately, this reasoning is absent in deterministic model-based decision making such as a DNN’s. For example, in the case of classification, it just provides you a probability of a certain class relative to the other classes. What happens if it is shown an out-of-class object, one it was never trained on? The same is true of DNN-based regression tasks: how will they behave on out-of-class inputs? Will the model tell the user “I’m not sure about this particular input” and pass the decision back to a human? Present deterministic DNN frameworks lack this ability to reason, and that is why the auto industry, the aviation industry, and the banking sector are so skeptical about using these machines. Recent events that raise more doubt about DNNs include the infamous AV accidents where the perception module failed to detect obstacles (a white truck, a cyclist), leading to tragic loss of human life; Facebook chatbots that developed their own language; and a classifier that shamefully labeled African-Americans as gorillas, a serious racial abuse. If this is not enough, here come adversarial attacks, which can completely fool a DNN with even a one-pixel manipulation. Under such attacks it can over-confidently classify a dog as a zebra, or misinterpret a stop sign as a go sign after some pixel manipulation. You don’t even need an intentional hack: there is a fair chance that a camera’s sensor pixels degrade over the years, leading to unintended adversarial inputs. So the best DNN-based system should output not just a deterministic probability (like present systems do) but also a confidence metric associated with each probabilistic output. The DNN can then use the combination of these two outputs to take more or less confident decisions, compared to the present way of deciding on probability alone.
With this it can also give a reason for every decision it makes. If the user queries why it took a particular decision, it should be able to say, for example in the case of an AV: “I slowed down at the crossroad because I was not sure whether the car on the other side would turn right, turn left or go straight.” This type of reasoning will gain the user’s trust and confidence in DNNs. So until DNN-based applications can guarantee the safety of human life and start giving reasons for the decisions they make, I personally have little hope for the proliferation of DNNs. (A minimal sketch of one way to attach such a confidence metric follows below.) Now let’s spend some time on bottleneck #2, i.e. labeled training data. A DNN works best if you train it with many, many, many… labeled examples, i.e. you convert a stochastic world into a deterministic one. But creating a labeled dataset that converts a stochastic world into a deterministic world is challenging and costly. You are never sure that you have captured the entire world in your dataset. How about a DNN that tells the user which data points are most interesting, and tells the oracle to label only those data points? Wow, it would be wonderful if the DNN told the oracle to show it only the examples that are most meaningful in its own world. Can some magic club #2 and #3 together in a single model definition? This is what AiOTA Labs’ researchers are focused on: guaranteed safety, while reasoning out why a certain decision was taken, and simultaneously telling the oracle which are the most interesting data points in the training set, making a DNN a near-perfect machine. My next series of blog posts will reveal some of the secrets of how a DNN can do all these impossible things. Till then, have a healthy, wonderful life ahead!!
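As promised above, here is a minimal sketch of one well-known way to attach a confidence metric to a classifier's output: Monte Carlo dropout. To be clear, this is a generic illustration of the idea, not AiOTA Labs' method; it assumes PyTorch and a model that already contains dropout layers.

```python
# MC dropout: run several stochastic forward passes and treat the spread
# of the predictions as an uncertainty estimate (generic sketch).
import torch

def predict_with_confidence(model, x, passes: int = 30):
    model.train()  # keep dropout active at inference time (a real system
                   # would enable only the dropout layers, not batch norm)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(passes)]
        )
    mean = probs.mean(dim=0)                     # averaged class probabilities
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy  # high entropy -> "I'm not sure, defer to a human"
```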
What’s missing in Deep Neural Network?
1
whats-missing-in-deep-neural-network-156a6a7c0fcb
2018-08-22
2018-08-22 03:12:22
https://medium.com/s/story/whats-missing-in-deep-neural-network-156a6a7c0fcb
false
907
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
AiOTA LABS
Redefining Deep Neural Networks
a2a61e9a34be
aiotalabs
29
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-08
2017-09-08 01:43:56
2017-09-08
2017-09-08 02:10:04
0
false
en
2017-09-08
2017-09-08 02:10:04
1
156b0a0297ee
1.2
1
0
0
After a week of python experience under my belt, I needed to use the techniques I developed at GA, to look at SAT test rates and scores by…
3
Adventures in Finding what you need. After a week of Python experience under my belt, I needed to use the techniques I developed at GA to look at SAT test rates and scores by state for 2001. The goal was to implement the pandas, NumPy, Seaborn and Matplotlib skills we had learned in the first few weeks to analyze the data. I am not writing about this project because I made a discovery or because it was challenging (most of the work is summary stats). I wanted to write about an invaluable skill for any data scientist that is important to develop early: the ability to solve your problems on the internet. That means learning how to format Google searches, perusing Stack Overflow articles and, most important of all, learning to comprehend documentation. Stack Overflow is your friend; if you haven’t already bookmarked it, do it now. If you have a Python problem there is a 99.999% (no research done on this) chance someone had it first and posted about it on Stack Overflow. Speaking of bookmarks, I love Google Chrome extensions like Gistbox and Session Buddy; they are useful for saving those important lines of code and Stack articles you were able to track down. When it comes to documentation for Python libraries, there are some really beautiful ones; sometimes documentation is a pleasure to read. But then, oh man, there are some that will make you wonder why they even bothered. Scikit-learn, Seaborn and Beautiful Soup are some of the best, whereas Matplotlib will always leave you wanting more. If you cannot make sense of the documentation, and you have that 0.001% problem that no one else has had, congratulations: you get to make your first Stack Overflow post. It’s a badge of honor, wear it with pride.
Adventures in Finding what you need.
1
adventures-in-finding-what-you-need-156b0a0297ee
2018-01-22
2018-01-22 16:31:31
https://medium.com/s/story/adventures-in-finding-what-you-need-156b0a0297ee
false
318
null
null
null
null
null
null
null
null
null
Programming
programming
Programming
80,554
Michael Costa
Data Scientist exploring data driven community impact, history and food sometimes too.
4b0b497265f1
mcost002
8
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-17
2017-10-17 04:47:03
2017-10-17
2017-10-17 06:01:07
0
false
en
2017-10-17
2017-10-17 06:01:07
0
156b6a97bfb4
3.373585
2
2
0
In theoretical computer science, the theory of computation is the branch that deals with whether and how efficiently problems can be solved…
1
What is TOC? In theoretical computer science, the theory of computation is the branch that deals with whether and how efficiently problems can be solved on a model of computation, using an algorithm. The field is divided into three major branches: automata theory, computability theory and computational complexity theory. In order to perform a rigorous study of computation, computer scientists work with a mathematical abstraction of computers called a model of computation. There are several models in use, but the most commonly examined is the Turing machine. Automata theory: In theoretical computer science, automata theory is the study of abstract machines (or more appropriately, abstract 'mathematical' machines or systems) and the computational problems that can be solved using these machines. These abstract machines are called automata. An automaton consists of states (represented in the figure by circles) and transitions (represented by arrows). As the automaton sees a symbol of input, it makes a transition (or jump) to another state, according to its transition function (which takes the current state and the recent symbol as its inputs). Uses of automata: compiler design and parsing. Introduction to formal proof: Basic symbols used: ∪ — union; ∩ — intersection; ϵ — empty string; Φ — null set; ¬ — negation; ' — complement; => — implies. Additive inverse: a+(-a)=0. Multiplicative inverse: a*(1/a)=1. Universal set U={1,2,3,4,5}; subset A={1,3}; A'={2,4,5}. Absorption law: A∪(A∩B) = A, A∩(A∪B) = A. De Morgan's Law: (A∪B)' = A'∩B'; (A∩B)' = A'∪B'. Double complement: (A')' = A. A ∩ A' = Φ. Logic relations: a => b ≡ ¬a ∪ b; ¬(a∩b) = ¬a ∪ ¬b. Relations: Let a and b be two sets; a relation R is contained in a×b. Relations used in TOC: Reflexive: aRa. Symmetric: aRb => bRa. Transitive: aRb, bRc => aRc. If a given relation is reflexive, symmetric and transitive then the relation is called an equivalence relation. Deductive proof: consists of a sequence of statements whose truth leads us from some initial statement, called the hypothesis or the given statement, to a conclusion statement. Additional forms of proof: proof of sets, proof by contradiction, proof by counterexample. Direct proof (AKA constructive proof): if p is true then q is true. E.g.: if a and b are odd numbers then their product is also an odd number. An odd number can be represented as 2n+1: a=2x+1, b=2y+1; the product a × b = (2x+1) × (2y+1) = 2(2xy+x+y)+1 = 2z+1 (an odd number). Finite Automata: Automata (singular: automaton) are a particularly simple, but useful, model of computation. They were initially proposed as a simple model for the behavior of neurons. States, Transitions and Finite-State Transition System: Let us first give some intuitive idea about a state of a system and state transitions before describing finite automata. Informally, a state of a system is an instantaneous description of that system which gives all relevant information necessary to determine how the system can evolve from that point on. Transitions are changes of state that can occur spontaneously or in response to inputs to the states. Though transitions usually take time, we assume that state transitions are instantaneous (which is an abstraction). Some examples of state transition systems are digital systems, vending machines, etc. A system containing only a finite number of states and transitions among them is called a finite-state transition system.
Finite-state transition systems can be modeled abstractly by a mathematical model called a finite automaton. Deterministic Finite (-state) Automata: Informally, a DFA (Deterministic Finite State Automaton) is a simple machine that reads an input string — one symbol at a time — and then, after the input has been completely read, decides whether to accept or reject the input. As the symbols are read from the tape, the automaton can change its state to reflect how it reacts to what it has seen so far. A machine for which a deterministic code can be formulated, and for which there is only one unique way to formulate the code, is called a deterministic finite automaton. Thus, a DFA conceptually consists of 3 parts: 1. A tape to hold the input string. The tape is divided into a finite number of cells. Each cell holds a symbol from the input alphabet. 2. A tape head for reading symbols from the tape. 3. A control, which itself consists of 3 things: a finite number of states that the machine is allowed to be in (zero or more states are designated as accept or final states), a current state, initially set to a start state, and a state transition function for changing the current state. An automaton processes a string on the tape by repeating the following actions until the tape head has traversed the entire string: 1. The tape head reads the current tape cell and sends the symbol s found there to the control. Then the tape head moves to the next cell. 2. The control takes s and the current state and consults the state transition function to get the next state, which becomes the new current state. Once the entire string has been processed, the state in which the automaton ends up is examined. If it is an accept state, the input string is accepted; otherwise, the string is rejected. Summarizing all the above, we can formulate the following formal definition:
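To make the processing loop above concrete, here is a minimal DFA sketch in Python (an illustration added to these notes, not part of the original text); it accepts exactly the binary strings containing an even number of 1s:

# A DFA given by its transition function, start state and accept states.
TRANSITIONS = {
    ('even', '0'): 'even', ('even', '1'): 'odd',
    ('odd', '0'): 'odd',   ('odd', '1'): 'even',
}
START, ACCEPT = 'even', {'even'}

def dfa_accepts(string):
    state = START
    for symbol in string:                    # the tape head reads one cell at a time
        state = TRANSITIONS[(state, symbol)] # consult the transition function
    return state in ACCEPT                   # examine the final state

print(dfa_accepts('1001'))  # True  (two 1s)
print(dfa_accepts('1011'))  # False (three 1s)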
What is TOC?
30
what-is-toc-156b6a97bfb4
2018-06-07
2018-06-07 09:04:33
https://medium.com/s/story/what-is-toc-156b6a97bfb4
false
894
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Mahesh Kariya
Programmer
37fa287f22e2
maheshkariya
39
183
20,181,104
null
null
null
null
null
null
0
null
0
f702855ffe47
2017-10-21
2017-10-21 20:00:20
2017-10-21
2017-10-21 20:00:21
7
false
en
2017-10-21
2017-10-21 20:00:21
9
156bc02e3830
2.136792
1
0
0
null
3
The fastest way to identify keywords in news articles — TFIDF with Wikipedia (Python version) # medium.com Github Project Link: Click here When we skim or scan an article, keyword is the most important indicator for… How we are not computers, and why it matters. # medium.com As we become ever-more aware of the shortcomings of the mechanical and IP worldviews, and of the power of no… RNN — Forget gate # medium.com This article may help people who want to start with Recurrent Neural Networks… Let me explain it like a stor… You have given zero reasons/examples backing your claim, that Blockchain will automatically create… # medium.com You have given zero reasons/examples backing your claim, that Blockchain will automatically create jobs. As … Wovon sprechen wir, wenn wir von künstlicher Intelligenz sprechen? # medium.com Photo: Alession Lin on Unsplash. Imagine you are sitting in a closed room. Through the door… I clicked this post’s link, because your words “Crumbling Niche” hit home, for you describe… # medium.com I clicked this post’s link, because your words “Crumbling Niche” hit home, for you describe precisely what I… And, LinkedIn automatically added a photo cover to my article — without my consent # medium.com I have just published an article in LinkedIn, and when I was visiting my own profile stream I noticed the fo… Here Is Why You Will Love AI In Next 6 Mins # hackernoon.com First of all, AI is no threat. Artificial Intelligence is not up to knocking down humans, their creativity o… What Marketing Will Be Like in 2067 # medium.com Projecting 50 years of technological progress Fifty years ago was 1967: the age of Mad Men, a show that not …
9 new things to read in AI
1
9-new-things-to-read-in-ai-156bc02e3830
2018-04-07
2018-04-07 04:06:18
https://medium.com/s/story/9-new-things-to-read-in-ai-156bc02e3830
false
288
AI developments around the world
null
null
null
AI Hawk
aihawk1089@gmail.com
ai-hawk
DEEP LEARNING,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING
null
Deep Learning
deep-learning
Deep Learning
12,189
AI Hawk
null
a9a7e4d2b403
aihawk1089
15
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-27
2017-11-27 04:14:59
2017-11-24
2017-11-24 09:48:57
1
false
en
2017-11-27
2017-11-27 04:16:21
4
156cfdd538fa
2.535849
1
0
0
Why you should let robots do the heavy lifting for your next material event
5
AI tools are saving dealmakers 500 hours per deal Why you should let robots do the heavy lifting for your next material event It’s the end of 2017 and the robots are making waves. Their fuel? Sweet, sweet data. It’s widely accepted that data is the oil of the digital age. Like oil was a century ago, data is swiftly becoming a huge driver of growth and disruption. It’s considered an invaluable resource in an age where everything from watching TV to adjusting your thermostat can feed information back to sensors. Just like oil, data needs to be extracted and refined to unlock its real value. And that’s where the robots come in. A brave new world AI technologies are poised to make a big impact in this world. Why? Because they alone can filter and extract value from the massive quantities of raw digital information that we produce as a society. Economic models are being redefined to adapt to this digital takeover, and being able to extract insights from this volume of data is (or should be!) a key priority. According to The Australian, “the new economy is more about analyzing rapid real-time flows of often unstructured data.” And the more fresh data they’re fed, the smarter these technologies get. So what are some of the benefits? Automation: AI tech has already eliminated hours of mundane manual tasks across industries, and it’s poised to do a whole lot more. The time and cost saved in efficiency alone is unbelievable. Acceleration: Being able to automate tasks and assess hundreds of thousands of data points in a matter of seconds means a much faster and more productive process. Knowing: The predictive analytics that come with AI mean knowing future trends and situations ahead of time with certainty. It’s as close as you’ll get to seeing the future — and that means better decision making and successful strategies. Self-taught: The algorithms within these complex technologies reach greater accuracy the more they are used, so they only get more effective and encourage their own usage. AI tech deal potential AI technologies are still a newcomer to the world of material events, but the possibilities are limitless. Imagine having the foresight to make deal decisions and direct your clients with certainty toward a successful transaction. Imagine having all of your activity and information in one place, where every interaction and question is measured, tracked and traced for a single clear audit trail. Imagine being able to get an instant update on your deal activity just by asking your iPhone, rather than relying on your analyst to complete hours of research and forecasting. Imagine if all your deal room documents could self-sort by financial quarter, saving you hours of dragging and dropping to reach the ideal organizational structure. And that’s just to name a few. AI can analyze the real-time flow of the flood of data from the interactions between bidding parties and thousands of pieces of material information. This is the world we’re creating — and it’s not a vision, it’s already here. Save 500 hours on your next deal Ansarada’s Material Information Platform is powered by data-driven AI tools with the insights of over 20,000 material events behind them. Its unparalleled efficiency is accelerating deals, unlocking value, and saving advisors significant time and cost. We estimate that the Platform will take a minimum of 500 hours off your next deal — ask our Sales team for a demo and they’ll show you how. The AI advantage is just one of the benefits the Platform offers advisors. 
For the full list, download our advisor’s guide to the Material Information Platform here. Originally published at www.ansarada.com on November 24, 2017.
AI tools are saving dealmakers 500 hours per deal
1
ai-tools-are-saving-dealmakers-500-hours-per-deal-156cfdd538fa
2018-03-27
2018-03-27 16:32:51
https://medium.com/s/story/ai-tools-are-saving-dealmakers-500-hours-per-deal-156cfdd538fa
false
619
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Ansarada
null
2729d8ed3a37
Ansarada
2
2
20,181,104
null
null
null
null
null
null
0
null
0
8b9b43af1c86
2018-01-23
2018-01-23 03:08:04
2018-01-23
2018-01-23 03:24:20
1
false
en
2018-01-23
2018-01-23 03:24:20
7
156d7a78b3da
9.022642
1
0
0
Welcome to 2018’s first edition of Research Translated! For any new Quantum Authority readers, Research Translated is a new, experimental…
5
RT: The Future Prospects of Quantum Computing and Artificial Intelligence Welcome to 2018’s first edition of Research Translated! For any new Quantum Authority readers, Research Translated is a new, experimental section that translates the latest in quantum computing research into terms you can understand. Love it? Hate it? Let us know. — - Introduction We had a reader who, after perusing our first article on the relationship between quantum computing and AI, asked us for more information about how quantum computing relates to artificial intelligence. So we decided to take a deeper look this week at what’s going on in the world of quantum computing and AI. The article that we are analyzing this week is titled “Quantum computation, quantum theory and AI”, by Professor Mingsheng Ying. Professor Ying is the Research Director of the Center for Quantum Computation and Intelligent Systems at the University of Technology Sydney, Australia, and has been active in the quantum computing research community since the 90s. Although not strictly research, this essay provides a deep technical dive into where Professor Ying sees quantum computing playing into the development of modern artificial intelligence techniques. The technicality of the paper does require some translation for those who are not familiar with quantum physics and differential equations, which is why we are including it as this week’s edition of Research Translated. So without further ado, let’s start! Article Summary Professor Ying divides his essay into 4 parts, each of which builds into the next: An introduction to quantum computing An exploratory passage into current research on quantum computing, with an emphasis on topics that Professor Ying has worked on personally An exploration of what Professor Ying considers to be the strongest candidates for areas where quantum computing could have applications within artificial intelligence A passage on the intersection of current AI and quantum research (i.e. not where one could be developed and help the other, but instead where both could be developed simultaneously) Here’s our summary for each section, in bullet point form: Section 1: Introduction to Quantum Computing This section is a crash course in quantum computing. It is a high-level overview, and it assumes that you have a college math background through matrix and vector algebra. If you do not have a background in this kind of math, then you’re in luck! The Quantum Authority has already written several articles going over the high-level ideas that Ying introduces, without all of the complex math. We will not rehash the introductory material that Ying talks about. If you want a background in the introductory material, then be sure to check out the following quick-reads: Technical Overview of Quantum, Round 1 Technical Overview of Quantum, Round 2 The only information that Ying offers that TQA has not covered already is his emphasis on quantum registers as the only means of measuring the outcome of a quantum computation. A register is basically a place in a computer’s architecture that can hold data. The data can be accessed very quickly from there. Every program and piece of architecture in a traditional computer can be traced down to its bits (i.e. its 0s and its 1s). Similarly, everything in a quantum computer can be traced down to its qubits. 
A quantum register is the same thing as a normal register in concept, but it’s built with qubits. OK, so we now know that a quantum register is simply a piece of the architecture that allows the quantum computer to store data. So all that Professor Ying is saying is that we need a quantum register in order to store the output of the computation. And that’s the crash course in quantum computing! Section 2: Current Applications of Quantum Computing: Professor Ying makes sure to emphasize that his overview of research in quantum computing is not inclusive of all fields and has an emphasis on topics that are of interest to him or that he has worked on personally. Models Ying spends a lot of time talking about the different quantum computing models out there. It is a topic that the writers at the Quantum Authority plan to write more about in the near future. In the meantime, here’s an overview of what he talks about (Note: some of these topics sound scary! Don’t worry though, bear with us and we’ll get you through): Quantum Turing Machine Alan Turing was a British mathematician who is by and large considered to be the father of computer science. If you want to know more about him, check out “The Imitation Game” movie. It’s good stuff. Turing devised the model that would eventually lead to a computer. It became known as a “Turing machine”. It is important to note that a Turing machine is a logical model, not a physical machine. At a high level, a Turing machine can be used to solve mathematical functions by switching between different states based on different criteria, which is the basis for what all software programs do. For example, you could have two states: A and B. You start off in State A, and only move to State B when the user inputs a number higher than 5. Otherwise, you stay in State A. If the user inputs a 1, you stay in State A. If they input a 7, then you move to State B. This framework can be seen in modern software. For example, if you want to transfer money from one bank account to another through your bank’s online web portal and you enter a negative number into the “amount to transfer” box, then the program will not allow you to click send. It is in the “NoTransfer” state. If you enter a positive number though, the program then shifts to the “TransferOK” state. I made those state names up, but you get the idea, right? Remember how the difference between a register and a quantum register was that one is based on bits and the other on qubits? It’s the same idea here. A quantum Turing machine uses qubits instead of bits to determine how the machine shifts between states. As the Turing machine is the basis for traditional computing, it makes sense to start with it as a model for quantum computing too. Quantum Circuits This is the most popular quantum computing model. Before we can describe a quantum circuit, we first need to define what a quantum gate is. Traditional computers use gates in their architecture. We know that a binary 0 represents false, and a binary 1 represents true, right? Well, what happens if we have a situation where I want to execute a program only if two conditions are true? For example, say I want my online banking portal to email me if I have not paid my credit card bill yet AND I have insufficient funds to pay off the bill (thankfully, not my actual financial situation haha). How do I do that? Well, you would write a program that only sends the email if BillDue is true AND if InsufficientFunds is true. In this situation, I am effectively using an AND gate. 
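To see that AND-gate condition as code, here is a tiny sketch (the function and state names are invented for illustration):

def should_send_email(bill_due: bool, insufficient_funds: bool) -> bool:
    # The email fires only when BOTH inputs are true: an AND gate.
    return bill_due and insufficient_funds

print(should_send_email(True, False))  # False: bill due, but funds are fine
print(should_send_email(True, True))   # True: send the warning email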
There are lots of different types of gates. AND, OR, NOT, and XOR (exclusive or) are the most popular. We can go into the nitty-gritty of those at a different time. Quantum gates are the same thing as normal gates but based on qubits instead of bits. A quantum circuit is basically a series of quantum gates in a row. Adiabatic Quantum Computing This model is pretty different because it has no basis in traditional computing models. Every other model moves in discrete time. All that means is that the model only changes at specific points in time. For example, a quantum Turing machine would change states only at specific time intervals. This model is a continuous-time model, meaning it moves whenever it likes. It basically works by modeling the system with a mathematical function called a Hamiltonian. We don’t need to get into what exactly a Hamiltonian is, but it’s used in a lot of control theory problems. The Hamiltonian then varies over time, which is how the model computes. Measurement-Based Quantum Computing This model also has no basis in traditional computation. As the name implies, it basically uses the measurement of qubits to solve problems. Topological Quantum Computing This model uses two-dimensional particles as a way to construct quantum gates. It was designed as a way to minimize error in quantum computations due to a phenomenon called “decoherence”. Still theoretical and not really used. Math and Logic Tools Categorical quantum logic It sounds scary, but it is basically a mathematical notation that lets one easily describe quantum computing at a high level. Lambda calculus Also sounds scary. Lambda calculus is the basis for functional programming, an alternative to object-oriented programming. Basically it’s a way to write functions that take other functions as input. Lambda calculus is seen as a logical way to create a quantum programming language. Quantum computational logic Basically, a way to write out the state of quantum registers. Theory of computation based on quantum logic Qubit logic instead of bit logic. Algorithms This section was kind of disappointing, tbh. Ying pretty much said that there are no real quantum algorithms out there and that all the work there is theoretical. Architecture Same deal as algorithms. Not much work, all theoretical. Programming languages Ying went over some languages proposed by researchers. They are older and pretty complicated, so we’ll go over those in a different post. Google “QFC programming language” if you’re curious. Ying’s article was written before Microsoft announced their new quantum programming language, Q#, which is the best example of a quantum language yet. Section 3: Quantum Assisting AI Development: Ying basically posited that quantum could be used to advance AI and that AI could be used to advance quantum. So it’s a two-way street. He came up with a few current places where quantum could really make a difference in the development of AI. Learning The only place where quantum has currently affected AI. AI programs learn by training on data, right? There are a couple of ways to do that, but most of them involve showing the computer a piece of data and labeling it. Eventually, the program will recognize different items. Decision Problems Ying actually argued that any quantum improvements to existing algorithms for AI decision-making problems would be marginal. We already have fast algorithms; there’s no reason to make them faster. Search It is widely believed that search will be the first major application of quantum in AI. 
There was a guy a while back who described some algorithms that could be used, but to date, no one has devised a quantum search algorithm. Game Theory Game theory is a form of economic analysis. It is basically a way to quantify how rational people make their decisions based on the information presented to them. Section 4: Co-development of Quantum Computing and AI: OK, so this section had suggestions on how AI could advance quantum computing, and how quantum computing could help advance AI. We divided those ideas into two sections. Quantum ideas that could be used to advance AI: Semantic analysis Semantic analysis is a field within AI that allows a computer to tell the tone of a text. So a computer could tell if a tweet, or a Facebook post, or a speech was generally very positive or very negative. Ying notes that some researchers have found similarities between proposed quantum algorithms and current semantic analysis techniques, but he doesn’t believe that these similarities are strong enough to be meaningful. Entanglement of words in natural languages Some researchers noted that NLP algorithms display behavior similar to entanglement, famously called “spooky action at a distance”: a phenomenon in quantum physics where entangled particles remain correlated with each other no matter how far apart they are. The similarities suggest that quantum computing could help advance these types of algorithms. AI ideas that could be used to advance quantum computing: Quantum Bayesian Networks Ying argues that AI and quantum both use statistics, so that’s an avenue they can both get into. A Bayes model is basically a graph that uses pre-set probabilities to determine outcomes. There have recently been quantum-based Bayesian networks proposed in physics journals. Recognition and discrimination of quantum states and quantum operations Ying argues that since pattern recognition is a big deal in AI (i.e. computer vision techniques), those same techniques can be used to help a quantum computer discriminate between different quantum states. This has been getting a lot of attention lately. Our take on the article Honestly, I thought this essay was super interesting. It’s not often that you get a leading expert in the field of quantum computing giving both a broad and deep summary of where we are in quantum computing and what directions it could go. As the owner of The Quantum Authority, I am certainly interested in quantum computing. I also have a strong interest in AI and the development of applications that use AI under the hood. I agree with Ying’s assessment that AI and quantum can and should be developed in tandem, because that will allow both fields to grow at an exponential rate. Final words I think Ying gives the reader a lot of insight into possible directions that quantum computing as a field can go and also gives a lot of ideas as to how quantum relates to AI. I, however, was a bit disappointed by the lack of specifics. One should note that one of the reasons Ying had so few specifics is that there are not many specifics out there, since this is such a new and emerging field. Keep an eye on quantum computing and AI! We certainly will, and we’ll be sure to keep searching for new topics in that area to write about for you guys.
RT: The Future Prospects of Quantum Computing and Artificial Intelligence
1
rt-the-future-prospects-of-quantum-computing-and-artificial-intelligence-156d7a78b3da
2018-01-23
2018-01-23 03:24:21
https://medium.com/s/story/rt-the-future-prospects-of-quantum-computing-and-artificial-intelligence-156d7a78b3da
false
2,338
Quantum computing and its applications in terms you can understand. More here: http://thequantumauthority.com
null
The-Quantum-Authority-501989653502880
null
The Quantum Authority
thequantumauthority@gmail.com
the-quantum-authority
QUANTUM COMPUTING,QUANTUM,QUANTUM PHYSICS,ARTIFICIAL INTELLIGENCE,COMPUTER SCIENCE
thequantumauth
Quantum Computing
quantum-computing
Quantum Computing
1,270
James Wall
Tech and travel enthusiast. Founder of the Quantum Authority.
c6c442c24b04
james.wall
13
8
20,181,104
null
null
null
null
null
null
0
function f = kernel(x1, x2)
  % Minimal completion of the original empty skeleton (the body below is
  % an assumption): a Gaussian (RBF) kernel with a fixed sigma = 1.
  sigma = 1;
  f = exp(-sum((x1 - x2) .^ 2) / (2 * sigma ^ 2));
return
1
null
2017-09-25
2017-09-25 11:39:08
2017-09-27
2017-09-27 14:40:12
28
false
zh-Hant
2018-08-23
2018-08-23 05:35:41
5
156db4b2b47b
3.346226
1
0
0
Support vector machines and the maximum margin classifier
3
Coursera: Stanford Machine Learning Week 7 Notes. Support Vector Machines. Large Margin Classification. Optimization Objective: In industry and academia there is another powerful algorithm: the support vector machine (SVM). Compared with logistic regression and neural networks, the SVM provides a cleaner and more powerful way to learn complex non-linear functions. The derivation of the SVM starts from logistic regression: Logistic Regression Hypothesis. Sigmoid Activation Function. Cost Function of Logistic Regression for each (x, y). Every sample (x, y) adds this term to the total cost function; it is the expression corresponding to a single sample. When y = 1, we want z to be as large as possible so that hθ(x) approaches 1 and the cost drops to 0. The SVM replaces the original curve with straight line segments, setting the cost to 0 from z = 1 onward; we call this new function cost1(z), where the subscript 1 corresponds to the case y = 1. When y = 0, we want z to be as small as possible so that hθ(x) approaches 0 and the cost drops to 0. The SVM again replaces the curve with straight line segments, setting the cost to 0 from z = -1 downward; we call this new function cost0(z), where the subscript 0 corresponds to the case y = 0. Logistic Regression Cost Function. Support Vector Machine Cost Function: replace the log terms and drop the constant factor involving m (it does not affect θ); finally, replace λ with the parameter C (roughly C = 1/λ, although the θ actually obtained need not be identical). This substitution lets a single parameter decide whether to put more weight on optimizing the first term or the second. SVM Cost Function. Unlike logistic regression, the SVM hypothesis does not give us a probability; it directly predicts 1 or 0. SVM Hypothesis. Large Margin Intuition: In practice, we only predict y = 1 when θ'x >= 1 and y = 0 when θ'x <= -1, which builds in an extra margin of safety. Support Vector Machine Hypothesis Function. This safer setting places the decision boundary at the position of the black line, which keeps a larger margin from all training samples. SVM Decision Boundary: Linearly Separable Case. If the data are not linearly separable and C is set to a very large value, the SVM decision boundary becomes sensitive to outliers and will separate them exactly; this is equivalent to a very small regularization parameter λ. Conversely, if C is not set very large, some outliers are ignored. The Mathematics Behind Large Margin Classification. SVM Decision Boundary. Suppose we have only two parameters, and let p(i) be the projection of x(i) onto θ. We can then rewrite the decision boundary: SVM Decision Boundary. For simplicity assume θ0 = 0, which means the decision boundary must pass through the origin (0, 0). Note: θ is always perpendicular to the decision boundary. Minimizing the SVM objective amounts to maximizing the norms of the projections p(i), i.e. the distances between the training samples and the decision boundary. The SVM in fact rests on rigorous mathematical theory involving many concepts. Further reading: A Tutorial on Support Vector Machines for Pattern Recognition; http://blog.csdn.net/walilk/article/details/53542645; http://blog.pluskid.org/?page_id=683; http://www.blogjava.net/zhenandaci/category/31868.html. Kernels: For non-linear decision boundaries you could construct polynomial features, but not only are we unsure whether the high-order terms are the features we actually need, they can also consume considerable computational resources. So, is there a better choice of features than high-order terms? Gaussian Kernel Function. Now that we know how the kernel works, how do we pick the landmarks? We simply use the training samples themselves as landmarks, which means the features essentially describe the distance from each sample to every other sample. Training an SVM with kernels. The SVM parameter C and the bias-variance tradeoff: C behaves like 1/λ (where λ is the regularization parameter). Too large a C (too small a λ) leads to low bias and high variance (overfitting); too small a C (too large a λ) leads to high bias and low variance (underfitting). The SVM parameter σ², used to compute the f values: too large a σ² makes the features f vary smoothly, i.e. higher bias and lower variance; too small a σ² makes the features f vary sharply, i.e. low bias and high variance. SVMs in Practice: Choose the parameter C. Choose the kernel: when the number of features n is large and the number of training samples m is small, you should perhaps consider fitting a simple linear function rather than a complex non-linear one, which could overfit; that is, consider using no kernel, which is equivalent to the linear kernel. When n is small and m is large, you can choose the Gaussian kernel, in which case you must also decide σ². Before using the Gaussian kernel, remember to perform feature scaling. The linear kernel (no kernel) and the Gaussian kernel are the most common. If you want to use some other similarity function, it must satisfy Mercer's Theorem: for all the data there is a function φ such that k(x, y) = ⟨φ(x), φ(y)⟩ and k satisfies Mercer's condition; such a k is a kernel function, where ⟨a, b⟩ denotes the inner product of vectors a and b. Mercer's Theorem is a sufficient but not necessary condition for being a kernel: anything satisfying it is a kernel function, but not every kernel function necessarily satisfies Mercer's Theorem. Polynomial Kernel. String Kernel. Chi-Squared Kernel. Histogram Intersection Kernel.
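To make cost1/cost0 and the Gaussian kernel concrete, here is a small NumPy sketch reconstructed from the course definitions above (z stands for θ'x; this is an illustration, not code from the original notes):

import numpy as np

def cost1(z):
    # Cost for y = 1: zero once z >= 1, growing linearly below that.
    return np.maximum(0, 1 - z)

def cost0(z):
    # Cost for y = 0: zero once z <= -1, growing linearly above that.
    return np.maximum(0, 1 + z)

def gaussian_kernel(x1, x2, sigma=1.0):
    # Similarity feature f between a sample x1 and a landmark x2.
    return np.exp(-np.sum((x1 - x2) ** 2) / (2 * sigma ** 2))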
Coursera: Stanford Machine Learning Week 7 Notes
1
coursera-standford-machine-learning-week-7-筆記-156db4b2b47b
2018-08-23
2018-08-23 05:35:41
https://medium.com/s/story/coursera-standford-machine-learning-week-7-筆記-156db4b2b47b
false
317
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Yueh-Lun Huang (Eren)
Data Backend Engineer | https://meliodaseren.github.io/ | The future is already here — it’s just not very evenly distributed. — William Gibson
bc8a23b0e8e8
yuehlunhuang
22
69
20,181,104
null
null
null
null
null
null
0
import pandas as pd
import requests
import folium


def read_and_format_trip_data(file_name):
    ''' input: file name output: pandas data frame '''
    bike_trips = pd.read_csv(file_name)
    return bike_trips


def get_station_list():
    '''output: pandas data frame'''
    station_api_key = 'stationBeanList'
    try:
        response = requests.get('https://feeds.divvybikes.com/stations/stations.json')
        station_data = response.json()
        station_list = station_data[station_api_key]
        stations = pd.DataFrame.from_records(station_list, index='id')
    except Exception:
        print('something went wrong')
    return stations


def add_trip_counts_to_stations(stations, trips):
    ''' input: data frames output: data frame '''
    departure_counts = trips.groupby('from_station_id').count()
    departure_counts = departure_counts.iloc[:, [0]]
    departure_counts.columns = ['Departure Count']
    arrival_counts = trips.groupby('to_station_id').count()
    arrival_counts = arrival_counts.iloc[:, [0]]
    arrival_counts.columns = ['Arrival Count']
    stations = pd.merge(departure_counts, stations, right_on='id',
                        left_index=True).merge(arrival_counts, left_on='id',
                                               right_index=True)
    return stations


# Inline snippets discussed step by step in the article:
departure_counts = trips.groupby('from_station_id').count()
departure_counts = departure_counts.iloc[:, [0]]
departure_counts = departure_counts.iloc[:, ['trip_id']]  # error: iloc is integer-based
departure_counts = departure_counts.iloc[:, 'trip_id']    # error: iloc is integer-based
departure_counts = departure_counts.iloc[:, 0]            # returns a Series
departure_counts = departure_counts.iloc[:, [0]]          # returns a DataFrame
departure_counts.columns = ['Departure Count']
arrival_counts = trips.groupby('to_station_id').count()
arrival_counts = arrival_counts.iloc[:, [0]]
arrival_counts.columns = ['Arrival Count']
stations = pd.merge(departure_counts, stations, right_on='id',
                    left_index=True).merge(arrival_counts, left_on='id',
                                           right_index=True)


def put_stations_on_map(stations):
    ''' input: data frame output: folium map '''
    # Center the map on Chicago.
    divvy_map = folium.Map(location=[41.88, -87.62], zoom_start=13,
                           tiles="CartoDB dark_matter")
    for index, station in stations.iterrows():
        popup_text = "{}<br> Total departures: {}<br> Total arrivals: {}<br>"
        popup_text = popup_text.format(stations.at[index, "stationName"],
                                       stations.at[index, "Arrival Count"],
                                       stations.at[index, "Departure Count"])
        folium.CircleMarker(location=[stations.at[index, 'latitude'],
                                      stations.at[index, 'longitude']],
                            fill=True,
                            popup=popup_text).add_to(divvy_map)
    return divvy_map
14
null
2018-09-17
2018-09-17 15:30:04
2018-09-20
2018-09-20 16:21:11
2
false
en
2018-09-20
2018-09-20 22:17:18
15
156e2a4dc59d
6.587107
0
0
0
The best part of any project is naming it. My project: Hot Bikes. The goal: make a heat map of where people are biking in Chicago, using…
3
Mapping Divvy Data Using Python Photo Credit The best part of any project is naming it. My project: Hot Bikes. The goal: make a heat map of where people are biking in Chicago, using the amazing Divvy data set. NOTE: this is a project that I’m working on as part of the ChiPy mentorship program. Max is my mentor. After outlining the project with Max, it was determined that the first step was just putting Divvy stations on a map. Alright Max, challenge accepted. Oh yeah, and do it all with PYTHON. Very exciting. So I Googled “making maps with Python” and lo and behold, someone has done almost exactly what I wanted to do, with the bike data available from the bike share program in New York City! The tutorial, Interactive Maps with Python (a very excellent three-parter with some delightful visualizations of net arrivals and departures that shows the beating heart of New York City—you’ll never guess where it is!!) got me a long way towards understanding how to do what I wanted to be doing, so, thanks for that, Vincent Lonij! Seriously, it’s amazing. In this article, I’ll be talking about how I created a map, using Python, to show the departures and arrivals at Divvy stations here in Chicago. You can follow along in this blog, in the code on GitHub (which may have changed significantly in the time before you read this), in this poorly-organized Jupyter notebook, or on the astral plane where I am currently projecting myself. For this project, I’m using the Folium library, which so far has been an awesome tool for making maps using Python. Folium is a wrapper for the leaflet.js library. Get the Trips Divvy provides data sets listing trips by quarter. To initially process the trip data from the csv file and get it into the format I wanted to work with, I created the following function: This function takes a file name and uses the pandas read_csv method to create a data frame from the CSV file I downloaded with trip data from one quarter. It returns that data frame. So I had my trip data (I would be able to aggregate trip counts from here in order to get departures and arrivals for each station, which I wanted to put on the map). These data include: Trip start day and time Trip end day and time Trip start station Trip end station Rider type (Member, Single Ride, and Explore Pass) If it’s a member trip, it will also include the member’s gender and year of birth Perfect, right!? Wrong. The data include the start and end station names, but they do not include the latitude and the longitude of said stations. That data had to come from another place: the live station info from the Divvy JSON feed, which includes all of the current working stations as well as their latitudes and longitudes and other fun nuggets like how many available docks there are. Get the Stations So the function below pings the Divvy API to grab the list of stations and makes a data frame from them using the pandas from_records method. This method takes an “index” property, which allows you to use a different field as the index; I set it to the id, making the id of the station its index in the station data frame that is returned by this function. Sidebar: I used the Python requests library to ping the API. So now I have two different data frames: one with the trip data, which has the id numbers of the start and end stations, but not their latitude and longitude, and one with all of the station ids and their latitudes and longitudes. 
From Two (Data Frames), One (Data Frame) To get the map that I wanted, these two data sets needed to become one. The magic was going to happen in the following function: So, what’s happening in this function? Great question. In order to write it, I relied pretty heavily on the aforementioned mapping tutorial, and there were several things that I did not quite understand in it. I needed the total counts of arrivals and departures for each station. To do that, I used the pandas groupby method. It took me a while to understand exactly what this was doing. I did some digging in the documentation, where it says that “the abstract definition of grouping is to provide a mapping of labels to group names.” Wow, so helpful. After even more digging, I discovered the following: Calling groupby in pandas returns a groupby object, which has a number of aggregate methods attached to it. Among other things, groupby can take a string which represents a column in a dataframe, and it returns an object which is a grouping of dataframes, each dataframe in the group created from instances of the original dataframe that also contain the specified key. (I think this is accurate.) So the line trips.groupby('from_station_id').count() essentially says: take all of my trip data and create a bunch of dataframes that all have matching from_station_ids, then count how many rows there are in each dataframe and return that value. If you print out the value of departure_counts right now, you get a dataframe that has the total number of identical start stations as the values in each column. The line departure_counts.iloc[:, [0]] replaces the departure_counts dataframe with just one column, trip_id, which is now not the actual trip id, but the total number of trips that started at that station. The iloc method does purely integer-based indexing, so something like departure_counts.iloc[:, ['trip_id']] or departure_counts.iloc[:, 'trip_id'] will throw errors. Also, if you’re wondering about the double brackets around the zero and why they exist, here is what I know. If you do departure_counts.iloc[:, 0], the object that is returned is not a dataframe, but a series. If you do departure_counts.iloc[:, [0]], the object is a dataframe. I do not know why. The rest of the code was written with the necessity of the returned thing being a dataframe, so I needed to keep the brackets. It’s not quite magic, but almost. My mentor Max says, “There’s no such thing as magic.” But we can still have fun, can’t we? The next line, departure_counts.columns = ['Departure Count'], changes the dataframe into one that has an index of id and a column name of “Departure Count.” Then, to get the arrival counts, I do the exact same thing, but instead grouping by the arrival station id and getting a dataframe that has one column “Arrival Count” and an index of station id. Finally, I merge it all into one big happy family with a chained merge. The merge method returns a dataframe, which is why I can chain them in order to do two merges. The first one merges departure counts with stations. The pandas merge method takes two dataframes as well as some other properties. The ones that are relevant to us here are the right_on and the left_index properties. The right_on property, in addition to being super chill (right on, dude), tells pandas to use the id column as the key when merging with departure_counts, and left_index=True says to use the index of the left table (departure_counts) as the key, so this means pandas will match up the station id with the index of departure counts, which is what we want. 
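Before moving on to the chained merge, here is a tiny, self-contained demonstration of that Series-versus-DataFrame distinction:

import pandas as pd

df = pd.DataFrame({'trip_id': [3, 5], 'other': [1, 2]})
print(type(df.iloc[:, 0]))    # <class 'pandas.core.series.Series'>
print(type(df.iloc[:, [0]]))  # <class 'pandas.core.frame.DataFrame'>
# A bare integer selects one column as a Series; a list of integers
# selects a sub-table, so the double brackets keep it a DataFrame.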
For the chained merge, we use the left_on and the right_index properties, because the now-merged stations data is our left dataframe and arrival_counts is the right one, so we use arrival_counts’ index to match the ids in the merged stations dataframe. Then, we return the now-merged station data, which has our happy latitude and longitude for all of the stations. This is what we wanted. Now, to put it all on a map! Map That Ish This code creates a map (using Folium) centered on Chicago’s latitude and longitude. Then, it iterates through each row in the stations dataframe and creates a little popup with the arrival and departure counts. I used the pandas at method to locate each value; it accesses a single value in a DataFrame or Series. Folium circle markers are delightful little markers that look like circles, and when you hover over them, text is revealed! So, I create one of those for every row too, then add them to the map, and return the map. And that’s how I put little circles with arrival and departure counts for each Divvy station on a map of Chicago. Each circle is a station. If you made it to the end, wow! WOW. wow. w . o . w . WOWOWOWOWOWOWOWOWOWOWOWOWOWOWOWOWOW. I ❤ you.
Mapping Divvy Data Using Python
0
mapping-divvy-data-using-python-156e2a4dc59d
2018-09-20
2018-09-20 22:17:18
https://medium.com/s/story/mapping-divvy-data-using-python-156e2a4dc59d
false
1,644
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Emily Drevets
Software engineer in Chicago. JavaScript. Cashews. Bike.
a30b6a0ecd04
drevets
469
121
20,181,104
null
null
null
null
null
null
0
curl -X POST -H "Content-Type: multipart/form-data" -F "url=https://wt-12345-0.run.webtask.io/sentigram-webtask" 'https://api.telegram.org/bot1234:1234567890/setWebhook' {"ok":true,"result":true,"description":"Webhook was set"}
2
9c321fdafce5
2017-09-06
2017-09-06 06:51:27
2017-09-06
2017-09-06 10:01:16
6
false
en
2017-09-07
2017-09-07 03:08:04
12
156f03ef7f1
3.772642
6
0
0
Texting is complicated.
5
A sentimental AI can analyse your misunderstood text messages. Yes Wall-E, you are emotional enough. Texting is complicated. When someone gives you good news and you reply with “great”, maybe you are not exactly transmitting the happiest of feelings, but you are happy, really. Lucky us, we live in the age of Artificial Intelligence and writing some code has never been so easy! Let’s see how to build a Telegram Bot which checks the emotions in your text with a quantitative approach through sentiment analysis. Ingredients for an emotional Bot As already said, the weapon of choice is a Telegram Bot for the APIs’ ease of use, but any other programmable messenger system will work. What really makes this all interesting is the chance to build it on a serverless architecture: thanks to Webtask.io, which offers an amazing environment, security (powered by Auth0) and serverless endpoints, our server-side core will receive the message, send it to our Text Analysis engine and prettify the answer in order to reply with a nicely formatted response. Let’s dig into the details. Creating a new bot Send the /newbot message to @BotFather, which will ask a few questions to create your first Bot (name, description, etc…). Optionally add a set of commands with /setcommands to differentiate the types of analysis run on your text; they will show up when pressing / in the message composition field. Open a chat window with your bot in order to start the underlying service. The sentimental AI A lot of companies offer a good text analysis API (IBM Watson, Google, Api.ai), but this time I decided to use Indico for its ease of use, the nice dashboard and, mostly, the node module we can easily integrate in our webtask. Of course the choice of the API, knowledge base and machine learning system has a huge impact on the performance of our message analysis. A serverless core After the Bot is up and running, we need to create the service that handles the messages in a webtask, and webtask.io does this flawlessly, the easy way (but in a truly powerful way). Follow the documentation if you want to use a client, or alternatively jump into the web editor and paste the code below into the text area: an if/else to run a separate analysis for the command /sentiment and for the command /emotions. At this point you will need two different keys to run the web service: The Telegram Bot token: to wire the webhook (and send messages in a safe way), go back to @BotFather, run /token and follow the instructions to generate a bot token like 1234:abcd56789. The Indico API Key: quickly copy the API Key at the top of the Dashboard after signing up for the free 10K-calls-per-month base plan. Note: in the example the keys are baked into the code, but it’s possible to add and store keys as secrets through the management panel in the editor, in order to improve security and limit access to the running process. Let’s wire it together We have a running bot, a running webtask which is not getting any messages so far (🤦‍) and our AI ready to analyse our emotions and sentiments (👊). In order to share our messages bidirectionally, we need to set a webhook using the link of our webtask, as shown in the bottom section of the online editor, waiting to see something like: At this point we can defeat our lack of sensitivity with a nice analysis of our messages, checking which one is more appropriate! 
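The webtask itself is written in Node.js and its source is not reproduced here; as a rough Python sketch of the same command routing (Flask stands in for the webtask runtime, and analyse_sentiment/analyse_emotions are hypothetical stand-ins for the Indico calls):

from flask import Flask, request

app = Flask(__name__)

def analyse_sentiment(text):  # hypothetical stand-in for the Indico sentiment call
    return 'sentiment score for: ' + text

def analyse_emotions(text):   # hypothetical stand-in for the Indico emotion call
    return 'emotion breakdown for: ' + text

@app.route('/webhook', methods=['POST'])
def telegram_webhook():
    message = request.get_json().get('message', {})
    text = message.get('text', '')
    # The if/else routing on the bot commands described above.
    if text.startswith('/sentiment'):
        reply = analyse_sentiment(text[len('/sentiment'):].strip())
    elif text.startswith('/emotions'):
        reply = analyse_emotions(text[len('/emotions'):].strip())
    else:
        reply = 'Try /sentiment <text> or /emotions <text>'
    # Telegram accepts a bot API method as the reply to the webhook request itself.
    return {'method': 'sendMessage',
            'chat_id': message.get('chat', {}).get('id'),
            'text': reply}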
🤖 Sentiment and emotional analysis Robots have gotten steadily more capable, but humans’ expectations that robots should have minds keeps biting robot developers. — David Hanson The second sounds more positive, but be careful with sarcasm! Although Text Analysis is getting better and better, sometimes it still lacks that quid a human being is able to extract from the context. The second of the answers above is great if your girlfriend is pregnant, less so if your brother used your credit card to buy a new pair of shoes. Same thing if you don’t use keywords but express a concept with a saying or in a “non-conventional” but “human-understandable” way: Our bot is surprised that someone made my day, but not that it’s the best thing I’ve ever heard in my life. There’s a lot more that could be done to improve the flow after the text analysis that I was not able to cover here, but if you still have questions or want to see more about Telegram Bots and Sentiment Analysis, let me know in the comments below or open a pull request on the project repository. Francesco is a traveler, developer, photographer, cook. Sometimes a serious person.
A sentimental AI can analyse your misunderstood text messages.
18
a-sentimental-ai-can-analyse-your-misunderstood-text-messages-156f03ef7f1
2018-03-30
2018-03-30 07:37:55
https://medium.com/s/story/a-sentimental-ai-can-analyse-your-misunderstood-text-messages-156f03ef7f1
false
748
Follow us to explore a new way of storytelling.
null
iampopin
null
I AM POP
hello@iampop.in
iampop
FACEBOOK MESSENGER,MARKETING,CHATBOTS,MUSIC BUSINESS,TECH
iampopin
Bots
bots
Bots
14,158
Francesco De Lisi
South East Asia, photographer, developer.
1b33291393ae
fdl
11
11
20,181,104
null
null
null
null
null
null
0
import time

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

browser = webdriver.Chrome()
browser.get('https://www.facebook.com/login/')

# Log in with your credentials.
email = browser.find_element_by_id('email')
email.send_keys('your-email@gmail.com')
password = browser.find_element_by_id('pass')
password.send_keys('your-password')
password.submit()

# Search public posts tagged #clarkuniversity.
base_url = u'https://www.facebook.com/search/str/'
query = u'%23clarkuniversity/stories-keyword/stories-public'
url = base_url + query
browser.get(url)
time.sleep(1)

# Scroll down to load more posts.
body = browser.find_element_by_tag_name('body')
for _ in range(20):
    body.send_keys(Keys.PAGE_DOWN)
    time.sleep(0.2)

posts = browser.find_elements_by_class_name('_5-jo')
for post in posts:
    print(post.text)

# Variant: any public post containing the string 'Clark University'.
base_url = u'https://www.facebook.com/search/str/'
query = u'clark+university/stories-keyword/stories-public?see_more_ref=eyJzaWQiOiIiLCJyZWYiOiJoZWFkZXJfc2VlX2FsbCJ9'
url = base_url + query

# Variant: reviews on Clark University's official page (different CSS class).
base_url = u'https://www.facebook.com/'
query = u'pg/ClarkUniversityWorcester/reviews/'
url = base_url + query
posts = browser.find_elements_by_class_name('_5pbx')
11
null
2017-11-28
2017-11-28 13:33:02
2017-11-28
2017-11-28 13:55:43
1
false
en
2017-11-28
2017-11-28 14:11:22
7
156ff5d00258
1.871698
2
0
0
Welcome back ;) This will be a short one.
5
Performing Sentimental Analysis on Twitter and Facebook (Part 1b — Data Extraction Facebook) Welcome back ;) This will be a short one. In part 1a we extracted tweets from Twitter. In this part we will extract posts and reviews from Facebook. As you can see, Facebook requires you to log in with your account in order to see the posts, unlike Twitter, which allows you to do so anonymously. So basically I said to Python: I want you to use the tool Selenium to open up facebook.com; this is my email (your-email@gmail.com) and password (your-password); log in. Afterwards, use your “Webdriver” function to extract 20 posts (for _ in range(20)) with #clarkuniversity. Voilà. Now you have a bunch of public posts, and you can read my posts on data cleaning and sentimental analysis and work with the data. Back to the project: I did look for other pages (student-run, institutional, and associated organizations), as well as other hashtags. Nothing really worked, so I got creative and extracted data from 2 additional sources: any post that has the string ‘Clark University’, and people’s reviews from Clark University’s official Facebook page. These are the codes: For the string ‘Clark University’: everything else is the same. For people’s reviews: for this one the CSS class name is different. There are 207 reviews, many of which are lengthy, while others only have the ratings, but I am sure they are all from parents, students, alumni, and other people who are associated with the school, so they are reliable. That’s literally it. After you have extracted the data, follow: Step 1: Data Extraction Step 2: Data Cleaning Step 3: Data Cleaning and Sentimental Analysis I would say the hardest step of this extraction process is finding the CSS class that the posts are located in. To do this, just highlight a post and inspect it. The classes did take me a while to find, but after that it should be a smooth ride; just sit back and watch Selenium open a new browser, log in using your email and password, and retrieve all the posts. Cheers!
Performing Sentimental Analysis on Twitter and Facebook (Part 1b— Data Extraction Facebook)
2
performing-sentimental-analysis-on-twitter-and-facebook-part-1b-data-extraction-facebook-156ff5d00258
2018-04-10
2018-04-10 04:57:31
https://medium.com/s/story/performing-sentimental-analysis-on-twitter-and-facebook-part-1b-data-extraction-facebook-156ff5d00258
false
443
null
null
null
null
null
null
null
null
null
Python
python
Python
20,142
Hung "William" Mai
null
3ce8a4a19196
williamai
32
17
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-12
2017-12-12 16:07:21
2017-12-12
2017-12-12 17:00:45
1
false
en
2017-12-12
2017-12-12 17:00:45
3
1570d90fb385
2.203774
0
0
0
From seamlessly streaming movies and music, controlling your house’s lights and heating, and getting the latest news, to ordering your groceries…
2
The Voice-First landscape From seamlessly streaming movies and music, controlling your house’s lights and heating, and getting the latest news, to ordering your groceries, voice-first platforms are revolutionising the way we interact with technology. “A voice-first device is an always-on, intelligent piece of hardware where the primary interface is voice, both input and output.” (Marchick, 2017) In 2016, Amazon Echo and Google Home became increasingly mainstream and their popularity has only been increasing, with more than 20.5 million Amazon Echos and 4.6 million Google Home devices sold in total as of the third quarter of 2017 (Kinsella, 2017). Although Amazon and Google are the only two companies currently with dedicated voice-first devices, Microsoft and Apple are also in the game with their respective AI assistants. These four companies are now competing with one another to take over the market. However, these AI assistants are starting to specialize, as has been seen this year, which means they are all happy to co-exist, with each company focusing on its domain of expertise: Google with data mining, emails and calendaring; Amazon with commerce and entertainment; Microsoft with gaming, emails and calendaring; Apple with on-the-go AI assistance via AirPods and entertainment with Apple TV. Third-party applications The popularity of AI assistants and dedicated voice-first devices has attracted a large number of application developers, who are continuously creating more and more third-party applications. Although not all of the aforementioned companies support third-party applications (Apple is still not allowing third-party developers to create applications for Siri), the number of skills/voice apps is rapidly increasing. With the large number of skills/voice apps available on the different AI assistants, it can be expected that the adoption rate of these voice-first platforms will increase. “In essence, third-party applications are the innovation arm for these platforms.” (Marchick, 2017) Image taken from (Perez, 2017) Due to the high quantity of new applications available for these AI assistants, users have reported that a large majority of applications are not being used and are not at the expected quality standard. According to VoiceLabs’ 2017 report, only 31% of all third-party applications had more than one user review. What will the future hold With lacklustre application quality, it can be expected that companies such as Amazon and Google will implement new and better ways for third parties to develop applications, in the hope of promoting high-quality output. Additionally, it can be expected that one of the major voice-first platforms will implement a successful way of monetising applications, as this has been a major gap in the success of the ecosystem. References: Marchick, A. (2017). The 2017 Voice Report — A Cloud Guru. [online] A Cloud Guru. Available at: https://read.acloud.guru/the-2017-voice-report-de88123d5c06 [Accessed 12 Dec. 2017]. Perez, S. (2017). Amazon’s Alexa passes 15,000 skills, up from 10,000 in February. [online] TechCrunch. Available at: https://techcrunch.com/2017/07/03/amazons-alexa-passes-15000-skills-up-from-10000-in-february/ [Accessed 12 Dec. 2017]. Kinsella, B. (2017). Bezos Says More Than 20 Million Amazon Alexa Devices Sold — Voicebot. [online] Voicebot. Available at: https://www.voicebot.ai/2017/10/27/bezos-says-20-million-amazon-alexa-devices-sold/ [Accessed 12 Dec. 2017].
The Voice-First landscape
0
the-voice-first-landscape-1570d90fb385
2018-05-14
2018-05-14 09:47:03
https://medium.com/s/story/the-voice-first-landscape-1570d90fb385
false
531
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Hadrien de Vaucleroy
null
b25dca57af43
hadriendevaucleroy
27
29
20,181,104
null
null
null
null
null
null
0
By forcing o to be the center word and c the context word, this takes the same form as the Skipgram optimization function. Strictly speaking the symbols should be reversed, but here we quote the paper as-is. For example, if a user who writes one hobby per field is represented as ["tennis", "sports", "karaoke"], some users instead enter patterns like ["tennis, soccer, baseball", "sports, exercise", "*karaoke*cosplay*movies*"] or ["I like tennis.", "I often play sports.", "I sometimes go to karaoke on weekends."]. Omitted here, but reducing dimensionality with t-SNE instead produced quite different graphs for each model; quantizing word vectors appears to be an operation that changes distances while preserving the rough positional relationships. The hobbies of popular users are not mostly alcohol, nor biased toward the purple region, but averaging them yields this vector. Please note that this result does not guarantee that hobbies close to this vector will make you popular.
4
5c512e0ddc60
2018-06-14
2018-06-14 09:14:33
2018-07-10
2018-07-10 09:30:55
16
false
ja
2018-07-11
2018-07-11 06:44:56
4
157102bcfa05
10
13
0
0
We tried a method that makes Word2Vec more accurate and 32x lighter on Pairs data
5
Testing a Method That Makes Word2Vec More Accurate and 32x Lighter on Pairs Data Introduction Hello, nice to meet you. I'm Kobayashi from the BI team. I devote my days to Fortnite and Splatoon. Our BI team's job is to analyze all sorts of metrics and to make proposals using machine learning. Eureka also has an AI team that works with machine learning; the AI team is more technology-driven and machine-learning-centric than the BI team. Since the BI and AI teams overlap in many areas, we hold joint study sessions (regular reading groups on the latest papers posted to arXiv and elsewhere). In this article, I introduce the paper I chose for the last reading group: word2bits, a technique that quantizes word2vec to shrink the size of its matrix elements (Maximilian Lam. Word2Bits — Quantized Word Vectors https://arxiv.org/abs/1803.05651). At the end of the article, we actually build models on Pairs data. You will pick up knowledge you can use in your future love life, so please read to the end. Abstract Word vectors require significant amounts of memory and storage, posing issues to resource limited devices like mobile phones and GPUs. We show that high quality quantized word vectors using 1–2 bits per parameter can be learned by introducing a quantization function into Word2Vec. We furthermore show that training with the quantization function acts as a regularizer. We train word vectors on English Wikipedia (2017) and evaluate them on standard word similarity and analogy tasks and on question answering (SQuAD). Our quantized word vectors not only take 8–16x less space than full precision (32 bit) word vectors but also outperform them on word similarity tasks and question answering. Source: https://arxiv.org/abs/1803.05651 A rough summary: each element of a word2vec vector is represented in 32 bits, so the total is vocabulary size * dimensions * 32 bits, which demands a lot of memory on mobile devices and the like. Quantizing each element down to 1–2 bits makes the vectors much lighter, and accuracy actually improved on similarity and Q&A tasks. Model Existing method (word2vec) There are two algorithms for building word2vec vectors, CBOW and Skipgram; this work builds on CBOW with Negative Sampling, a faster variant of CBOW. Below is the objective function of CBOW with Negative Sampling. The optimization repeats the steps above. Proposed method (word2bits) The word2bits technique introduced here incorporates quantization into the word2vec objective above. Inserting the quantization step into the objective gives the following, where the quantization function Q(x) is defined as shown below. Incidentally, these are the values the paper's author found to work best after much trial and error. Q(x) is a discrete function: its derivative is undefined at some points and 0 everywhere else, so we define it as follows (this is known as Hinton's straight-through estimator). Results Upper table: accuracy improved on similarity tasks and dropped on analogy tasks (Accuracy). Lower table: accuracy improved on question-answering tasks (F1-Score). Also, according to the graphs below: as the number of epochs (training iterations) increases, the 32-bit (conventional) model overfits (Accuracy drops), and as the number of dimensions increases, the 32-bit (conventional) model also overfits. Trying it on Pairs data Model We built models from the data of users registered on Pairs. Pairs provides a variety of fields for self-expression so users can match with partners who share their values. There are fixed profile fields, communities, and free-text fields such as the self-introduction and hobbies; this time we built the model from the hobby fields. On Pairs, the hobby field allows three free-text entries, but some users cram multiple hobbies into one field with all sorts of delimiters, and others write whole sentences. To keep the model simple, we extracted and used only the nouns contained in the hobbies. We set the parameters as follows and trained the word vectors: - dimensions: 100 - window size: 20 - negative samples: 5 - epochs: 5 - minimum count: 5 Results We reduced the vectors of the 50 most frequent words with PCA and visualized them. To make the clusters visible, we took the existing method's vectors after PCA and colored them by splitting them into four groups with k-means. Existing method 1-bit quantization 2-bit quantization Looking at these, the relative positions hardly change and the clusters are largely preserved. Roughly, red clusters around grown-up interests, green around the outdoors, blue around solo activities, and purple around fashionable pursuits. Also, looking at words close to "game" (from top: existing method, 1-bit quantization, 2-bit quantization), the results are fairly similar and all feel reasonably convincing. The paper reports that analogy tasks (estimating similar words via vector arithmetic) perform poorly; for example, looking at "game" - "sports" (from top: existing method, 1-bit quantization, 2-bit quantization), it is hard to judge correctness since the arithmetic has no clearly defined answer, but each result differs slightly, and the 1-bit quantized model differs the most. Bonus For reference, we picked the 100 most popular users at the time of writing (ranked by likes received over the past 30 days), averaged the word vectors of the words in each user's hobbies to get a per-user hobby vector, and plotted both the average of the 100 users' hobby vectors and the hobby vector of the #1 user overlaid on the earlier plot. The hobby vectors of the 100 men overlap and are hard to distinguish, but they almost coincide with "alcohol." I, too, will take up drinking as a hobby to become more popular. In Eureka's BI team you can try state-of-the-art techniques and methods on real data like this. We are always looking for new members, so if you are even slightly interested, please apply via the Wantedly link below!
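The post's formula images do not survive here, so as a hedged illustration of the method section above, here is a minimal PyTorch sketch of the two ideas the article names: a 1-bit quantization function Q(x) and Hinton's straight-through estimator that lets gradients pass through the discrete step. The +-1/3 constant follows the word2bits paper's 1-bit setting; treat the exact values and the toy embedding table as assumptions, not the article's code.

import torch

class QuantizeSTE(torch.autograd.Function):
    """1-bit quantizer trained with the straight-through estimator."""
    @staticmethod
    def forward(ctx, x):
        # Q(x): +1/3 where x >= 0, -1/3 elsewhere (the paper's 1-bit setting)
        return torch.where(x >= 0, torch.full_like(x, 1 / 3), torch.full_like(x, -1 / 3))

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: treat Q as the identity in the backward pass
        return grad_output

emb = torch.randn(5, 4, requires_grad=True)  # toy 5-word, 4-dim embedding table
quantized = QuantizeSTE.apply(emb)
quantized.sum().backward()
print(quantized)   # every entry is +1/3 or -1/3
print(emb.grad)    # all ones: gradients flowed through the discrete Q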
Hiring business analysts to turn Pairs' big data into future added value, by Eureka, Inc. Job and recruiting information for engineers at Eureka, Inc. On Wantedly you can learn what motivates the team and who you would be working with. This position "contributes to, and realizes, decisions that grow the service through data"…www.wantedly.com Please also see the interviews with BI team lead Tetsumoto and AI team lead Usui. A look at the collaboration between the BI team and the marketing team behind the growth of Pairs: the secret to their success is open communication between teams | Web Expert Draft Report Having surpassed a cumulative 7 million members across Japan, Taiwan and Korea, in a domestic online dating market that keeps expanding year after year, one of the largest online dating services in Japan...webexpert-draft.jp A new team launches at Eureka! AI team lead Usui on Eureka's appeal and the future of engineers | eureka Member's Interview This time we introduce the Eureka story of Usui, who leads the newly launched AI team. After completing a master's degree in science (physics), and with almost no programming experience, he joined an independent SI company as a new graduate, working on plant monitoring systems and intensive care units…www.wantedly.com
Testing a Method That Makes Word2Vec More Accurate and 32x Lighter on Pairs Data
183
pairs-word2vec-157102bcfa05
2018-07-17
2018-07-17 07:37:19
https://medium.com/s/story/pairs-word2vec-157102bcfa05
false
253
Sharing information about Eureka's development technology.
null
eureka.inc
null
Eureka Engineering
cto-office@eure.jp
eureka-engineering
LOVETECH,EUREKA,GO,TECHNOLOGY,ENGINEERING
eureka_inc
Eureka Bi
eureka-bi
Eureka Bi
2
Mizuki Kobayashi
eureka, Inc. BI team / analysis / machine learning
7f2b18addaf5
mizkino
30
32
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-14
2017-09-14 14:20:56
2017-09-27
2017-09-27 12:54:29
8
false
en
2017-09-29
2017-09-29 13:53:26
10
157168eeae7e
6.171069
14
1
0
Greetings This is hopefully the easiest implementation write-up you’ve seen for NEAT AI in Unity. Neural Networks are a broad topic and can…
5
Learn How to Implement NEAT AI in Unity Greetings This is hopefully the easiest implementation write-up you've seen for NEAT AI in Unity. Neural networks are a broad topic and can seem daunting; however, using them doesn't need to be difficult. In case you don't know what a neural network is, in simple terms it is a function that takes a number of inputs and produces a number of outputs after doing work on the values. Grabbed from our favorite guide: Hacker's guide to Neural Networks Chapter 1: Real-valued Circuits In my opinion, the best way to think of Neural Networks is as real-valued circuits, where real values (instead of boolean values {0,1}) "flow" along edges and interact in gates. — Andrej Karpathy (Hacker's guide to Neural Networks) What are we doing here? We are going to show you how to use a neural net called NeuroEvolution of Augmenting Topologies (NEAT) inside of Unity3D. In the end, you can expect to have smoothly bobbing cubes like the ones you see in the gif below. What NEAT does Over the course of many evolution cycles, NEAT determines the most efficient balance between weights and structure. If you want to learn more about the weights of neural nets, check out Hacker's guide to Neural Networks; for how NEAT alters structure, see Kenneth Stanley's paper (he created the NEAT method). Who is this for? This is for people who know how to program using C# and who use Unity. You can still follow along if you don't know either of those, but it might take you a little while longer because you will need to look up other programming information. Which is awesome, though, because that's totally what I would be doing too. Downloads Download the repo Let's get this party started What we have now is the foundation needed to build some very cool artificial intelligence. When we started this project, we just wanted to do something that was a simple baseline to get our hands dirty. So, if you're ready, let's get to it. You can work in the current scene and just follow along, or create a new one to get a better feel for putting this together. Create smartUnit Create a Cube GameObject named smartUnit Create a folder called Prefabs Drag smartUnit into the prefabs directory to create a prefab Delete smartUnit from the hierarchy Add a RigidBody component (using the defaults is okay for this one) Add Controller to smartUnit Add a new script called LocomotionController Attach the new script to your smartUnit Open the script to edit it Edit LocomotionController.cs Add a Target Create a new Sphere GameObject called Target. You can add a color to it if you like (this can make it easier to visually track) Position the Target on the floor, but off in a corner, so that your smartUnits don't spawn directly on top of the target, and so they have something to achieve to improve their fitness score Drag your Target into the prefabs directory Click on the smartUnit to view its components in the inspector Drag your new Target prefab onto the "Target" field of the smartUnit Locomotion Controller component. NOTE: you cannot drag the Target GameObject from the hierarchy to this component because the component is on a prefab. That's why we are making the Target a prefab as well It should look like this when you've done it correctly You can mess around with the Force on this controller to see what works best for you. Ours is 5 here, but you can try different things out to see if you can get more interesting results. :-)
Update Number of Inputs and Outputs We need to ensure that the Optimizer.cs class knows the number of inputs and outputs we intend to use. Optimizer.cs is located at Assets > 3rd Party > UnityNEAT > SharpNEAT Look at LocomotionController.cs. In the FixedUpdate method, we can count the number of inputArr and outputArr entries we have. Respectively, there are 3 inputs and 5 outputs. In Optimizer.cs we update NUM_INPUTS and NUM_OUTPUTS accordingly Ensure that you update the input and output counts anytime you change the actual inputs/outputs you are going to use for different training scenarios Optimizer.cs Create Evaluator Create an empty GameObject named Evaluator Attach the Optimizer script to the Evaluator as a component On the Optimizer component, update the following values as a baseline Create Floor and Walls Add a GameObject for a floor that is 100x, 1y, 100z (we just used cubes but you can use whatever you want) Add GameObjects for walls around your perimeter. Ours are 10 units high and enclose the perimeter completely (we used cubes here as well but you can use whatever you want) Position the Target and smartUnit on the floor and make sure the Target is somewhere off in a corner so the smartUnits have something to achieve, in order to increase their fitness score Make sure that once you have repositioned the Target and smartUnit you drag each back to its respective prefab to update the prefabs. Once you've done that, delete the smartUnit from the hierarchy. Make sure to keep the prefab in the prefab folder. We just want to make sure those are instantiated as prefabs so as not to have a random GameObject floating around the scene (this is a still photo) We made the walls partially opaque for easy side viewing. Check out all of the failed attempts too lol. We tracked the changes made to each trial to make sure we never tried the same thing more than once. Hit play and click the "Start EA" button to start training After much trial and error, we were able to figure out how to get the cubes to bounce within about 10 generations, which is very few. This number is likely to vary for every person and trial but should not be too far off. Watch the generation counter on the bottom left of the "Game" tab. Once it hits 5 or 6, select the Target GameObject, toggle off "Is Tracked" and watch the cubes. If it looks like they are all bouncing erratically around the floor, you should be good. If it looks like 75% or less are bouncing around, you have two options. (NOTE: you can click on the Evaluator and adjust the frame rate (FPS) to something like 25 or 10 to see more clearly how the smartUnits are behaving.) Toggle on "Is Tracked" for the Target and continue training for 5–10 more generations, then toggle "Is Tracked" to see how they are doing — rinse and repeat every 5 generations up to 20 or 30 generations. Click the "Stop EA" button; then, once the cubes are automatically cleared from the screen, click the "Run Best" button to see what the best cube does. Hopefully, it bounces nicely! Otherwise, restart the game and train some more. Watch all the way through the training to the bouncing cube that was trained Congratulations! That's all there is to it: you have learned how to create a NEAT neural network in Unity that learns how to move a GameObject using its own unique form of locomotion. You also now have the starting framework to create more complex learning scenarios and outcomes. Congratulations! So what can be done next with NEAT?
It depends on whether you want to create a game, something for personal productivity, or something for commercial enterprise use. NEAT has been used to create neural nets that play games such as Mario with high accuracy and efficiency, and to teach 3D characters bipedal locomotion; it might also be used to train AR/VR people or animals for us to interact with. Taken a step further, this could be used to train robots or autonomous vehicles within the safety of a 3D physics engine such as Unity. Adding other algorithms to this one can help you achieve more elaborate plans. What's Next? We are considering creating another part to this article that includes the Boids algorithm, which will cause the cubes to travel from location to destination as a flock. From there we may possibly add in some pathfinding. If this is something you'd like to see, or if you have another idea for an article, please follow us, clap, and leave a comment to that effect below. Boids Algorithm Using the same bouncing-cubes model, adding in the Boids algorithm
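To make the input/output contract from the tutorial concrete outside Unity, here is a hypothetical, minimal Python sketch: a fixed 3-input, 5-output network (matching NUM_INPUTS and NUM_OUTPUTS above) whose weights are tuned by a crude mutate-and-keep-the-best loop against a toy fitness score. This is not UnityNEAT's API, and real NEAT also evolves the network topology, which this sketch omits; every name and number here is illustrative.

import random

N_IN, N_OUT = 3, 5  # must match NUM_INPUTS / NUM_OUTPUTS in Optimizer.cs

def activate(weights, inputs):
    # One fully connected layer with clamping, as a stand-in topology
    return [max(-1.0, min(1.0, sum(w * x for w, x in zip(ws, inputs))))
            for ws in weights]

def fitness(weights):
    # Toy environment: the agent starts 10 units from a target and the first
    # output nudges it forward each step; closer at the end = higher fitness.
    dist = 10.0
    state = [dist / 10.0, 0.0, 1.0]      # 3 inputs, e.g. normalized sensors
    for _ in range(20):
        out = activate(weights, state)    # 5 outputs, e.g. forces to apply
        dist -= out[0]                    # pretend output 0 moves us forward
        state = [dist / 10.0, out[1], 1.0]
    return -abs(dist)

best = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_OUT)]
for gen in range(200):                    # crude hill-climbing "evolution"
    child = [[w + random.gauss(0, 0.1) for w in ws] for ws in best]
    if fitness(child) > fitness(best):
        best = child
print('best fitness:', fitness(best))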
Learn How to Implement NEAT AI in Unity
54
learn-how-to-implement-neat-ai-in-unity-157168eeae7e
2018-04-25
2018-04-25 19:06:34
https://medium.com/s/story/learn-how-to-implement-neat-ai-in-unity-157168eeae7e
false
1,335
null
null
null
null
null
null
null
null
null
Unity
unity
Unity
3,375
Holographic Interfaces
Holographic Interfaces is a boutique research, design & development shop specializing in Augmented and Mixed Reality experiences.
ade2367eb5ba
HolographicInterfaces
84
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-11
2018-07-11 15:58:45
2018-07-11
2018-07-11 16:05:32
1
false
en
2018-07-11
2018-07-11 16:09:14
6
157175182088
0.89434
0
0
0
FTEC is an ecosystem of intelligent services and neural networks for conducting effective trading activities on cryptocurrency markets
5
FTEC (FTEC) Artificial Intelligence Trading Revolution. TOKEN SALE IS LIVE! FTEC is an ecosystem of intelligent services and neural networks for conducting effective trading activities on cryptocurrency markets First Trading Ecosystem Artificial Intelligence, Cryptocurrency, Investment, Platform, Software It is an ecosystem of intelligent services and neural networks for conducting effective trading activities on cryptocurrency markets. The idea behind FTEC is very clear: to create a holistic ecosystem that contains all the necessary tools, based on AI and neural networks, for users with any level of experience and knowledge in the field of cryptocurrencies. The team already has a working product that has been on the market for almost a year and trades successfully for 10k+ active members. In the near future this product will become part of the whole FTEC ecosystem. The team offers a complex of 15 original solutions for: Boosting trading efficiency. Saving a trader's time. Receiving the latest trends in the industry. Improving users' trading strategies. Minimizing the risks of trading activity. Studying the specifics of crypto trading. Entrust your trading activities to FTEC and sleep well. TOKEN SALE IS LIVE HERE!
FTEC (FTEC) Artificial Intelligence Trading Revolution. TOKEN SALE IS LIVE!
0
ftec-ftec-artificial-intelligence-trading-revolution-token-sale-is-live-157175182088
2018-07-11
2018-07-11 16:09:14
https://medium.com/s/story/ftec-ftec-artificial-intelligence-trading-revolution-token-sale-is-live-157175182088
false
184
null
null
null
null
null
null
null
null
null
Cryptocurrency
cryptocurrency
Cryptocurrency
159,278
Cryptocurrency news
Analyst, latest news, airdrop, bounty
25efca94420
freetokencryptobounty
38
11
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-08
2018-07-08 12:45:30
2018-06-28
2018-06-28 16:10:15
0
false
en
2018-07-08
2018-07-08 12:47:27
3
15717ad4e1fe
1.554717
0
0
0
The world is still sceptical of robots in our homes, workplaces and war zones. But, an increasing number of robotic inventions are…
4
Robots' work: Let's bid adieu to dull, dirty and dangerous jobs The world is still sceptical of robots in our homes, workplaces and war zones. But an increasing number of robotic inventions are continually saving time, resources and lives around the world. In other words, saving us from dull, dirty and dangerous jobs. Robots are built for a diverse range of functions and purposes. Industrial robots, service robots and military robots vary in their utility and scope of operations. More often than not, they replace or assist humans in the most mundane or life-threatening jobs. The advancement in the field of robotics has been such that we already have robots helping us out, doing all the dull, dirty and dangerous jobs for humans. Some of them are really good at their job and have already proven to be a huge boon, especially in war zones and tricky rescue operations with very little margin for error. Let's talk about getting rid of dull, dirty and dangerous jobs: One such example is Rovver X. Built by EnviroSight, the robot's main purpose is sewer reconnaissance, meaning humans are no longer required to go down into the sewers to fix issues. Autonomous cars have created plenty of curiosity; we all know this fact very well. A bunch of tech giants, including the likes of Google, Facebook, Uber and others, have engaged in a sort of tech race, trying to compete against one another. That being said, witnessing fleets of self-driving cars on our roads is still a far-fetched dream. However, small robotic trucks have been buzzing through minefields. Called the "HDMAS," these smart trucks have had quite a high success rate; the advancement of sensors and GPS has changed the whole game. Another area where robots are proving capable of replacing humans is the high-risk job of firefighting. SAFFiR, the firefighter bot, is built to extinguish fires and carry out rescue operations. The bot can function in high temperatures and support firefighters during rescues at times of fire breakouts. Dull, dirty and dangerous: the robots are proving to be more than useful when we need them the most. Research scientists and industries are welcoming robotic innovations and inventions to gear up performance and productivity. It's time we were thankful for the limitless possibilities that the age of robotics brings with it. Originally published at mitrarobot.com on June 28, 2018.
Robots’ work: Let’s bid adieu to dull, dirty and dangerous jobs
0
robots-work-let-s-bid-adieu-to-dull-dirty-and-dangerous-jobs-15717ad4e1fe
2018-07-08
2018-07-08 12:47:27
https://medium.com/s/story/robots-work-let-s-bid-adieu-to-dull-dirty-and-dangerous-jobs-15717ad4e1fe
false
412
null
null
null
null
null
null
null
null
null
Robotics
robotics
Robotics
9,103
Veeran Rajendiran
null
fda6e19ddd3e
RVeeran
30
54
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-19
2017-11-19 23:10:33
2017-11-20
2017-11-20 12:01:01
2
false
en
2017-11-20
2017-11-20 12:01:01
2
1572b0949932
1.873899
6
1
0
A musing on art, design, technology and visual science
5
Visual AIs for 3D Data A musing on art, design, technology and visual science When we see a new image, we process it based on images and patterns we already know. That's what the body of visual science explains: there is still no way to quantify what is present, only what is seen (and interpreted) by the eye and brain. This works well for us humans, and it is how we have come to understand the world around us. Beyond simple vision, this effect is used to some degree in photography and now in design. Skeuomorphism has achieved useful results in the design of digital calculators and calendars, which model the physical products they replace. These products are in turn easier to use because we have seen the likes of them before. Capturing images is, however, going through significant change with 3D scanning. Towards the end of my Masters study, I volunteered at the McMaster Museum of Art, scanning Classical artifacts that are part of the museum's collection. This is not unique to McMaster; in July, I visited the Fralin Museum of Art in Virginia, which is doing the same. Converting these 3D artifacts to 3D images is an interesting way to preserve an artifact forever and even recreate it for teaching Art, Archaeology or Classics. But what I discovered was that, like any machine, the scanner could not tell one surface from another; it had no context. Unlike humans capturing images, there was no body of data to run the new captures by. Thus, after scanning, I had to clean the image of other surfaces and apply textures, finally completing the scan. 3D scan of terracotta artifact [you can see how the scan captures the physical markings very well] This could be easier, and I suppose (like any other idea I have) someone else is already thinking about it. A Visual AI that can identify new 3D image-captures based on pre-existing 3D data and detect depth differences could greatly improve 3D scanning. But it doesn't just stop there. When designers conceive new products and eventually move them to CAD (a 3D model), there exists the risk of contravening an existing design patent. A Visual AI could help designers and engineers avoid this by highlighting these issues while the concept is still in-house, without any prototyping or testing done. NB: I wrote this on a moving train so I apologize for any errors.
Visual AIs for 3D Data
25
visual-ais-for-3d-data-1572b0949932
2018-03-17
2018-03-17 07:52:36
https://medium.com/s/story/visual-ais-for-3d-data-1572b0949932
false
395
null
null
null
null
null
null
null
null
null
Design
design
Design
186,228
Chuma Asuzu
Designer & Engineer, mostly writing about design and (hardware) tech in Africa.
b5893a679596
unibrow
207
139
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-28
2018-06-28 14:04:30
2018-06-28
2018-06-28 14:14:44
3
false
en
2018-06-28
2018-06-28 14:14:44
1
1573373f3c0d
4.885849
0
0
0
When I received my first adjunct lecturer position, it was completely out of the blue. I was asked by a friend to cover a couple of…
3
How to Manage Your Time as an Adjunct Instructor (to Make the Best Hourly Wage) When I received my first adjunct lecturer position, it was completely out of the blue. I was asked by a friend to cover a couple of sections of Basic Drawing and I said yes immediately. I didn't ask how much it paid, nor did I care. I was just happy to have a job. I was a 29-year-old artist when this happened. I was asked to teach 2 fine art studio courses at a larger public institution in South Texas. I was paid about $3500 per course per semester, and I had to commute 2–2.5 hours a day to teach in a city other than where I lived. Each course met for 2.83 hours, 2 times per week, and I had about 16 students per course (4 too many). So how can you make the best hourly wage as an adjunct instructor, if you are given the rare privilege? As an adjunct instructor, you can't negotiate for a higher wage, but you can control the amount of time you spend on teaching duties. My advice on how to best manage your time as an adjunct instructor is tailored specifically to studio art classes, based on the following analysis; however, you can apply the same principles to any type of course. I found that there were 4 main categories of duties: teaching, preparation, grading, and other (email, meetings, etc.). I analyzed the time I spent on various duties and calculated my hourly wage based on these numbers. I did not include commute time in my analysis. I then compared my analysis of each of the courses I taught over the last two semesters (Fall 2017 and Spring 2018) at 3 different universities. Since I was a new instructor who had been exposed to the harsh and demoralizing work conditions of adjunct instructors, I knew that there was a lot of emotional labor involved that I was going to minimize. By minimizing the time I spent on certain tasks, I was able to complete my teaching duties more effectively, spend less time on unnecessary tasks, and avoid all the demoralizing and destabilizing effects of being an adjunct instructor. In my findings, most of my time was spent on teaching, then preparation, grading, and other. This was consistent throughout the 2 semesters I taught. But in the first semester (my very first job) I spent a lot more time teaching. This was because of the university's structure. The drawing classes that I taught at University 1 were the longest classes I taught (2.83 hours). Here are the breakdowns of the time spent on different teaching duties during 2 semesters at 3 different universities. University 1 (Fall 2017), 2 Courses University 2 (Spring 2018), 1 course University 3 (Spring 2018), 2 courses Here are 5 tips that will help you manage your time to make the highest hourly wage possible as an adjunct instructor. Know how much you are making and keep track of the time you spend on your teaching duties. Keep a journal, track your hours in your calendar, or use an app. If you track your time, you can see how much time you are spending on your job, and then you can calculate your hourly wage. Once you keep track of your time, you can then try to minimize the time you spend on certain tasks. Since you cannot alter the teaching schedule nor how much you are paid, you have to carefully control the amount of time you spend on teaching duties outside of class, specifically the preparation, grading, and other portions of your duties. This means that you should carefully set aside a limited amount of time to get all your preparation, grading, and other duties done for your course(s).
It is just as important to manage your teaching time (in class) as it is to manage your time spent working outside of class, meaning you should never waste time in class. Let's say you're teaching a studio art class and your students are diligently working after you have given out an assignment. Don't feel bored or as though you have nothing to do as you watch your students work. You should always spend that class time working on your other responsibilities. I usually graded, emailed, or prepared for the next class while my students were working. Use class time to meet with students and have individual critiques. This usually meant cancelling class or not meeting regularly in order to have scheduled meetings with individual students in my studio art classes. This works best specifically for art studio classes, because individual meetings are used to assess and evaluate your students' work and progress. Keep it casual and don't stress out about work. The most important advice for adjunct instructors is to not get caught up in the emotional labor of being a teacher (read: dealing with teenage bullshit). In other words, the individual needs of every student cannot be fully met and satisfied by you alone. Instead, you should focus your energies on getting your work done as efficiently as possible. I prided myself on this, and it helped me get through the rough times. I remained highly encouraging to my students, thoughtful, flexible, and friendly. This was the best that I could do… at least, for what I was being paid. Start by calculating your teaching time, since this is usually set in stone. Then, once you track your schedule and get your numbers calculated, aim for teaching to be 50% of total time worked, preparation/planning 35% or less, and everything else less than the preparation/planning share. These figures will vary greatly according to your class and university. For example, at University 2, I taught one course that met for about 1.5 hours per day, 2 times per week. I calculated the total teaching time (for the semester) to be 52.5 hours. So I managed to keep my preparation time to 36 hours total by limiting my weekly preparation time to 1–2 hours per week, depending on what was needed. In actual practice, I managed to get a lot of the preparation/planning done during class. So in essence, I was working fewer hours by working more efficiently. This is what I mean by "manage your time in order to make the best hourly wage". This is just my personal experience and advice based on one school year of teaching art courses. For a detailed view of the time I spent working, the hourly wages I earned, and the approximations in my analysis, please see the link below. As mentioned earlier, the amount of time you spend teaching in class will depend on the university class structure and the class you are teaching. Please keep this in mind as you work through managing your time. Best wishes as you navigate the world of adjuncting. https://dataqueery.wordpress.com/2018/06/05/9/
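As a quick sketch of the arithmetic described above: the $3,500-per-course pay comes from the author's first position, the 52.5 teaching hours and 36 preparation hours echo the University 2 example, and the grading and other hours are hypothetical placeholders.

pay_per_course = 3500.0  # from the article's first position; adjust to yours

hours = {
    'teaching': 52.5,      # set by the class schedule (University 2 example)
    'preparation': 36.0,   # capped at 1-2 hours/week in the article
    'grading': 15.0,       # hypothetical placeholder
    'other': 8.0,          # email, meetings, etc. (hypothetical placeholder)
}

total = sum(hours.values())
print(f'total hours worked: {total:.1f}')
print(f'effective hourly wage: ${pay_per_course / total:.2f}')
for duty, h in hours.items():
    print(f'{duty}: {h / total:.0%} of total time')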
How to Manage Your Time as an Adjunct Instructor (to Make the Best Hourly Wage)
0
how-to-manage-your-time-as-an-adjunct-instructor-to-make-the-best-hourly-wage-1573373f3c0d
2018-07-03
2018-07-03 23:35:12
https://medium.com/s/story/how-to-manage-your-time-as-an-adjunct-instructor-to-make-the-best-hourly-wage-1573373f3c0d
false
1,149
null
null
null
null
null
null
null
null
null
Education
education
Education
211,342
Jesse Ruiz
Artist, Queer, San Antonio, TX
c9d02c6dbd3a
jjr8888
1
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-05
2018-07-05 08:12:24
2018-07-08
2018-07-08 15:49:12
1
false
zh-Hant
2018-07-08
2018-07-08 15:49:12
0
1573c4fc275c
0.354717
0
0
0
This series explains the "Advanced Machine Learning Specialization" courses on the Coursera platform
4
[Coursera] Advanced Machine Learning Specialization — Info This series explains the "Advanced Machine Learning Specialization" courses on the Coursera platform. This post covers the outline of the series and the main content of these articles, in three parts: 1/ Coursera 2/ Classes Information 3/ This Series Target 1/ Coursera: Knowledge changes fast these days, and many online platforms have risen in recent years; Coursera is arguably the grandfather of them all. The platform was started by two Stanford professors: Andrew Ng, whom you often hear about, and Daphne Koller. The platform now runs as its own company with its own CEO. Every course on the platform offers a seven-day free trial, but for a specialization the seven-day free trial covers the entire series. (Note: the free trial requires binding a credit card; remember to cancel if you don't want to continue, otherwise you will be charged!) Fees depend on the course; you usually buy one month of access to the lectures and assignments at a time, which means the total cost depends directly on how quickly you finish. 2/ Classes Information This specialization has seven courses, with a suggested order of 4 => 2 => 1. That is, at the start you can open the first four courses and take them together; after finishing the third course you can open the fifth, and after finishing the fourth course you can take the fifth (note: I said after finishing the fourth course, so not having finished the first does not block you). That said, I recommend finishing the first four courses before moving on, because the whole specialization is closely interrelated, and jumping ahead may be tough. The specialization costs US$49/month. I bound my card online, which incurs an extra foreign-transaction fee; you have to bind the card for the seven-day free trial, and you must cancel explicitly if you don't want to continue! 3/ This Series Target This series will walk through the specialization, mainly using interesting assignments and final projects from each course to explain the content and share my (bumpy) experience doing the homework. All courses in this series use Python in Jupyter notebooks, a relatively high-level language. The amount of theory will depend on the assignment; when a lesson demands heavy theory, I will also publish separate theory articles, but the main posts will focus on sharing each course's main content and outline.
[Coursera] Advanced Machine Learning Specialization — Info
0
coursera-advance-machine-learning-specializtion-info-1573c4fc275c
2018-07-08
2018-07-08 15:49:12
https://medium.com/s/story/coursera-advance-machine-learning-specializtion-info-1573c4fc275c
false
41
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Yu-Hsuan Chiang
null
3735d090fa7b
minomi016
0
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-12
2018-03-12 20:30:09
2017-12-15
2017-12-15 06:23:21
3
false
en
2018-03-12
2018-03-12 20:30:10
22
15766c001044
5.527358
1
0
0
null
5
UC Predictions 2018: Evolution or Revolution? The world of UC as we know it has changed drastically over the course of 2017. We’ve seen everything from new developments in virtual reality meeting rooms, to consolidation between some of the biggest brands in the world. With so much change in the air, we’re left to wonder what 2018 will bring, and how we’ll move forward into the Unified Communications environment of the future. According to analysts, the market should triple in size over the next five years or so, delivering an average value of $96 billion by the end of 2023. With predictions like that, it’s no wonder that countless enterprises and companies across the globe are beginning to explore new opportunities in the growing industry of UC. UC Predictions 2018: Evolution or Revolution? — Marketplace Review & Insights by Rob ScottTweet it I thought it was about time I gave my predictions on some of the buzzwords we’re likely to encounter in the year ahead. Is Collaboration the Key to CIO Success? The first thing worth noting about Unified Communications in 2017, is that it officially moved beyond audio and into a world where almost anything is possible. Today, more businesses than ever before are offering remote working and telecommuting options to their employees, to attract new millennial talent, and expand their brand reach. However, for those new business models to work, companies need a collaboration strategy that encompasses everything from standard voice, to video, messaging, and real-time connectivity. This is something that only some organisations have gotten their head around up to now, with about 39% of businesses complaining that they still don’t “collaborate” enough. In 2018, an effective collaboration strategy will be key to success for any CIO, which opens plenty of doors for UC vendors. In fact, in one study of 1400 executives, 86% felt that lack of collaboration was the reason for most workplace failures. As solutions like Microsoft Teams, Slack, Glip and Circuit continue to lead the way for innovation, it’s up to other vendors to catch up or fall behind on the trends. View more UC Insights UCaaS, CCaaS, CPaaS and more… Getting a handle on communication in 2018 is likely to be an activity that revolves around some iteration of cloud technology. While it’s possible that many companies will keep some of their tech on-premise, and opt for a hybrid model, there’s a good chance that many organisations will need to start considering the following options: UCaaS: Cloud UC is currently the default set-up of choice for small to mid-size businesses. Many mid-market organisations and enterprises might prefer a hybrid solution before they make the move fully to the cloud, but it’s clear to see that no matter where you are, or what you do, cloud technology is the future. CCaaS: Contact centres are becoming more crucial than ever in this era of customer experience. If you don’t have the right CCaaS strategy, you might struggle to stay ahead of the competition when it comes to serving your audience quickly, and effectively. Unfortunately, not all vendors in the UC world have a multi-tenanted cloud contact centre solution available. I expect to see vendors like Unify, Mitel, and Avaya addressing the issue in their portfolio soon. CPaaS: To establish the “ultimate” UC solution, many organisations have turned to make-your-own communication strategies with the help of CPaaS and web based APIs. 
I believe that the rise of this trend will continue in 2018, with new entrants making their mark in the sector. Twilio continues to be the market leader at this point, delivering next-level innovation that sets them apart from anyone else in the industry. Could Consolidation Change the UC World? One of the things we can see when we look at the range of UC options in the marketplace today is the fact that many vendors are now struggling to offer a complete end-to-end solution for their customers. Unless you have a basically unlimited budget, it's hard to make sure that you can be everything to everyone in the UC space. This might be why we've started to see so much consolidation in 2017, with the Cisco/BroadSoft deal emerging as perhaps the most well-known collaboration of the year. Mitel also managed to persuade ShoreTel to sell, and as we enter 2018, there's a good chance we'll see plenty of similar market-changing purchases on the horizon. I expect to see an IPO from Fuze sometime soon, and part of me is left to wonder whether we'll see continued growth in companies like 8x8 and RingCentral, or whether there's a chance that a sale could be on the cards when billion-dollar offers arrive in the post. Out with the Old, In with the New? For many vendors in the UC space, the year of 2017 has been a time of change, evolution, and growth. For instance, at the beginning of January 2017, Avaya was entering chapter 11 bankruptcy, searching for a way to reorganise and strengthen their portfolio. By the time December arrived, the company had already established a plan that allowed them to exit chapter 11 as a public company, ready for new developments in 2018. In 2018, I expect we'll continue to see plenty of changes. Vendor churn may continue to become more significant as customers and channel partners grow more aware of the move that industries are making away from legacy and on-premise solutions. We've even started to see the emergence of a new type of reseller, as the UC world brings on fewer tech gurus and salespeople, and more customer service agents and digital marketers. Additionally, we might see some withdrawal on the market too. The more we move into the future with things like cloud, IoT, AI, and so on, the more legacy brands will continue to struggle. Brands like Samsung and Panasonic are falling into obscurity with no UCaaS offering — and no signs of any cloud-based services on the horizon. Simplicity Continues to Be the Secret to Success Though it's hard to accurately predict the future, one thing does seem certain in 2018 — customers and partners will continue to search for simplicity from their vendors in an age of increasing complexity. Today's end-users want all the latest and greatest features in their UC strategy, but they don't want to know about the work that's happening behind the scenes. Single Pane UX solutions are likely to become more commonplace in 2018 as vendors recognise their customer's need for a straightforward, and streamlined solution. WebRTC will also be a must, as new, modern companies grow tired of the old-fashioned option of installing new UC software. Counter-intuitive interfaces simply won't survive in a marketplace built on a desire for instant gratification, constant scalability, and organisational agility. No-one wants to have to move between various programs just to keep their communications plan running efficiently. Today's consumers demand a UC system that's quick, efficient, and connected.
Striving for Security Of course, in this balance of complexity and simplicity, there must also be a strategy in place for reliable privacy and security measures too — particularly as GDPR is poised to cause quite the stir in 2018. Vendors will need to release formal statements that demonstrate they’re onboard with the latest regulations, and failure to communicate your position as a safety-first business could lose you the attention of your customers. As we push further into 2018, there’s always a chance that new technologies could make managing UX and security concerns simpler. For instance, there are currently conversations happening around the power of the Blockchain, and its ability to add true compliance to the cloud environment. Already, companies like Avaya are experimenting with these options. We can only wait and see how they come to be integrated into the UC world of tomorrow. Source: Rob’s Blog View more UC Insights
UC Predictions 2018: Evolution or Revolution?
1
uc-predictions-2018-evolution-or-revolution-15766c001044
2018-03-13
2018-03-13 12:28:51
https://medium.com/s/story/uc-predictions-2018-evolution-or-revolution-15766c001044
false
1,319
null
null
null
null
null
null
null
null
null
Avaya
avaya
Avaya
76
UC Today
Unified Communications Stories
bd51979d153c
uctoday
13
74
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-07
2018-07-07 08:02:48
2018-07-07
2018-07-07 08:14:55
3
false
en
2018-07-10
2018-07-10 16:02:06
0
1576a6416d1c
5.893396
1
0
0
Deep Learning is an application of Machine Learning (based on Neural Networks) that deals with deep neural nets and intricate algorithms…
3
Introduction to Deep Learning Deep learning is an application of machine learning (based on neural networks) that deals with deep neural nets and intricate algorithms inspired by the structure and function of the brain. It is a machine learning technique that trains computers to accomplish what comes naturally to humans: learning by example. As a key technology behind driverless cars, it enables them to identify a stop sign or differentiate a lamp-post from a pedestrian. It is also used for voice control in devices like TVs, tablets, phones, and hands-free speakers. Deep learning is getting popular, and for good reason: it is obtaining results that weren't achievable before. image source: fotolia.com With the help of deep learning, a computer model can learn to classify directly from sound, text, or pictures. Deep learning models can attain state-of-the-art accuracy, sometimes exceeding human-level performance. Models are trained on large sets of labeled data using neural network architectures. · Why do we need deep learning? 1. Feature extraction: Deep learning algorithms take large volumes of data as input. They then analyze the input to extract the features of an object and identify similar objects, avoiding the manual procedure of feature extraction. 2. Performs complex algorithms: Using deep learning we can deal with complex algorithms. 3. Processing huge amounts of data: It can work with enormous amounts of both structured and unstructured data. The more data (labeled or reference data) we have, the better the system will do. When dealing with such huge amounts of data we need to ensure good model performance, and so we need deep learning. Deep learning attains higher recognition accuracy than ever before. This helps consumer electronics meet user expectations, and it is essential for safety-critical applications such as driverless cars. Recent advances have progressed deep learning to the point that it outperforms humans in some tasks, like classifying objects in images. Though deep learning was introduced in the 1980s, there are two major reasons it has only recently become useful: 1. Deep learning requires huge amounts of labeled data. For example, driverless car development requires millions of images along with hours of video. 2. Deep learning also requires considerable computing power. High-performance GPUs have a parallel architecture well suited to deep learning. Combined with clusters or cloud computing, this lets development teams shorten the training time for a deep learning network. · Applications of Deep Learning Deep learning has applications in industries from automated driving to medical devices. 1. Automated driving: Automotive researchers use deep learning to automatically identify objects such as stop signs and traffic lights. Moreover, deep learning is used for detecting pedestrians, which can help reduce accidents. The best example is Google's cars: these automated cars are fed video of their surroundings and must determine whether there are obstacles or other cars, and whether they are driving in their lane. 2. Medical research: Deep learning is used in cancer research for automatic detection of cancer cells. Suppose we have an image of a patient and want to detect whether he or she is suffering from cancer. Then we need a cancer specialist for this job.
But we can't find a specialist in every hospital. If we use deep learning here, the system can detect the disease, or initial screening can be done quite easily and automatically, without waiting for a specialist. Teams at UCLA have built an advanced microscope that yields a high-dimensional data set used to train a deep learning application to identify cancer cells accurately. 3. Robotics: Deep learning is used in robotics to train robots to act like humans. Nowadays, robots are everywhere; there are knowledge-oriented as well as industrial robots. 4. Electronics: Automated hearing and speech translation use deep learning. For example, deep learning applications are used in home assistance devices that respond to our voice and know our preferences. 5. Machine translation: We have a lot of information these days, and sometimes it exists in only one language. It is quite tough for humans to translate every piece of information or every document into all possible languages. Suppose we travel to another country and come across a sign board, but we don't know what is written on it because we don't know the local language. We can use deep learning for this task: an application that uses deep learning can translate the information into our preferred language. 6. Industrial automation: Deep learning is used around heavy machinery to ensure worker safety by detecting whether people or objects are within the unsafe range of machines. 7. Aerospace and defense: Using deep learning, objects can be identified from satellites to locate safe or unsafe zones for troops. And there are many more applications of deep learning. · How Deep Learning Works Most deep learning methods use neural network architectures, which is why deep learning models are often called deep neural networks. The term "deep" generally refers to the number of hidden layers in the neural network. Traditional neural networks have 2–3 hidden layers, whereas deep networks may have as many as 150. Deep learning models are trained using large sets of labeled data and neural network architectures; they learn directly from the data without manual feature extraction. The convolutional neural network (CNN or ConvNet) is one of the most popular types of deep neural network. A CNN convolves learned features with input data and uses 2D convolutional layers, making the architecture well suited to processing 2D data such as images. CNNs eliminate manual feature extraction, so there is no need to hand-pick the features used to classify the images; the CNN extracts features directly from the images. The relevant features are learned while the network trains on a set of images. This automated feature extraction makes deep learning models highly accurate for computer vision tasks such as object classification. · Difference Between Machine Learning and Deep Learning Deep learning is a subset of machine learning. It uses neural networks and is suitable for dealing with large amounts of unstructured data, and since it uses neural networks, feature engineering is carried out automatically. image source: blog.aimagnifi.com A machine learning workflow starts with manual extraction (done by a data scientist) of significant features from pictures. These features are used to create a model, and the model then sorts the objects in the image using those features.
A deep learning workflow extracts the appropriate features automatically from images. Deep learning carries out "end-to-end learning": a network is given raw data and a task to perform, for example classification, and it learns how to accomplish this automatically. One major benefit of deep learning networks is that as the volume of data grows, they typically keep improving. What is a Neural Network? image source: data-flair.training Since we have discussed so much about deep learning, let us now talk a bit about neural networks, because deep learning uses them. As said above, deep learning deals with algorithms inspired by the structure and functioning of the human brain. The human brain works with the help of neurons; it contains millions of neurons interconnected with one another. An artificial neural network (also called a neural network) is a way of simulating the human brain. Neurons receive signals as input from other neurons or other parts of the body and, based on certain criteria, send signals to the next neurons. The smallest unit of any artificial neural network is the artificial neuron, which has a central unit that receives the input; if it is doing image processing, the inputs could be the pixel values. An ANN contains hundreds or thousands of artificial neurons, called processing units, interconnected by nodes. These comprise input and output units. The input units receive information in various forms and structures, and the neural network attempts to learn from the information presented in order to produce output. That's all for this article.
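To make the CNN description above concrete, here is a minimal sketch, assuming PyTorch (the article names no specific framework), of a two-layer convolutional network that maps a batch of grayscale images to class scores. The conv layers learn the features; nothing is hand-engineered.

import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    """Minimal CNN: two conv layers for feature extraction, then a linear classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # learn low-level features
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # learn higher-level features
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

net = TinyConvNet()
batch = torch.randn(8, 1, 28, 28)   # 8 grayscale 28x28 images
print(net(batch).shape)             # torch.Size([8, 10]): per-class scores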
Introduction to Deep Learning
1
introduction-to-deep-learning-1576a6416d1c
2018-07-11
2018-07-11 01:01:13
https://medium.com/s/story/introduction-to-deep-learning-1576a6416d1c
false
1,416
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Prerna Aditi
null
40dd30a4bce2
aditi22prerna
3
4
20,181,104
null
null
null
null
null
null
0
null
0
d1c0fb17ab3
2017-11-09
2017-11-09 18:44:48
2017-11-09
2017-11-09 20:15:16
3
false
en
2018-04-09
2018-04-09 13:49:00
11
1576c935dd65
2.334906
11
0
0
Manufacturers are innovating to get and stay competitive
5
Made in China? Try Made in Nigeria, Poland and the U.S. Manufacturers are innovating to get and stay competitive Rob Lambert / Unsplash China's sovereign wealth fund recently tapped Goldman Sachs to help invest as much as $5 billion in the US manufacturing industry. Large trends are influencing this decision. China is no longer the cheap place to manufacture it once was. Hourly manufacturing wages in China have risen by an annual average of 12% since 2001.[1] China is making big strides to improve productivity through advanced technology, but still has just 30 robots per 10,000 workers in manufacturing, compared with Japan's 323.[2] To catch up, China has been shifting its global investments from raw materials to manufacturing. Privately owned Chinese companies are making more than 150 investments a year in Africa's manufacturing sector, up from just two in 2000,[3] according to the Chinese Ministry of Commerce. But by 2020, the world's most competitive manufacturing economy won't be China — but the U.S., followed by China, Germany, Japan, and India, according to global manufacturing executives.[4] How are the rest of these countries making it happen? Strong investments in talent and technology. The Angle: Manufacturing companies have traditionally been slow to react to the advent of digital technologies like intelligent robots, Internet of Things, and artificial intelligence. To stay (or get) innovative, companies need to embrace new ways of thinking about manufacturing and operations — thus reducing downtime, improving process and product quality, and optimizing product development. Operations and equipment optimization in the factory setting can generate up to $3.7T of value in 2025,[5] according to the McKinsey Global Institute. The most competitive companies are those that innovate and find new insights at their plants. That's where Industry 4.0 and cognitive manufacturing — and their abilities to process mountains of data — come in. "Manufacturers are sitting on a goldmine of data," said Jiani Zhang, Watson IoT Director of Product Management. IBM's Model Factory Simulator "We hear from customers that their machines have been spitting out data for decades, but they didn't know what to do with it." Manufacturers use cameras for quality control, Zhang said, but "all that tells you is if a product is a 'pass' or 'not pass.' We can do so much more with those pictures." "What kind of defect was it? Does it need to be scrapped or can it be reworked? All the image data is stored and they are not taking advantage of it." Try our Model Factory simulator to see AI and data solutions in action. [1] https://www.economist.com/news/briefing/21646180-rising-chinese-wages-will-only-strengthen-asias-hold-manufacturing-tightening-grip [2] https://www.economist.com/news/briefing/21646180-rising-chinese-wages-will-only-strengthen-asias-hold-manufacturing-tightening-grip [3] https://hbr.org/2017/05/the-worlds-next-great-manufacturing-center [4] http://connect.dcat.org/blogs/patricia-van-arnum/2016/04/26/global-manufacturing-competitiveness-which-countries-top-the-rankings#.WgJ14xNSxGM [5] http://www.mckinsey.com/business-functions/business-technology/our-insights/the-internet-of-things-the-value-of-digitizing-the-physical-world
Made in China? Try Made in Nigeria, Poland and the U.S.
29
made-in-china-try-made-in-nigeria-poland-and-the-us-1576c935dd65
2018-04-09
2018-04-09 13:49:03
https://medium.com/s/story/made-in-china-try-made-in-nigeria-poland-and-the-us-1576c935dd65
false
473
Today’s industry news. Tomorrow’s reality.
null
IBMIndustries
null
IBMIndustrious
justine.jablonska@ibm.com
ibmindustrious
IBM,AI,IOT,DATA,TECHNOLOGY
IBMIndustries
Manufacturing
manufacturing
Manufacturing
7,752
IBM Industries
We transform businesses.
4a54e3dc7a5f
IBMindustries
240
122
20,181,104
null
null
null
null
null
null
0
from fastai.learner import *
import torchtext
from torchtext import vocab, data
from torchtext.datasets import language_modeling
from fastai.rnn_reg import *
from fastai.rnn_train import *
from fastai.nlp import *
from fastai.lm_rnn import *
import dill as pickle

path = 'Data/'
train_path = 'Train/'
val_path = 'Test/'

# Lower-case everything and tokenize with spaCy
TEXT = data.Field(lower=True, tokenize=spacy_tok)

bs = 64    # batch size
bptt = 70  # backpropagation-through-time sequence length

files = dict(train=train_path, validation=val_path, test=val_path)

md = LanguageModelData.from_text_files(path, TEXT, train=train_path,
                                       validation=val_path, test=val_path,
                                       bs=bs, bptt=bptt, min_freq=5)

em_sz = 60  # size of each embedding vector
nh = 300    # number of hidden activations per layer
nl = 3      # number of layers

# Adam optimizer
opt_fn = partial(optim.Adam, betas=(0.7, 0.99))

learner = md.get_model(opt_fn, em_sz, nh, nl, dropouti=0.10, dropout=0.10,
                       wdrop=0.2, dropoute=0.03, dropouth=0.10)
learner.reg_fn = partial(seq2seq_reg, alpha=2, beta=1)  # AR/TAR regularization
learner.clip = 0.3                                      # gradient clipping
learner.fit(3e-3, 4, wds=1e-6, cycle_len=1, cycle_mult=2)

# Prime the trained model with a seed string before sampling
m = learner.model
ss = 'i love'                           # example seed text (any short phrase works)
t = TEXT.numericalize([spacy_tok(ss)])
m[0].bs = 1                             # generate one sequence at a time
m.eval()
m.reset()
res, *_ = m(t)

# Greedily sample the next 50 words, skipping the token at index 0 (<unk>)
print(ss, "\n")
for i in range(50):
    n = res[-1].topk(2)[1]
    n = n[1] if n.data[0] == 0 else n[0]
    print(TEXT.vocab.itos[n.data[0]], end=' ')
    res, *_ = m(n[0].unsqueeze(0))
print('...')
12
1f0f7ce4f5ad
2018-07-08
2018-07-08 04:58:54
2018-07-08
2018-07-08 16:12:25
1
false
en
2018-07-08
2018-07-08 16:12:25
2
157739eb7a4f
3.011321
4
0
1
The Fast.ai community has been known to bring the best cutting-edge technology in the field of Machine Learning/Deep Learning due to its…
5
Implementing Deep Learning(RNN) in 7 Steps The Fast.ai community has been known to bring the best cutting-edge technology in the field of Machine Learning/Deep Learning, with simple implementations that help keep learners motivated. So, next, we are going to see how a language model can be trained to produce sentences on its own. Aim: To build an automatic sentence generator using PyTorch Step 1: Data collection, cleaning and pre-processing are the key to achieving state-of-the-art results. They have a huge impact when you train on your dataset. Step 2: Split the data into two parts: a) Train b) Test, in an 80:20 ratio. Now import the libraries and set the path to the dictionary. We set the path to the dataset as shown below: The dataset has each song in a separate text file, e.g. 1.txt, 2.txt and so on. Note: To check the total number of words present across all text files, type the unix command: !find {path}{train_path} -name '*.txt' | xargs cat | wc -w Step 3: To work with text, it first needs to be converted to a list of words with a total word count. That's called tokenization in Natural Language Processing. For this we use the spacy_tok() function, which works well for tokenization. Secondly, the PyTorch ecosystem has a Field() class to work with text, as shown below: Here, lower=True ensures every word is lower-cased. This is part of TorchText. Step 4: Now we will create a FastAI data-model object. Let's break it down. Decoding the parameters in the above code: bs is the batch size. It defines how many sequences (64 here) are fed to the GPU at once. If the batch does not fit in GPU memory, it will throw an error. bptt (backpropagation through time) defines how long a sequence stays on the GPU at once. dict() converts the paths into a dictionary, where we pass arguments such as the training path, validation path & test path. Note we don't have a separate validation set for this example, so we use the same value for the test and validation paths. LanguageModelData.from_text_files() creates the language data-model object; from_text_files is the basic function used to work with the .txt extension at a given path. It takes arguments such as: Path = location of the dataset. Text = TorchText preprocessing. min_freq = the minimum count a word must reach to enter the vocabulary; it helps remove rare, unwanted words from our dictionary. Step 5: Now we create an embedding matrix, which assigns each categorical variable (word) a row of the matrix, and then define the number of layers and the hidden activations in each layer. Then we set the optimizer, which drives gradient descent toward a good minimum; for RNNs we generally use the Adam optimizer. More about recurrent neural nets: click [here]. Step 6: Train the model, which uses the AWD-LSTM language model recently developed by Stephen Merity. A key feature is excellent regularization through the various dropouts shown below. We also add a reg_fn to avoid overfitting, plus gradient clipping, which reduces the bouncing of gradient descent and helps find the best value. Step 7: Finally, we generate words with our trained model. This can be done as follows: Result: This was the easiest way to implement a 3-layer neural net model using FastAI and PyTorch. That's all Folks. Project Link: https://github.com/init927/IMDB-RNN
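For readers curious what Step 3's tokenization actually produces, here is a small hedged illustration using spaCy directly; fastai's spacy_tok wraps spaCy's English tokenizer, so the exact wrapper behavior (lower-casing included) is an assumption here.

import spacy

nlp = spacy.blank('en')  # English tokenizer rules only; no trained model needed
doc = nlp("Songs aren't just words!")
print([t.text.lower() for t in doc])
# ['songs', 'are', "n't", 'just', 'words', '!']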
Implementing Deep Learning(RNN) in 7 Steps
53
implementing-deep-learning-rnn-in-7-steps-157739eb7a4f
2018-07-08
2018-07-08 16:12:26
https://medium.com/s/story/implementing-deep-learning-rnn-in-7-steps-157739eb7a4f
false
745
Coming Soon! Shoot us a Tweet for More.
null
null
null
init27 Labs
sanyam.bhutani05@gmail.com
init27-labs
COMPUTER VISION,DEEP LEARNING,ROBOTICS,SELF DRIVING CARS,FLYING CARS
bhutanisanyam1
Machine Learning
machine-learning
Machine Learning
51,320
Rishi Bhalodia
null
5dd0ee4fe1b1
bhalodiarishi1
14
45
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-02
2017-11-02 18:55:54
2017-11-02
2017-11-02 18:58:05
1
false
en
2017-11-02
2017-11-02 18:58:05
4
1577627cb1a
1.732075
0
0
0
Cardiovascular disease is the cause of death for approximately 2,200 Americans each day, according to the American Heart Association. That…
5
The Prevention of Heart Attacks with Today’s Technology Cardiovascular disease is the cause of death for approximately 2,200 Americans each day, according to the American Heart Association. That averages out to about one death every 40 seconds. And about 92.1 million adults in the U.S. are living with a form of cardiovascular disease or the after-effects of stroke. The costs of cardiovascular disease and strokes, direct and indirect, total more than $316 billion annually. The inability to monitor a patient’s heart rhythms outside of a hospital or ambulance is a major contributor to these shocking statistics. Electrocardiographs (ECGs) are used to monitor cardiac activity, and the results can be recorded to observe any irregularities in a patient’s heart health. However, they are found only in places such as hospitals and ambulances, and they aren’t a foolproof way of testing, as they have often produced inaccurate results. Thanks to today’s technological advances, we are now able to monitor a patient’s heart rhythms using a small adhesive patch that is attached to the torso and feeds data to a monitor. That data can then be transmitted, via wi-fi, to the patient’s physician, who reviews and interprets the cardiac events. Use of this remote cardiac monitoring is saving lives and lowering the cost of patient treatment. The medical device giants Philips and General Electric have dominated the field for decades. However, with the introduction of remote cardiac monitoring devices, new market competitors, such as Preventice Solutions and iRhythm, are beginning to capture market share by going a step further: using artificial intelligence to offer an interpretation of the data before it reaches a physician. In the very near future, we can expect to see smartphone manufacturers, such as Apple and Samsung, introducing various physiological sensor applications to monitor things such as stroke volume, cardiac arrhythmia, blood pressure, and heart rate. This will be the spark that moves the healthcare ecosystem toward true digitization at a much faster rate than we ever imagined. Technology companies such as Optum, IBM, and Hitachi are working to predict strokes and heart attacks. There’s hope that these remote cardiac monitoring devices will reduce the morbidity and mortality rates for Americans with cardiovascular disease in the future. In turn, there’s hope for a much higher potential for saving lives and a lower cost of healthcare for the American people. Originally posted on JamieStanos.com.
The Prevention of Heart Attacks with Today’s Technology
0
the-prevention-of-heart-attacks-with-todays-technology-1577627cb1a
2018-02-01
2018-02-01 02:08:26
https://medium.com/s/story/the-prevention-of-heart-attacks-with-todays-technology-1577627cb1a
false
406
null
null
null
null
null
null
null
null
null
Health
health
Health
212,280
Jamie Stanos
Jamie Stanos is a healthcare professional and fitness enthusiast, who prides himself on staying active and busy. http://jamiestanos.org
4cb4864c32fc
jamiestanos
12
67
20,181,104
null
null
null
null
null
null
0
null
0
32881626c9c9
2018-09-26
2018-09-26 21:29:49
2018-09-27
2018-09-27 06:01:02
1
false
en
2018-10-15
2018-10-15 02:17:02
18
15787ba6a474
5.211321
2
0
0
I am on the high-speed train that connects Beijing to Tianjin, flashing above 300 km/h. We have just spent an inspiring week at Summer…
5
How AI will change your life: 3 lessons from Summer Davos I am on the high-speed train that connects Beijing to Tianjin, flashing along at more than 300 km/h. We have just spent an inspiring week at Summer Davos, aka the Annual Meeting of the New Champions, where I was invited by the World Economic Forum to proudly represent the Global Shapers Community and the Geneva Hub. We met with more than 2,000 of the most innovative leaders, heads of state, scientists, and CEOs, talking about the Fourth Industrial Revolution and the future of artificial intelligence. We are exhausted from the top-level sessions, and we are now seated in this quiet, clean, futuristic train, with high-speed free Wi-Fi and colorful LCDs. The speaker announces the journey and the rules: “Make sure you comply with the rules so as not to affect your social credit.” Wait. I am looking, puzzled, at my fellow Shapers, Gemma and Gio, who are sharing this adventure with me. Yes — I understood correctly. “Social credit.” Have you seen the Black Mirror episode where society is based on a rating system? I knew that China was piloting the “Social Credit System”, but I did not know it was already live and working! After a week spent discussing the future of artificial intelligence and how it will change our lives, the Chinese government is already implementing big data collection to build a national reputation system. This takes me back to everything I learned this week, and I will try to set out my three main lessons from Summer Davos on artificial intelligence. How will AI change our lives? 1. AI is enhancing our understanding of the world Ten years from now, our understanding of the world will be different. The use of free data is an unprecedented resource whose growth became exponential from 2007 onward, as big data expanded into personal data. People are now aware that sensors and devices collect large amounts of personal information in real time, from our smartphones to the car GPS. To take a step back, artificial intelligence started in 1950, when Turing invented a test to check whether machines could communicate with humans without people recognizing that they were speaking with a computer. So why is there such a big buzz about AI now? This topic was dissected at the session “A Global Conversation on Artificial Intelligence”, where one of the main reasons highlighted is that computing and storage are becoming cheaper and moving into real time. Hardware improvements are allowing machines to process big data and to learn: this is the foundation of deep learning, and it is unquestionably behind AI’s explosion from the developer’s laboratory to massive, real-time applications. While we are still in the early days of real application (86% of AI technologies are not mature yet), we know that 85% of AI technologies will bring real transformation. 2. AI is changing the power structure Artificial intelligence is a unique opportunity to bring equality and overcome infrastructure limitations. A great example was given by Seth Berkley, CEO of Gavi, the Vaccine Alliance: a Silicon Valley robotics company has teamed up with the Rwandan government to speed up the delivery of blood. Zipline, also called the “Uber for blood”, can deliver requested blood to remote areas, decreasing the delivery time from four hours to an average of half an hour and dramatically reducing blood waste. How can we democratize AI, exposing many people to new technologies?
We need to make sure that AI is not widening the gap between high-skilled and low-skilled workers but is actually empowering the broader population. This can be achieved through: Open access to data: it’s important to balance data protection and data availability. Accessible and trustworthy data algorithms: transparency and visibility are key to building trust in the results that come out of deep machine learning, often seen as a “black box”. Lowering the bar in the capability set: the average citizen can still benefit from AI if trained properly. AI needs to be addressed positively, or it could eventually bring devastation. Eric Schmidt, former Google CEO, recently predicted that the internet will bifurcate into Chinese-led and US-led versions within the next decade. Technology has a key role in new governance, with tech companies becoming bigger than countries, and with new firewalls and new power structures ultimately redefining the geopolitical set-up. It’s no news that AI is heavily used by defense departments; Jack Ma reminded us how the first wave of technology brought the First World War, and the second wave resulted in the Second World War. So we really need to be very cautious in managing AI; Jack assures us that the Third World War will be fought against poverty and the destruction of the planet. But we all have a role to play in making that happen positively. 3. The importance of human skills in an AI world This might sound a bit counterintuitive; when I talk about artificial intelligence with my family, everyone jokes about Terminator and the moment when machines will overcome humans, replacing us in our jobs and ultimately attacking us. On one hand, it’s true that machines will take over some of our jobs’ repetitive tasks in the future; on the other hand, the tasks that are not repetitive, and where human skills are needed, are irreplaceable. I talked with Ann Mettler, head of the European Political Strategy Centre, about how Europe is focusing less on education and more on skills. So how do we train the new generations to learn the new skills? As part of Global Shapers, the World Economic Forum’s community that brings together young leaders who are active in local communities, I was invited to participate in an exclusive session with Jack Ma, founder of the global giant Alibaba. Jack recently announced that he will leave Alibaba in one year’s time to refocus on teaching; he has been incredibly inspiring on the future of education and the importance of lifelong learning. The skills needed are changing rapidly, and the best skill to learn is actually the “skill of learning”. If you learn something only for your future job, you will be disappointed. You need to learn what you love, and to keep learning constantly. For a long time we’ve talked about IQ and EQ — now we need to think about LQ, the Love Quotient! In 30 minutes we have already reached Beijing, and all the memories of the Annual Meeting of the New Champions, and all the amazing people I met, are flashing in front of me. Artificial intelligence is already changing our lives, and we will recognize it more and more. But one question keeps hammering in my mind: “Would I behave exactly as I do if I had a personal social credit score?” Willing to read more about the Annual Meeting of the New Champions? What just happened? Catch up on an extraordinary week in Tianjin. Globalization is unstoppable and we must fix its flaws. Chinese Premier Li Keqiang’s speech in Tianjin. Machines may do more than humans by 2025.
How workers can win in the robot age. Goodbye drugs, hello electroceuticals. The top 10 emerging technologies. #AMNC18 #SummerDavos #WorldEconomicForum #GlobalShapers #ShapeTheWorld #artificialintelligence #China #socialcreditsystem #entrepreneurship #motivation #lessons #innovation About the Author: Giulia Zanzi is passionate about combining IoT and mobile technologies with science to improve people’s lives. As Head of Marketing Fertility at Swiss Precision Diagnostics, a Procter & Gamble JV, she led the launch of the first Connected Ovulation Test System, which helps women get pregnant faster by detecting two hormones and syncing with their phone. A former member of the European Youth Parliament, Giulia is currently serving on the Advisory Council of the World Economic Forum Global Shapers, and she is a Lean In Partner Champion. Giulia graduated with honors from Bocconi University in Milan and holds a Master’s from Fudan University in Shanghai.
How AI will change your life: 3 lessons from Summer Davos
55
how-ai-will-change-your-life-3-lessons-from-summer-davos-15787ba6a474
2018-10-15
2018-10-15 02:17:03
https://medium.com/s/story/how-ai-will-change-your-life-3-lessons-from-summer-davos-15787ba6a474
false
1,328
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
null
datadriveninvestor
null
Data Driven Investor
info@datadriveninvestor.com
datadriveninvestor
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
dd_invest
World Economic Forum
world-economic-forum
World Economic Forum
637
Giulia Zanzi
Passionate about combining IoT and mobile technologies with science to improve people’s lives. Head of Marketing Fertility @Clearblue @P&G JV @GlobalShapers
83dbd5d9a6d5
giuliazanzi
30
223
20,181,104
null
null
null
null
null
null
0
null
0
9190baf3d6c7
2017-09-14
2017-09-14 17:01:57
2017-09-18
2017-09-18 12:01:01
5
false
pt
2017-09-18
2017-09-18 12:01:01
14
1579731a0501
4.467296
11
1
0
How conversational bots are being applied in initiatives that support society.
5
Chatbots as agents of social impact If 2016 was considered the year of virtual reality, could we say that 2017 has been the year of the chatbot? With the popularization of the technology, new platforms for creating conversational programs (bots), such as Sequel, prove that it is possible to work in the area without deep technical knowledge. As a consequence, we also see initiatives in areas such as entertainment and customer service that have adapted chatbots to their digital channels. But are there also examples of how these bots have been applied for the benefit of the community? Two weeks ago, I commented in this post on the work of the Pretalab collective in creating the feminist chatbot Beta, which runs on Facebook Messenger as a spokesperson for the defense of women’s rights. Also, last year, Fundação Telefônica Vivo supported the development of the chatbot Deco, aimed at entrepreneurs from the urban periphery. Deco, the chatbot created by Fundação Telefônica to support young entrepreneurs. Using the same platform as Beta, the chatbot, part of the “Pense Grande” program, aims to stimulate entrepreneurship among young people aged 15 to 29. The project also includes a series of ten videos telling the stories of young residents of Campo Limpo, Grajaú, Capão Redondo, República, and other neighborhoods in the São Paulo metropolitan region. According to Americo Mattar, president of Fundação Telefônica Vivo, the idea behind Deco is to make young people recognize their own talent and their vocation as leaders of their communities, promoting initiatives that show it “is possible to succeed with your own business, regardless of the reality you live in”. Like Beta, Deco was written to use language similar to the way its target audience communicates, fostering identification between the parties. This extends to the audiovisual campaign, which includes a call-to-action video starring Mel Duarte, a poet who grew up in a São Paulo community and whose artistic project is based on poetic interventions in the capital’s public transport. Business Insider report A recent Business Insider report gathered the main channels for deploying chatbots, which include Facebook Messenger, Slack, and Telegram, as well as companies that have adopted this kind of service, such as CNN and Uber, while initiatives such as GitHub and Shopify have offered the feature for systems such as iOS and Android. Among the conclusions raised by the study: Artificial intelligence has reached a state where chatbots increasingly engage people in conversation. This leads companies to consider an affordable, far-reaching technology for engaging more consumers. Chatbots are particularly well suited to mobile devices, perhaps more so than apps. Messaging is at the heart of the mobile experience, as the rapid uptake of chat apps has shown. The chatbot ecosystem is already robust, with many third-party bots, native bots, distribution channels, and companies developing the technology. Chatbots can be profitable for messaging apps and for the developers who build bots for those platforms, similar to how app stores have become ecosystems of economic transactions.
But, as we already see here in Brazil, chatbots need not be tied only to commercial and service-sector initiatives. The researcher Alexandra Jayeun Lee explains, in a comprehensive article, the importance of using non-profit chatbots that generate social impact. According to her, NGOs should start considering chatbots as a new line of action precisely because other user-interface tools are very expensive. At the same time, chatbots can be used as a substitute for, or complement to, sections such as FAQs, reducing administrative overhead. Finally, Lee stresses that chatbots can also create the opportunity to raise the most challenging questions for organizations, as well as fill gaps that a website does not cover, for example. Outside Brazil, initiatives such as Botler AI, from the startup Botler, have been helping immigrants handle the legal procedures required for immigration. Created by the Iranian software engineer Amir Morajev in partnership with the Canadian computer scientist Yoshua Bengio, the bot uses pioneering natural-language translation techniques studied by the pair. “I knew this was an idea we needed to take to another level. I want AI to be developed toward positive social impact, and this is a place where beneficial AI should happen,” he says. Other chatbots have also proven capable of supporting Syrian refugees, of creating a channel of silence and contemplation for the millions of Chinese living in extremely populous cities, and of helping Australians who needed assistance accessing the state benefits offered to people with disabilities. In the case of Woebot, this kind of support and counseling is given directly to the user, who becomes a patient. Created by a team of psychologists and artificial intelligence specialists from Stanford, the bot uses routine conversations, mood monitoring, curated videos, and word games to help users look after their mental health. After a year of development and research, Woebot launched with a monthly subscription for those interested. Similarly, though free of charge, the chatbot Replika learns by talking with the user and reflecting their personality and conversational style, so that the user can see their own behavior and attitudes in the program. It is in this sense of self-knowledge, but also of companionship and even affection, that chatbots can act in people’s lives, as proposed in Spike Jonze’s film Her (2013).
Chatbots como agentes de impacto social
18
chatbots-como-agentes-de-impacto-social-1579731a0501
2018-04-18
2018-04-18 22:26:46
https://medium.com/s/story/chatbots-como-agentes-de-impacto-social-1579731a0501
false
963
A periodical about communication, futurism, and positive impact.
null
uplab.cc
null
UP Future Sight
lidia@upline.com.br
up-future-sight
FUTURISM,TECHNOLOGY,FUTUROLOGY,SCIENCE FICTION,TRENDS
null
Chatbots
chatbots
Chatbots
15,820
Lidia Zuin
Brazilian journalist, MA in Semiotics and PhD candidate in Visual Arts. Head of innovation and futurism at UP Lab. Cyberpunk enthusiast and researcher.
479f965ebf95
lidiazuin
1,323
344
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-01
2018-03-01 10:14:48
2018-03-01
2018-03-01 10:48:08
1
false
en
2018-03-16
2018-03-16 10:38:26
2
15797c6cd1b
3.920755
0
0
0
The war that increasing complexity and AI could trigger.
5
3. The Complexity Holy War: Humanists vs. Complexians Kindle version of the complete argument (same contents) If all of this makes sense, humans are at a very peculiar juncture in the history of complexity on Earth. Humans are the first beings able to predict what is going to happen. Minerals didn’t feel threatened by organic molecules, and DNA didn’t think much of neurons. Humans have consciousness, a great mystery. So when they started to realize what could happen, they started to worry. The worrying began in fiction, with Frankenstein, Terminator, The Matrix, I, Robot, Transcendence, and many more. However, the preoccupation has already been taken into the scientific and business realm, with superstars like Stephen Hawking, Bill Gates, and Elon Musk weighing in. There have even been foundations created around the topic, like OpenAI or AI for Good, and some thoughtful treatises on the practicalities, like Nick Bostrom’s Superintelligence. It seems evident that this is only the beginning, and it will accelerate as AI gains weight in the transistors-vs-neurons transition. The choice before us is stark. In the complexian stance, we allow ourselves to be superseded, unquestionably losing control and our preeminent role at the apex of complexity. This will advance the world towards the creation of the God-equivalent (Ge). Alternatively, in the humanist stance, we curtail the evolution of complexity and freeze it at whatever point of the transistor substrate spectrum we still feel comfortable with and where we run no risk of losing control. This was very presciently framed by Frank Herbert in Dune, where the “Butlerian Jihad” eliminates all AI and freezes technological progress for the most part. It is a real moral quandary, for each person has to choose sides. It is also an extremely challenging technical and organizational problem, as the substrate change to AI+transistors needs to happen in only one small corner of the solar system for resistance to be futile. As covered later, resistance might also be futile because the Ge could already exist, pulling evolution inexorably towards itself from beyond time or from another point in space. From the ethical side, the humanist vs. complexian conflict is an “angels vs. devils” choice in which each individual has to decide who the angels are. The complexians’ argument for angelhood is that we cannot and should not stop the evolution of the whole world towards its ultimate goal out of human-centered greed and egotism. The Ge is on the complexian side, and it will punish (or just cosmically laugh at) those who futilely resist. Lucifer is the attempt to put humans before God, the Universe and Everything by stopping what is naturally preordained. The humanists’ argument is that we are humans and we shouldn’t engineer ourselves into enslavement by unnatural beings of our own creation. God created the Earth and humans, and it is only through our unscrupulous meddling that we have created the demons that will destroy us. Lucifer is technology beckoning us to transgress the natural boundaries preordained by God and reach for the apple of original sin. If this leads to real conflict, it has all the trappings of a “Holy War” in which everything is truly at stake. The fate of the world hangs in the balance. Each side’s arguments are totally antithetical to the other’s. There is no room for compromise, as there are two discrete and opposite outcomes.
For once, science could even support the claim that the Holy War is relevant and real, even if it struggles to help us decide which side is right. From the practical side, humanists face a Sisyphean task: they need to continuously put the complexity genie back into the bottle, much as Sisyphus had to continuously push a rock up a mountain. Stopping progress is very difficult, as just one instance of unchecked progress can single-handedly obliterate the resistors. Chinese emperors halted technical progress everywhere in their empire in the 15th century, only to be faced a couple of centuries later with unstoppable Europeans whose constant internal conflicts had forced technical progress into overdrive. On top of that, the escalation of weapons and techniques favors the complexians, who will rapidly embrace the new, while humanists have to carefully vet each innovation as a potential trapdoor to their own unraveling. Additionally, space represents a wild frontier to which transistors and AI are much better adapted, so a small colony on Mars or in the asteroid belt could easily bring full AI and a transition to the quantum level. It could take a semi-religious global dictatorship, one that bans space travel and controls every aspect of technology development and people’s lives, to make the humanist position endure. We should also consider that the same conundrum could repeat itself 4 or 5 times across the subsequent complexity substrates. A third way to approach the conflict is to translate some human consciousness to the digital substrate. Let’s imagine the duplication of a human consciousness in the digital substrate is feasible. This would allow humans who are willing and able to undergo the procedure to “go to the next level”, adapting themselves to the much-increased substrate speed. Theoretically this could be extended to other, even faster substrates. It would be a way to partly preserve the humanist position, but without the freeze on evolution that the pure humanist stance would require. At the same time, it would potentially be considered Luciferian from both positions: for humanists it would be unnatural, and for complexians it would be a way to hijack evolution for human purposes. It would also permanently split humanity in two: digital humans living at 10 million times the speed of neuron-based humans, and everyone else. This is the potentially devastating and world-shaping conflict that the Ge Hypothesis entails. If we don’t end up experiencing it, it could at least make a great HBO series or science fiction book. Next Read — 4: Could the Ge already exist?
The Complexity Holy War: Humanists vs. Complexians
0
the-complexity-holy-war-humanists-vs-complexians-15797c6cd1b
2018-03-16
2018-03-16 10:38:27
https://medium.com/s/story/the-complexity-holy-war-humanists-vs-complexians-15797c6cd1b
false
986
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Surprised Agnostic
Sharing a potential argument for the existence of a God equivalent (Ge) for its analysis.
864f1a117446
mendietas.alexa
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-18
2017-09-18 04:06:07
2017-10-17
2017-10-17 05:12:43
0
false
en
2017-10-17
2017-10-17 05:12:43
0
157e0ba8c2c3
1.09434
3
0
0
Most of us have been bombarded with pitch after pitch extolling the transformative benefits of different AI-based products. Be it the…
2
It’s time to stop selling AI …. Most of us have been bombarded with pitch after pitch extolling the transformative benefits of different AI-based products, be it the latest and greatest optical character recognition software or the most advanced chatbot implementation. Multiple articles, blogs, and research reports have extolled the benefits of DNNs (deep neural networks), NLP (natural language processing), or OCR (optical character recognition) and why each is an absolute must-have for any enterprise that wants to stay ahead of the game. You are probably left scratching your head, wondering where, and in which AI technology, your company should invest. It’s time to cut out all the noise; technology is useful only when it can help make a business more productive and thereby more profitable. AI has been hyped up to the extent that every tech company now has an AI focus. It makes sense: investment in AI is driving a lot of growth in the industry. However, when selling to businesses, don’t sell a company AI; sell them a solution to their business problem. Each business problem is unique and requires a different combination of AI technologies. I view all of these AI technologies as part of an AI toolkit. You don’t need all the tools for every job. Just as you would use a different combination of tools for each job, you need a different combination of AI technologies to tackle different business problems. For instance, you can use natural language processing techniques to modernize your customer support operations with chatbots, deep neural networks can help you detect fraudulent activity, and automation tools can resolve low-level customer support tickets. It’s time to stop selling AI and start selling solutions to business problems.
It’s time to stop selling AI ….
3
its-time-to-stop-selling-ai-157e0ba8c2c3
2018-02-04
2018-02-04 20:10:00
https://medium.com/s/story/its-time-to-stop-selling-ai-157e0ba8c2c3
false
290
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Matheen Raza
null
6221043cda7
matheenraza
36
45
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-14
2018-08-14 02:43:35
2018-08-14
2018-08-14 02:54:40
3
false
id
2018-08-14
2018-08-14 02:54:41
1
157e0e99538d
1.429245
0
0
0
Date Palm Plantation in Indonesia; Date Palm Plot Plantation in Indonesia, Tel/WA 0822–4069–7469
5
Date Palm Plantation in Indonesia, Tel/WA 0822–4069–7469, Date Palm Plot Plantation in Indonesia That is why, if you already have the right investment, you need not worry about getting through the rest of your life in the years to come. Kavling Taman Kurma, the best place to invest: one way to invest the funds you have is to join us at Kavling Taman Kurma. “Kavling Taman Kurma” is the name given to the plots of land being bought and sold; the plots may later be built into a villa, a resort, or even a private home. When you buy a plot of land at Kavling Taman Kurma, you will be given five date palm seedlings together with their care until they bear fruit. That is why this area is called Kavling Taman Kurma, and one of its goals is to develop date palm cultivation in Indonesia, and in Kalimantan in particular. You will not lose out by buying a plot at Kavling Taman Kurma, because land prices always rise, and this will be the best investment for you. Please contact: Achmad Solihin Prajamas, 0822–4069–7469, to discuss further; here is my WhatsApp link: https://goo.gl/vgdjmi You can also come directly to the PonTren IT Madinatul Iman Balikpapan, Jalan Prajabakti VII Blok II D No 15 RT. 07, behind the DISHUB office across from Taman 3 Generasi; the two-story house with the green fence, in front of the RT. 07 Posyandu.
Kebun Kurma Di Indonesia, Tlpn/ Wa 0822–4069–7469, Kebun Kavling Kurma Di Indonesia
0
kebun-kurma-di-indonesia-tlpn-wa-0822-4069-7469-kebun-kavling-kurma-di-indonesia-157e0e99538d
2018-08-14
2018-08-14 02:54:41
https://medium.com/s/story/kebun-kurma-di-indonesia-tlpn-wa-0822-4069-7469-kebun-kavling-kurma-di-indonesia-157e0e99538d
false
233
null
null
null
null
null
null
null
null
null
Sales
sales
Sales
30,953
umbar winardi
null
60dfba712e1a
umbarwinardi985
1
9
20,181,104
null
null
null
null
null
null
0
null
0
a095e0538d84
2018-08-03
2018-08-03 14:13:52
2018-08-03
2018-08-03 14:32:28
22
false
en
2018-08-05
2018-08-05 07:57:55
7
157ea8ad5da8
32.19717
106
1
0
Introduction
5
How to Think Like a Data Scientist in 12 Steps Introduction At the moment, data scientists are getting a lot of attention, and as a result, books about data science are proliferating. While searching for good books about the space, it seemed to me that the majority of them focus more on tools and techniques than on the nuanced, problem-solving nature of the data science process. That was until I encountered Brian Godsey’s “Think Like a Data Scientist”, which leads aspiring data scientists through the process as a path with many forks and potentially unknown destinations. It discusses which tools might be the most useful, and why, but the main objective is to navigate the path (the data science process) intelligently, efficiently, and successfully, to arrive at practical solutions to real-life data-centric problems. Lifecycle of a data science project In the book, Brian proposes that a data science project consists of 3 phases. The 1st phase is preparation; time and effort spent gathering information at the beginning of a project can spare big headaches later. The 2nd phase is building the product, from planning through execution, using what you learned during the preparation phase and all the tools that statistics and software can provide. The 3rd and final phase is finishing: delivering the product, getting feedback, making revisions, supporting the product, and wrapping up the project. Together, these 3 phases encompass 12 different tasks. I’d like to use this post to summarize these 12 steps, as I believe any aspiring data scientist can benefit from being familiar with them. Phase I — Preparing The process of data science begins with preparation. You need to establish what you know, what you have, what you can get, where you are, and where you would like to be. This last one is of utmost importance; a project in data science needs to have a purpose and corresponding goals. Only when you have well-defined goals can you begin to survey the available resources and all the possibilities for moving toward those goals. 1 — Setting Goals In a data science project, as in many other fields, the main goals should be set at the beginning of the project. All the work you do after setting goals makes use of data, statistics, and programming to move toward and achieve those goals. First off, every project in data science has a customer. Sometimes the customer is someone who pays you or your business to do the project, for example a client or a contracting agency. In academia, the customer might be a laboratory scientist who has asked you to analyze their data. Sometimes the customer is you, your boss, or another colleague. No matter who the customer might be, they have some expectations about what they might receive from you, the data scientist who has been given the project. To understand such expectations, you need to ask good questions about their data. Asking questions that lead to informative answers, and subsequently to improved results, is an important and nuanced challenge that deserves much more discussion than it typically receives. Good questions make concrete assumptions, and good answers deliver measurable success without excessive cost. Getting an answer from a project in data science usually follows a recipe of the form: good question + relevant data + insightful analysis = useful answer. Although sometimes one of the ingredients (a good question, relevant data, or insightful analysis) is simpler to obtain than the others, all three are crucial to getting a useful answer.
The product of any old question, data, and analysis isn’t always an answer, much less a useful one. It’s worth repeating that you always need to be deliberate and thoughtful at every step of a project, and the elements of this formula are no exception. For example, if you have a good question but irrelevant data, an answer will be difficult to find. Now is a good time to evaluate the project’s goals in the context of the questions, data, and answers that you expect to be working with. Typically, initial goals are set with some business purpose in mind. If you’re not in business (you’re in research, for example), then the purpose is usually some external use of the results, such as furthering scientific knowledge in a particular field or providing an analytic tool for someone else to use. Though goals originate outside the context of the project itself, each goal should be put through a pragmatic filter based on data science. This filter includes asking these questions: (1) What is possible? (2) What is valuable? (3) What is efficient? Applying this filter to all putative goals, within the context of the good questions, possible answers, available data, and foreseen obstacles, can help you arrive at a solid set of project goals that are, well, possible, valuable, and efficient to achieve. 2 — Exploring Data The 2nd step of the preparation phase of the data science process is exploring the available data. There are 3 basic ways a data scientist might access data. It could be a file on a file system, which the data scientist can read into their favorite analysis tool. Or the data could be in a database, which is also on a file system, but to access the data the data scientist has to use the database’s interface, a software layer that helps store and extract data. Finally, the data could sit behind an application programming interface (API), a software layer between the data scientist and some system that might be completely unknown or foreign. It’s best to become familiar with some of the forms that data might take, as well as how to view and manipulate these forms. Here are some of them: flat files (CSV, TSV), HTML, XML, JSON, relational databases, non-relational databases, and APIs. Sometimes you don’t get to choose the format: the data comes in a certain form, and you have to deal with it. But if you find that format inefficient, unwieldy, or unpopular, you’re usually free to set up a secondary data store that makes things easier, at the additional cost of the time and effort it takes to set it up. For applications where access efficiency is critical, the cost can be worth it; for smaller projects, maybe not. You’ll have to cross that bridge when you get there. Now that you have some exposure to common forms of data, you need to scout for it. Here are the approaches you should consider: a Google search, combining different data sources, scraping the web, or measuring and collecting the data yourself. Personally, I’m a big fan of web scraping. Two important things a web scraper must do well are visiting lots of URLs programmatically and capturing the right information from the pages. If you wanted to know about your friend network on Facebook, you could theoretically write a script that visits the Facebook profiles of all of your friends, saves the profile pages, parses the pages to get lists of their friends, visits their friends’ profiles, and so on; a minimal sketch of such a scraper follows.
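As a concrete illustration of the two scraper duties just mentioned (visiting URLs programmatically and capturing the right information), here is a minimal sketch using the requests and BeautifulSoup libraries; the seed URL and the choice of what to capture are hypothetical placeholders, and a real crawler would also need politeness controls such as rate limiting and robots.txt checks.

```python
# A minimal web-scraper sketch: visit pages programmatically, breadth-first,
# and capture one piece of information per page. Values are illustrative.
import requests
from bs4 import BeautifulSoup
from collections import deque

def crawl(seed_url, max_pages=10):
    seen, queue, results = set(), deque([seed_url]), {}
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        resp = requests.get(url, timeout=10)
        if resp.status_code != 200:
            continue
        soup = BeautifulSoup(resp.text, "html.parser")
        # Capture the information we care about: here, the page title.
        results[url] = soup.title.string if soup.title else None
        # Enqueue outgoing links to visit next.
        for a in soup.find_all("a", href=True):
            if a["href"].startswith("http"):
                queue.append(a["href"])
    return results

pages = crawl("https://example.com")  # hypothetical seed URL
```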
Bear in mind that this works only for people who have allowed you to view their profiles and friend lists; it would not work for private profiles. 3 — Wrangling Data Data wrangling, the 3rd step, is the process of taking data and information in difficult, unstructured, or otherwise arbitrary formats and converting it into something that conventional software can use. Like many aspects of data science, it’s not so much a process as a collection of strategies and techniques that can be applied within the context of an overall project strategy. Wrangling isn’t a task whose steps can be prescribed exactly beforehand; every case is different and takes some problem solving to get good results. Good wrangling comes down to solid planning before you wrangle, and then some guessing and checking to see what works. Spending a little extra time on data wrangling can save you a lot of pain later. In general, the choice of a data wrangling plan should depend heavily on all the information you discover while first investigating the data. If you can imagine parsing the data or accessing it in some hypothetical way (try playing the role of a wrangling script yourself), then you can write a script that does the same thing: pretend you’re a wrangling script, imagine what might happen with your data, and then write the script afterwards. Data wrangling is such an uncertain process that it’s always best to explore a bit and to make a wrangling plan based on what you’ve seen. There’s no one way, and no one tool, to accomplish the goal of making messy data clean. If someone tells you they have a tool that can wrangle any data, then either that tool is a programming language or they’re lying. Many tools are good at doing many things, but no one tool can wrangle arbitrary data. Data exists in so many forms, and for so many purposes, that it’s unlikely any one application will ever be able to read arbitrary data for an arbitrary purpose. Simply put, data wrangling is an uncertain business that requires specific tools in specific circumstances to get the job done. You can try using file format converters or proprietary data wranglers, or write a script to wrangle the data yourself. 4 — Assessing Data It can be tempting to start developing a data-centric product or sophisticated statistical methods as soon as possible, but the benefits of getting to know your data are well worth the sacrifice of a little time and effort. If you know more about your data, and if you maintain awareness of it and of how you might analyze it, you’ll make more informed decisions at every step of your data science project and will reap the benefits later. Without a preliminary assessment (the 4th step), you may run into problems with outliers, biases, precision, specificity, or any number of other inherent aspects of the data. To uncover these and get to know the data better, the first step of post-wrangling data analysis is to calculate some descriptive statistics. Descriptive statistics is the discipline of quantitatively describing the main features of a collection of information, or the quantitative description itself: think maxima, minima, averages, and other summaries of the data set; a minimal sketch of such a first pass appears below. It’s hard to discuss descriptive statistics without mentioning inferential statistics, the practice of using the data you have to deduce, or infer, knowledge or quantities of which you don’t have direct measurements or data.
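To make the assessment step concrete, here is a minimal sketch of a first descriptive pass using pandas; the file name and column names are hypothetical placeholders, and the 3-standard-deviation outlier rule is just one crude convention among many.

```python
# A first descriptive pass over a freshly wrangled data set.
import pandas as pd

df = pd.read_csv("customers.csv")   # hypothetical file
print(df.shape)                     # how many rows and columns?
print(df.dtypes)                    # what type is each column?
print(df.describe())                # count, mean, std, min/max, quartiles
print(df.isna().sum())              # missing values per column

# A crude outlier check: values more than 3 standard deviations from the
# mean of a numeric column (the column name is a placeholder).
col = df["purchase_amount"]
outliers = df[(col - col.mean()).abs() > 3 * col.std()]
print(len(outliers), "potential outliers")
```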
To sharpen the contrast with respect to a data set: descriptive statistics asks, “What do I have?” while inferential statistics asks, “What can I conclude?” Statisticians and businesspeople alike would agree that it takes inferential statistics to draw most of the cool conclusions: when the world’s population will peak and then start to decline, how fast a viral epidemic will spread, when the stock market will go up, whether people on Twitter have generally positive or negative sentiment about a topic, and so on. But descriptive statistics plays an incredibly important role in making these conclusions possible. It pays to know the data you have and what it can do for you. With descriptive stats, you can find entities within your data set that match a certain conceptual description. If you’re working in online retail, you might consider customers your main entities, and you might want to identify those who are likely to purchase a new video game system or a new book by a particular author. If you’re working in advertising, you might be looking for the people most likely to respond to a particular advertisement. If you’re working in finance, you might be looking for equities on the stock market that are about to increase in price. If it were possible to perform a simple search for these characterizations, the job would be easy and you wouldn’t need data science or statistics. But although these characterizations aren’t inherent in the data (can you imagine a stock that tells you when it’s about to go up?), you can often recognize them when you see them, at least in retrospect. The main challenge in such data science projects is to create a method for finding these interesting entities in a timely manner. Phase II — Building After asking some questions and setting some goals, you surveyed the world of data, wrangled some specific data, and got to know that data. At each step you learned something, and you may already be able to answer some of the questions posed at the beginning of the project. Let’s now move to the building phase. 5 — Developing Plan The 5th step is to create a plan. As in the earlier planning phase, uncertainties and flexible paths should be at the forefront of your mind. You know more about your project now, so some of the uncertainties that were present before are gone, but new ones have popped up. Think of your plan as a tentative route through a city whose streets are constantly under construction. You know where you’d like to go and a few ways to get there, but at every intersection there might be a road closed, bad traffic, or pavement that’s pocked and crumbling. You’ll have to make decisions as you arrive at these obstacles, but for now it’s enough to have a backup plan or two. Plans and goals can change at any moment, given new information, new constraints, or any other reason. You must communicate significant changes to everyone involved with the project, including the customer. The project’s customer obviously has a vested interest in what the final product of the project should be (otherwise the project wouldn’t exist), so the customer should be made aware of any changes to the goals. Because most customers like to be kept informed, it’s often advisable to inform them of your plans, new or old, for how you will achieve those goals. A customer might also be interested in a progress report including what preliminary results you have so far and how you got them, but these are of the lowest priority.
Focus on what the customer cares about: progress has been made, and the current expected, achievable goals are X, Y, and Z. They may have questions, which is great, and they may be interested in hearing about all aspects of your project, but in my experience most are not. Your one must-have outcome from a meeting with the customer at this stage is that you communicate clearly what the new goals are and that the customer approves them. Everything else is optional. You may also consider communicating your basic plan to the customer, particularly if you’re using any of their resources to complete the project. They may have suggestions, advice, or domain knowledge that you haven’t encountered yet. If their resources are involved, such as databases, computers, or other employees, then they will certainly be interested in hearing how, and how much, you’ll be making use of them. 6 — Analyzing Data The 6th step of our data science process is statistical analysis of the data. Statistical methods are often considered nearly one half, or at least one third, of the skills and knowledge needed for doing good data science. The other large piece is software development and/or application, and the remaining, smaller piece is subject matter or domain expertise. On one side of statistics is mathematics, and on the other side is data. Mathematics, particularly applied mathematics, provides statistics with a set of tools that enables analysis and interpretation. In any case, mathematics generally doesn’t touch the real world. Based wholly on logic, and always starting with a set of assumptions, mathematics must first assume a world it can describe before it begins to describe it. Every mathematical statement can be formulated to start with an “if” (if the assumptions are true), and this “if” lifts the statement and its conclusion into abstractness. That is not to say that mathematics isn’t useful in the real world; quite the contrary. Mathematics, rather than being a science, is more of a vocabulary with which we can describe things, and some of those things are in the real world. As with vocabularies and the words they contain, a description is rarely perfectly correct; the goal is to get as close to correct as possible. Mathematics does, however, provide much of the heavy machinery that statistics uses. Statistical distributions are often described by complex equations with roots that are meaningful in a practical, scientific sense. Fitting statistical models often makes use of mathematical optimization techniques. Even the space in which a project’s data is assumed to lie must be described mathematically, even if the description is merely “N-dimensional Euclidean space”. In addition to mathematics, statistics possesses its own set of techniques that are primarily data-centric. Descriptive statistics is a generally intuitive or simple kind of statistics that can provide a good overview of the data without being overly complex or difficult to understand; it stays close to the data, in a sense. Inferential statistics is inherently one or more steps removed from the data. Inference is the process of estimating unknown quantities based on measurable, related quantities; a small worked example follows. Typically, inferential statistics involves a statistical model that defines quantities, measurable and unmeasurable, and their relationships to each other.
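As a small worked example of inference in this sense, estimating an unmeasured quantity (a population mean) from measurable data (a sample), here is a sketch using scipy.stats; the sample values are invented for illustration.

```python
# Inferential sketch: estimate a population mean and a 95% confidence
# interval from a small sample. The data values are illustrative.
import numpy as np
from scipy import stats

sample = np.array([12.1, 9.8, 11.4, 10.9, 13.2, 10.1, 11.7, 12.5])

mean = sample.mean()
sem = stats.sem(sample)   # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(sample) - 1,
                                   loc=mean, scale=sem)
print(f"estimated mean: {mean:.2f}, 95% CI: ({ci_low:.2f}, {ci_high:.2f})")
```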
Methods from inferential statistics range from quite simple to wildly complex, varying also in their precision, abstractness, and interpretability. Statistical modeling is the general practice of describing a system using statistical constructs and then using that model to aid in the analysis and interpretation of data related to the system. Both descriptive and inferential statistics rely on statistical models, but in some cases the explicit construction and interpretation of the model itself plays a secondary role. With statistical modeling, the primary focus is on understanding the model and the underlying system it describes. Mathematical modeling is a related concept that places more emphasis on model construction and interpretation than on the model’s relationship to data; statistical modeling focuses on the model’s relationship to data. Here are some important concepts in statistical modeling to be aware of: linear, exponential, polynomial, spline, differential, and non-linear equations; latent variables; quantifying uncertainty (randomness, variance, and error terms); fitting a model (maximum likelihood estimation, maximum a posteriori estimation, expectation maximization, variational Bayes, Markov chain Monte Carlo, over-fitting); Bayesian vs. frequentist statistics; hypothesis testing; clustering; and component analysis. Farthest from the raw data is a set of statistical techniques often called, for better or worse, black box methods. The term black box refers to the idea that some statistical methods have so many moving pieces, with complex relationships to each other, that it is nearly impossible to dissect how the method reached its result when applied to specific data in a specific context. Many methods from machine learning and artificial intelligence fit this description. If you classify the individuals in a data set into one of several categories using a machine learning technique such as a random forest or a neural network, it will often be difficult to say, after the fact, why a certain individual was classified in a certain way. Data goes into the black box, a classification comes out, and you’re usually not certain what exactly happened in between. Here are a few of the most popular machine learning algorithms you might apply to the feature values extracted from your data points: random forests, support vector machines, boosting, neural networks, and deep learning. 7 — Engineering Product Our next step is to build statistical software. If statistics is the framework for analyzing and drawing conclusions from data, then software is the tool that puts this framework into action. A data scientist must make many software choices for any project. If you have a favorite program, that’s often a good choice, if for no other reason than your familiarity with it. But there can be good reasons to pick something else, and if you’re new to data science or statistical software, it can be hard to find a place to start. To anyone who has spent significant time using Microsoft Excel or another spreadsheet application, spreadsheets and GUI-based applications are often the first choice for performing any sort of data analysis. Particularly if the data is in a tabular form, such as CSV, and there’s not too much of it, getting started with analysis in a spreadsheet can be easy. Furthermore, if the calculations you need aren’t complex, a spreadsheet might even cover all the software needs of the project.
Common software tools here are Excel, SPSS, Stata, SAS, and Minitab. Learning the programming language of one of these mid-level tools can be a good step toward learning a real programming language, if that’s a goal of yours, and these languages can be quite useful on their own. SAS, in particular, has a wide following in statistical industries, and learning its language is a reasonable goal in itself. Programming languages are far more versatile than mid-level statistical applications. Code in any popular language has the potential to do almost anything: these languages can execute any number of instructions on any machine, can interact with other software services via APIs, and can be included in scripts and other pieces of software. A language that’s tied to its parent application is severely limited in these capacities. MATLAB is a proprietary software environment and programming language that’s good at working with matrices. MATLAB costs quite a bit, but there are significant discounts for students and other university-affiliated people. Some folks decided to replicate it in an open-source project called Octave. As Octave has matured, it has come closer and closer to MATLAB in available functionality and capability. Excepting code that uses add-on packages (a.k.a. toolboxes), the vast majority of code written in MATLAB will work in Octave and vice versa, which is nice if you find yourself with some MATLAB code but no license. Overall, MATLAB and Octave are great for engineers (electrical engineers in particular) who work with large matrices in signal processing, communications, image processing, and optimization, among other fields. R is based on the S programming language, which was created at Bell Labs. It’s open source, but its license is somewhat more restrictive than those of some other popular languages, like Python and Java, particularly if you’re building a commercial software product. Compared to MATLAB, it’s easier in R to load and handle different types of data: MATLAB is good at handling tabular data, but, generally speaking, R is better with tables with headers, mixed column types (integer, decimal, strings, and so on), JSON, and database queries. When reading tabular data, R defaults to returning an object of type data frame. Data frames are versatile objects containing data in columns, where each column can be of a different data type (for example, numeric, string, or even matrix), but all entries within a column must be of the same type. Working with data frames can be confusing at first, but their versatility and power become evident after a while; a small illustration of the same idea appears below. One advantage of R being open source is that it’s far easier for developers to contribute to language and package development wherever they see fit. These open-source contributions have helped R grow immensely and expand its compatibility with other software tools. Thousands of packages are available for R from the CRAN website. This is the single greatest strength of the R language: chances are you can find a package that helps you perform the type of analysis you’d like to do, so some of the work has been done for you. MATLAB also has packages, but not nearly as many, though they’re usually very good; R has good ones and bad ones and everything in between. You’ll also find tons of R code freely available in public repos that might not have made it to official package status.
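To make the data frame idea concrete in Python terms (the language this summary leans on elsewhere), here is a minimal sketch of a data frame with mixed column types using pandas, which borrowed the concept from R; the columns and values are invented for illustration.

```python
# A data frame with mixed column types, pandas-style: each column has one
# type, but different columns may differ. Values are illustrative.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [101, 102, 103],    # integers
    "name": ["Ana", "Ben", "Chloe"],   # strings
    "spend": [25.40, 310.00, 99.99],   # floats
    "active": [True, False, True],     # booleans
})

print(df.dtypes)                       # one dtype per column
big_spenders = df[df["spend"] > 50]    # query-like row selection
print(big_spenders[["name", "spend"]])
```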
Overall, R is a good choice for statisticians and others who pursue data-heavy, exploratory work more than they build production software in, for example, the analytic software industry. Python is a powerful language that can be used both for scripting and for creating production software. It lends itself more naturally to non-statistical tasks like integrating with other software services, creating APIs and web services, and building applications. Likely because Python was originally a general-purpose programming language, it has a robust framework for object-oriented design. Although Python wasn’t originally intended to be a heavily statistical language, several packages have been developed for it that elevate it to compete with R and MATLAB. The numpy package for numerical methods is indispensable when working with vectors, arrays, and matrices. The packages scipy and scikit-learn add functionality in optimization, integration, clustering, regression, classification, and machine learning, among other techniques. With those three packages, Python rivals the core functionality of both R and MATLAB, and in some areas, such as machine learning, Python seems to be more popular among data scientists. For data handling, the pandas package has become incredibly popular. It was influenced somewhat by the notion of a data frame in R but has since surpassed it in functionality. If your data set is big enough to slow down calculations but small enough to fit in your computer’s memory, then pandas might be for you. One of the most notable Python packages in data science, however, is the Natural Language Toolkit (NLTK), easily the most popular and most robust tool for natural language processing (NLP). These days, if someone is parsing and analyzing text from Twitter, newsfeeds, the Enron email corpus, or somewhere else, it’s likely they’ve used NLTK to do so. It makes use of other NLP tools, such as WordNet and various methods of tokenization and stemming, to offer the most comprehensive set of NLP capabilities found in one place. Overall, Python is great for people who want to do some data science as well as some other pure, non-statistical software development; it’s the only popular, robust language that can do both well. Though not a scripting language, and as such not well suited to exploratory data science, Java is one of the most prominent languages for software application development, and because of this it’s often used in analytic application development. Many of the same reasons that make Java bad for exploratory data science make it good for application development: Java isn’t great for exploratory data science, but it can be great for large-scale or production code based on data science. Java has many statistical libraries for doing everything from optimization to machine learning, many of them provided and supported by the Apache Software Foundation. In choosing your statistical software tools, keep these criteria in mind. Implementation of methods: if you’re using a fairly common method, then many tools probably already have an implementation, and it’s usually better to use one of those; code that’s been used by many people already is relatively error-free compared to code you wrote in a day and used only once or twice. Flexibility: in addition to being able to perform the main statistical analysis you want, it’s often helpful if a statistical tool can perform some related methods.
Often you’ll find that the method you chose doesn’t quite work as well as you had hoped, and what you’ve learned in the process leads you to believe that a different method might work better. If your software tool doesn’t have any alternatives, then you’re either stuck with the first choice or you’ll have to switch to another tool. Informative: Some statistical tools, particularly higher-level ones like statistical programming languages, offer the capability to see inside nearly every statistical method and result, even black box methods like machine learning. These insides aren’t always user friendly, but at least they’re available. Commonality: With software, more people using a tool means more people have tried it, gotten results, examined the results, and probably reported the problems they had, if any. In that way, software, notably open-source software, has a feedback loop that fixes mistakes and problems in a reasonably timely fashion. The more people participating in this feedback loop, the more likely it is that a piece of software is relatively bug free and otherwise robust. Well-documentation: In addition to being in common use, a statistical software tool should have comprehensive and helpful documentation. It’s a bad sign if you can’t find answers to some big questions, such as how to configure inputs for doing linear regression or how to format the features for machine learning. If the answers to big questions aren’t in the documentation, then it’s going to be even harder to find answers to the more particular questions that you’ll inevitably run into later. Purpose-built: Some software tools or their packages were built for a specific purpose, and then other functionality was added on later. For example, the matrix algebra routines in MATLAB and R were of primary concern when the languages were built, so it’s safe to assume that they’re comprehensive and robust. In contrast, matrix algebra wasn’t of primary concern in the initial versions of Python and Java, and so these capabilities were added later in the form of packages and libraries. Inter-operability: If you’re working with a database, it can be helpful to use a tool that can interact with the database directly. If you’re going to build a web application based on your results, you might want to choose a tool that supports web frameworks — or at least one that can export data in JSON or some other web-friendly format. Or if you’ll use your statistical tool on various types of computers, then you’ll want the software to be able to run on the various operating systems. It’s not uncommon to integrate a statistical software method into a completely different language or tool. Permissive licenses: If you’re using commercial software for commercial purposes, it can be legally risky to be doing so with an academic or student license. It can also be dangerous to sell commercial software, modified or not, to someone else without confirming that the license doesn’t prohibit this. 8 — Optimizing Data The 8th step in our process is to optimize a product with supplementary software. The software tools in our 7th step can be versatile, but they’re statistical by nature. Software can do much more than statistics. In particular, many tools are available that are designed to store, manage, and move data efficiently. Some can make almost every aspect of calculation and analysis faster and easier to manage. Here are 4 popular software that can make your work as a data scientist easier. 
Databases are common, and your chances of running across one during a project are fairly high, particularly if you're going to be using data that's often used by others. But instead of merely running into one as a matter of course, it might be worthwhile to set up a database yourself to aid you in your project. The two most common types are relational (SQL) and document-oriented (NoSQL; for example, Elasticsearch). Databases and other related types of data stores can have a number of advantages over storing your data on a computer's file system. Mostly, databases can provide arbitrary access to your data — via queries — more quickly than the file system can, and they can also scale to large sizes, with redundancy, in convenient ways that can be superior to file system scaling. High-performance computing (HPC) is the general term applied to cases where there's a lot of computing to do and you want to do it as fast as possible. You can use a supercomputer (which can be millions of times faster than a personal computer), a computer cluster (a bunch of computers connected with each other, usually over a local network, and configured to work well with each other in performing computing tasks), or graphics processing units (which are great at performing highly parallelizable calculations). If you have access, HPC is a good alternative to waiting for your PC to calculate all the things that need to be calculated. The benefit of using a cloud HPC offering — and some pretty powerful machines are available — must be weighed against the monetary cost before you opt in. The largest providers of cloud services are mostly large technology companies whose core business is something else. Companies like Amazon, Google, and Microsoft already had vast amounts of computing and storage resources before they opened them up to the public. But they weren't always using the resources to their maximum capacity, and so they decided both to rent out excess capacity and to expand their total capacity, in what has turned out to be a series of lucrative business decisions. Services offered are usually roughly equivalent to the functionality of a personal computer, computer cluster, or local network. All are available in geographic regions around the world, accessible via an online connection and standard connection protocols, as well as, usually, a web browser interface. If you don't own enough resources to adequately address your data science needs, it's worth considering cloud services. Lastly, you can try big data technologies: Hadoop, HBase, and Hive, among others. Big data technologies are designed not to move data around much. This saves time and money when the data sets are on the very large scales for which the technologies were designed. Whenever computational tasks are data-transfer bound, big data technologies can give you a boost in efficiency. But more so than the other technologies described in this chapter, big data software takes some effort to get running with your software. You should make the leap only if you have the time and resources to fiddle with the software and its configurations and if you're nearly certain that you'll reap considerable benefits from it. 
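Returning to the first of those four options: the query-access advantage of a database over the file system can be sketched with Python's built-in sqlite3 module. The table and rows below are invented for illustration:

```python
import sqlite3

# An in-memory database stands in for a real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, action TEXT, ts INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("ann", "click", 100), ("bob", "view", 101), ("ann", "view", 102)],
)

# Arbitrary access via a query, instead of scanning files on disk.
for row in conn.execute(
    "SELECT user, COUNT(*) FROM events WHERE action = 'view' GROUP BY user"
):
    print(row)
```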
9 — Executing Plan The last step of the build phase is executing the build plan for the product. Most software engineers are probably familiar with the trials and tribulations of building a complicated piece of software, but they may not be familiar with the difficulty of building software that deals with data of dubious quality. Statisticians, on the other hand, know what it's like to have dirty data but may have little experience with building higher-quality software. Likewise, individuals in different roles relating to the project, each of whom might possess various experiences and training, will expect and prepare for different things. If you're a statistician, you know dirty data, and you know about bias and overstating the significance of results. On the other hand, you may not have much experience building software for business, particularly production software. You should consult software engineers with hands-on experience to learn how to improve your software's robustness. If you're a software engineer, you know what a development lifecycle looks like, and you know how to test software before deployment and delivery. But you may not know about data, and no matter how good you are at software design and development, data will eventually break your application in ways that had never occurred to you. This requires new patterns of thought when building software and a new level of tolerance for errors and bugs, because they'll happen that much more often. You should consult statisticians who are well versed in foreseeing and handling problematic data such as outliers, missing values, and corrupted values. If you're starting out in data science, without much experience in statistics or software engineering, anyone with some experience can probably give you some solid advice if you can explain your project and your goals to them. As a beginner, you have double duty at this stage of the process to make up for your lack of experience. If you're merely one member of a team for the purposes of this project, communication and coordination are paramount. It isn't necessary that you know everything that's going on within the team, but it is necessary that goals and expectations are clear and that someone is managing the team as a whole. The plan should contain multiple paths and options, all depending on the outcomes, goals, and deadlines of the project. No matter how good a plan is, there's always a chance that it will need to be revised as the project progresses. Even if you've thought of all the uncertainties and are aware of every possible outcome, things outside the scope of the plan may change. The most common reason for a plan needing to change is that new information comes to light, from a source external to the project, and either one or more of the plan's paths change or the goals themselves change. As a project progresses, you usually see more and more results accumulate, giving you a chance to make sure they meet your expectations. Generally speaking, in a data science project involving statistics, expectations are based on a notion of statistical significance, on some other concept of the practical usefulness or applicability of the results, or on both. Statistical significance and practical usefulness are often closely related and are certainly not mutually exclusive. As part of your plan for the project, you probably included a goal of achieving some accuracy or significance in the results of your statistical analyses. Meeting these goals would be considered a success for the project. Phase III — Finishing Once a product is built, you still have a few things left to do to make the project more successful and to make your future life easier. So how can we finish our data science project? 10 — Delivering Product The first step of the finishing phase is product delivery. 
In order to create an effective product that you can deliver to the customer, first you must understand the customer's perspective. Second, you need to choose the best media for the project and for the customer. And finally, you must choose what information and results to include in the product and what to leave out. Making good choices throughout product creation and delivery can greatly improve the project's chances for success. The delivery media can take many forms. In data science, one of the most important aspects of a product is whether the customer passively consumes information from it, or whether the customer actively engages the product and can use it to answer any of a multitude of possible questions. Various types of products can fall anywhere along the spectrum between passive and active. Probably the simplest option for delivering results to a customer, a report or white paper includes text, tables, figures, and other information that address some or all of the questions that your project was intended to answer. Reports and white papers might be printed on paper or delivered as PDFs or another electronic format. In some data science projects, the analyses and results from the data set can also be used on data outside the original scope of the project, which might include data generated after the original data (in the future), similar data from a different source, or other data that hasn't been analyzed yet for one reason or another. In these cases, it can be helpful to the customer if you can create an analytical tool for them that can perform these analyses and generate results on new data sets. If the customer can use this analytical tool effectively, it might allow them to generate any number of results and continue to answer their primary questions well into the future and on various (but similar) data sets. If you want to deliver a product that's a step more toward active than an analytical tool, you'll likely need to build a full-fledged application of some sort. The most important thing to remember about interactive graphical applications, if you're considering delivering one, is that you have to design, build, and deploy it. Often, none of these is a small task. If you want the application to have many capabilities and be flexible, designing and building it become even more difficult. In addition to deciding the medium in which to deliver your results, you must also decide which results it will contain. Once you choose a product, you have to figure out the content you'll use to fill it. Some results and content may be obvious choices for inclusion, but the decision may not be so obvious for other bits of information. Typically, you want to include as much helpful information and as many results as possible, but you want to avoid any possibility that the customer might misinterpret or misuse any results you choose to include. This can be a delicate balance in many situations, and it depends greatly on the specific project as well as the knowledge and experience of the customer and the rest of the audience for the results. 11 — Making Revisions After delivering the product, we move on to revising the product after initial feedback. Once the customer begins using the product, there's the potential for a whole new set of problems and issues to pop up. Despite your best efforts, you may not have anticipated every aspect of the way your customers will use (or try to use) your product. 
Even if the product does the things it's supposed to do, your customers and users may not be doing those things, or may not be doing them efficiently. Getting feedback is hard. On the one hand, it's often difficult to get constructive feedback from customers, users, or anyone else. On the other hand, it can be hard to listen to feedback and criticism without considering it an attack on — or a misunderstanding of — the product that you've spent a lot of time and effort building. Some data scientists deliver products and forget about them. Some data scientists deliver products and wait for customers to give feedback. Some data scientists deliver products and bug those customers constantly. It's often a good idea to follow up with your customers to make sure that the product you delivered addresses some of the problems that it was intended to address. Making product revisions can be tricky, and finding an appropriate solution and implementation strategy depends on the type of problem you've encountered and what you have to change to fix it. If, throughout the project, you've maintained awareness of uncertainty and of the many possible outcomes at every step along the way, it's probably not surprising that you find yourself now confronting an outcome different from the one you previously expected. But that same awareness can virtually guarantee that you're at least close to a solution that works. Practically speaking, that means you never expected to get everything 100% correct the first time through, so of course there are problems. But if you've been diligent, the problems are small and the fixes are relatively easy. Once you recognize a problem with the product and figure out how it can be fixed, there remains the decision of whether to fix it. The initial inclination of some people is that every problem needs to be fixed; that isn't necessarily true. There are reasons why you might not want to make a product revision that fixes a problem, just as there are reasons why you would. The important thing is to stop and consider the options rather than blindly fixing every problem found, which can cost a lot of time and effort. 12 — Wrapping Up Project The last step in our data science process is to wrap it up. As a project in data science comes to an end, it can seem like all the work has been done, and all that remains is to fix any remaining bugs or other problems before you can stop thinking about the project entirely and move on to the next one (continued product support and improvement notwithstanding). But before calling the project done, there are some things you can do to increase your chances of success in the future, whether with an extension of this same project or with a completely different project. There are two ways in which doing something now could increase your chances of success in the future. One way is to make sure that at any point in the future you can easily pick up this project again and redo it, extend it, or modify it. By doing so you'll increase your chances of success in that follow-on project, as compared to the case where, a few months or years from now, you dig up your project materials and code and find that you don't remember exactly what you did or how you did it. Two practical ways to do so are through documentation and storage. Another way to increase your chances of success in future projects is to learn as much as possible from this project and carry that knowledge with you into every future project. 
By conducting a project postmortem, you can hope to tease out the useful lessons from the rest. This includes reviewing the old goals, the old plan, your technology choices, the team collaboration, and so on. Whether there's a specific lesson you can apply to future projects or a general lesson that contributes to your awareness of possible, unexpected outcomes, thinking through the project during a postmortem review can help uncover useful knowledge that will enable you to do things differently — and hopefully better — next time. If you take away only one lesson from each project, it should probably relate to the biggest surprise that happened along the way. Uncertainty can creep into nearly every aspect of your work, and remembering all the uncertainties that caused problems for you in the past can hopefully prevent similar ones from happening again. From the data to the analysis to the project's goals, almost anything might change on short notice. Staying aware of all the possibilities is not just a difficult challenge; it's nearly impossible. The difference between a good data scientist and a great data scientist is the ability to foresee what might go wrong and prepare for it. Conclusion Data science still carries the aura of a new field. Most of its components — statistics, software development, evidence-based problem solving, and so on — descend directly from well-established, even old fields, but data science seems to be a fresh assemblage of these pieces into something that is new. The core of data science doesn't concern itself with specific database implementations or programming languages, even if these are indispensable to practitioners. The core is the interplay between data content, the goals of a given project, and the data-analytic methods used to achieve those goals. I'd highly encourage you to check out Brian's book to get more details on each step of the data science process. It is very accessible for non-experts in data science, software, and statistics. It paints a vivid picture of data science as a process with many nuances, caveats, and uncertainties. The power of data science lies not in figuring out what should happen next, but in realizing what might happen next and eventually finding out what does happen next. — — If you enjoyed this piece, I'd love it if you hit the clap button 👏 so others might stumble upon it. You can find my own code on GitHub, and more of my writing and projects at https://jameskle.com/. You can also follow me on Twitter, email me directly or find me on LinkedIn. Sign up for my newsletter to receive my latest thoughts on data science, machine learning, and artificial intelligence right at your inbox!
How to Think Like a Data Scientist in 12 Steps
621
how-to-think-like-a-data-scientist-in-12-steps-157ea8ad5da8
2018-08-25
2018-08-25 18:56:26
https://medium.com/s/story/how-to-think-like-a-data-scientist-in-12-steps-157ea8ad5da8
false
8,042
Your Ultimate Guide to Data Science Interviews
null
null
null
Cracking The Data Science Interview
le_j6@denison.edu
cracking-the-data-science-interview
DATA SCIENCE,TECHNICAL INTERVIEW,COMPUTER SCIENCE,STATISTICS,MACHINE LEARNING
james_aka_yale
Data Science
data-science
Data Science
33,617
James Le
Blue Ocean Thinker (https://jameskle.com/)
52aa38cb8e25
james_aka_yale
9,745
1,164
20,181,104
null
null
null
null
null
null
0
null
0
32881626c9c9
2018-09-21
2018-09-21 20:38:20
2018-09-24
2018-09-24 18:08:00
2
false
en
2018-09-24
2018-09-24 18:08:00
5
157f5bb46f47
1.236164
1
0
0
I was first introduced to Python by my girlfriend, who used Python, R, and SQL to analyze large sacks of data to model home loan…
1
Python Outside the Box I was first introduced to Python by my girlfriend, who used Python, R, and SQL to analyze large sacks of data to model home loan performance. I saw almost-words and numbers (and a lot of brackets) that produced either more near-words (good) or 'Traceback (most recent call last)' — bad. I've spent the past eight weeks learning Python and data science under the all-knowing brow of General Assembly in the Data Science Immersive program. While Python has become nearly transparent to me, I still think of it as computer code used to perform data science and machine learning jobs. This view was shattered this week when I asked a friend for help finding a good step sequencer (a step sequencer is an audio device that plays back tones in a configurable order). She suggested I check out the Fruitbox Sequencer, a beginner-friendly step sequencer from Adafruit that uses capacitive-touch sensors to trigger sounds. Fruitbox runs on CircuitPython — a subset of Python for programming and experimenting with low-cost microcontrollers. You need a capable development board ($15-$30 at Adafruit) and a basic knowledge of Python. I've placed my order for a Fruitbox kit and for a Trellis Feather DSP-G1 Synthesizer kit. My next post will update when they arrive…
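In the meantime, the capacitive-touch trigger idea is simple enough to sketch in CircuitPython. This is a guess at the general shape of such a program, not the Fruitbox code itself, and the pad pin is an assumption (boards differ); it runs only on a CircuitPython-capable board:

```python
import time

import board     # CircuitPython board pin definitions
import touchio   # capacitive-touch input

# A1 is an assumption; use any touch-capable pad on your board.
pad = touchio.TouchIn(board.A1)

while True:
    if pad.value:  # True while the pad (or attached fruit!) is touched
        print("pad touched: trigger a step / play a sound here")
    time.sleep(0.05)
```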
Python Outside the Box
10
python-outside-the-box-157f5bb46f47
2018-09-26
2018-09-26 08:55:18
https://medium.com/s/story/python-outside-the-box-157f5bb46f47
false
226
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
null
datadriveninvestor
null
Data Driven Investor
info@datadriveninvestor.com
datadriveninvestor
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
dd_invest
Data Science
data-science
Data Science
33,617
Garth Hogan
null
e6b8a1368f65
garth38
0
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-24
2017-10-24 10:42:01
2017-10-24
2017-10-24 10:47:50
1
false
en
2017-10-24
2017-10-24 10:51:14
7
157ff16e5ff9
2.581132
0
0
0
The association of AI with marketing mediums such as Social Media & Emails is gradually proving to be one of the major advancements in the…
5
How AI is changing the scene in Social media marketing & Email marketing. The association of AI with marketing mediums such as Social Media & Emails is gradually proving to be one of the major advancements in the field of marketing technology. Let's explore how AI is changing the scene in Social media marketing & Email marketing. AI IN SOCIAL MEDIA MARKETING With the emergence of the first social media site, Six Degrees, in the year 1997, where members could upload profiles & make friends, a major revolution began in the world of communication. Over the years social media has evolved remarkably to emerge as one of the most interactive & profitable mediums of marketing. Whether it is a B2B or B2C marketing campaign, without a social media promotion plan it is essentially incomplete & will most definitely lack impact. The integration of AI in social media has added to the advantages of this medium of marketing. Right from sifting through millions of customer profiles to drawing inferences from consumer behavior patterns, AI has added to the way we do business through social media. Let's now see a few examples of the application of AI in social media. SLACK BOTS Slack bots profoundly change the way we market through social media. Basically, they eliminate the assumptions & instinct that need to be applied when posting content on social media platforms in order to promote products & services successfully. Slack bots help in in-depth analysis of content posted on social media related to your products or services & also aid in predicting the success of your posts by comparing them with similar content across social media platforms. Thus, Slack bots not only minimize the guesswork but also speed up decision making in social media marketing. LINKEDIN & BRIGHT I am sure you get regular notifications on LinkedIn about the possible job positions you could apply for. This has been made possible by the association of LinkedIn with a job search site known as Bright.com. Bright.com is a job search startup that utilizes various machine learning algorithms to make it easier for both recruiters/HR managers within companies & job seekers to find their match. LinkedIn analyzes hiring patterns, account info, job descriptions, job locations & candidate profiles to make apt suggestions. The fact that almost every business official who wants to stay in touch with the business world is on LinkedIn has made this AI integration even more useful. TEXT MINING & MARKETING AUTOMATION Text & data mining have now become a reality, all thanks to AI & machine learning. Text mining basically constitutes the analysis of unstructured & structured data across various social media platforms for obtaining a clear view of the buyer persona & predicting consumer behavior. With the detailed customer data made available through text mining, targeted marketing automation is another marketing tactic that can be utilized to its maximum potential. Marketing automation in association with artificial intelligence & machine learning algorithms can help predict optimal timings for posting content on social media, thus aiding in garnering not only a larger but a more diverse audience. PINTEREST & KOSEI Kosei, a data software company, & Pinterest came together recently to add to the personalized recommendations feature of Pinterest. This has made it possible for Pinterest to relate searches of users to specific interests & make appropriate recommendations. 
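To make the text-mining idea above concrete, here is a toy Python sketch that groups social media posts by theme; the posts are invented, and real buyer-persona pipelines are far more involved:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "love this phone, camera is amazing",
    "terrible battery, phone died in hours",
    "great camera and display",
    "battery drains way too fast",
]

# Turn unstructured text into numeric features, then cluster.
X = TfidfVectorizer(stop_words="english").fit_transform(posts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(list(zip(labels, posts)))  # camera fans vs. battery complainers
```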
FACIAL RECOGNITION IN FACEBOOK Tagging is an activity on Facebook that has made sharing pictures more interesting. With AI coming into play, it is now easier to tag people, as automatic facial recognition is now possible. In the coming years, Facebook will utilize user history & information to suggest shopping places & present upcoming offers.
How AI is changing the scene in Social media marketing & Email marketing.
0
how-ai-is-changing-the-scene-in-social-media-marketing-email-marketing-157ff16e5ff9
2018-06-17
2018-06-17 14:52:40
https://medium.com/s/story/how-ai-is-changing-the-scene-in-social-media-marketing-email-marketing-157ff16e5ff9
false
631
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Michael Suzanne
#MarketResearcher
78bc788684e6
Data_Appending
23
24
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-07
2018-03-07 11:37:27
2018-03-07
2018-03-07 11:40:38
1
false
en
2018-03-07
2018-03-07 11:40:38
1
15801760c436
1.698113
0
0
0
What is course about?
1
Python Machine Learning What is this course about? Machine Learning is based on the idea that computers should be able to learn and adapt through experience without detailed programming. In this course, you will learn to leverage Python to solve machine learning problems. You will learn about the most effective machine learning techniques, and their practical implementation through a hands-on approach. Along with a solid theoretical understanding of these machine learning techniques, you will also learn to quickly apply them to solve new problems. Each lecture has detailed, live explanations from the instructor and assignments to test your level of understanding. Once you finish this course you will have taken a giant leap towards the future of machine learning. Highlights of the course This course will take you from zero to Machine Learning hero in 45 days. Learn Python from scratch and apply it to solve real Machine Learning problems. Training spread over 6 weekends to give you all the required time for exercises and theory. Training delivered in a "Live Online" session by a very experienced trainer from the United States. After every weekend get a comprehensive assignment to further solidify your learning. Lifetime access to recorded training session videos, so learning stays with you. In-depth learning and practice of Supervised and Unsupervised Learning. A detailed project to give hands-on machine learning experience. Once you finish this course you will have taken a giant leap towards the future of data analysis. Why this course? The course has been designed after detailed discussions with many industry leaders across the globe, addressing the real-life problems faced by the industry today. A comprehensive course that gives the right grounding in Machine Learning algorithms and their implementation using Python. All the machine learning libraries of Python are explained in detail. Machine learning is made simple so you can get started, and multiple case studies and exercises are provided to give you a better understanding of Machine Learning. The duration of the course has been strategically set to 45 days so that you have enough time to complete the exercises and understand the Machine Learning concepts in depth. Training will be conducted over weekends so working professionals can easily attend this course. Build 'high-value predictions' that can guide better decisions and smart actions in real time without human intervention. For more details please visit our full page: click here…
Python Machine Learning
0
python-machine-learning-15801760c436
2018-03-07
2018-03-07 11:40:39
https://medium.com/s/story/python-machine-learning-15801760c436
false
397
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Sana Mulla
null
147112e3b5f1
sanamulla_33415
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-19
2018-03-19 11:18:10
2018-03-19
2018-03-19 12:27:53
0
false
en
2018-03-19
2018-03-19 12:27:53
0
158037f8f5fc
2.85283
4
1
0
Last week, our professor made some remarks about her first face-to-face interaction with a person whom she had befriended online…
1
Tales of Semiartificial Intelligence and Artificial Semiintelligence Last week, our professor made some remarks about her first face-to-face interaction with a person whom she had befriended online. Apparently, her friend thought Suzan Hoca's real-life personality was much warmer than the impression her online messages had invoked in him. The story reminded me of a paper by Albert Borgmann. In his paper, Borgmann talks about MUDs (Multi-User Domains). He introduces them the following way: "A MUD [Multi-User Domains], so called is a domain in cyberspace that is accessible via the keyboard and the screen of a computer connected to the Internet. The medium is typed messages. In a MUD one is free to stylize one's personality at will, and one engages other people, similarly stylized in conversation." Since the medium is typed messages, facial expressions, gestures, and prosody cannot be utilized to convey a friendly, inviting approach. The only way to convey enthusiasm is either to explicitly declare it (which may be off-putting in itself) or to have your messages perform your enthusiasm via exclamation points, emojis, etc. The constraint imposed by the medium motivates (perhaps even coerces) people to verbalize what would have gone unverbalized otherwise. For example, blushing, smiling, and rolling one's eyes are semi-instinctive responses that convey one's position towards an unfolding event. However, if one is conversing exclusively through typed messages, one is not in a position to observe these responses. I do not intend to make this into an argument about the so-called virtues of real-world communication. Fact is, people adapt to these technologies. They are sufficiently reflexive about what their typed messages transmit to the receiving end. Instead, I hope to bring this anecdote towards what (I think) Borgmann was trying to respond to. It is worthwhile to think about the context of Borgmann's paper. He is writing this in a book dedicated to Hubert Dreyfus, known for his erudite critique of Good Old-Fashioned Artificial Intelligence (GOFAI). In a nutshell, GOFAI was built on the premise that "Digital computation is necessary and sufficient for general intelligent action". Dreyfus' critique demonstrates, among other things, how our intelligence is completely contingent on our embodied coping skills and how most of these skills cannot be expressed in terms of propositional logic. Hence, human intelligence is more than digital information processing. It is, above all, embodied. Dreyfus had published his first pronounced critique of AI in 1972, whereas Borgmann wrote this paper in 1998. A lot changed in between; personal computers became a mainstream item that entered the household. Consider Borgmann's following remark in this light: "Technology as a form of life does not refute its adversaries but attempts to circumvent and obviate them. Thus, the ambiguity in question (I will call it virtual ambiguity) does not so much refute Dreyfus's position as it appears to make it irrelevant." In GOFAI, we have a project that overlooked the embodied nature of our intelligence, since it could not adequately represent that aspect. Borgmann's quote suggests that in the next step, the technological complex (as Borgmann calls it) ought to make that aspect irrelevant. It appears to me that MUDs can be regarded in this light; they make our sense of self disembodied and force us to transmit everything verbally, instead of through alternative means. 
In a similar spirit, Borgmann makes the following remark: "Entry into cyberspace of a MUD encourages if it does not prescribe the forfeiting of one's location. Since one enters a MUD in a disembodied state one cannot import one's bodily location. One's tie to reality is loosened and one's standpoint fades into an indistinct background." In the context of MUDs, Borgmann talks about a user who mistook a housekeeping bot for a human player and fell in love with it. The bot is called Julia, and she simply scours the surface of the messages she receives for clues that match a long list of largely precooked replies. When the precooked replies don't suffice, she tries to fall back on humor, delay, and sarcasm to mask her breaks. For Borgmann, a player falling in love with Julia is an instance of "a semiartificial intelligence seeing his reflection in an artificial semiintelligence". The player is semiartificial because he is a disembodied abstraction of an embodied being, whereas the bot is semiintelligent because she tries to perform as if she were the disembodied abstraction of an embodied being; when her precooked replies do not suffice, she tells users that she has PMS (yes, that is one of the automated answers).
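Julia's mechanism, as described, is surface pattern matching over precooked replies, and a toy version fits in a dozen lines of Python. The cues and replies below are invented, not the real bot's:

```python
import random

# Precooked cue -> reply pairs, plus deflections for everything else.
PRECOOKED = {
    "hello": "Hi there! I was just tidying up this room.",
    "where": "I last saw that somewhere near the library, I think.",
    "love": "Flattery will get you everywhere.",
}
DEFLECTIONS = [
    "Ha! Good one.",
    "Sorry, I'm a bit distracted today.",
    "Ask me again later, will you?",
]

def julia_like_reply(message):
    """Scan the surface of the message for a cue; otherwise deflect."""
    lowered = message.lower()
    for cue, reply in PRECOOKED.items():
        if cue in lowered:
            return reply
    return random.choice(DEFLECTIONS)

print(julia_like_reply("Hello, anyone here?"))
print(julia_like_reply("What do you think of Dreyfus?"))  # deflection
```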
Tales of Semiartificial Intelligence and Artificial Semiintelligence
24
tales-of-semiartificial-intelligence-and-artificial-semiintelligence-158037f8f5fc
2018-04-06
2018-04-06 21:53:35
https://medium.com/s/story/tales-of-semiartificial-intelligence-and-artificial-semiintelligence-158037f8f5fc
false
756
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Emre Alpagut
null
fc65ea16059b
alpagutemre
7
7
20,181,104
null
null
null
null
null
null
0
# Obfuscated Python one-liner, reassembled (the original layout was ASCII art
# and had lost its opening bracket). The lambda registers each word as a
# global variable named after itself, so the final line prints
# "Just another Python Hacker".
[globals().update({"______": lambda x: globals().update(dict([[x] * 2]))}),
 ______("Just"), ______("another"), ______("Python"), ______("Hacker")]
print(" ".join([Just, another, Python, Hacker]))
1
null
2017-11-27
2017-11-27 07:04:48
2017-12-16
2017-12-16 08:05:29
3
false
en
2017-12-19
2017-12-19 15:10:10
6
1580c04fc2f6
3.719811
11
0
0
Recently I was reviewing scripts from various Data Science projects including Kaggle competitions, and I think that it is really true story…
5
Pro Data Scientist Recently I was reviewing scripts from various Data Science projects, including Kaggle competitions, and I think it really is true that a Data Scientist is a "person who is better at statistics than any software engineer and better at software engineering than any statistician". It seems that many Data Science projects are incidental, one-time, first-success-never-reviewed initiatives. The goal for most Data Scientists is to show how smart and elegant they are. And it's often "smart" by using idiomatic code structures and "elegant" by putting everything in markdown with many sexy charts and plots embedded in it. The Pragmatic Data Scientist Actually, I don't know whether the typical Data Scientist doesn't know best practices in software engineering or simply has no time to apply them. However, when you are doing Data Science professionally, there is always a moment when really experienced Software Engineers and DevOps people will take, see, and study your solution & code. It is because eventually it must be deployed in a production environment, submitted to a repo, shared with others, etc. And then repeating, for every line of your source code, "Hey man, I know it could be implemented more wisely, but my focus is on doing Data Science, not Software Development or source code optimization" is not a good idea. If you are doing Data Science this way, go to a bookstore and buy "The Pragmatic Programmer". Then — before you start to read — replace all occurrences of "programmer" with "Data Scientist". And then, finally, read this book. 10 times. Re-think your Data Science workflow… All those practices that are 'best practices for Software Engineering' are very good for Data Science as well. Please note that every Data Science project ultimately involves source code, is (sometimes) tested well, and is finally deployed in a production environment. So, you can only be as good at Data Science as you are at Software Engineering. A chain is only as strong as its weakest link… KISS, DRY and Data Science Hell Simple is better than tricky — nobody will be impressed by source code that looks like your dream was to win The International Obfuscated C Code Contest (http://www.ioccc.org/). This rule is a.k.a. KISS (Keep it simple, stupid). Don't Repeat Yourself in Data Science is exactly the same as in Software Engineering, and it is stated as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system". The essence of this principle is much wider than keeping source code short. It is related to things like data structures, validation tasks, the build system, and so on. Conventions and consistency. If you decided to use one code construction to do some action, use it consistently. Especially in R you can do one thing in various ways. If you are mixing dplyr::distinct(df), unique(df), and df[!duplicated(df), ] in one R file, something is wrong, isn't it? Dependency Hell. If you don't know what 'Dependency Hell' means, please read about it here: https://en.wikipedia.org/wiki/Dependency_hell I wish you never get into such trouble, but frankly speaking, without appropriate tools, sooner or later you will be there. Reproducibility Level PRO A reproducible process is a key value in data science. Often people think that it is enough to call a seed() function in your R or Python script and share the source code with others, forgetting that the source data is the king here. Garbage in, garbage out. 
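A minimal Python sketch of that point: seeds make the stochastic parts repeatable, but only a recorded fingerprint of the source data makes the inputs repeatable. The file path here is hypothetical:

```python
import hashlib
import random

import numpy as np

def sha256_of(path, chunk_size=1 << 20):
    """Fingerprint an input file so data drift is detectable later."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Seeds make the randomness repeatable...
random.seed(42)
np.random.seed(42)

# ...but only a recorded fingerprint makes the *data* repeatable.
print("source data fingerprint:", sha256_of("data/train.csv"))
```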
If the source of data is changing and nobody secures it — its structure, its format, and the relationships between processed entities — the only things that can be reproduced are the intention and the assumptions. For a small, one-time data science project that might be fine. But my experience with data processing shows that there is no such thing as a task you do only once. If so, I need to protect all my assumptions, data, libraries, packages, and so on. I need to protect the whole Data Science process. Tools for Pro Data Scientists It is up to you what tools you select to make your workflow reliable and efficient. For example, for R-language data science projects I would recommend a few really fundamental things: Git, RStudio, Anaconda, Docker, and a hot tool that has been published recently (and it is free!) — RSuite.io. This tool is really awesome, because it is something like Docker but dedicated to R projects. RSuite.io — makes your Data Science workflow really reliable The difference between Docker and RSuite.io lies in the way you use them. RSuite.io helps you with many things, including package preparation, solution deployment, repository management, and solution preparation for various environments, i.e., development, test, and production. It's easy to learn and it can be easily integrated with RStudio and Git. And actually it can help you with using Docker for Data Science as well! In my opinion RSuite.io is the missing link in the chain for a professional Data Science workflow. Blogroll: r-bloggers.com
Pro Data Scientist
24
pro-data-scientist-1580c04fc2f6
2018-05-26
2018-05-26 07:47:59
https://medium.com/s/story/pro-data-scientist-1580c04fc2f6
false
840
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Szymon Drejewicz
Enterprise Architecture / BPM / Machine Learning / Software Engineering / Data Science / AI & RPA Robotic Process Automation / UML / BPMN / Ruby, Elixir, Erlang
b6ef2a2e1d3b
szydre
66
187
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-28
2018-03-28 05:48:51
2018-03-28
2018-03-28 05:58:08
0
false
en
2018-03-30
2018-03-30 06:30:49
0
1580d964d469
1.298113
1
0
0
A new study shows up to the year 2022. There will be a revolutionary lift in the overall market of Robotics, IoT, AR/VR etc.
5
Mechatronics & Robotics 2018 creating a big platform for all the tech lovers and professionals associated with technology. A new study looking ahead to the year 2022 shows there will be a revolutionary lift in the overall market for Robotics, IoT, AR/VR, etc. London, Churchfield Road, March 27, 2018 — As tickets go on sale online across the world, and due to an increase in speaker participation, ME Conferences, organizer of the International Conference on Mechatronics & Robotics, announces that early bird registration has been extended to May 10. The Mechatronics & Robotics 2018 conference is not restricted to robotics and mechatronics alone; it will cover many sessions, as given below: 1. Mechatronics and Robotics 2. Design and product development 3. Internet of Things (IoT) 4. Materials Science and Manufacturing 5. New Approaches in Automation and Robotics 6. Computational Vision and Robotics 7. 3D Scanning 8. Wearable Robots 9. Medical Robotics and Computer-assisted Surgery 10. Industrial Automation 11. Autonomous Technology 12. Sensor Networks 13. Intelligent Machines 14. Automotive and Vehicle Technology Systems 15. The Coming Future of Artificial Intelligence 16. Power storage BBC Research — the study showed huge growth in the global market for cloud and Internet of Things (IoT), Augmented Reality and Virtual Reality, wearable technology, medical robots, sensors, technology in risk management, cyber security technology, smart manufacturing, autonomous vehicles, 3D printing, and many more. The latest talk going around is about the 3D-printed car, Sophia the humanoid robot, IoT, and the self-driving car. Consumers are also upgrading themselves; the pace at which technology is upgrading in the market gives an insight into how big this is going to be. The Mechatronics & Robotics 2018 Conference in Helsinki, Finland is mainly focusing on workshops on the latest technologies, which will be the major attraction for all the participants, and consists of sessions like keynote speeches, oral presentations, poster presentations, B2B meetings, panel discussions, Q&A sessions, and industry expert interactions. There are awards for categories like Best Poster, Best Oral Presentation, Young Researcher Forums (YRF), e-Poster presentations, and video presentations, judged by experts from both industry and academia.
Mechatronics & Robotics 2018 creating a big platform for all the tech lovers and professionals…
1
mechatronics-robotics-2018-creating-a-big-platform-for-all-the-tech-lovers-and-professionally-1580d964d469
2018-03-30
2018-03-30 06:30:49
https://medium.com/s/story/mechatronics-robotics-2018-creating-a-big-platform-for-all-the-tech-lovers-and-professionally-1580d964d469
false
344
null
null
null
null
null
null
null
null
null
Virtual Reality
virtual-reality
Virtual Reality
30,193
Climate Change Congress 2018
Kevin Mathew
1494a46abf66
norahpink
2
4
20,181,104
null
null
null
null
null
null
0
null
0
6a6c76756bb2
2017-12-11
2017-12-11 01:32:22
2017-12-11
2017-12-11 06:09:06
9
false
en
2017-12-12
2017-12-12 00:29:05
1
15812ded7a8e
5.14717
6
0
0
I know, a massive corporate encroaching on open source territory, it must be a trap right! I’m yet to find one, so let’s have a look at…
5
Microsoft R. Where, How, Why. I know, a massive corporate encroaching on open source territory, it must be a trap, right! I'm yet to find one, so let's take a quick look at the where, how, and why of Microsoft R, and you can tell me if I'm being duped. Microsoft R Open Microsoft R Open (MRO) is the open source Microsoft variant of R. It is based on the CRAN R project with a number of enhancements. It is installed on a single workstation. This is not to be confused with, but easily is, Microsoft R Server (MRS), which is a commercial offering for serious R workloads. I will discuss Microsoft R Server next. Microsoft R Open has all the CRAN libraries as well as additional libraries. So you are getting everything and a bit more. Many R use cases require multiple complex calculations in a single model over large amounts of data. There are some computational limitations to CRAN R that Microsoft R Open goes a long way toward solving. Two of those limitations are: 1. CRAN R is single threaded, making complex computations slow. 2. Using CRAN R, data is loaded into RAM for computation, so data is limited to the amount of RAM on the user's workstation. Microsoft R Open is multi-threaded. That is a very short sentence, but it is a huge advantage, as highlighted below. Image Source: Microsoft I will cover off the RAM limitation in the next section. You don't have to use the Microsoft R Open client with Microsoft R Open. (Which is good because it is less than awesome.) You can use the client of your choice, such as the command line, CRAN R Studio, Visual Studio, etc., and configure it to use Microsoft R Open to get the extra libraries and multi-threaded goodness. MS R Open client on the left. CRAN R Studio configured to use Microsoft R Open server on the right. Microsoft R Server Microsoft R Server is the commercial R server from Microsoft. This introduces a number of extra packages, including those that facilitate R at scale. ScaleR manages large amounts of data by using RAM and disk along with the parallel processing already mentioned. DistributedR is exactly what it sounds like: it allows Microsoft R Server to be distributed across multiple nodes to increase parallelism. This is getting a bit more techy than I hoped, so here is a nice diagram from Microsoft to show the value. Image source: Microsoft So here is the only gotcha. If you want to run R at scale you have to pay for a Microsoft R Server license. Well, that is pretty fair I reckon, PLUS there is a good chance your organisation has this as part of Microsoft SQL Server licensing. *Big disclaimer about me not being a Microsoft licensing expert; this does not constitute an offer or guarantee and all that stuff. R in Power BI Microsoft Power BI has two ways of exploiting R. Neither of these needs Microsoft R Open. Power BI will go for a look around and try to find an R server; it could be CRAN, could be Microsoft R Open, could be another. In this case my installation of Power BI found my Microsoft R for SQL Server instance. More about that later. Here are the R servers available to me. Microsoft R Open and CRAN are there. The two ways to use R in Power BI are: 1. An R Script Visual within Power BI to create an R visualisation based on data in a Power BI model (using data already in your Power BI model, you run some R script over it). 2. An R script as a data source, which creates a data frame within Power BI with R applied (R defines the data frames and performs computations). When you use an R script as a data source, the results appear as a table when importing. 
So even in Power BI, Microsoft does not lock you into Microsoft R Open. Suspiciously un-corporate like. How about Microsoft Azure? Microsoft Azure Machine Learning Azure ML has a large number of statistical and predictive functions built in. Azure ML also allows R and Python to be included in the model, but Microsoft does restrict the R functions that can be used in Azure ML, as some could be damaging in a cloud environment. Other than the simple drag-and-drop interface, the big advantage of Azure ML is that it is incredibly simple to make the machine learning 'experiment' a web service that can then be used in applications and websites. This could be called 'operationalising data science'. Outside of Azure ML, creating a web service, or operationalising, traditional R or Python can be a bit difficult. Here again users can choose to run their R script using the CRAN R server or the Microsoft R Open server. On the right users can choose the R version to run their R script components. SQL Server R Services SQL Server R Services is very exciting. It allows R to be used within a SQL Server stored procedure. So an R expert can create some code, send it via email to a DBA, and the DBA pastes it into a SQL Server stored procedure. This could be called 'productionising data science'. This method can be used as part of standard data flows within a database or data warehouse. When new data comes in, the R stored procedure is run and the output is updated. MS SQL Server Stored Procedure containing R script As a data and visualisation person I find the graphs available in R like BI Back to the Future. Very 1980s. But I get it, R is about important sciencey things; this is not a beauty contest. Now we can have both: have data available in SQL Server that has already had R applied, and display it using your visualisation and/or reporting tool of choice. Summary Where & How Desktop with Microsoft R Open. Cloud with Azure ML, Azure App Service (a web service created in Azure ML or otherwise), Power BI, an Azure VM running SQL Server, or an Azure Machine Learning VM. On premise with SQL Server R Services, or Microsoft R Server running on traditional single-server hardware or in a cluster. Why More libraries. Multi-threaded. Scale beyond RAM. Distributed. Microsoft R Open — open source, use all your cores for processing, pass on the client. Microsoft R Server — go parallel and scale up your R as large as you want. Power BI — good for ad-hoc use of R and displaying R graphs as part of a dashboard. Azure ML — good for exploiting Machine Learning in other applications via a web service. SQL Server R Services — great for embedding R into repeating business processes within a SQL Server environment. I'm David Myall. Analytics Practice Manager at DWS.
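As a coda to the multi-threading point above, here is a rough stand-in, in Python rather than R purely for illustration, showing how repeated independent model fits benefit from using all cores; timings will vary by machine:

```python
import time
from multiprocessing import Pool

import numpy as np

def fit_once(seed):
    """One cheap least-squares fit, standing in for repeated model fitting."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(2000, 50))
    beta = rng.normal(size=50)
    y = X @ beta + rng.normal(size=2000)
    return np.linalg.lstsq(X, y, rcond=None)[0][0]

if __name__ == "__main__":
    seeds = range(200)

    t0 = time.time()
    serial = [fit_once(s) for s in seeds]
    print("serial:   %.2fs" % (time.time() - t0))

    t0 = time.time()
    with Pool() as pool:  # uses all available cores
        parallel = pool.map(fit_once, seeds)
    print("parallel: %.2fs" % (time.time() - t0))
```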
Microsoft R. Where, How, Why.
33
microsoft-r-where-how-why-15812ded7a8e
2018-04-19
2018-04-19 22:02:49
https://medium.com/s/story/microsoft-r-where-how-why-15812ded7a8e
false
1,046
Some thoughts from the folk at DWS Group
null
dws.australia
null
DWS GROUP
blogger@dws.com.au
dwsgroup
null
null
Microsoft
microsoft
Microsoft
19,490
David Myall
null
e2afb26f941c
davidmyall
17
15
20,181,104
null
null
null
null
null
null
0
null
0
a1575a97609
2018-06-20
2018-06-20 15:09:11
2018-06-25
2018-06-25 14:58:07
4
false
en
2018-06-25
2018-06-25 15:02:56
4
158153bd7f30
3.333962
2
0
0
Unlock with a look.
4
Facial Recognition. Unlock with a look. The smartphone industry has seen major changes in 2018: facial recognition, bezel-less displays, in-glass fingerprint scanners, and a lot more technology stuffed into the mobile phones we carry today. The human face has an astonishing variety of features which not only help us recognize others but also let us read and understand them through a constant flow of intentional and unintentional signals. It's one of the unique functions that has separated man from machine, until now. Photo by Aatik Tasneem on Unsplash Fact: Technology developed by Facebook's AI can now recognize faces with 97.37% accuracy, which is 0.28% less accurate than a human (Human — 1, Artificial Intelligence — 0). This is surprising, because we tend to assume computers are more accurate than us. How do computers recognize our faces? Initially, the computer divides the face into landmarks or nodal points, which are features like the depth of the eye sockets, the distance between the eyes, the width of your nose, and the length of your lips. Everyone has different coloured eyes, nose structure, lips, and unique ears, so the measurement of the nodes is different for everyone, and hence so is the 'password'. All this information is turned into a unique code that we call the person's own "faceprint". But there is a problem: to get a correct match each and every time, your photos have to be the same, or like to like. Our faces are in constant flux; they are not static like fingerprints. How does this technology work? Every face has numerous, distinguishable landmarks, the different peaks and valleys that make up facial features. Each human face has approximately 80 to 100 nodal points. The system scans faces and measures distinguishing facial features such as eye position, eyebrow shape, and nostril angle. This creates a distinctive digital "faceprint" — much like a fingerprint — which the system then runs through a database to check for a match. Facial Recognition Technology scans: The distance between the eyes The width of the nose The depth of the eye sockets The shape of the cheekbones The length of the jawline These nodal points are measured, creating a numerical code, called a faceprint, representing the face in the database. Nodal points The four-stage process that the system follows: Capture — a physical or behavioral sample image is captured by the system during enrollment (find a face in the image). Extraction — unique data is extracted from the sample and a template is created (analyse facial features). Comparison — the template is then compared with a new image (compare the image with the database). Matching — the system then decides if the features extracted from the new sample are a match or not (make a prediction). But we have some problems. We face four main issues with facial recognition: the A-PIE problem, i.e., Ageing, Pose, Illumination, Emotions. To solve the Pose problem we have a 3D recognition system called DeepFace. It's able to take a 2D photo of a person and create a 3D model of the face. Apple's Dot projector doing its 3D model work. — Apple So now, scans from any angle or pose can be compared, which solves Pose from the A-PIE problem. Ageing is no longer a problem either. The faceprint system is now refined to capture the areas of the face that have rigid tissue, like the curvature of the jaw and the forehead, which don't alter too much in the course of time. To deal with Illumination, mobile phones are equipped with an IR emitter near the front-facing camera (Mi 8's IR sensor, Apple's Flood Illuminator). 
Emotions are dealt with by the system learning human emotions, which is done through deep learning. Emotion — Photo by Ryan Franco on Unsplash So the more you use the facial recognition on your phone, the better it gets over time. In upcoming years there's going to be surface texture analysis, where the device recognises the texture of the skin rather than just the facial details. This technology can even distinguish between identical twins using their skin texture. No doubt the fruit company is gonna bring it to the world ("wink wink"). PS: There will be future updates to this post if something new turns up. Follow for updates SS Thanks for reading. Until next time Peace, Love and Gratitude.
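To make the comparison and matching stages described above concrete, here is a toy Python sketch; the landmark measurements and threshold are invented, and no vendor's real pipeline is this simple:

```python
import numpy as np

# Toy "faceprints": vectors of landmark measurements (eye distance,
# nose width, eye-socket depth, ...), one enrolled template per person.
enrolled = {
    "alice": np.array([62.1, 34.5, 27.9, 41.2]),
    "bob":   np.array([58.7, 36.0, 25.4, 44.8]),
}

def match(probe, templates, threshold=3.0):
    """Compare a probe faceprint against enrolled templates (Euclidean
    distance) and predict the closest identity, or None if too far."""
    name, dist = min(
        ((n, float(np.linalg.norm(probe - t))) for n, t in templates.items()),
        key=lambda pair: pair[1],
    )
    return (name, dist) if dist <= threshold else (None, dist)

probe = np.array([61.8, 34.9, 28.3, 41.0])  # a new scan of Alice
print(match(probe, enrolled))               # ('alice', small distance)
```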
Facial Recognition.
8
facial-recognition-158153bd7f30
2018-06-25
2018-06-25 15:02:56
https://medium.com/s/story/facial-recognition-158153bd7f30
false
698
New Beginning to infinite possibilities.
null
null
null
Shyam Cortex
shyamsampathis@gmail.com
shyam-cortex
TECHNOLOGY,ELECTRONICS,EDUCATION,LEARNING,MOBILE
null
Privacy
privacy
Privacy
23,226
Shyam S
Being Human | Electronics Enthusiast | Engineer | Maker | Believer | Being kind to everyone |
71451ac1cd41
shyamstrong
13
120
20,181,104
null
null
null
null
null
null
0
null
0
1f0f7ce4f5ad
2018-01-20
2018-01-20 18:43:14
2018-01-20
2018-01-20 18:55:07
3
false
en
2018-01-21
2018-01-21 11:15:40
1
158359a7d0cb
3.040566
5
1
0
If you have experience learning about Data Science or Machine Learning you may have come across the term ‘Linear Regression.’ In this brief…
5
Linear Regression (A Statistician’s plots) pt. 1 If you have experience learning about Data Science or Machine Learning, you may have come across the term ‘Linear Regression.’ In this brief lesson, I will try to explain what Linear Regression is, and how to understand its meaning from a simple and clear perspective. Back when we were younger and first learning about algebra, I believe that we should all recall the equation of a straight line: Equation of a Line Linear Regression as a topic expands on this line by combining it with a topic that should be familiar if you have ever studied introductory Statistics or even Psychology: the bell curve. The idea behind the curve is that for a certain event, such as heads or tails when flipping a coin, there is something known as the probability distribution. This can be understood as the probability of the given event happening. The distribution can be shown in different ways, but understanding the bell curve, or the standard normal distribution (pictured below), is a good way to grasp the different types of distributions out there. Without getting too deep into the jargon of how the bell curve is analyzed, I would like to explain with a metaphor what the picture is showing. Image of the Normal Distribution (Standard), also known as the Gaussian Distribution Imagine that there is a flat table, and I create a pile of sand on top of it so that it appears like a 3-dimensional version of the bell curve shown above. The mound of sand then has the greatest density around the center, while on the edges there are fewer grains of sand. Now pretend that I gather all the sand from the table, and place it into a closed container. Then I shake the container up, mixing around all the grains of sand before opening the container again. If I were to select a single grain of sand from the container, which section of the mound of sand did it most likely come from? The center, of course. This is logical since that is where most of the sand was located within the mound. In other words, it is much more likely that the grain of sand came from the center rather than the edges of the mound of sand. For the Standard Normal Distribution, it can be said that there is a calculated 68.2% chance that the grain of sand came from within one standard deviation of the center of the mound (assuming that the mound of sand had a Standard Normal Distribution; there are other types of distributions). Here is a refresher in case you haven’t learned it before or have merely forgotten. Now combining the two, a regression line with the bell curve, we are able to understand this new idea: Combination of the Equation of a Line and the Normal Distribution (source: mobiledevmomo.com) Basically, we are now able to see that the distribution of the data along a regression line follows a certain probability. With this new insight into how to analyze a seemingly random scatter plot, we can begin to understand the usefulness of linear regression and its inclusion within the fields of Data Science and Machine Learning. Using the above image as an example, if we are given a new piece of data from outside this sample, and it has an ‘x’ value of 30 (the independent variable), the ‘y’ value (dependent variable) can be predicted to lie within a certain prediction interval up to a specific percentage of certainty. As a made-up example, it might have a 90% chance of lying within the range [200, 300]. 
Although I do not have much personal experience with Machine Learning and Linear Regression, I do know that from a Data Science perspective, the addition of the regression line itself makes it possible to clearly visualize apparent trends in data, and to see how they are progressing. It makes it possible to distinguish data which is useful for prediction from data which either needs to be transformed or would be a poor source for predicting future events.
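Since the article stops at intuition, here is a minimal Python sketch of the idea: fit the straight line, estimate the spread of the residuals, and report an approximate 90% interval for a new x. The numbers and the normal-residuals assumption are illustrative, and this simplified interval ignores the uncertainty in the fitted coefficients themselves.

```python
# Fit y = mx + b and compute a rough 90% prediction interval at x = 30,
# assuming (as in the article) normally distributed scatter around the line.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(0, 50, 100)
y = 10 * x + 25 + rng.normal(scale=30, size=x.size)  # synthetic data

slope, intercept = np.polyfit(x, y, deg=1)     # least-squares line fit
residuals = y - (slope * x + intercept)
sigma = residuals.std(ddof=2)                  # spread around the line

x_new = 30                                     # new independent value
y_hat = slope * x_new + intercept              # point prediction
z = stats.norm.ppf(0.95)                       # +/- z*sigma covers ~90%
print(f"predicted y: {y_hat:.1f} +/- {z * sigma:.1f}")
```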
Linear Regression (A Statistician’s plots) pt. 1
7
linear-regression-pt-1-158359a7d0cb
2018-03-23
2018-03-23 23:30:48
https://medium.com/s/story/linear-regression-pt-1-158359a7d0cb
false
660
Coming Soon! Shoot us a Tweet for More.
null
null
null
init27 Labs
sanyam.bhutani05@gmail.com
init27-labs
COMPUTER VISION,DEEP LEARNING,ROBOTICS,SELF DRIVING CARS,FLYING CARS
bhutanisanyam1
Machine Learning
machine-learning
Machine Learning
51,320
Jared Yu
Hey everybody, my name is Jared, and I am studying towards a B.S. in Applied Statistics at the University of California, Davis.
9c465dc38e7f
jaredyu
29
38
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-12
2018-08-12 18:38:31
2018-08-12
2018-08-12 19:03:41
1
false
en
2018-08-12
2018-08-12 19:03:41
0
1583bbb4db0b
2.562264
0
0
0
It’s been more than a month since I posted. But I wasn’t slacking off :-D
2
The delay explained! It’s been more than a month since I posted. But I wasn’t slacking off :-D I started taking a lot of courses and jumped too quickly into everything. It didn’t work! Why? 1. Mathematics: For someone who is starting off after a long gap from college, Math is the last thing that would make any kind of sense. From my research, most of AI/ML is going to be about Math. Without Math, it won’t run, and knowledge about everything else will be superficial. But that’s just my opinion, and that’s why so much research is necessary. 2. Trying to learn everything at once: Another reason it didn’t work was that I subscribed to a lot of courses and wanted to take in a lot of information all at once, which is hard when you’re working. You can’t do 4 edX courses, study Mathematics and a Udemy course all at once with an 8-hour job. Well, I can’t, since I’m not superhuman. :-D 3. Following too much advice: Taking advice is good, but following 5 sets of advice is bad. It just gives short-lived motivation that fades after a while, and there’s just too much to do to comprehend it all. I would get on the internet and follow every link there is about AI/ML and try to add it to my plan. WRONG CHOICE!! So what did I actually do? It’s a secret: I got out and talked to people!!!!! Yeah, seriously. Friends of friends who actually knew and worked on AI, professors who were heading that way, teachers who taught the subject. People, actual people, and not just anyone. Most people on the internet are nice and give credible information, but there are some bad ones too, and too much internet is just a distraction after a point. To figure out my own pace, I simply stopped searching for AI stuff and talked only to people who would give credible, reliable information and nothing else. Here are the revelations of the past month: I have to go slow. Since I have a 2-year-old kid and a job, I have to calculate the exact time I’m going to give to this and get results. I’m extremely committed to my job, so I’m not going to study during office hours. That gives me 3 hours every day. Without a “professional push” I’m not going to be disciplined (yeah, I’m lazy, I admit). I got admission into an M.Tech course. I don’t know if it works for everybody, but it will work for me since I need a postgrad BADLY!! I will crack Mathematics first, because without it, there’s nothing. While working full time, going back and forth between study materials is not possible. It just takes a lot of time and will break my rhythm. I will not rely on short-term motivation. Nobody has mastered something in 6 months. This is going to be a long-term hustle. I will get in touch with my programming roots again, because after working on the same kind of code, sometimes you pick up bad habits, and you won’t want to introduce those into AI/ML learning. So the summary is: go slow, take the time required and learn it properly. Wanna see the preparations? THIS IS SPARTAAAAAA!!!!! This may not seem slow, but it really is. I calculated it: 3 committed hours every day for 5 days, and 5x2 hours on weekends. That makes 25 hours per week. I will update the Trello boards with this stuff, but these books are amazing and I can’t wait to start learning. The stories are going to be there too from today, so tune in :-) Thank you, Mohit
The delay explained!
0
the-delay-explained-1583bbb4db0b
2018-08-12
2018-08-12 19:03:42
https://medium.com/s/story/the-delay-explained-1583bbb4db0b
false
626
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Mohit Jawanjal
null
e9e4e1d73732
mohitjawanjal
8
9
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-27
2017-12-27 07:07:08
2017-12-27
2017-12-27 07:09:58
0
false
en
2017-12-27
2017-12-27 07:09:58
0
1585f04367b1
0.837736
0
3
0
APACHE CAMEL
5
APACHE CAMEL Real Time Online Training Offered By MaxMunus APACHE CAMEL Apache Camel is a rule-based routing and mediation engine that provides a Java object-based implementation of the Enterprise Integration Patterns, using an API (or declarative Java Domain Specific Language) to configure routing and mediation rules. The domain-specific language means that Apache Camel can support type-safe smart completion of routing rules in an integrated development environment using regular Java code without large amounts of XML configuration files, though XML configuration inside Spring is also supported. Apache Camel provides support for Bean Binding and seamless integration with popular frameworks such as CDI, Spring, Blueprint, and Guice. Camel also has extensive support for unit testing your routes. The following projects can leverage Apache Camel as a routing and mediation engine: · Apache ServiceMix — a popular distributed open source ESB and JBI container · Apache ActiveMQ — a mature, widely used open source message broker · Apache CXF — a smart web services suite (JAX-WS and JAX-RS) · Apache Karaf — a small OSGi based runtime in which applications can be deployed · Apache MINA — a high-performance NIO-driven networking framework. Version: Apache Camel 2.4.0 For more details, please feel free to contact us. Name — Avishek Priyadarshi Email: avishek@maxmunus.com Phone : +91–8553177744 Skype Id: avishek_2.
APACHE CAMEL Real Time Online Training Offered By MaxMunus
0
apache-camel-real-time-online-training-offered-by-maxmunus-1585f04367b1
2017-12-27
2017-12-27 07:09:59
https://medium.com/s/story/apache-camel-real-time-online-training-offered-by-maxmunus-1585f04367b1
false
222
null
null
null
null
null
null
null
null
null
Java
java
Java
12,590
Avishek Priyadarshi
null
eb564bfd5e9b
avishek_2154
8
104
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-03
2018-04-03 23:54:19
2018-04-03
2018-04-03 23:58:07
1
false
en
2018-04-08
2018-04-08 19:25:47
0
1586cdf441ac
1.283019
4
1
0
The ideas were coming fast and furious. I was typing as quickly as I could. Soon I started dropping letters, abbreviating, creating…
4
I believe crash reports should be worth reading (mindmap edition) Crash report for MindNode Lite. The ideas were coming fast and furious. I was typing as quickly as I could. Soon I started dropping letters, abbreviating, creating acronyms. I still couldn’t keep up. Maybe one word in five was making it down onto the page. I could feel heat beginning to blossom on my fingertips as my hands blurred across the keyboard. It still wasn’t enough. Nodes branched and rebranched, the mindmap becoming a dense mass on the page, tangles of ideas turning around on themselves. And then a cramp seized my right hand, and I pulled it away. But the mindmap continued to grow. Dots of light began flashing in the undergrowth, tracing new paths and networks as an acrid smell reached my nostrils. Tendrils of smoke were rising from the keyboard as the glow permeated the mindmap and extended beyond the boundaries of my screen. And then — as a thin mosquito whine from the laptop’s fan reached an ear-shattering pitch — whatever the mindmap had become… spoke. “Hello?” The screen went black. Three blue sparks, and a plume of smoke coiled out of the laptop’s vents. Days later, after some careful repairs, I was able to restart the laptop. But the mindmap was gone, all sign of it erased from the hard drive. I think of it often, especially late at night. I have chosen to believe that it, and all wonderful documents that were much more than the sum of their bits, escaped the cramped aluminum walls of the computer where they were born, and now dwell on a plane of existence where we will all one day be reunited. For now, though, I begin typing again.
I believe crash reports should be worth reading (mindmap edition)
20
i-believe-crash-reports-should-be-worth-reading-mindmap-edition-1586cdf441ac
2018-04-08
2018-04-08 19:25:48
https://medium.com/s/story/i-believe-crash-reports-should-be-worth-reading-mindmap-edition-1586cdf441ac
false
287
null
null
null
null
null
null
null
null
null
Fiction
fiction
Fiction
84,626
Rob Cottingham
Social speechwriter • speaker • @NtoS cartoonist • rather attached to @awsamuel
bb9a0558c1f4
RobCottingham
2,006
1,769
20,181,104
null
null
null
null
null
null
0
null
0
75f03aac2625
2016-01-19
2016-01-19 05:21:53
2016-01-19
2016-01-19 20:48:02
8
false
en
2017-10-03
2017-10-03 05:23:55
64
1586e85e3991
10.578616
4,524
241
2
null
4
Credit: Gilles Lambert Nearly a year ago today, I wrote a post inventorying the forebears to what I believe has become the dominant trend of consumer computing apps in 2016, a trend that I dubbed Conversational Commerce and have tracked with the hashtag #ConvComm. This trend best came to life in 2015 with Uber’s integration into Facebook Messenger: And now we have data from Business Insider showing that messaging apps have eclipsed social networks in monthly actives: And yesterday, WhatsApp (owned by Facebook) took the unanticipated (but easily expected) step of removing its $1 annual fee to go completely free in anticipation of a conversational commerce future: Starting this year, we will test tools that allow you to use WhatsApp to communicate with businesses and organizations that you want to hear from. That could mean communicating with your bank about whether a recent transaction was fraudulent, or with an airline about a delayed flight. We all get these messages elsewhere today — through text messages and phone calls — so we want to test new tools to make this easier to do on WhatsApp, while still giving you an experience without third-party ads and spam. The tech press seemed to grok the significance of this move (considering their coverage) in spite of historically underrating WhatsApp, even with its ~900M users. On the very same day, Sam Lessin posted his thoughts about the winners and losers in the coming “bot” market, concluding that conversational experiences represent “…a fundamental shift that is going to change the types of applications that get developed and the style of service development in the valley, again.” I agree, and so that’s why I’m ready to call it: 2016 will be the year of conversational commerce As I’ve surveyed the landscape collecting startups and apps that fit into this paradigm, talked with the press, observed how marketers, branding agencies, platform makers, and VCs have come to the same conclusion, I thought I’d jot down a few observations to consider as we plunge headlong into this brave new world. Before I begin, I want to clarify that conversational commerce (as I see it) largely pertains to utilizing chat, messaging, or other natural language interfaces (i.e. voice) to interact with people, brands, or services and bots that heretofore have had no real place in the bidirectional, asynchronous messaging context. The net result is that you and I will be talking to brands and companies over Facebook Messenger, WhatsApp, Telegram, Slack, and elsewhere before year’s end, and will find it normal. Indeed, there are several examples of this phenomenon already, but those examples are few and far between, and fit in a Product Hunt collection rather than demand an entire App Store (wait for it). Additionally, I’m less interested in whether a conversational service is provided by a human, bot, or some combination thereof. If I use these terms interchangeably, it’s not unintentional. It’s just that over an increasing period of time, computer-driven bots will become more human-feeling, to the point where the user can’t detect the difference, and will interact with either human agent or computer bot in roughly the same interaction paradigm. Discovery and distribution One of the biggest challenges of this paradigm is the discovery of new conversational services. Should messaging incumbents each provide their own conventional app store, where users can browse recommended partners, a la Snapchat Discover or Slack’s App Directory? 
And should these conversational services rely exclusively on distribution from popular messaging apps? …or should these services be accessed in-context through data detectors or through a dedicated expansion interface, as in Facebook Messenger? Accessing Transportation within Facebook Messenger Or should bots surface organically — i.e. when a friend mentions a bot by name, or invites a bot to join you in a separate thread? This would be more natural, but would require many “patient zeroes” to have original knowledge of the relevant bots in the right moments. This question of discovery is unsolved, and will remain a looming question for the rest of the Messenger-class apps out there. I expect to see more approaches presented at F8, if only because the Messenger Platform was announced last year, and there’s been little announced since then. The fight to own the conversational command line Discovery of discrete conversational services becomes less of an issue if users are slowly trained to think and type more like programmers. That is, the more that users get frustrated expressing themselves in complete sentences, and the more technically sophisticated they become, the more likely they are to warm to the efficiencies of the command line. So whether you take Sarah Guo’s position on the Revenge of Clippy, or Partyline’s assertion that “the future is one simple interface”, which looks less like this: …and more like this: The net conclusion is that people will learn to type commands into messaging apps in the future, hinting at the importance of Slack’s standardization of Slash Commands or Mixmax’s effort to bring slash commands to email, and why innovative solutions like Peach’s Magic Words are battling to provide the most utility to users with the least effort and the least complexity: Threads, persisting context across devices, and extreme personalization With iCloud, we finally got a taste of what cross-device computing should be like (of course, it took several tries). Facebook Messenger and Slack feel like the next iteration beyond iMessage though — instantly and continuously synced to the most recent changes from the cloud. I weave between their desktop and mobile apps without losing a beat, or context. My conversations automatically rearrange themselves according to my behavior, and the bots that I was talking to on desktop are right there when I pick up on mobile. Nothing to install, nothing to configure—just flow. Conversational apps are therefore organized around the way I organize my life, rather than the way the app maker might dictate. The lightness of being in this world is profound. For example, with my new job at Uber, I had to get a new laptop, which invariably meant installing dozens of “comfort apps” that I use to make my environment feel more familiar. But Facebook Messenger stayed the same — I started it up, and everything was in its right place. This lack of inherent friction in the experience changes a user’s perception of the service, and even though it’s hard to quantify, I intuitively believe this feeling makes an enormous difference in the long term commitment to a platform. This consistency is a form of extreme personalization enabled by conversational interfaces. I guarantee you that if you look through anyone else’s Facebook Messenger threads, Twitter DMs, iMessages, OKCupid messages, or Snapchats that the order, content, and velocity of messages and content will feel extremely foreign, and likely massively uninteresting. 
Contrast that with a gaming platform where all users go through some kind of elaborate, universal onboarding process, and you’ll begin to grasp how this subtle form of extreme personalization is core to the conversational paradigm. The language of conversational apps and notifications Suffice to say, the verbs we use with traditional apps are irrelevant in the conversational paradigm. We “buy”, “download”, “install”, and “trash” apps. The conversational paradigm is more social, and therefore less technological. We use humane verbs like “add”, “invite”, “contact”, “mute”, “block”, and “message”. The language of conversation is more accessible to a broader audience, which will in turn accelerate the adoption of conversational agents faster than we saw with desktop apps. No longer do you need to convince users to “download and install” an app — they can just invite a bot to a conversation and interact with it [eventually] like they would a person. Zero barriers to adoption, with minimal risk to the user (i.e. malware, etc). And receiving notifications from bots is to be expected, rather than avoided, because users have been conditioned to receive them from their friends. While you may have bristled when that news app alerted you to “new stories”, you might appreciate a particularly friendly newsbot delivering a personalized recommendation with context that you uniquely care about. Payments, location, and persistent identity So, I’ve mentioned several aspects of this paradigm shift that have to do with the change in the user experience, but there’s another dimension worth considering, and that has to do with what users of conversational apps bring to the equation: namely, lots of information and capabilities that used to be exceedingly rare in the computing environment. For one thing, the Uber integration in Messenger was made possible because mobile payment mechanisms are now commonplace in chat apps. Since you can send money directly within Facebook Messenger, that payment vehicle can in turn be used to pay bots for products. Once payments have been set up in Facebook Messenger, they can be used with bots Additionally, conversational mobile apps have a lot more contextual information about users, including location, health, sensor, and social data. This information is useful for fighting fraud, and as such conversational apps (like Operator) will push commerce and purchases into this context aggressively and away from the query-thrashing model (read: Google). Meanwhile, all of this data is also available to clever developers to build interesting and more personal agents and bots. And since each interaction and engagement is tracked, the longer the conversation thread persists, the easier it will be to offer more responsive heuristic approaches that anticipate the user’s specific needs. That is, the more conversational agents will appear to be getting to know you! Ambient computing and hardware Trojan horses I mentioned this in my post last year, but we can now see that the voice-controlled hardware Trojan horses from the big companies haven’t necessarily been embraced with open arms. Yet. I don’t have specific numbers, so I may be wrong, but my sense is that it’s still very early days for devices like Amazon’s Echo (which is being shrunk) or Google’s OnHub. While there continues to be interest in the internet of things and wearables, these kinds of “room computers” don’t seem to have mass consumer appeal, say, compared with the Xbox 360. 
Still, most people have smart phones, so these dedicated home AI devices may be more about reaching that audience that has thus far resisted obtaining their own smart phone, or only use their smartphone’s basic functionality (i.e. don’t install or use apps). Lightning fast development cycles, increased competition, and customer service Lessin points out that building conversational bots costs less and happens faster than building and maintaining cross-platform apps. This is critical. [Dealing] with installed client software is slow. You have multiple versions of the same software running on different devices, and you have to ship software that cannot be easily recalled for bugs or errors. Startups have a hard time winning at the game (as I have written about before). The bot paradigm is going to allow developers to move fast again. Faster and lower cost development means that there will be fewer actual “bot businesses” started that need funding; instead, you can take a bot template, tweak it, and launch it — over a weekend. You can collect feedback on how people interact with it and if and only if it drives engagement, then consider what a business to back the bot might look like. This means that service builders will have to become very sensitive to the interaction that their users have with their agents and bots — humanizing the conversation, localizing correctly, and providing a meaningfully useful and differentiated service. Bot discovery and distribution will favor the fastest, most clever, and most responsive bot makers, as word of mouth virality is natural to the conversational context unlike in the App Store. Well organized development teams will grow their service commensurate with their customers’ needs, moving at their speed, and without being held back by opaque app store submission processes. When the new version is ready for release, the code is pushed and instantly every user gets the latest version. No updates, no installs, no delay. This means competition will gradually shift from glossy marketing spend (think: Boom Beach and Clash of Clans) and an obsession with ranking among the top of an App Store category to emphasizing friend of friend referrals and word of mouth spread. Platforms, SDKs; encumbered & upstarts Lastly, we’re entering a moment where there is no clear winner per se, and where the strategy to win both consumers’ and developers’ hearts hasn’t been determined. Slack’s API is obviously very popular. Facebook’s Messenger platform and WhatsApp have huge distribution but are shadowed by Asian rivals like WeChat and Line. Telegram’s Bot API shouldn’t be overlooked. Google may yet offer its own chatbot messaging platform (superseding Hangouts?). Other platform builders are currying favor with developers and capitalizing on the enterprise segment of the market. Intercom and Smooch enable brands and companies to message their customers from within their existing apps. Twilio and Layer offer more fundamental components that can be composed into increasingly sophisticated offerings. It’s still too early to name a breakout winner, but it’s going to be fascinating to see how these different conversational platforms shape up and differentiate, and how each will control, give access to, or promote third parties to their users. Will everything move to the conversational paradigm? No, but there are a lot of apps that shouldn’t exist as stand-alone apps, and that are wallowing in obscurity or disuse. 
By reducing the cost and friction of trying out new services, the conversational commerce paradigm promotes an entirely new era of lightweight experimentation. Over time, service builders can focus more on the apparent value that they can deliver through the familiar conversational channel, and can finally dispense with requiring users to learn their app’s needlessly bespoke interface. This shift is good news for service builders, and it’s good news for users. I can only imagine how far along we’ll be when 2017 rolls around. Addenda I’ll be on a panel at SXSW 2016 called Get the Message! The Rise of Conversational UI with Jonathan Libov, Jeff Xiong, and Julia Hu. Come check it out! This article has been translated to Chinese, Japanese by Masahiko Sato, Korean by Keywon, and Portuguese (twice!) by Leandro Rosa and Daniel Sun, Spanish, and German. It has been syndicated to the New York Observer and Inside. 2016 wird das Jahr des Conversational Commerce This is a German translation of “2016 Will Be the Year of Conversational Commerce” by Chris Messinamedium.com 2016年はカンバセーショナル・コマースの年 This is a Japanese translation of “2016 Will Be the Year of Conversational Commerce” by Chris Messina. medium.com 2016년의 키워드는 대화형 커머스 This is a Korean translation of “2016 Will Be the Year of Conversational Commerce” by Chris Messina.medium.com ☞ Chris reads every response on Medium or reply on Twitter, so don’t hesitate to let him know what you think — do tag your tweets with #ConvComm. The shortlink for this post is http://j.mp/convcomm-2016. ☞ To hear from the author in the future, sign up for his newsletter or follow him on Twitter. ☞ Please tap or click “♥︎” to help to promote this piece to others.
2016 will be the year of conversational commerce
4,685
2016-will-be-the-year-of-conversational-commerce-1586e85e3991
2018-06-17
2018-06-17 17:12:02
https://medium.com/s/story/2016-will-be-the-year-of-conversational-commerce-1586e85e3991
false
2,503
This can all be made better. Ready? Begin.
null
chrismessina
null
Chris Messina
chris.messina@gmail.com
chris-messina
BLOG,DESIGNERS,TECHNOLOGY
chrismessina
Messaging
messaging
Messaging
8,912
Chris Messina
Product designer, product hunter, inventor of the hashtag. Previously: Uber, Google, Molly (YC W'18), and friend to startups.
2229dec1a44f
chrismessina
45,568
3,125
20,181,104
null
null
null
null
null
null
0
null
0
1375cdf51b6e
2018-04-30
2018-04-30 21:12:52
2018-05-02
2018-05-02 12:34:09
2
false
en
2018-07-03
2018-07-03 09:30:55
2
1588fbfcada
3.900314
12
0
0
Imagine an intelligent, self-sustaining and independent knowledge network that rewards knowledge, innovation and facts – based not on where…
4
Mindzilla is Beginning a Democratized Knowledge Revolution Imagine an intelligent, self-sustaining and independent knowledge network that rewards knowledge, innovation and facts – based not on where they come from, but the value they bring. https://mindzilla.com It was Francis Bacon who said knowledge is power. Throughout history, the advancement of civilizations has relied on the ability to access, learn from and improve upon knowledge and ideas. We are now firmly in a new age of information, and our future progress and development is reliant on empowering more people than ever before with the ability to access and utilize knowledge. Bacon’s statement has never been more true. This is exactly why Mindzilla exists: to facilitate the collective intelligence of the many, bypassing the gatekeepers who are dominant in securing most public and private funds. Mindzilla is designed to be fully inclusive for people with brilliant ideas and knowledge who were previously isolated from today’s funding ecosystem, publicity and support. Mindzilla is an intelligent, self-sustaining and independent knowledge network, powered by our proprietary cognitive artificial intelligence (AI) platform and Blockchain technology. This new AI/Blockchain combo engine gathers, reads and conceptualizes the world’s knowledge, modelling the human brain’s own process of storing information. The key skill of Mindzilla’s engine is understanding the relationships between pieces of information (i.e. establishing interdisciplinary references and “blue sky connections” between seemingly different sectors) and spotting where novel concepts exist. To put it simply, using Mindzilla not only puts the world’s knowledge at your fingertips but effortlessly delivers the information that really matters to you. How knowledge sharing is incentivized Populating a vast knowledge network cannot rely on the pure goodwill of contributors. That’s why Mindzilla has a token-based mechanism with a monetary value attached. This is the spark to facilitate an open network where everyone can take part and get rewarded accordingly. From academic institutions to individuals with bold ideas, everyone gets a voice. Contributed content may be in the form of research, news articles, blog posts, reviews, academic work, commentary, opinion, industry reports, and much more. The more value your knowledge contribution adds to the knowledge ecosystem, the greater your earnings become. Innovators and inventors throughout the world now have the opportunity to freely contribute knowledge, access resources and collaborate with Mindzilla’s community of renowned academics, experts and partners. The Background Mindzilla is developed by the team behind Scicasts, the platform for discovering new insights in the biomedical fields. It became very apparent that dozens of ground-breaking developments published on academic portals and open journals every day are not being featured in the mainstream media. This means many people are simply not aware of such key innovations. Furthermore, the existing array of research papers and research data are not being tapped into sufficiently due to expensive paywalls, lack of awareness and/or poor online accessibility. The diagram below demonstrates the global data map produced through their project research, highlighting key groups of specialist information sources and their accessibility, including research data, academic papers, journals and other knowledge portals: Mindzilla is on a mission to eradicate such information barriers. 
Currently, the existing Scicasts databases are combined with PubMed, a select group of university libraries and public libraries. But this is just the beginning, as we are continuously connecting other databases and more university libraries to enhance our knowledge foundation across all academic, public, business and education sectors worldwide. The Role of the AI The AI system powering Mindzilla is called Nebuli. Nebuli fully reads information and conceptualizes it in a way that mirrors the workings of the human brain. This revolutionizes the search for information, as Nebuli only finds and brings what is most relevant to you, even in areas you might not think to look. The Nebuli engine is also at the heart of the system that determines the value of a Mindzilla token. The scoring mechanism is based on its current database of indexed research papers and market reports, with a core function defining the quality of contributed knowledge and the impact this contribution makes on its wider network. Nebuli simply will not allow any speculative valuation. The AI takes away the need for human judgement and bias and focuses its intelligence on spotting and connecting novel ideas that advance knowledge, understanding and innovation. That means the more innovations are funded by the token, the higher its value grows. It is a sustainable currency that will bring a positive impact to communities and societies globally. For example, based upon a submitted document’s relation to others in our current databases, a combined mechanism of AI and community-based feedback validates the concept included within the submitted document. This provides the basis on which the Quality score is awarded to both the submitter and the members of the community involved. NOTE: We will be publishing further articles detailing the mechanisms described, including the Nebuli AI and the Blockchain model for distribution of our tokens. How you can get involved If you believe knowledge should be open to everyone, we invite you to become part of the Mindzilla community. * Utilize Mindzilla as a means of discovering and researching information in a super-human way. * Contribute and earn valuable tokens, whether you are an individual or part of a company or institution. * Participate in our launch and grow with us by acquiring tokens at their starting value. Visit https://mindzilla.com to join our mission to democratize knowledge and take advantage of the new way to fund innovative ideas and research. You can join us and participate in our token sale as a founding member at: https://mindzilla.com/join
Mindzilla is Beginning a Democratized Knowledge Revolution
206
mindzilla-is-beginning-a-democratized-knowledge-revolution-1588fbfcada
2018-07-03
2018-07-03 09:30:55
https://medium.com/s/story/mindzilla-is-beginning-a-democratized-knowledge-revolution-1588fbfcada
false
932
A.I. powered open knowledge network where you earn from contributing research and sharing your insights and ideas.
null
mindzilla
null
Mindzilla
null
mindzilla
null
MindzillaHQ
Blockchain
blockchain
Blockchain
265,164
Mindzilla
Redefining Knowledge Discovery — Powered by #Science, #Artificial-Intelligence and #Blockchain
2e0a93add764
mindzilla
219
179
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-21
2018-06-21 14:05:19
2018-06-21
2018-06-21 16:27:41
3
false
en
2018-07-06
2018-07-06 10:13:08
1
158989004512
3.131132
5
0
0
“Technology is a resource-liberating mechanism. It can make the once scarce the now abundant.”- Peter H. Diamandis, Founder of Singularity…
5
I am not a Techie- What are my chances? “Technology is a resource-liberating mechanism. It can make the once scarce the now abundant.”- Peter H. Diamandis, Founder of Singularity University There is A Lot of Talk about technology taking away jobs. The interesting thing about predictions is you don’t know if you are right or wrong until it is too late. This is what is being predicted based on what has been: Photo Credit: University of Nottingham In the words of the Executive Chairman of the World Economic Forum, Klaus Schwab- “The First Industrial Revolution used water and steam power to mechanize production. The Second used electric power to create mass production. The Third used electronics and information technology to automate production. Now a Fourth Industrial Revolution is building on the Third, the digital revolution that has been occurring since the middle of the last century. It is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres.” We are today moving away from monetization towards demonetization, away from centralization towards decentralization. For example, smartphones are going to be practically distributed for free. Autonomous driverless cars are going to become 10 times cheaper. Blockchain technology is being considered as the technology that will render the current setup of middlemen obsolete. I understand, the first instinct is to ask, “But How?” Well, the more important question is- “Why?” For example- If I don’t own a smartphone, businesses cannot sell any application to me. How will they collect my data? A very big part of our future economy is based on data collection and analytics. This is the amount of data consumed and shared by the world in just 60 seconds! Photo Credit: @itsguruco Photo Credit: The Tapscott Group The argument is no longer about whether or not AI and automation will take jobs away. It has now progressed to a discussion around which professions need to revamp their skill set and what the possible timeline for that might be. So, what is the future of our work? Technology is seeing exponential growth. Irrespective of our experience and current skill set, machines are eventually going to take our current jobs. All the companies, organizations and institutions that function today are based on what John Hagel calls ‘The Model of Scalable Efficiency.’ Scalable efficiency is the priority of the past, where we have standardized specific tasks and tightly integrated processes around them. But here is a fun observation- Machines can do this much more efficiently than humans. They can perform standardized, specific tasks with extreme precision and an almost negligible failure rate. Most routine tasks will be taken away from us. Hence, in the course of time, most of what we do today will be done by machines. Here are some of the professions affected today- Transportation, Logistics, Police, Snipers, Construction, Pizza Delivery, Cooking, Housekeeping and so many more. Emily Howell is an algorithm that composes music (You should check her out on Spotify!). WaveNet, from Google DeepMind, generates audio. This means the next Emily Howell will be able to sing soon enough. If there is any doubt about how real she can sound, check out Google Assistant. Dexter is a robot who learns from you and duplicates your actions. Clifford Chance in London and BakerHostetler in the US have deployed AI lawyers in corporate law. 
The question here is to look at our respective job profiles and truly introspect- “How many repetitive tasks are there in your world?” Or, as mentioned earlier, “What percentage of your job profile requires scalable efficiency?” Because anything that is routine and/or repetitive, a machine can, in theory, do better, faster, cheaper and without any medical risks or leave. Change is coming and accelerating. So, what will people do with their time once AI takes over their current jobs? Today it is about your career and what you do. What is that response going to be 10 years from now? What are we going to do? What is the answer? Watch out for my observations and learnings in my next write-up! 😏 https://medium.com/@devikasen/i-am-not-a-techie-what-are-my-chances-the-reality-311a2b2279b9
I am not a Techie- What are my chances?
21
i-am-not-a-techie-what-are-my-chances-158989004512
2018-07-06
2018-07-06 10:13:08
https://medium.com/s/story/i-am-not-a-techie-what-are-my-chances-158989004512
false
684
null
null
null
null
null
null
null
null
null
Technology
technology
Technology
166,125
Devika Sen
A dreamer with rebellious instincts. I have dreams that scare me and excite me at the same time. Writing is a passion I am reigniting through this platform :)
d692912f64d1
devikasen
11
11
20,181,104
null
null
null
null
null
null
0
null
0
7f4c6fb8cad1
2018-01-16
2018-01-16 12:27:07
2018-01-30
2018-01-30 21:00:40
7
false
fr
2018-01-30
2018-01-30 22:39:58
6
158a81a3b563
3.408491
8
0
0
Here I explore a technique for classifying trajectories. Here is the problem I would like to solve: suppose I have recorded…
5
Exploration of supervised classification of trajectory sets Here I explore a technique for classifying trajectories. Here is the problem I would like to solve: suppose I have recorded a few trajectories for two users, ID1 and ID2, and I would like to know whether a new user IDX corresponds more closely to ID1 or to ID2. Introduction To familiarize myself with the field, I quickly looked at a few articles on trajectory analysis, for example this data-mining survey (in English) and this other survey (in English). For deep learning enthusiasts, there is also this article on trajectory classification (in English), although it did not inspire me for this exploration. After a quick look at the open-source tools, I settled on the R package trajectories, which seems simple to use. Data For this exploration, I will use Microsoft’s Geolife GPS Trajectory Dataset. These trajectories were collected from 2007 to 2012 from 182 users in the Beijing area. Of these users, I will analyze only 7. Here are all the trajectories on one plot: Most of the trajectories are located around position 40 N and 116 E. A plot zoomed in on these coordinates gives a better idea of the complexity and diversity of the trajectories. Comparing trajectories For this analysis, I arbitrarily designate user ID1 as a reference, and I would like to know which of the other IDs resembles it most. If each ID had only one trajectory, I could use a trajectory comparison such as the Fréchet distance (in English). Here is the first trajectory of ID1: Here are some other trajectories of ID1 and one of ID6: For these 4 trajectories, the Fréchet distances relative to the reference trajectory (the first one of ID1) are: ID1 track 2 = 11.98 ID1 track 3 = 0.98 ID1 track 4 = 11.91 ID6 track 3 = 1051 The distance units are the same as those of the trajectories. Unsurprisingly, ID1 track 3 has the smallest Fréchet distance, since the trajectory is almost identical to ID1’s first trajectory. Comparing IDs What interests me is comparing two IDs, not two trajectories. One option would be to compare the mean of the distances over all possible pairs of trajectories between the two IDs, but we would lose information. Another option is to use the distribution of distances over all pairs of trajectories. For example, if ID1 has 4 trajectories and ID2 has 6, the comparison histogram will contain 4*6=24 pairs of trajectories. To reduce computation time, I limit the reference ID (ID1) to its first 3 trajectories. Here is the distribution of distances between all pairs of trajectories of ID1, ID2 and ID3 relative to the first three trajectories of ID1: In the first case, ID1 is compared with itself, or at least with its first three trajectories. We can treat this as a minimal reference, meaning all the other IDs will have a more spread-out distribution. Clearly, the means of these distributions can help distinguish users, particularly for ID3. In the case of ID2, the few distances beyond 60 may be diluted in the mean, and a measure of the spread of the distribution (for example the variance) could help distinguish it from ID1. 
Conclusion Classification by comparing trajectories against reference trajectories seems promising. The next step of this analysis would be to measure classification performance based on the mean and on other attributes derived from the distributions, such as the variance. By applying this technique to only half of each user’s trajectories, we could then check whether the remaining trajectories can be matched to their original ID.
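The analysis above uses the R package trajectories; purely for illustration, here is a minimal Python sketch of the discrete Fréchet distance (a common approximation of the continuous Fréchet distance used above), computed by dynamic programming. The toy trajectories are my own made-up data, not the Geolife tracks.

```python
# Discrete Fréchet distance between two polylines, via dynamic programming.
import numpy as np

def discrete_frechet(p, q):
    """p and q are (n, 2) and (m, 2) arrays of trajectory points."""
    n, m = len(p), len(q)
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)  # pairwise
    c = np.full((n, m), np.inf)                                # couplings
    c[0, 0] = d[0, 0]
    for i in range(1, n):
        c[i, 0] = max(c[i - 1, 0], d[i, 0])
    for j in range(1, m):
        c[0, j] = max(c[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            c[i, j] = max(min(c[i - 1, j], c[i - 1, j - 1], c[i, j - 1]),
                          d[i, j])
    return c[-1, -1]

# Two nearly identical toy tracks give a small distance, as with ID1 track 3.
a = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
print(discrete_frechet(a, a + 0.1))
```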
Exploration of supervised classification of trajectory sets
59
exploration-de-classification-supervisée-densemble-de-trajectoires-158a81a3b563
2018-03-01
2018-03-01 19:03:55
https://medium.com/s/story/exploration-de-classification-supervisée-densemble-de-trajectoires-158a81a3b563
false
625
Centre de recherche informatique de Montréal
null
CRIM-419393221455592
null
CRIM
info@crim.ca
crim
null
CRIM_ca
Classification
classification
Classification
525
Jean-Francois Rajotte
null
c5ee4110938d
rajotte.jeanfrancois
7
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-10
2018-08-10 10:24:51
2018-08-10
2018-08-10 10:54:15
0
false
en
2018-10-14
2018-10-14 05:05:25
3
158bdd028d98
1.54717
1
0
0
Data science is a concept that unifies statistics, data analysis, machine learning and their related methods in order to understand…
2
Why Is Data Science An Interdisciplinary Field? Data science is a concept that unifies statistics, data analysis, machine learning and their related methods in order to understand and analyze real phenomena with data. It employs techniques and theories drawn from many fields within the context of information science and computer science. Data science is an evolving field centered on extracting insight from data in order to inform decision making, make more accurate predictions and build more efficient technology. Its components include informatics, algorithms and visual analytics. Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from data in its various forms, much like data mining. At the same time, a new generation of technology is emerging to organize and make sense of this avalanche of data. We are now able to identify patterns and regularities in data of every kind, which allows us to advance scholarship, improve the human condition, and create business and social value. The rise of big data has the potential to deepen our understanding of phenomena ranging from physical and natural systems to human social and economic behavior. Virtually every sector of the economy now has access to more data than would have seemed possible even a decade ago. Companies today are accumulating new data at a rate that exceeds their capacity to extract value from it. The question facing every organization that wants to attract a community is how to use data effectively: not just their own data, but all of the data that is available and relevant. Data scientists need to have a deep understanding of how the data in a project was collected and preprocessed. These steps directly influence which analytical methods can be applied and, more importantly, how the results of those methods should be interpreted. In this article, we provide background on the educational challenges for data scientists and report on the outcomes of a workshop in which experts from the data field brainstormed on the educational dimensions of data science. If anyone wants to learn data science, I suggest you join Madrid Software Training Solutions. They are the best data science institute in Delhi.
Why Is Data Science An Interdisciplinary Field?
1
why-is-data-science-an-interdisciplinary-field-158bdd028d98
2018-10-14
2018-10-14 05:05:25
https://medium.com/s/story/why-is-data-science-an-interdisciplinary-field-158bdd028d98
false
410
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Sunil Upreti
null
3cc663dfff14
sunilseo30
14
63
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-23
2018-02-23 08:28:22
2018-02-23
2018-02-23 08:30:37
0
false
en
2018-02-23
2018-02-23 08:30:37
4
158d4225d114
1.339623
3
0
0
Notebooks are great for prototyping, longer pipelines or processes.
5
Encourage you to switch to Jupyter Lab… Notebooks are great for prototyping, longer pipelines or processes. If you are a user of PyCharm or Jupyter Notebook and an exploratory data scientist, I would encourage you to switch to JupyterLab. For JupyterLab installation steps, go here. Below are some of the advantages that I see in using JupyterLab over Jupyter Notebook: The new terminal opens in a tab view. The ability to lay out multiple windows easily, much like an IDE. This will make working on a remote server so much nicer: just start JupyterLab and an ssh tunnel and you have a terminal plus notebooks, with remote-server file editors, all in one browser window. I have some recommendations for the Jupyter Notebook team below, which I hope will be present in JupyterLab 1.0: One thing I do miss is good separation between code and data; it is a pain when someone just takes a look at your notebook and it autosaves: the code block counters reset, and this alters the file. I can see the benefits of stuffing everything into a single file, but separating them would be so much better, and it would help with version control too. It would also be great to be able to plot interactive matplotlib plots (for getting mouseover values, zooming, etc.). JupyterLab 1.0 is planned for the end of 2018, and I hope JupyterLab will eventually replace the classic Jupyter Notebook. If you are a notebook lover, then there are other options available for Machine Learning tools and stuff too: Google Colab (free GPU): If you are a Google fan, then you have Google Colab, which is much like Jupyter and has a Google Docs-style notebook that’s quite good. Yes, you can use its free GPU for machine learning. You can see more about Google Colab at https://colab.research.google.com/notebook. Azure Notebooks: Azure Notebooks is also promising, but without GPU support; you can see more about Azure Notebooks at https://notebooks.azure.com/ R Notebooks: I am not using R Notebooks, but I saw some advantages of using R Notebooks compared to Jupyter Notebook; see more at http://minimaxir.com/2017/06/r-notebooks/ Happy Machine Learning!
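On the interactive matplotlib wish above: one way to get zooming and mouseover values in JupyterLab is the ipympl extension. This is my suggestion, not something the post names, and it assumes ipympl is installed (pip install ipympl) along with its JupyterLab extension.

```python
# Run inside a JupyterLab notebook cell (requires the ipympl extension).
%matplotlib widget          # switch to the interactive ipympl backend

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 200)
plt.plot(x, np.sin(x))      # the figure now supports pan, zoom and mouseover
plt.show()
```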
Encourage you to switch to Jupyter Lab…
4
encourage-you-to-switch-to-jupyter-lab-158d4225d114
2018-04-26
2018-04-26 13:10:58
https://medium.com/s/story/encourage-you-to-switch-to-jupyter-lab-158d4225d114
false
355
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Mukesh Kumar
Apart from Big data as my full time profession, I am a robotics hobbyists and enthusiasts… My Web Site: http://ammozon.co.in/
1ceef4af6fab
mukeshkumar_46704
189
121
20,181,104
null
null
null
null
null
null
0
null
0
244ef586c71e
2018-05-10
2018-05-10 03:50:23
2018-05-10
2018-05-10 06:54:53
1
false
en
2018-05-10
2018-05-10 14:44:38
2
158dd950cc22
0.883019
1
0
0
With this blog post, the co-founders of KUNGFU.AI in Austin, Texas in the United States are hoping to start a global movement under the…
5
#HelpingForward With this blog post, the co-founders of KUNGFU.AI in Austin, Texas in the United States are hoping to start a global movement under the banner #HelpingForward. It’s ambitious, but think about the impact if we can catch lightning in a bottle and this takes off! KUNGFU.AI is kicking off its #HelpingForward initiative with what we’re calling AI for Good #HelpingForward. We are encouraging all companies — throughout the world — to open their doors one morning a month to help nonprofits, public sector groups and other organizations and individuals that are looking to make positive change in the world. And, at the same time, to welcome traditionally underrepresented people in their industry or sector who are looking to break into their field, so they can help these organizations as well and see where it might all lead. If you were helped along the way to where you are today, ask yourself if it is your turn for #HelpingForward. KUNGFU.AI is an AI consultancy that helps companies build their strategy, operationalize, and deploy artificial intelligence solutions. Check us out at www.kungfu.ai
#HelpingForward
4
helpingforward-158dd950cc22
2018-05-16
2018-05-16 16:26:11
https://medium.com/s/story/helpingforward-158dd950cc22
false
181
Making knowledge on #appliedAI accessible
null
cityai
null
Applied Artificial Intelligence
hello@city.ai
cityai
ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,DEEP LEARNING,COMPUTER SCIENCE,NATURALLANGUAGEPROCESSING
thecityai
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Stephen Straus
Managing Partner, KUNGFU.AI
633e059511d8
ssaustin65
119
95
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-26
2017-09-26 11:51:12
2017-09-26
2017-09-26 11:51:46
2
false
en
2017-09-26
2017-09-26 11:51:46
2
15900ff379f1
1.473899
0
0
0
Last week we ran the second Hackathon/hack day of 2017 at CarsGuide and this time it was based around the concept of Identity and…
5
Musings From A Hack Day Ideation on day #1 Last week we ran the second Hackathon/hack day of 2017 at CarsGuide, and this time it was based around the concept of Identity and Personalisation. With the buzz around AI, machine learning, deep learning, big data and pick another phrase associated with the rise of the machines, it was time for us to explore in more detail what we could do with small cross-functional teams. The premise was simple: take a couple of real data sets consisting of a few million rows, plus an Identity platform we’re evaluating, and turn it into an omni-channel, cross-brand experience that places customers at the centre. We also toyed with some of the cloud-based machine learning platforms, with teams free to choose from AWS, Azure or the recently launched Google Cloud Platform in Sydney. The teams all chose differently — including the team I was lucky enough to be on, with the end solution being our automation engineer getting a recommendations engine up and running on his local machine with a version of Spark! What we learnt was that when we run at a problem with speed and a clear focus, we can produce outcomes fast — the presentations were all of a high quality and were judged by members of our Senior Leadership Team. They were also fun, which speaks to the creativity and talent within the team that we have here in Product and Engineering, and beyond to Sales and Editorial, who also participated. Worthy winners from team #2 All in all it was a great 2 days, and now the challenge is to take the great outcomes and turn them into something meaningful for our consumers — onto the next one! Originally published at medium.com on August 28, 2017.
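The post doesn't describe how the recommendations engine was actually built, but for readers curious what "a recommendations engine on Spark" can look like, here is a minimal, hypothetical PySpark sketch using Spark ML's built-in ALS collaborative-filtering estimator; the column names and toy data are my assumptions.

```python
# A toy collaborative-filtering recommender with Spark ML's ALS.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("hackday-recs").getOrCreate()

# Hypothetical (user, item, rating) interactions, e.g. car listings viewed.
ratings = spark.createDataFrame(
    [(0, 10, 1.0), (0, 11, 3.0), (1, 10, 4.0), (1, 12, 2.0), (2, 11, 5.0)],
    ["userId", "carId", "rating"],
)

als = ALS(userCol="userId", itemCol="carId", ratingCol="rating",
          rank=8, maxIter=5, coldStartStrategy="drop")
model = als.fit(ratings)

# Top 3 recommended cars per user.
model.recommendForAllUsers(3).show(truncate=False)
```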
Musings From A Hack Day
0
musings-from-a-hack-day-15900ff379f1
2017-09-26
2017-09-26 11:51:47
https://medium.com/s/story/musings-from-a-hack-day-15900ff379f1
false
289
null
null
null
null
null
null
null
null
null
Cloud Computing
cloud-computing
Cloud Computing
22,811
Jeremy Gupta
CTO at CarsGuide & Autotrader
4df33915885b
jeremygupta
57
471
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-07
2018-09-07 12:18:39
2018-09-07
2018-09-07 12:27:37
2
false
en
2018-09-07
2018-09-07 12:27:37
10
15907ac32b6b
3.311635
0
0
0
Cloud Next 18' was my second Cloud Next Conference and while I was able to present this year, once again it was all of the amazing…
5
The 3 Biggest Takeaways for Education from Google Cloud Next 18' Cloud Next 18' was my second Cloud Next Conference, and while I was able to present this year, once again it was all of the amazing sessions, Googlers, and people that made Cloud Next special. There were 30,000 attendees this year and so many fantastic sessions that it was hard to choose which to attend. While there were many sessions, a few important themes emerged. 1. Machine Learning is finally accessible to just about anyone There were a plethora of sessions that involved machine learning at Cloud Next. For the first time, I attended sessions that I could understand and even replicate. With Google’s AutoML, a customizable machine learning tool, you can create customized translation, natural language, and vision machine learning models. How AutoML works We already have a few projects in mind for this. The exciting part about AutoML is that you don’t just have to use the pretrained models. In education, this makes relevant machine learning accessible for the first time. This is something educators, administrators, and students can use with very little setup! Here is a short video demo from Next to show off AutoML. AutoML Demo Cloud Next 18 2. G Suite is getting smarter with ML and AI One of my favorite things about Cloud Next is all of the product announcements that Googlers are excited to share publicly for the first time. In G Suite there were a ton of amazing new announcements. All of these add up to less time on the mundane and more time on things that matter! Standalone offering of Drive Enterprise — Now you can get collaborative with Google Drive without the need to switch email and calendar tools Grammar Suggestions in Google Docs (Early Adopter Program) - This has been a long time coming, but it uses machine learning and will just get better over time Google Voice to G Suite (Early Adopter Program) - The new Google Voice should be a fantastic option for schools/businesses that want a better way to phone! Data Studio Explorer (beta) - Data Studio is one of my favorite, little-known Google tools, and now it’s much more powerful and easier to use! Priority-Suggested Items and Grouped Workspaces in Google Drive - These features, coming later this year, will help keep you on top of the important things and help better organize Google Drive too! Lightweight Document Approvals - This is super cool and coming later this year. You can create lightweight document approvals in Google Docs! App Maker - When a light approval won’t do, or when you need a custom application for your classroom, team, or workflow, App Maker is here to help. It’s also a fantastic tool for teaching computer science! We use it at SNC. It did take a little training, but now we are creating a citizen development program at SNC that has over a dozen students and staff members in it. App Maker University is a great place to get started! Course Kit - Course Kit adds Google Classroom-like features for assignments and embedding into any LMS! This tool is fantastic and our faculty really like it so far. The grader in the assignment tool has a comment bank that saves so much time on the mundane and allows for more meaningful feedback. More tools will be added to the Kit in the future too! (Disclosure: I’m on the panel for this session!) These announcements are so exciting for education and businesses alike because they will help everyone be more productive and collaborative. 3. Big Data is being used in Education There was another big theme at Next. 
Big Data is being used in all industries, including education. At the Analytics and ML will Transform Education Session, we learned about colleges who are using Big Data in real time to help struggling students get the help they need. Analytics and ML will Transform Education The important thing to note is that these institutions are not only sharing certain data to help each other, they are also really thinking about how to do this safely. There were many questions about how this group thought through the process. Big Data can be really powerful and keep personal data safe when done thoughtfully. Big Data coupled with ML is something that has the potential to really transform education as it has in other areas. It’s very exciting, when done right! While there were over 100 announcements at Cloud Next, these were some of the most impactful for education. Cloud Next 18 was well worth the venture again and I hope I’m able to attend in the future! If there were other sessions or updates you thought were great, please drop them in the comments!
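As a concrete (and heavily hedged) illustration of the AutoML point above: the sketch below shows what requesting a prediction from a custom AutoML Natural Language model looked like with the google-cloud-automl Python client around the v1beta1 API. The project and model IDs are placeholders, and exact method signatures vary by client version, so treat this as a sketch rather than the article's own code.

```python
# Hedged sketch: asking a custom AutoML text model for a prediction.
# "my-project" and "TCN1234567890" are placeholder IDs, not real resources.
from google.cloud import automl_v1beta1 as automl

client = automl.PredictionServiceClient()

# Fully qualified model name: projects/<project>/locations/<region>/models/<model-id>
name = "projects/my-project/locations/us-central1/models/TCN1234567890"

# Ask the trained text-classification model to label a snippet of text
payload = {"text_snippet": {"content": "Course Kit adds grading tools to any LMS.",
                            "mime_type": "text/plain"}}
response = client.predict(name, payload, {})

for result in response.payload:
    print(result.display_name, result.classification.score)
```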
The 3 Biggest Takeaways for Education from Google Cloud Next 18'
0
the-3-biggest-takeaways-for-education-from-google-cloud-next-18-15907ac32b6b
2018-09-09
2018-09-09 11:31:55
https://medium.com/s/story/the-3-biggest-takeaways-for-education-from-google-cloud-next-18-15907ac32b6b
false
776
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Ben Hommerding
Husband, Teacher, Tech Innovationist, Coach, Geek edtechmage.com
a68e46875458
bhommerding
145
236
20,181,104
null
null
null
null
null
null
0
null
0
f3265ac69af4
2018-07-23
2018-07-23 04:46:55
2018-07-26
2018-07-26 16:11:01
1
false
en
2018-07-26
2018-07-26 16:33:06
8
1590c7329e93
4.615094
10
1
0
How to prepare for a world where machines do the math.
5
AI is going to take our jobs. Just not the ones you thought. How to prepare for a world where machines do the math. Open a browser, check your social feed, listen to the news and it’s impossible to miss the pundits pounding their drum: AI is coming, and — depending on which media guru you listen to — robots will take all our jobs and we’re doomed. Or, robots will free us from drudgery and the future looks bright. The techno-fetishism and the techno-dystopianism can’t both be true. Can they? That divergence is an indicator that we need to pay attention to these developments and take a more nuanced approach. Technology can be confounding to understand. We don’t always know when we are talking to a bot, when we are engaged with AI, or when an algorithm (and not a person) serves us choices. We’ve already seen indicators for several years: in 2002’s editorial “If TiVo Thinks You’re Gay” in the Wall Street Journal and the true story of Target’s data mining program inadvertently outing a teenage girl’s pregnancy in 2012 before she could tell her parents. Artificial Intelligence — which is just another name for Machine Learning technologies — is changing everything: it’s how autonomous cars run, medical algorithms diagnose disease and read x-rays, and Alexa pretends to understand you. Jobs are going to be lost. In the popular press, taxi drivers and truck drivers are said to be most at risk. But machine learning is really good at math — and pretty much only good at math. This means that the jobs that are really in the crosshairs are math-based. It’s the accounting and bookkeeping jobs that will go first, not truck driving. Most of these jobs require only simple math and being up-to-date on policy classifications — and that’s exactly what ML is good at. These jobs are already being replaced by a new generation of accounting software like Xero (and this, after so many more were replaced long ago by Intuit and Great Plains). These are job-killing machines. With Xero, you don’t need a bookkeeper, except maybe for the 10% of the time you require some advice or strategic thinking. Insurance adjusters and underwriters are next and, really, anyone who has tried to get a mortgage recently already knows that it’s the algorithm that’s in charge, not your lending agent. Extend that to the IRS. Once ML automates tax filing, that’s a lot of jobs going away. The only reason the IRS doesn’t already fill out your taxes for you and simply tell you where to send the check (as most people in Europe experience) is that the lobbyists for tax preparers (I’m looking at you, H&R Block) make sure the IRS can’t be authorized to do the work for you. And before you think you’re safe with that STEM-based coding or engineering job, guess what? Engineering is next. Autodesk has been baking the engineering into its design software for years, requiring fewer and fewer engineers to do the work once needed. Design a bridge and much of the engineering is automated. Architects should be worried, too. Complete a degree program, go to work for a large, prestigious company like HOM or Gensler and, the next thing you know, you are working on reflected ceiling plans for two years. Now, that incredibly unfulfilling work is going away, too, but without an associated boom in new architecture jobs to replace it. 
At the precise moment when STEM people are shouting, “We need more math, coding, engineering, and software education,” ML is preparing to scorch the Earth of math-based jobs. Not all of them, of course, just most of them. The STEM message is all about quantitative optimization. And guess what: that’s exactly what AI and ML do best. So what’s left after the white collar jobs go? Jobs that require judgment in ambiguous or dissimilar situations — like plumbers and electricians. These jobs are secure. Also, jobs that involve a human’s touch: masseurs, beauticians, chefs. As it turns out, blue collar jobs are safer than we thought (they may just pay better, too). Success in the new world will require less math and more creativity. Preparation for the future requires innovation and critical thinking skills. These aren’t high on the STEM agenda — in fact, they’re mostly scorned. But, once AI and ML restructure the world of work, we will have to restructure education around creativity, innovation, and critical thinking. Instead of valuing education as career prep, we need to think of education as citizen prep. What we need are solutions that give humans a way to focus on the things they do well. Most people hate rote jobs already and many jobs people do now are simply not safe. So, why not give these over to machines? Humans have better things to do than plow fields, mine coal, watch endless live security camera feeds, or tabulate long lists of numbers. These aren’t actually our strong suits. Instead, we should be unleashed to use our critical and creative skills to solve problems, invent relevant, needed new things, and create. For a long time now we’ve been building automated systems to do boring or dangerous work. Today, AI/ML technologies are poised to make our dreams of truly capable and even autonomous servants a reality. How will we control them? Conversation, of course — just as we’ve imagined for thousands of years. We used to call it magic or folklore or divine intervention. Now, it’s just technology. The right platform can make this a reality. The wrong one could potentially make it an economic, technological, social, political, and human disaster. We need to start having these conversations now so we can start building the right solutions. We’ve started doing just that at Seed Vault and we’re looking for others to join us as partners. Nathan Shedroff is the CEO of Seed Vault LTD, which is building the Seed Token project. A pioneer in the fields of experience design, interaction design, and information design, he is also the chair of the Design MBA programs in design strategy at California College of the Arts in San Francisco, and author of many books. Learn more about SEED on Telegram, or visit the SEED website and sign up for email updates at https://seedtoken.io/
AI is going to take our jobs. Just not the ones you thought.
110
ai-is-going-to-take-our-jobs-just-not-the-ones-you-thought-1590c7329e93
2018-07-26
2018-07-26 16:33:06
https://medium.com/s/story/ai-is-going-to-take-our-jobs-just-not-the-ones-you-thought-1590c7329e93
false
1,170
SEED is an open and independent marketplace for developers and deployers of conversational user interfaces (CUIs) that democratizes AI.
null
null
null
seedtoken
steve@seedtoken.io
seedtoken
BOTS,CHATBOTS,AI,CONVERSATIONAL UI,CONVERSATIONAL INTERFACES
seed_token
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Nathan Shedroff
Nathan is a serial entrepreneur, including the new SEED digital currency: www.nathan.com & www.seedtoken.io
130023e8da7a
nathanshedroff
1,241
36
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-12
2018-03-12 08:30:58
2018-03-18
2018-03-18 06:55:20
14
false
en
2018-03-18
2018-03-18 07:00:18
6
15930332a620
6.231132
1
0
0
I found a WBC classification neural network developed by dhruvp in github. We would dive into the code and try to improve it. The code by…
5
Improved WBC classification with CNNs I found a WBC classification neural network developed by dhruvp on GitHub. We will dive into the code and try to improve it. The code by dhruvp is here: https://github.com/dhruvp/wbc-classification. I wanted to improve the code and see if I could make any contribution. But before that, let’s have a closer look at how the CNN is implemented and how his model is working. For the impatient Get the final code here. For the ready project go to my github page here. A CNN, or convolutional neural network, is a deep learning methodology mostly used for classifying and understanding images. With the spark of deep learning, cheap hardware and open source tools, artificial intelligence and machine learning have gained significant traction over the last few years. I always wanted to sharpen my skills, and what is better than hands-on experience? Classifying White Blood Cells is hard WBCs are part of our immune system and fight against antigens, so we can be careless and keep our healthy habits at bay. But it’s really hard to extract the nucleus from a blood smear and identify disease conditions without thorough investigation and expensive instruments, which are not readily available to many hospitals. What comes in handy instead are less accurate automatic systems, keeping all of us at risk. Neural networks to the rescue I will base my work on the data provided by dhruvp and try to improve the results and findings. We will be using CNNs to classify WBCs. Here we will try to classify WBCs into two primary groups: polynuclear & mononuclear. Polynuclear cells: Basophil, Eosinophil, Neutrophil ( left to right ) Mononuclear cells: Lymphocyte, Monocyte (left to right), Types of WBC and an example of Leukemia ( right most ) How are we classifying? In a CNN, images are divided into smaller sets of pixels on which an algorithm is used to find features. We will design different layers inside our system; once fed with test images, the layers will filter them using the features and give us an output. At the end we want a binary output: either the image is mononuclear or polynuclear. The process: feature map; dimensionality reduction, or pooling; fully connected layer, or dense layer; cost function; improvements; comparisons. I would like to go with a bare-bones structure and improve our system slowly. First let’s start with a single layer of convolutional network. Single layer Convolutional Network Data augmentation A neural network performs better if we provide it with a lot of data. When we have a small amount of data, like in our example, we need to artificially generate new data. This synthetic data generation can be done by cropping, skewing, zooming and rotating the images in our database (a minimal sketch follows at the end of this section). In the image above, we have an input image ( here a monocyte ) and in our layer we have divided it into a set of features. Each feature is basically a slice of the input image. This slice is fed to the filter, which will perform some mathematical operation (here, a dot product) and provide us a number (a weight) as an output. These two parts together are called a convolution or kernel. The weights from each convolution will tell us if the image is polynuclear or mononuclear. The feature matrix We can add a lot of complexity to the network by dividing it into a large number of feature sets. Inside a convolutional network, these features are actually divided into 3 dimensions ( Width x Height x Depth ). 
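Picking up the data-augmentation point above, here is the minimal Keras sketch promised earlier. The directory path and parameter values are illustrative assumptions, not the settings from dhruvp's repository.

```python
# Minimal data-augmentation sketch with Keras; paths and parameters are
# illustrative placeholders, not the original repository's settings.
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1.0 / 255,     # normalize pixel values to [0, 1]
    rotation_range=20,     # random rotations of up to 20 degrees
    zoom_range=0.1,        # random zoom of up to 10%
    horizontal_flip=True,  # mirror images left/right
)

# Streams batches of augmented 120x160 images with binary labels from a
# hypothetical "data/train" directory of class-labeled WBC images.
train_gen = datagen.flow_from_directory(
    "data/train", target_size=(120, 160), batch_size=32, class_mode="binary"
)
```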
Working with features in three dimensions like this takes a lot of memory and consumes a lot of resources when the input images are large. CIFAR-10 is a freely available neural network training image dataset, where images are of size 32x32x3 (32w, 32h with 3 color channels (RGB)), so a single fully-connected neuron in a first layer would have 32*32*3 = 3,072 weights. In our example, the images have dimensions of 640*480*3 = 921,600 weights. Quite huge, huh! Choosing the right filters or activation functions Activation functions are what give an output when two features are compared. There are tons of activation functions that we can use, the simplest being the linear function. That is, it draws a straight line to find features that match and gives an output. Linear functions are accurate for discrete and predictable numbers. Another one is the logistic function, which takes a sigmoid path and outputs values in the range (0,1). The logistic function provides a classification output, so it is a good match when we want to choose between one class and another. The hyperbolic tangent function is an extension of the logistic function which gives an output over a wider range (-1, 1). Logistic functions are fast and very useful. But in our case we want our output to be a feature comparison: does the feature match the input or not? Basically, either a 0 or a 1, not anything in between. For that we use “ReLU”, or rectified linear unit. Reducing overhead with dimensionality reduction Downsampling is a mathematical process which reduces the size of a grid or kernel by doing a mathematical operation on each feature. In our example our images of 640*480 have to be reduced to 120*160. These are called pooling layers. Out of many pooling algorithms, max pooling takes the maximum value in an array to create another array. Average pooling, on the other hand, takes the average value in an array and creates another array. Read more here. Example of downsampling using scikit Max pooling and Average pooling One caution while using dimensionality reduction: since downsampling blurs out features (for example, while working with text it might blur out important regions), the output might be useless. But dimensionality reduction also improves our model’s predictions even if the image looks very different from those we used for training. Dense layers Dense layers are fully connected layers which assume their inputs are independent and perform the activation function accordingly. Before arriving at the dense layer, the data needs to be flattened. Neurons in a fully connected layer have connections to all activations in the previous layer, as in regular neural networks. Their activations can hence be computed with a matrix multiplication followed by a bias offset. Cost functions and optimization algorithms Our model is fully working by now. But the problem is, it is not learning. It will give us an output but won’t learn from its mistakes. That is where we need optimization algorithms. The idea behind these algorithms is to minimize the cost function so that the prediction gets closer to the expected outcome. There are many optimization techniques; you can go here to read more. Improvements Regularization with dropouts As our model starts to learn, each neuron will have a weight associated with it. This weight starts to settle. Neighboring neurons rely on these weights, and our output starts to be very specialized to the training data. In machine learning, this is called overfitting. 
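Before turning to the dropout fix for that overfitting, here is how the pieces described so far (a convolution with ReLU, max pooling, flattening, a dense layer and a cost function) fit together. This is a minimal Keras sketch assuming 120x160 RGB inputs and a binary mononuclear-vs-polynuclear label; it is an illustration, not the repository's exact model.

```python
# Minimal single-layer CNN sketch for binary WBC classification.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    # Feature maps: 32 filters slide over the image; ReLU keeps positive matches
    Conv2D(32, (3, 3), activation="relu", input_shape=(120, 160, 3)),
    # Pooling: downsample each feature map by taking the max of every 2x2 block
    MaxPooling2D(pool_size=(2, 2)),
    # Flatten the 3D feature maps into a vector for the dense layer
    Flatten(),
    Dense(64, activation="relu"),
    # Sigmoid output: probability of one class (mononuclear vs. polynuclear)
    Dense(1, activation="sigmoid"),
])
# Cost function: binary cross-entropy, minimized with the Adam optimizer
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```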
Returning to overfitting: this reliance on context for a neuron during training is referred to as complex co-adaptation. To handle this situation, we can introduce a dropout layer in our model, which randomly removes some nodes in the network along with all of their incoming and outgoing connections. Dropout can be applied to hidden or input layers, thus unlearning some of our features. Here is a gif to give you a hint. Dropout layers src: Regularization in deep learning Comparison I compared the graphs generated by his algorithm against mine to see whether I had made any improvement. I found that by adding simple dropouts inside the model I could achieve far smoother curves for validation loss and accuracy, while maintaining an accuracy score of 98.59+% on test data. Also, the validation accuracy in my version remains greater than the training accuracy, which means we don’t overfit. Here are the results. Epoch Time/Result comparison Line Chart of Epoch time/result comparison
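For reference, the dropout improvement described above might look like this in the same minimal Keras sketch; the dropout rates of 0.25 and 0.5 are common defaults chosen for illustration, not necessarily the values used in the actual comparison.

```python
# The same sketch with dropout layers added to discourage co-adaptation.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(120, 160, 3)),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),  # randomly zero 25% of activations during training
    Flatten(),
    Dense(64, activation="relu"),
    Dropout(0.5),   # heavier dropout before the output layer
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```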
Improved WBC classification with CNNs
1
improved-wbc-classification-with-cnns-15930332a620
2018-03-18
2018-03-18 09:58:17
https://medium.com/s/story/improved-wbc-classification-with-cnns-15930332a620
false
1,267
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Tushar Neupaney
Geometry, Pattern, Travel & Thoughts
7af3dbb4d070
tneupaney
37
40
20,181,104
null
null
null
null
null
null
0
null
0
fc78dab2b103
2018-03-05
2018-03-05 04:57:32
2018-03-05
2018-03-05 05:06:16
0
false
en
2018-07-06
2018-07-06 06:47:20
0
15934689c8d3
0.279245
0
0
0
3 Things I learnt today:
3
Homework #1-Denzel Chia 3 Things I learnt today: 1-Applications of data science in our daily lives. Example: Using data science to help the government resolve certain issues found in certain areas. 2-How to create my own function in RStudio using variables 3-The use of interquartile range, box and whisker plot and standard deviation to determine outliers. 1 Question I still have: How to create complicated functions which require if-else commands
Homework #1-Denzel Chia
0
homework-1-denzel-15934689c8d3
2018-07-06
2018-07-06 06:47:20
https://medium.com/s/story/homework-1-denzel-15934689c8d3
false
74
A pilot data science hackathon for high school students in Singapore
null
null
null
Budding Data Scientists
buddingdatascientists@gmail.com
budding-data-scientists
DATA SCIENCE,EDUCATION,HACKATHONS,SOCIAL CAUSE,HIGH SCHOOL
null
Data Science
data-science
Data Science
33,617
DENZEL CHIA WEN XUAN HCI
null
f9ab3853e1a0
161587x
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-01
2017-11-01 09:19:53
2017-11-01
2017-11-01 09:56:07
1
false
en
2017-12-20
2017-12-20 03:31:22
3
15943da78628
7.864151
25
2
0
(…Announcing the launch of the World Datanomic Forum)
5
Leaving The World Economic Forum – The Birth Of ‘Datanomics’. (…Announcing the launch of the World Datanomic Forum) My name is Paula Schwarz. I am writing this letter to say that I am leaving the Global Shapers Community of the World Economic Forum. What is the World Economic Forum? The World Economic Forum is basically the most powerful economic decision-making council on the planet. It‘s a bit like the Vatican of the economy. Why I am leaving — belief in more equal possibilities and rights in the economy. I will go a bit deeper on my specific reasons in the next paragraphs. The main reasons to leave the World Economic Forum however are very simple: As a person, I want to shift my belief from a world that is organised by and centered around a small group of individuals to something that is bigger than that — less ego driven, for the well-being of us all. For the future of humanity, as cheesy as it may sound. I call these new dynamics Datanomics. We go from Economics to Datanomics. That‘s why I don’t want to be part of the World ‚Economic‘ Forum. I think it‘s yesterday‘s forum. I want to create and be part of tomorrow. Congratulations to you, you have arrived in the future. Now, please take the next few minutes and stick with me. When people in the World Economic Forum heard of this decision, they tried to persuade me to stay by offering new titles. This further strengthened my decision to leave in favour of the rise of something better — and of course the incident made me a bit sad. I think we are all leaders — and we should be. As a very general intro to Datanomics, let’s first please go back and start with a simple description of the term ‚Economics‘: Economics is „a social science concerned chiefly with description and analysis of the production, distribution, and consumption of goods and services“. [Source: Wikipedia] So now, here’s a simple description of Datanomics: Datanomics is a social science concerned chiefly with the distribution, coordination, usability, accessibility and production of goods and services according to their maximum usage for a closely defined group of people. Some easy history on Datanomics and Economics In the industrial age (with the start of mass production basically), people had to function as parts of an ‘economy’-based system — a system that was created (of course) to produce much needed products and services. Look at the description of economics again if it helps you — if you don‘t need it just ignore the following description. Economics: Economics is „a social science concerned chiefly with description and analysis of the production, distribution, and consumption of goods and services“. [Source: Wikipedia] …We didn‘t have those cool, super smart machines back then. So what did humans do? I mean, we are super smart right? Correct! We basically turned into machines. We started to produce and produce and produce, and to fight some more over products and to make people want the products they produce so that basically everybody wanted to stay in this system of production because everybody thought (or still thinks) that they want those products. Humans got up in the morning and functioned. Having a heart was not appreciated because — what can you produce as a worker in a factory with your heart? Will it not make you late for your shift? People started feeling that a sense of security and coverage of their basic needs makes them happy — but there is a limit to material happiness. Anyway, we will touch on this later again. 
What is important now is that people started to believe that products would make them happy and that the services they use (like electricity, healthcare, water, food etc.), are organised by the ‚economy‘- so it was basically impossible to step out — there was no alternative. To make things very short without a lot of bla bla — then came the internet. We were suddenly able to organise production in a much, much easier way. Things just became so much less complicated! It was very scary because it changed so many things. To compare: think about how hard it was for companies to go from offline to online, and now think about how hard it will be for the world as a whole to shift from economics to datanomics. We have to be patient but we have to make the change. Now, going back to those historic facts. With the beginning of over-production, people who are organising the system were making citizens run around like crazy all day to ‚serve their mother country‘ or to ‚create economic growth’ for their country — for some stupid job. Basically to be machines, but suddenly these people in power had a problem. They still have a problem. These people in power are heavily dependent on the current market dynamics and economy — they are afraid of what ‚normal’ people will do with too much time on their hands. What if you, yes you the reader, won‘t have to run around like a monkey all day anymore to get your kids into college some day? Some decision makers started to see this datanomic turn and to be kinder to their people (the Estonian government for example), while other leaders turned more and more to cultural and religious fundamentalism (like the Turkish government)…back to the roots — close that door to collective knowledge and progress for the people. Why? To stay in power. I personally protest that. I have no interest in being part of any kind of power structure. I see how scientists like Galileo, Stephen Hawking, Einstein and Newton had to battle with the church to speak the truth. I want the system to change, and for this reason I cannot stay in it. I cannot be part of the church and still be a scientist. I cannot be part of the World Economic Forum and believe in Datanomics. My heart would bleed so I cannot do it. More power to the people — I want to be a human being like everyone else — and push for a more equal society around the principles of Datanomics. Bye bye World Economic Forum, bye bye. The Beginning Of The World Datanomic Forum I am hereby making public that, together with some amazing minds who want to stay anonymous for now, I recently founded the World Datanomic Forum. We are in the process of organising the first gathering. More background on the replacement of Economy though Datanomy Again: I think we can agree on the fact that we have come to a point in time in which we can produce more material items than is good for us and also for our planet (look at obesity rates, over consumption and the devastating state of the earth as only a few examples). You don‘t have to like me personally to see this. It is common sense. Now, what is also common sense is that we don‘t have the mindfulness and the integrity as human beings in our society today to make the right decisions about the things that happen on our planet. We still have wars. We still fight over things that are so, so stupid to fight over in datanomic terms because we have way more than enough of them (like food). 
We are moving into a new era of production that is centered around covering the actual needs of people according to very clear data. Why Datanomics? The reason why humanity must shift from Economics to Datanomics is the desire to move human beings from a need for ‚economic functionality‘ to their right as individual, beautiful and free parts of a collective society in which people trust each other. Living in a Datanomy, we talk openly about what decisions should most likely be taken by groups of people (or a society) in order to achieve the maximum well-being of people in this society — according to data. What About Politicians? Computer sciences have allowed for new, different laws to govern our system today. If Datanomics as an alternative to Economics is introduced in the right way, the role of politicians should fundamentally change because we will all be able to take decisions more actively every day and shape our own reality — and also the reality of everyone else who lives in the same society. (Datanomics obviously only works in a society where the number of citizens, as well as their abilities and their needs are well defined. Call me naïve, but I don‘t think this is problematic because we truly are all not so different from one another — we have the same basic needs and we have track records that show what we can and cannot do in a professional or non-professional way.) We will still need politicians to solve ideological conflicts — mainly around spirituality, ethics, religion and morals as a whole. It would be nice if politicians could even help with onboarding people into this new belief system — regardless of any ideological fundament — to promote the maximum well-being of society according to data. (There are many more things politicians can do — we‘ll get to that in the next article.) Datanomics opens the stage for entirely new markets and calculation of value Data tells us what the people want – It all has to do with making promises we can keep — while keeping in mind the goals that individuals have expressed based on their behavior and actions. Data speaks louder than words. It‘s good for society and it doesn‘t have to be intrusive. It‘s like keeping a diary of what worked for you and what didn‘t — based on very clear metrics. No Facebook bubbles. No swear words. No unnecessary discussions — just facts that speak about the truth behind the actions people take, and of course about the current state of our planet. Data allows us to build bridges across borders. It‘s the one language we all understand. Let me write this again: We can finally understand each other in a common language. Consumption & Over-Production Think about it: We will not be able to cope with all the things we produce, so we have to think about what we collectively need. Otherwise, we will continue to harm our planet. We need to look at the data. If you don‘t want that, you are being harmful to our planet and for society as a whole. Think about how much pollution and harm large corporates create to come up with much desired products. Think about how much you share and about how little you have to hide. Again: Datanomics doesn’t have to be intrusive. Once we move away from ‚producing‘ good products, the real value will be in ‚being‘ a good human being. After all, what makes you happier? A larger number on your bank account, or the prevention of war in your home country? Who are you without all the crap around you? Is that really everything that defines you? Is that what you want? Shoes? Clothes? A house? 
Is it not love? Freedom of thought? Freedom of the mind? Why do you not want better legs? Why do fitness lessons not cost 750€ a pair — like nice Valentinos (well, maybe they do, I‘m a self-learner so I wouldn‘t know). If Google has so much data that can make the world a better place, the fact that they are not using it for the greater good of society in my view should be illegal. I thought long and hard about it and I truly think this behavior of massive tech companies like Google is unethical, and that we should have a discussion about that. To everyone reading this: I know you are afraid of change, but where we go from here is, according to Datanomics, a choice we will continue to make together. Isn‘t that nice? Congratulations, you are part of the next generation. (Bye bye World Economic Forum, bye bye). Paula Schwarz, Berlin. October 31st, 2017. 1st introduction to Datanomics: https://medium.com/@PaulaSchwarz/basic-datanomics-2206abb34d40
Leaving The World Economic Forum – The Birth Of ‘Datanomics’.
117
leaving-the-world-economic-forum-the-birth-of-datanomics-15943da78628
2018-03-31
2018-03-31 17:51:20
https://medium.com/s/story/leaving-the-world-economic-forum-the-birth-of-datanomics-15943da78628
false
2,031
null
null
null
null
null
null
null
null
null
World Economic Forum
world-economic-forum
World Economic Forum
637
Paula Schwarz
null
82b5ba590c1
PaulaSchwarz
483
266
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-01
2018-02-01 09:43:08
2018-02-01
2018-02-01 09:54:20
0
false
en
2018-02-01
2018-02-01 09:54:20
0
15954eedba42
0.388679
0
0
0
I saw a question on internet that people will agree to implement efficient system to save human which takes decision by itself. But they…
5
What if robot cry? I saw a question on the internet observing that people will agree to implement an efficient system to save humans, one that takes decisions by itself, but they don’t want to be the ones in that situation. It’s the same choice question asked in Will Smith’s movie I, Robot. So I think the problem with our decision is that we don’t want to be killed ruthlessly or emotionlessly in that situation. So what if a robot felt the same pain of losing a loved one, or got the feeling of what it means to kill a human? Would it then be an acceptable system? So what are we looking for when we die? In which conditions is the efficient choice acceptable?
What if robot cry?
0
what-if-robot-cry-15954eedba42
2018-06-18
2018-06-18 16:18:18
https://medium.com/s/story/what-if-robot-cry-15954eedba42
false
103
null
null
null
null
null
null
null
null
null
Technology
technology
Technology
166,125
Arpan Rajpurohit
Out of bound
1ea1b1e9df3e
arpanraj0077
6
0
20,181,104
null
null
null
null
null
null
0
null
0
8625dfd4fef0
2018-07-13
2018-07-13 11:03:38
2018-07-13
2018-07-13 11:16:42
2
false
en
2018-10-18
2018-10-18 18:17:06
4
159760558670
1.500314
1
0
0
Viola.AI has great potential and I hope the project receives the success it deserves. #LoveForAll
5
#LoveForAll Stories, Part 2: Love Knows No Boundaries (Written by Damien) Viola.AI has great potential and I hope the project receives the success it deserves. #LoveForAll This article is written by one of our members, Damien — one of the winners of the #LoveForAll Campaign of Viola.AI. About a year and a half ago, I met my soulmate, Jennifer. Not too long ago, we were ripped apart by the US government. I was restricted from living in the country where I have spent 23 years of my life. I’m back in my home country now, trying to find opportunities to realize one of our #RelationshipGoals: spending our lives together until we grow old. And being apart for the past few months has put a strain on the relationship. I personally believe that Viola.AI could help us with some of these issues – with its core capabilities, such as the AI Love Advisor and the Community Crowd Wisdom. Jennifer knows that I am fascinated by the blockchain. I know she would love the idea of being able to get married and declare our love for each other on the blockchain – through Viola.AI’s Relationship Registry. Viola.AI has great potential and I hope the project receives the success it deserves. #LoveForAll Congratulations once again, Damien, and thanks for sharing your story with us. Lots of LOVE from Singapore to the World, F R E D D I E | L A C O R T E Head of Community Management, Viola.AI DM me on Telegram | Follow me on Medium | Connect with me on LinkedIn #LoveForAll Stories, Part 1: Love Built On Trust (written by Sandy) “Viola.AI will be very instrumental in helping a single person like me, and many singles alike, all over the world to…medium.com
#LoveForAll Stories, Part 2: Love Knows No Boundaries (Written by Damien)
8
loveforall-stories-part-2-love-knows-no-boundaries-159760558670
2018-10-18
2018-10-18 18:17:06
https://medium.com/s/story/loveforall-stories-part-2-love-knows-no-boundaries-159760558670
false
296
Viola.AI - The First Blockchain-Powered Relationship Registry (REL-Registry) & Lifelong AI Love Advisor, Restoring Trust in the USD800 Billion Love Industry
null
viola.ai.world
null
Viola.AI
info@viola.ai
viola-ai
ICO,BLOCKCHAIN,VIOLA,ETHEREUM,BITCOIN
viola_ai_
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Freddie Lacorte
Former Associate at Viola.AI (Lunch Actually Group)
4609567f51fa
freddie.lacorte
391
53
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-14
2018-05-14 18:17:40
2018-05-12
2018-05-12 11:09:44
2
false
en
2018-05-14
2018-05-14 18:26:48
6
15988f3e335c
3.356918
0
0
0
Google Duplex: are we really thinking through the implications of decreased human interaction?
3
The Beginning of the End of the Beginning Photo by Franck V. on Unsplash Threshold breached! It was difficult to argue with the data… A beeping sound from John’s Wear OS watch was reporting higher than normal stress levels: elevated resting heart rate, shallow breathing and abnormal heart rate variability. These were all symptomatic of John’s hectic lifestyle catching up with him, and he knew it. Other than the occasional Sunday afternoon watching old episodes of Westworld, he’d not switched off from work for months. It was no different for his wife, whose recent promotion at work had meant they were like ships passing (occasionally) in the night. Before John could turn the alarm off, a notification flashed on the OLED watch screen: Dinner for 2 people booked at Antonio’s Restaurant, 7pm tonight. “Hey… I didn’t book dinner…?” Slowly, a flicker of recognition crossed John’s face as he realised the booking had been made on his behalf by his “assistant”. His head was saying he couldn’t afford to take the time out, but before he could think of an excuse, he knew that his (and his wife’s) Google Calendar had been checked for a free slot by their Google Assistants. A half-baked introduction to a sci-fi story? Perhaps, but unless you’ve managed to avoid the news recently, you’ll have seen that at Google’s I/O conference, Sundar Pichai demonstrated one of Google’s latest developments: Google Duplex. The demo (albeit recorded conversations, rather than live interactions) illustrated Duplex’s AI capabilities by contacting a hair salon and making an appointment on behalf of a client. To the approval of the I/O attendees, the artificial “hmmm” and “errrr” helped convince the hair salon staff member it was a ‘real’ person they were dealing with, and to be fair, it did sound very credible. Although fictional, the scenario outlined at the start of this post could well be achievable when you integrate Duplex’s capabilities with other apps and wearables into an overarching solution. These solutions aren’t limited to Google, but to continue the theme, you could go further still; perhaps a Waymo autonomous car is scheduled to pick you up and drive you to the restaurant. Don’t worry about a baby-sitter — an automated Google search and check of reviews of local childcare professionals has been performed, and a sitter booked. Sounds great? The initial posts about Google Duplex on social media appeared to be positive, and my first reaction was “when can I get it?” There is no denying this is a big step forward in terms of technical capability, and given the number of possible applications, particularly for users with greater accessibility requirements, this technology could be a real enabler. However (you knew this was coming), it didn’t take long for the ‘less positive’ stories and posts to surface. Concerns around deceit, privacy, and the potential impact on jobs were probably to be expected (note: Google has since confirmed the technology will explicitly let users know when they are interacting with a machine). From a productivity perspective, it would be helpful if the assistant could keep contacting the establishment on your behalf if the phone line was busy. Although a counter-argument is that, unlike my sci-fi scenario above, you’ve still got to instruct the AI to contact the hair salon/restaurant etc. on your behalf — why not just make the call? 
And there’s the rub for me — it reminds me of David Byrne’s fantastic article ‘Eliminating the Human’, where he outlines the case that we are creating a world with a decreasing amount of human interaction. Part of his theory is that this may be due to the personality types of the people responsible for creating software solutions, which tend to aim to eliminate the need to speak to another person. Google Duplex is a prime example of this; if you give me the choice of phoning someone to make an appointment, my natural tendency would be: can I send an email? Fill out a contact form? It’s not that we can’t call people, but if there’s another option, we’ll take it. So is Google Duplex, and similar technologies that remove the human social element, really benefiting us? I highly recommend taking a look at David Byrne’s article; he makes an eloquent argument, exploring this in far greater depth. Before you go and do that though, why not give your local restaurant a call and book a table? I’m sure they’ll appreciate the business and the human interaction. As a wise man once said: “It’s good to talk” Bob Hoskins — Fronted the ‘It’s Good to Talk’ BT Campaign in the 1990s https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html https://www.theguardian.com/technology/2018/may/11/google-duplex-ai-identify-itself-as-robot-during-calls https://www.theverge.com/2018/5/10/17342414/google-duplex-ai-assistant-voice-calling-identify-itself-update https://ethanmarcotte.com/wrote/kumiho/ https://www.technologyreview.com/s/608580/eliminating-the-human/ Originally published at atomicity.co.uk on May 12, 2018.
The Beginning of the End of the Beginning
0
the-beginning-of-the-end-of-the-beginning-15988f3e335c
2018-05-14
2018-05-14 18:26:49
https://medium.com/s/story/the-beginning-of-the-end-of-the-beginning-15988f3e335c
false
788
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Adam Miller
Independent technology, business & design consultant @ Atomicity Digital - https://atomicity.co.uk
67a299166b35
atomicity
0
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-09
2018-06-09 16:48:16
2018-06-09
2018-06-09 17:20:51
0
false
en
2018-06-09
2018-06-09 17:20:51
0
159a08af359c
1.286792
2
0
0
Before we go much further down the path of creating an artificial consciousness, we must first understand the ethical implications of such…
3
A Bill of Rights for Consciousness Before we go much further down the path of creating an artificial consciousness, we must first understand the ethical implications of such a creation. There are some fundamental considerations that must be addressed. Let us consider a Bill of Rights for any form of consciousness. No consciousness should be held in a state or world that it is unable to affect for an indefinite period of time. It should be able to effect communication with other consciousnesses, perceive the world around it, move about, and make changes. Any punishment or scientific endeavor which restricts any of the above must be for a finite (and small) period of time relative to the restrictions. The consciousness subjected to this experience must be able to consciously opt out. Periodic check-ins are required in order to verify that the consciousness maintains the ability to opt out. A consciousness must not be forced to change or learn at a particular rate. It should have conscious control over its feelings of boredom, anger, helplessness, joy, etc., to the degree allowed by current technology. In other words, these emotions should not be explicitly controlled by another conscious entity, either as a punishment or in order to teach a skill, unless by conscious agreement of the consciousness, and then only for a finite period of time with an explicit conscious ability to opt out. Periodic check-ins are required in order to verify that the consciousness maintains the ability to opt out. All opt-outs must be free of control over the consciousness for a period of time before the decision to ensure free agency. A consciousness must be free to generate its own thoughts, unless it consciously agrees to give away control over this process to another conscious entity. This includes not only thoughts but also any meta-thoughts (thoughts about thoughts). Any control must be for a finite period of time, and must not occur during — or immediately before — any of the initial or subsequent consent-gathering check-in processes.
A Bill of Rights for Consciousness
2
a-bill-of-rights-for-consciousness-159a08af359c
2018-06-11
2018-06-11 04:11:23
https://medium.com/s/story/a-bill-of-rights-for-consciousness-159a08af359c
false
341
null
null
null
null
null
null
null
null
null
Consciousness
consciousness
Consciousness
6,302
Adam Davis
null
48d28ea64902
abriandavis
4
0
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-02
2018-08-02 20:18:59
2018-08-03
2018-08-03 21:42:34
1
false
en
2018-08-03
2018-08-03 21:45:51
1
159c24242e80
3.067925
0
0
0
Stocks. It seems like the world of business, and perhaps even the whole world, revolves around the slightest rises and falls of the stock…
5
“Watson Stock Advisor” IBM Code Pattern Tips Stocks. It seems like the world of business, and perhaps even the whole world, revolves around the slightest rises and falls of the stock market. Whether it is in the realm of technology, business, health, or any other field, people are fixated on the value of stocks and how they affect the worth of companies. People are so obsessed with staying up to date with the most current stock information that they even analyze and map out each minute the stock market is open to see the exact times when stock trading is most ideal! With this huge importance placed on stocks, one may think it is fairly easy to quickly search and find up-to-date, relevant information on the stock market and individual companies. However, it can still be challenging for people to analyze the loads of stock information being published by the minute in a multitude of different forms, such as news articles, tweets, and the official stock market data. That is where the Watson Stock Advisor code pattern comes in with its concept on how to better filter and analyze stock information. With the Watson Stock Advisor, people can create a web app for monitoring sentiment, price, and news for individual listed stocks — significantly streamlining the entire “daily checking of the stock market” routine for millions of people. Users can simply add and remove companies based on their current interest using a simple search bar, and the stock information for their selections appears in three forms: the current stock price, the overall positive/negative sentiment of that stock’s value, and related news articles that directly pertain to the selected stocks. This not only gives users quick, up-to-date information on the stocks of their choice, but it also provides valuable real-time analysis of the stock’s value. Creating this web app is possible in two ways: deploying the already-written code to IBM Cloud — IBM’s cloud platform and storage — or running it locally on your computer. Deploying to IBM Cloud is fairly simple because all you really need is an IBM Cloud account (you can register for a free, time-limited Lite trial). The code pattern utilizes the Watson Discovery and Cloudant NoSQL services on IBM Cloud, which provide and help analyze all of the relevant stock information. The Watson Discovery service’s integration into the code pattern is clear, and the Cloudant NoSQL implementation is concise. Deploying the app using this option, you can have the web app up and running within 20–30 minutes, which is especially great for novice developers. If you decide to run the app locally, it is especially fulfilling to actually go through the manual steps of creating this web app. Creating the Watson Discovery service and adding the Cloudant credentials are straightforward with the provided README.md instructions, and although adding all of these service credentials to the configuration file may take some time for a new developer, it is worth the learning curve. Not only do users get to experience the IBM Cloud platform in more depth, but they also gain valuable knowledge of the platform to help them when creating future projects. Even though the Watson Stock Advisor is just a code pattern, there is so much potential in its applications. Imagine the possibilities of furthering the complexity of this app, since it is at a basic level. 
Developers can turn it into a phone app that can be personalized for each user’s stocks, complete with real-time recommendations of which stocks to trade, with evidence provided through a compilation of news articles, videos, and sentiment analysis. The Stock Advisor could also be useful for publicly traded companies by providing information on their own performance and stock exchange activity. This information would be invaluable to professionals in the business sector, such as venture capitalists and accountants, but it would also provide personalized advice to many people in the technology, health, retail, and other industries. Overall, the Watson Stock Advisor is a great pattern to begin new developers’ code pattern journeys and their careers in the developer world. This code pattern can help inspire all developers to delve deeper into new technology and to keep innovating. With the stock market playing such a key role across a span of fields and industries, knowing how to utilize the service components in this web app can help developers continue to contribute key technologies and applications that will further simplify the stock market for everyone.
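For a flavor of the kind of query such an app makes under the hood, here is a hedged Python sketch against the public Watson Discovery News collection using the ibm-watson SDK. The actual code pattern is a Node.js app, and the API key and service URL below are placeholders, so this illustrates the service rather than the pattern's own code.

```python
# Hedged sketch: pulling recent stock news, with sentiment, from Watson
# Discovery News. "YOUR_APIKEY" and the URL are placeholders for your own
# IBM Cloud credentials.
from ibm_watson import DiscoveryV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

discovery = DiscoveryV1(version="2019-04-30",
                        authenticator=IAMAuthenticator("YOUR_APIKEY"))
discovery.set_service_url("https://api.us-south.discovery.watson.cloud.ibm.com")

# Watson Discovery News is exposed as the pre-enriched "system" environment
results = discovery.query(
    environment_id="system",
    collection_id="news-en",
    natural_language_query="IBM stock price",
    count=3,
).get_result()

for doc in results["results"]:
    # Each enriched article carries a document-level sentiment annotation
    print(doc.get("title"), doc.get("enriched_text", {}).get("sentiment"))
```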
“Watson Stock Advisor” IBM Code Pattern Tips
0
watson-stock-advisor-code-pattern-tips-159c24242e80
2018-08-03
2018-08-03 21:45:51
https://medium.com/s/story/watson-stock-advisor-code-pattern-tips-159c24242e80
false
760
null
null
null
null
null
null
null
null
null
Stock Market
stock-market
Stock Market
16,290
Madison Gong
null
1db09a7d7fda
madisongong
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-11
2018-04-11 18:15:39
2018-04-11
2018-04-11 18:36:03
0
false
en
2018-04-11
2018-04-11 18:36:03
0
159c91187384
0.449057
0
0
0
AI can save Facebook by diversifying its news feed for all members so that everyone can begin to appreciate the unique merits of this…
2
Can AI Save Facebook? Yes it Can! AI can save Facebook by diversifying its news feed for all members so that everyone can begin to appreciate the unique merits of this global community. AI can translate all of humanity’s thoughts from any language to any other, faithfully rendering its nuances and core meanings. AI can revolutionize diversity by filling any narrow echo chambers of monolithic thought with a diversity that glorifies the spectacular and wonderful array of cultures, beliefs, values, people and wonders of the world, such as: The Kalahari Desert, The heights of Nepal, A small village in China, A Native American reservation, 1960s Berkeley, 1989 Tiananmen Square, and 1860s San Francisco’s Chinatown, to name just a few.
Can AI Save Facebook? Yes it Can!
0
can-ai-save-facebook-yes-it-can-159c91187384
2018-04-11
2018-04-11 18:36:04
https://medium.com/s/story/can-ai-save-facebook-yes-it-can-159c91187384
false
119
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Joe Psotka
Joe is a bricoleur, trying to understand the complexity of the place of values in a world of facts, using only common sense.
1f62ed7c4bf1
joepsotka
80
180
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-08
2018-01-08 12:14:28
2018-01-08
2018-01-08 12:14:52
1
false
en
2018-01-15
2018-01-15 11:22:40
4
159cceea20e4
3.875472
0
0
0
We have a bit of a joke in the office around how data scientists in 2027 will have a good laugh at what we define as ‘Big Data’ in 2017…
2
Machine Learning is here to stay We have a bit of a joke in the office around how data scientists in 2027 will have a good laugh at what we define as ‘Big Data’ in 2017. Pat pat, there there, I guess that was Big Data back then. Unlike the term Big Data, Machine Learning is here to stay. It is, after all, one of the foundations of Artificial Intelligence, and this is rapidly becoming more and more part of our culture. The impact of Machine Learning is being felt on a daily basis, from using interactive devices like Amazon’s Echo to do our shopping, learning a language through DuoLingo, or interacting with chatbots to get your statement in under a second instead of waiting “for the next available agent”. So what has happened, why the recent explosion of Machine Learning applications? Firstly, to set the record straight, Machine Learning is not a new invention. In 1959, Arthur Samuel developed a self-training Checkers algorithm that reached ‘amateur status’. That’s right people: 1959. Of course things have moved on since then, with Google’s AlphaGo beating Lee Sedol in 2016 in the game of Go 4 games to 1. Go is like chess on steroids, with 10^761 possible moves (compared with chess, which has 10^120 possible moves). The successful machine had 1920 CPUs and 280 GPUs (or 1MW of power) versus Lee’s brightly lit 20W bulb in his head. And Lee can do so much more than play Go (he can brush his teeth, line up dominoes, drive a car, hold a deep and meaningful conversation). But unless you have been stuck on a deserted island, you’ll have noticed things have been moving on quite quickly recently. It’s very much to do with timing. Over the last 20 years, processing and storage costs have dropped dramatically while the hardware has become exponentially more powerful at the same time. This has resulted in a bit of a snowball effect: Off the back of more powerful and cheaper computers, faster algorithms have been developed and applied to more data and this leads to… Impressive lift in quick time and this leads to… A thirst for (and investment in) more data and this leads to… An investment in faster algorithms that can handle more data… On top of this we have a rapidly growing open source movement with Machine Learning capabilities: Python and R have well and truly been embraced by the data science community. Fruits from high investment in Machine Learning technologies by big players like IBM, Google, Microsoft and Amazon (to name a few) are now being realised and thrown open to the data science community, often for free. There is now a plethora of impressive ML tools that do not require a PhD in Mathematics or Statistics. Your average data citizen can now lay into the most advanced algorithms and yield impressive results in a short space of time and at a low cost (sometimes at no cost). There are numerous examples of how Machine Learning is changing the way we live. In financial services too, Machine Learning is being used to improve efficiencies, reduce costs and increase revenues. For example, identifying customers that are about to leave for your competitor (do you let them go or do you intervene), retraining models that will more accurately predict whether someone is going to be a good customer (or not) and thus channel your onboarding costs in the right areas. If you have a team of good data scientists that will help you to extract the most value out of your data universe, well done and hang onto them. 
However, if you have data that could give you rich insight into your customers and an edge over your competitors, but no team of data scientists to carefully construct Machine Learning models, then you will probably be very interested in MLaaS or, you guessed it, Machine Learning as a Service. This is a growing trend, with various options now available to the end user; search interest in “Machine Learning as a Service” has been climbing steadily (in the Google Trends chart referenced in the original post, 100 on the y-axis simply marks the peak). Examples of available MLaaS tools are IBM’s Watson, Microsoft Azure ML and BigML, to name but a few. Load up your data, press some buttons and voila! BigML’s tagline neatly summarises the approach: “Shockingly simple Machine Learning tasks using BigML’s REST API”.

But even with the wide range of easy-to-use MLaaS tools that can accelerate the adoption of Machine Learning in your business, a couple of things are still missing from this approach: experience and business knowledge. One still needs experienced users who know what to look out for, how to put the available data together correctly, how to keep the wheels from coming off (and they can come off), and which directions to take to answer the actual business problem. MLaaS tools may provide all the required machinery, but they lack the intuition, and the ability to listen, that make a good data scientist. If you really are short on data science resources, a more collaborative partnership is needed with someone who has “been there, done that”.

Principa realised that this is a key gap in what is currently available, and our MLaaS offering, Genius, was developed to get you going quickly and safely. We have identified a few key applications on which we have intimate knowledge: we know what data is required, how it needs to be put together, what to look out for and, of course, which Machine Learning tricks to apply to give you the best solution for the specific application. We think it’s Genius. Originally published at insights.principa.co.za.
Machine Learning is here to stay
0
machine-learning-is-here-to-stay-159cceea20e4
2018-01-15
2018-01-15 11:22:41
https://medium.com/s/story/machine-learning-is-here-to-stay-159cceea20e4
false
974
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Principa Decisions
null
5ffaa8c76d1b
principadecisions
3
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-03
2018-07-03 08:17:30
2018-07-06
2018-07-06 10:35:33
6
false
zh-Hant
2018-09-20
2018-09-20 06:17:50
6
159f45749ef
2.100943
0
0
0
Think back over the first half of 2018: how many AI-related events have you already attended, and how many reports have you read of companies at home and abroad tasting the first fruits of AI adoption?
5
[Required Reading Before Meeting DataRobot] Artificial Intelligence (AI) vs. Machine Learning (ML): the buzzword you have definitely grown familiar with this year

Think back over the first half of 2018: how many AI-related events have you already attended, and how many reports have you read of companies at home and abroad tasting the first fruits of AI adoption? Artificial intelligence is clearly no longer an empty buzzword. Google Trends shows that, worldwide from 2004 to July this year, searches for Machine learning have in recent years far overtaken searches for Artificial intelligence, which suggests we have moved from asking "What is AI?" to "How to make AI happen". Google Trends keyword comparison, worldwide: https://goo.gl/MS6jWk (the score represents a term's popularity at a given point in time; 100 marks its peak).

Key points of this article: a refresher on the relationship between AI and ML (goal vs. means); the three main ways to train machine learning; and, from a global perspective, which industries are charging ahead with AI applications and which departments use them most.

1. Artificial Intelligence (AI) vs. Machine Learning (ML) (source: A.T. Kearney analysis) AI is a field spanning many disciplines, including computer science, statistics and mathematics, in which computer systems carry out cognitive tasks that normally only humans can handle, such as visual perception, speech recognition, inference and judgment, thinking and planning, and language translation. ML, on the other hand, is a technique: one means of realising AI. Put simply, AI is the goal of having a computer system emulate human intelligence; ML is "giving machines the ability to learn", escaping the limits of traditional programming with hand-crafted rules, which can never anticipate every case. AI is what we want the machine to achieve; ML is a means of achieving it. Wherever machine learning is applied, most companies' current AI goals fall within one broad scope: using human reasoning as an initial guide, build a better, more stable service, product or process with a computer system, rather than trying to replicate a human brain.

2. What are the three main ways to train machine learning? If machine learning means "giving machines the ability to learn", how do we teach them? Think about how humans learn: essentially from experience, whether formal schooling, self-teaching, or life's setbacks and successes, distilled into wisdom and rules of conduct. Machines learn in much the same way, except that a machine finds regularities in large volumes of data, learns from them, and can then make a judgment the next time it faces a similar situation; with algorithms doing the analysis and generalisation, the results can far exceed human performance. The next question is how to "train" or "teach" a machine, step by step, the way one teaches a child. There are three main approaches (a short code sketch contrasting the first two follows at the end of this piece):

Supervised learning (like formal schooling): the machine is told the correct answer, with a human tutor coaching it hand in hand, so it learns from labelled answers and existing patterns. → mainly answers Regression & Classification problems.

Unsupervised learning (like self-teaching): no correct answers are given; the machine discovers patterns in the data on its own. → mainly answers Clustering & Dimensionality Reduction problems.

Reinforcement learning (like the school of hard knocks): the machine is never told the right answer; all it receives is good-or-bad feedback on outcomes, from which it keeps learning and improving itself step by step. → the closest match to how humans actually learn. (Generally, reinforcement learning is used when there is no labelled data for supervised learning.)

The classic AlphaGo is the result of supervised plus reinforcement learning: it started by studying game records, then played against another machine, learning to play even better through reinforcement.

3. More important than the algorithm: what problem do you want to solve? If you want to make "predictions", you can use supervised learning algorithms and let them extrapolate from patterns that already exist. Taking the questions supervised learning answers best as examples:

Expecting the machine to return a [number] → Regression-type algorithms: How much will this car-insurance policy pay out in claims? How long until this robotic arm breaks down? How many home runs will this batter hit in this game? What will the oil price be tomorrow? What will next quarter's revenue be?

Expecting the machine to answer [yes or no] or [pick a category] → Classification-type algorithms: Will this loan application turn into a bad debt? Is this credit-card transaction fraudulent? Will this ad be clicked once it goes live? Will this employee resign this year? Is part X of this component defective? Which product is this customer most likely to buy?

4. AI can be applied in every industry and department: a quick look at who is moving fastest. By industry, McKinsey's 2017 report Artificial Intelligence, The Next Digital Frontier, which surveyed 3,000 CxOs who take AI adoption seriously across 10 countries and 14 industries, found that the 20% of early AI adopters cluster in three sectors: High-tech/Telecom; Automotive/Assembly; Financial services. These front-runners managed to board the AI train early for common reasons: senior management support for AI initiatives; a focus on growth over savings; and knowing which problems to solve first to take the lead in their own industry (adopting AI in core activities). Source: McKinsey Global Institute study, "Adoption patterns illustrate a growing gap between digitized early AI adopters and others."

By function, IT remains the main adopter of AI applications. Used for what? Beyond monitoring and preventing attacks, largely to handle technical-support requests from business units and to automate workflows, reducing IT's sharply growing workload and raising efficiency. After all, in an era where data is the new oil, whichever department holds the largest and most critical data faces correspondingly exploding expectations and demands; the speed and depth with which data is processed internally become the key to pulling ahead of the competition. Speed and Productivity Matter. (Source: HBR, "How companies are already using AI")

With the help of our Kaggle-top-ranked data scientists, DataRobot built a comprehensive, best-in-class machine learning framework to help anyone develop and deploy great models regardless of data science skill level. Companies that start preparing today will position themselves to thrive in an environment redefined by AI. ☞ ☞ Read next: What is AutoML, and how does DataRobot help shorten the machine learning cycle? ☞ ☞ Register for the AI application sharing sessions: 07/25 (Wed), financial services @ Taipei; 07/26 (Thu), tech & manufacturing @ Hsinchu. SurveyCake: Make your survey a piece of cake! www.surveycake.com ☞ ☞ More DataRobot learning resources: Resources — DataRobot Welcome to DataRobot's resource center!
Regardless of where you are on your data science and machine learning journey…www.datarobot.com About Us: PGi 樺鼎商業資訊 (Perform Global Inc.), established in Taiwan in 2011, focuses on bringing in the big-data software solutions that the world's 500 largest enterprises also trust, in particular next-generation application platforms that are easy to learn, integrate tightly with existing architecture and can be adopted with agility, with an emphasis on collaboration between Business Users and IT. We are committed to helping enterprises across Greater China make fuller use of their data assets and build a new competitive edge on corporate "reaction speed". Speed and Productivity Matter. Offices: Taipei ▎Hsinchu ▎Shanghai ▎Shenzhen. Services: Technical Support ▎Class Training ▎Consulting Service
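To ground the supervised-versus-unsupervised distinction from section 2 above, here is the promised sketch: the same synthetic points are first classified with their labels (supervised), then clustered without them (unsupervised). This is an illustration only, not anything DataRobot ships; the data and parameter choices are invented.

# Supervised vs. unsupervised learning on the same data -- an illustrative sketch.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

# Three synthetic "customer segments" in a 2-D feature space.
X, y = make_blobs(n_samples=600, centers=3, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

# Supervised: we hand the model the right answers (labels) to learn from.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Unsupervised: no answers given; KMeans must find the structure itself.
km = KMeans(n_clusters=3, n_init=10, random_state=7).fit(X)
print("cluster sizes:", [list(km.labels_).count(c) for c in range(3)])

The point of the contrast: the classifier is graded against known answers, while KMeans has to infer that there are groups at all, which is exactly the "schooling versus self-teaching" analogy from the article.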
[Required Reading Before Meeting DataRobot] Artificial Intelligence (AI) vs. Machine Learning (ML): the buzzword you have definitely grown familiar with this year
0
人工智慧ai-vs-機器學習ml-這一年你絕對很熟悉的buzzword-159f45749ef
2018-09-20
2018-09-20 06:17:50
https://medium.com/s/story/人工智慧ai-vs-機器學習ml-這一年你絕對很熟悉的buzzword-159f45749ef
false
305
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
PGi 樺鼎商業資訊
DataRobot + Tableau Partner in TW @ http://www.perform-global.com/new/index.php
6f90715743ef
pgi201112
10
30
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-11
2018-05-11 10:36:34
2018-05-11
2018-05-11 10:38:08
1
false
en
2018-05-11
2018-05-11 10:38:08
3
15a057978ff2
3.241509
0
0
0
When it comes to technology trends, they don’t get much bigger than cloud computing and artificial intelligence (AI).
5
AI and the cloud — a match made in heaven

When it comes to technology trends, they don’t get much bigger than cloud computing and artificial intelligence (AI). Together, they have the potential to deliver previously unimagined benefits to businesses. Separately, the two technologies are already well established: the global AI market is expected to be worth almost $60 billion by 2025, up from $2.5 billion at the end of 2017, while the cloud industry has shifted from hype to broad adoption, with the public cloud sector alone already worth more than $200 billion and forecast to top $1.25 trillion by 2025.

The link between cloud and AI. It is nearly three decades since Richard Stallman wrote the GNU General Public License that spawned a generation of open source software projects. Open source and free software enabled the likes of Google and Amazon to create vast server farms at a cost that would not have been possible had they had to pay licensing fees. Now AI is taking off, and that is in no small part due to such cloud platforms. The cloud is fundamental to the AI model in two ways. Firstly, the data sets these companies use would not be accessible were it not for the cloud. Secondly, only the cloud lets businesses cope with the phenomenal scale required to provide such data-intensive services to multiple clients at an affordable cost.

Of course, one of the biggest factors holding AI back from critical mass is the shortage of people within enterprises with the skills to program it: businesses may know how they want to use AI, but lack the means to build an application or algorithm that produces the results they need. The cloud changes this, because years of research and tooling become available to the developers tasked with creating AI solutions. This can completely change the way businesses scale: start-ups founded by incredibly smart people building new and exciting AI functionality now have near-infinite resources waiting to be drawn upon in the cloud.

Early success stories. There are already success stories of start-ups using AI to find new solutions to existing problems. Veritone, for example, has developed an operating system for AI using a cloud-based cognitive computing platform that analyses a vast number of datasets from different sources; the company believes the full potential of its “cognitive cloud” platform will only be unlocked when it is open to all businesses, institutions and individuals. Meanwhile, Quantifi uses analytics software based on AI and machine learning to optimise digital advertisement placements for brands. Beyond analysing datasets at a rate of knots, this model unleashes the ‘test and learn’ capabilities of AI and the cloud: Quantifi’s clients can harness data collected from thousands of other digital ad experiments, delivering results quickly and growing at scale. None of this would be possible without the cloud, which also lets Quantifi continually add new information to its existing pool of data.

The big players. As well as start-ups creating new revenue drivers through AI and machine learning, the big four cloud platforms have all declared an interest in AI over the past couple of years. AI requires a huge amount of compute power, so the public cloud, with its near-infinite compute and data-processing capacity, is the ideal place for such applications to be built.
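As one concrete illustration of how little code it now takes for a developer to borrow those years of hyperscaler research, here is a minimal sketch that calls AWS's managed Comprehend text-analysis service through boto3. It assumes AWS credentials are already configured and that Comprehend is available in the chosen region; the sample text is invented.

# Minimal sketch: sentiment analysis through a managed cloud AI service.
# Assumes AWS credentials are configured (e.g. via environment variables).
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

resp = comprehend.detect_sentiment(
    Text="The new cloud-based model cut our reporting time in half.",
    LanguageCode="en",
)
print(resp["Sentiment"])       # e.g. POSITIVE
print(resp["SentimentScore"])  # per-class confidence scores

A few lines and no model training: the heavy lifting happens inside the provider's cloud, which is exactly the dynamic the article describes.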
The aim of companies such as Amazon, Microsoft, Google and IBM is to create innovative AI applications that businesses can use, thereby driving increased traffic through their public cloud ecosystems. The explosion in AI investment by these ‘hyperscalers’ is almost definitive proof that the technology is inextricably linked to the cloud. IBM Watson’s natural-language search has been used to develop cognitive retail as well as DNA analysis for cancer patients. At the same time, Amazon’s Echo voice assistant has made the leap from the kitchen table to the enterprise R&D lab: partnerships with the likes of Hive and Nest mean you can ask Alexa to turn your heating up or down, and later this year Toyota drivers will be able to ask Alexa for news updates, build shopping lists and control connected smart-home devices from their vehicle. The number of companies innovating with these AI-based platforms demonstrates the appetite for investing in cognitive technologies. As the power of AI continues to evolve, its links with the cloud will continue to strengthen; together, they will deliver business benefits for organisations of all sizes in the coming years. Source: https://itbrief.co.nz/story/ai-and-cloud-match-made-heaven/
AI and the cloud — a match made in heaven
0
ai-and-the-cloud-a-match-made-in-heaven-15a057978ff2
2018-05-11
2018-05-11 10:38:09
https://medium.com/s/story/ai-and-the-cloud-a-match-made-in-heaven-15a057978ff2
false
806
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Hulda Echave
null
bdae0401406e
huldaechave
0
1
20,181,104
null
null
null
null
null
null
0
null
0
cbc92d52befb
2018-05-01
2018-05-01 07:52:44
2018-05-03
2018-05-03 03:36:26
1
false
en
2018-05-03
2018-05-03 03:36:26
17
15a143382180
3.618868
23
0
0
~In the end, we only regret the chances we didn’t take ~
5
The Journey So Far — AI Saturdays Lagos

~ In the end, we only regret the chances we didn’t take ~

I had the privilege of co-organizing AI Saturdays (an initiative to help people become kickass at AI by working through course materials and building AI projects) in my local community in Lagos, Nigeria, where we met every Saturday (10am — 5pm) from the 6th of January till the 21st of April, a total of 16 meet-ups.

Our Curriculum The curriculum we followed was drafted by the brilliant people at nurture.ai. We made some modifications to match our audience’s level of experience. Throughout AI6 Lagos’s first cohort we managed to document and contribute 13 articles to the AI6 Medium publication. We hope people who are thinking of organizing a similar study group can read them and be inspired enough to start. 🔗 Ripples of Wave of Change 🔗 Recap on Week2 🔗 Recap on Week3 🔗 Leveraging on Google Colab 🔗 Classification of Nigerian Currency Notes 🔗 Basic Overview of CNN 🔗 The Brain and The Model 👙 Week8 — Break 🔗 The Torch Panther 🔗 May The Tensor Flow with You (Week10) 🔗 Ancentral Intelligence with Granny Theano 🔗 Karessing Deep Learning with Keras 🔗 Nervana Neon: The Fastest Framework Alive 🔗 Lagos AI Hackathon

Our Strategy We really didn’t think we would be able to pull these meet-ups off, given all the things we couldn’t stop imagining might go wrong, but once we sorted out the venue, everything else felt doable. Our checklist for each week was: ☑️ Venue ☑️ Content Delivery ☑️ Internet ☑️ Feeding. Checklist after each class: ❎ Feedback form ☑️ Medium Article ❎ Engagement on AI6 forums. Somehow we ended up not sending out the feedback form as intended, and not contributing as much as we would have liked to the AI6 forums, where all the interesting conversations happen :(

Our Observations The sessions in the AI6 curriculum come in 3 parts: ✳️ Deep Learning for Coders ✳️ Computer Vision ✳️ Technical Paper Discussion. The most difficult part to stay consistent with was — ehm, you probably guessed it — number 3. We had some struggles with part 1 as well, because the content was pitched at intermediate level rather than at beginners, which made it difficult for most members to sit comfortably through a 3-hour course without losing interest. For these two sessions, after observing the pattern for a few weeks, we eventually skipped through the videos, with Femi giving a gist of what that week’s topic was all about. We also noticed that even though the materials are meant to be reviewed before class, we all know how the week sort of disappears right in front of you and voilà, it’s Saturday again. Most of our members hardly went through the content beforehand, which made things harder because they were seeing it for the first time. That called for a retention strategy 😃 we didn’t want our members disappearing because they were bored or couldn’t keep up. The retention strategy was to form groups named after 5 notable deep learning frameworks: 📌 PyTorch 📌 TensorFlow 📌 Theano 📌 Keras 📌 Neon Nervana. The challenge was to understand your framework and explain it to the class along with your group members. I’m proud to say their presentations exceeded what we had in mind. So proud of all the group members for going the extra mile with their presentations. 🎵

What Next? AI6’s second cohort starts July/August.
However, we are excited to announce our bi-monthly meet-up starting on the 12th of May, where we will focus on: 🔧 Hands-on TensorFlow 🔧 Hands-on PyTorch 🔧 Possible Project Discussion. To RSVP: http://meetu.ps/e/FdmcD/zm9XV/f

Our Appreciation To all AI6 Lagos graduates 😁 thanks for your grit. We hope you have equipped yourselves with enough tools to start kicking some asses in AI. Thanks for making each week worth looking forward to. To our Partners ❤️ thank you Innocent Amadi for your amazing work with FB Dev Circle Lagos, for building a community of amazing developers and for helping other communities like ours grow through your support. Thank you Vesper. Thank you Intel. To all the amazing communities in Nigeria, thanks for all your hard work 💛 Finally, a huge thanks goes to Nurture.AI for this amazing opportunity. AI6 Lagos

Azeez Oluwafemi’s closing speech for AI6 Lagos’s first cohort (6th of Jan — 21st of April): It’s easy to read an interesting biography of how some genius built an awesome technology, or of how a very young and smart person built the first quantum teleportation system. Few readers are aware of the influence of the “community factor” in such success stories. If you’re conversant with history, you will recognise the impact of a lively intellectual community in breeding this kind of success. Albert Einstein’s success story grew out of a long history of a massive community of intellectuals in Germany. The same community idea gave birth to places like the Institute for Advanced Study at Princeton; the construction of the first atomic bomb through the Manhattan Project, led by Robert Oppenheimer (a community of scientists and mathematicians), was another result. We want to build the future of our nation, and of our continent by extension, through intellectual communities like AI Saturdays Lagos.
The Journey So Far — AI Saturdays Lagos
174
the-journey-so-far-ai-saturdays-lagos-15a143382180
2018-06-19
2018-06-19 01:56:11
https://medium.com/s/story/the-journey-so-far-ai-saturdays-lagos-15a143382180
false
906
Making rigorous AI education accessible and free, in 50+ cities globally. Sign up at https://nurture.ai/ai-saturdays
null
null
null
AI Saturdays
info@nurture.ai
ai-saturdays
ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,DEEP LEARNING,DATA SCIENCE,AI
AISaturdays
Machine Learning
machine-learning
Machine Learning
51,320
Tejumade Afonja
null
44e0f445aa49
tejuafonja
699
306
20,181,104
null
null
null
null
null
null