| <?xml version="1.0" encoding="UTF-8"?> |
| <mteval> |
| <srcset setid="iwslt2018-tst2018" srclang="english"> |
| <doc docid="lecture0001" genre="Lecture"> |
| <seg id="1"> And let me switch to English at this point . </seg> |
| <seg id="2"> So the rest of the lecture will be in English . </seg> |
| <seg id="3"> And at some point , we will provide some simultaneous translation , not quite yet because we want to give you a little bit of suspense while I 'm talking about other subjects . </seg> |
| <seg id="4"> But what is this all about is that we want to provide human-human interaction support . </seg> |
| <seg id="5"> Now what is human-human interaction like in the real world ? </seg> |
| <seg id="6"> When we give a lecture , or a seminar , just like you see it here on the picture then it is a presentation by one speaker , but it is at the end of the day still a communication between people . </seg> |
| <seg id="7"> People watch what's going on in the screen , they watch the speaker , the speaker can have eye contact and see the people in the audience and so people connect with each other and see whether the lecture actually gets across a certain point or certain ideas . </seg> |
| <seg id="8"> And even a lecture in a big hall like this one can observe the audience and know , I can see you in the back for example , I can see that you 're talking to your neighbor or I can see over there somebody is kind of picking their nose somebody else may be reading a newspaper or something . </seg> |
| <seg id="9"> And if I see that , I may know that this is perhaps too boring or too slow and I can accelerate my lecture or my presentation et cetera . </seg> |
| <seg id="10"> Or if I see people's faces being puzzled , maybe my explanation of an algorithm or something was not clear . </seg> |
| <seg id="11"> So we connect . </seg> |
| <seg id="12"> We can tell from the audience what idea is getting across . </seg> |
| <seg id="13"> And at the same time the audience can observe a lecturer . </seg> |
| <seg id="14"> So it is really a human-human communication process , even in something like a large lecture like this one . </seg> |
| <seg id="15"> Of course much more so if we're sitting in meetings . </seg> |
| <seg id="16"> If we're sitting in meetings and talking to other people then it 's a very vivid exchange . </seg> |
| <seg id="17"> People talk with each other by speaking , but they also point to things , they produce gesture , they present power-point slides , they may look at objects or look at notes on their desk et cetera , et cetera . </seg> |
| <seg id="18"> Now by contrast human-machine interaction is very impoverished and terrible and we all have seen it , and you've seen this video , so I don 't need to play it but it is the video we all love so much because we all can relate to the situation of that poor guy in a cubical stuck in front of the computer and this computer essentially demands full attention and on top of it is rather stupid and devoid of any connection or any understanding of the human context . </seg> |
| <seg id="19"> So we like to resolve the situation and get to communication of a computer or the ability of a computer to understand human interaction and human activity better so that the computer can serve human needs better than in this poor situation . </seg> |
| <seg id="20"> Now we have other communication problems , multilingual communication problems . </seg> |
| <seg id="21"> If you don't understand English for example it's a problem because if you only speak German and I speak English up here , then you may not understand this lecture . </seg> |
| <seg id="22"> What can we produce ? </seg> |
| <seg id="23"> But if you visit China it may be worse . </seg> |
| <seg id="24"> You may not be able to speak Chinese except some of our friends who speak are native Chinese speakers . </seg> |
| <seg id="25"> But if you don't speak the language , you may be lost in a foreign language both by what they say as well by what is printed in the real world . </seg> |
| <seg id="26"> So what we really need to do is look at the human-machine process but as we will see in a moment , also the human-human communication and see how machines can better help this process . </seg> |
| <seg id="27"> Now even if we start with a human-machine process , there are already things we can do that provide helpful assistance with modalities other than simply type text . </seg> |
| <seg id="28"> Consider for example the situation we all have with a car navigation system in a car . </seg> |
| <seg id="29"> You 're driving on the road and you want to find the go to a particular destination and admittedly what we often then do is we program the GPS while driving and of course you shouldn't be doing this because it is unsafe . </seg> |
| <seg id="30"> And that interfaces on these navigation systems are really , really terrible because you have to punch in one letter by letter and toggle them and enter them and then press a button you can program the navigation system . </seg> |
| <seg id="31"> Wouldn't it be nice if you could just simply say it ? </seg> |
| <seg id="32"> And says I 'd like to go to Karlsruhe Am Fasanengarten fünf and then the system programs the navigation and gets you there . </seg> |
| <seg id="33"> So this is one of the things where clearly today modern dialogue systems come in place where the recognition by voice can recognize the command or the request by a human user and then the navigation system provides the speech navigation information and provides the navigation direction just like before . </seg> |
| <seg id="34"> Now this simple question of how to make a query to a navigation system is already a challenge . </seg> |
| <seg id="35"> First of all , we have to deal with conversational or spontaneous speech . </seg> |
| <seg id="36"> People generally don 't speak very clear and clean sentences . </seg> |
| <seg id="37"> They will stutter , they will produce ill-formed sentences . </seg> |
| <seg id="38"> So instead of saying how can I go to the station , they may say something like , how can I , I need to go to the station . </seg> |
| <seg id="39"> So how does a computer deal with the uhs and with the sentence that s not well-formed ? </seg> |
| <seg id="40"> That's one of the challenges . </seg> |
| <seg id="41"> Another challenge in a dialogue system is of course that if the user enters a piece of information that's not sufficiently precise , like if I say I need to go to Beethovenstraße . </seg> |
| <seg id="42"> There are probably several dozen straßen in Germany . </seg> |
| <seg id="43"> And so making such a request with a piece of information that's not sufficient needs to be specified or further specified and in a so-called dialogue system you'd like to have a system that then can say or ask a question , well do you need to go to Beethovenstraße in Karlsruhe or what city ? </seg> |
| <seg id="44"> Or if there are several such streets or several such buildings , then again you need to be able to have a dialogue system . </seg> |
| <seg id="45"> Moreover very often we have in our mind imprecise information . </seg> |
| <seg id="46"> So you may for example know that it 's a square between Kaiserstraße and Durlacher Tor or Adenauerring and so on , but you don 't know exactly where like for example if you wanted to get to this lecture hall . </seg> |
| <seg id="47"> So you 'd like to be able to specify it little bit indirectly in such a way that you can still program the navigation system . </seg> |
| <seg id="48"> So this is what dialogue systems do and at our laboratory that's one of the issues where you have a human machine interaction around this problem . </seg> |
| <seg id="49"> So once you have that you can then ask a question , how do I get to the Placa Catalunya in Barcelona or in another city . </seg> |
| <seg id="50"> And the navigation system will then give you the necessary directions . </seg> |
| <seg id="51"> Another thing of human machine interaction that gets a little bit more elaborate is human robot interaction . </seg> |
| <seg id="52"> That's something again that we are doing in the context of the humanoid robot project from the Sonderforschungsbereich , the SFB , together with colleague Dillmann Rüdiger Dillmann and his team . </seg> |
| <seg id="53"> And you've heard some of the presentations that they've given where it 's an issue of interacting between a robot and a machine . </seg> |
| <seg id="54"> But here too first of all the interaction here is complicated by the fact that again it's spontaneous speech but in this particular case it's also not a close-speaking microphone but a microphone on a robot and a robot can move around . </seg> |
| <seg id="55"> So it's a challenging task how to do even this even though it's human machine interaction . </seg> |
| <seg id="56"> Now this is if you're interacting directly between a human and a machine . </seg> |
| <seg id="57"> There are various different sorts of interactions we might envision as well . </seg> |
| <seg id="58"> Suppose for example you now want to access or want to have the computer access human information that's available only in data sources like television radio , broadcast news and various other stored resources and then further if we want to have human-human interaction that is supported by a machine or by a computer service and we'd like to look at these two or three different forms of interaction as well . </seg> |
| <seg id="59"> So here is for example an a system that we developed in here as well . </seg> |
| <seg id="60"> It's a so-called video retrieval system . </seg> |
| <seg id="61"> And it's some piece of research that we continue doing even in a multilingual situation . </seg> |
| <seg id="62"> So you want to ask a question , is there any news fro arlsruhe or has there been any strike of the Deutsche Bahn lately , for example . </seg> |
| <seg id="63"> And you want the computer to figure out what has happened in the news during the last two , three days . </seg> |
| <seg id="64"> And give you the information that only pertains to that particular query . </seg> |
| <seg id="65"> -- of course the problem in this case is that the information is not stored in textual form so you cannot simply search or google for it but it is in video form or in audio form if it's TV or radio . </seg> |
| <seg id="66"> And so what we need to do first of all is to do automatic recognition of the images in the data as well as the audio in the data to understand what the content is and then be able to search for it and perhaps at the end of the day also provide a summary that answers the questions that you may have raised . </seg> |
| <seg id="67"> We have such a system , that works in English and German . </seg> |
| <seg id="68"> So we can in fact ask queries about English news as well as German news . </seg> |
| <seg id="69"> And it provides then news clips that pertain to a particular query . </seg> |
| <seg id="70"> That you may have raised . </seg> |
| <seg id="71"> So this is a system that we're developing at our laboratory here and as fact in the future going to be expanded and extended . </seg> |
| <seg id="72"> Now we 'd like to do also implicit services . </seg> |
| <seg id="73"> So this is -- this is this connection between human and , sorry , computer and data sources that contain human information . </seg> |
| <seg id="74"> Now let's suppose we want to have a service that supports directly human-human interaction . </seg> |
| <seg id="75"> The Chil project is aimed at that . </seg> |
| <seg id="76"> So what we're doing in Chil is supporting human-human interaction , it 's instead of putting the human in the loop of computers we 're putting the computer in the loop of humans . </seg> |
| <seg id="77"> Humansns should be free to interact with other humans and the computer should observe that interaction and then provide helpful assistance . . . . . . . . . . . . . . . . . . . </seg> |
| <seg id="78"> Around this idea we formed a so-called integrated project funded by the European Commission , it's one of the largest project funded under the sixth framework program by the European Commission in this area . </seg> |
| <seg id="79"> We have been coordinating this in Karlsruhe together with the ITB Fraunnhofer Gesellschaft . </seg> |
| <seg id="80"> And as you can see there have been a large number of laboratories participating , fifteen laboratories in nine countries of the world that have participated in this problem . </seg> |
| <seg id="81"> Now , what do we mean by human-human communication or computer support of human-human communication . </seg> |
| <seg id="82"> Let me give you a couple of examples . </seg> |
| <seg id="83"> First one , we could have , of course , a meeting and in this case having a robot in the meeting and rather than the humans directly talking to the robot you want the humans talking to other humans and the robot listening to the conversation and every once in a while getting up and doing something helpful . </seg> |
| <seg id="84"> So if the robot notices that the people are getting thirsty , it should go and get coffee if it's in the morning , and maybe beer in the evening and various other things that you might imagine doing like providing printed material of information and so forth . </seg> |
| <seg id="85"> Another problem is the so-called this what we call the connector . </seg> |
| <seg id="86"> This is the idea that we all have had this problem that we're attending a meeting and somebody's phone rings and not only is this annoying because the phone rings , but it's also disconnects the person from the meeting and then if the person says I can't talk right now , I'll call you back later and then they leave messages and then you call back your friends and then they're busy et cetera . </seg> |
| <seg id="87"> So it 's a rather annoying , useless exercise that that -- wastes time and is a problem . </seg> |
| <seg id="88"> We 'd like to change that and provide different kind of assistance to to have a computer service that connects two people when they're both free . </seg> |
| <seg id="89"> So if I have the human butler or human secretary he or she can tell when I 'm busy or free . </seg> |
| <seg id="90"> And then can put a call through or connect a call whenever it's appropriate . </seg> |
| <seg id="91"> Why can 't we have machines that do that service for us ? </seg> |
| <seg id="92"> Another thing that also happens very frequently when we're sitting together with other people we forget their name . </seg> |
| <seg id="93"> So I know I met you before , but I don 't remember your name and so it would be nice if I had a machine that whispered in my ear and said , well this is Paul and you met him a year ago at such a meeting and you discussed the following topic . </seg> |
| <seg id="94"> We can't have that , and so we are always embarrassed when , you know , somebody says , oh hello , how are you , and then you don't remember when you met them and who they were and it's something that we also experience and it gets worse when you get older . </seg> |
| <seg id="95"> And so it'd be nice to have human memory support of this kind . </seg> |
| <seg id="96"> Another idea and another service of human human-interaction is if people speak different languages . </seg> |
| <seg id="97"> First of all you may want to have what you're saying translated to another person . </seg> |
| <seg id="98"> But at the same time you may also want to listen to a conversation between two people and see whether you can have that conversation translated . </seg> |
| <seg id="99"> So it's nothing as worse than you're attending a foreign language meeting and the people are sitting together and then they say Alex Waibel and you heard your name but you don't know what they said about you . </seg> |
| <seg id="100"> So this is terrible and you'd like to know what was said . </seg> |
| <seg id="101"> And again it would be nice if there was computer services that do that for you . </seg> |
| <seg id="102"> Now let's quickly consider what it takes to build such services so that we can actually build machines that that do all that . </seg> |
| <seg id="103"> So this would be the dream , this would the ideal situation what we're after having truly helpful machines that understand our need and that understand in what situation we are . </seg> |
| <seg id="104"> We call that context aware computing so imagine you are in a particular context and the machine should understand that human context , where we are , it should understand we're now all in lecture hall , it should understand a lecture is going on it should understand that I 'm looking at a particular person or looking into the audience et cetera , et cetera . </seg> |
| <seg id="105"> To have all this consider the simple example , if I if I couldn't attend a meeting and you were in that meeting and I asked you afterwards , after that meeting , why did Joe get angry at Bob about the budget ? </seg> |
| <seg id="106"> It is a simple question that you as a human could easily answer if I asked that question to you after the meeting . </seg> |
| <seg id="107"> But imagine what it would take to have a computer that can answer that question . </seg> |
| <seg id="108"> What would that computer have to be able to do . </seg> |
| <seg id="109"> What technologies would it have to have ? </seg> |
| <seg id="110"> Well , it needs to first of all recognize that Joe was there and Bob was there . </seg> |
| <seg id="111"> So IT needs to be able to identify people recognizing their faces , recognizing their voice , recognizing who they are . </seg> |
| <seg id="112"> Needs to know that Joe's emotion was anger needs to be able to tell emotional outburst . </seg> |
| <seg id="113"> It needs to know that Joe was addressing Bob when he got angry . </seg> |
| <seg id="114"> Right ? </seg> |
| <seg id="115"> If I'm looking at you or you and I get very angry then I need to be somebody needs to be able to detect I looked at you . </seg> |
| <seg id="116"> You know that I'm looking at you right now . </seg> |
| <seg id="117"> And I can tell that you're looking at me . </seg> |
| <seg id="118"> So this remarkable ability of a human being to tell who we're looking at is truly stunning because you're all sitting back there . </seg> |
| <seg id="119"> She has her eyes closed for example , she's sleeping over there in her in her seat . </seg> |
| <seg id="120"> And I can see that from here . </seg> |
| <seg id="121"> And I can see that up here you're doing this with your arm . </seg> |
| <seg id="122"> So I can tell all these things about what you're doing in the audience . </seg> |
| <seg id="123"> And it's a remarkable ability because we are doing this all at a long distance . </seg> |
| <seg id="124"> And we can tell so much about the other person simply by looking at them . </seg> |
| <seg id="125"> So anger , focus of attention , who am I talking to , it's important to know that , right ? </seg> |
| <seg id="126"> If I'm have a truly context aware environment and I say something to Matthias here about we should make sure that we don't delete our files . </seg> |
| <seg id="127"> You don't want the computer now to listen and say delete our files and then delete all my files , right . </seg> |
| <seg id="128"> It should know that I was talking to Matthias about something completely different . </seg> |
| <seg id="129"> And it should not do that right now . </seg> |
| <seg id="130"> That's context aware computing , it needs to understand the context that we're in and why we're saying something that we're saying . </seg> |
| <seg id="131"> And then last not least a topic of discussion budget and why it's happening and what is the sequence of things that we 're talking about and so on . </seg> |
| <seg id="132"> So to build all this to build all this we have to have speech recognition . </seg> |
| <seg id="133"> We have to recognize speakers , speaker identification . </seg> |
| <seg id="134"> Wehh need to identify emotion , emotion recognition or emotion identification . </seg> |
| <seg id="135"> Genre recognition , telling whether people are negotiating , lecturing and so on . </seg> |
| <seg id="136"> Language identification , which language was spoken here . </seg> |
| <seg id="137"> Summaries , topics , handwriting recognition . </seg> |
| <seg id="138"> Visually telling identity of people by faces gestures , body languages , track , faces and gaze and pose et cetera facial expressions and focus of attention . </seg> |
| <seg id="139"> There 's a long list of things we need to be able to tell and process . </seg> |
| <seg id="140"> And these are all cognitive systems , type things that you have studied in class namely doing all of these different things . </seg> |
| <seg id="141"> We have studied for example speech recognition but you see from the slides that is only one out of many different communication modalities we use . </seg> |
| <seg id="142"> So we call that the who , what , where , why and how of human communication . </seg> |
| <seg id="143"> Why do we call it this way ? </seg> |
| <seg id="144"> Because if I say for example , who is there I need to identify the person and I can tell that by recognizing the face but I can also do it by recognizing the speaker by voice . </seg> |
| <seg id="145"> I can also tell by many other biometric markers who that person is . </seg> |
| <seg id="146"> So it 's multimodal , it 's usually a decision we make based on multiple pieces of information . </seg> |
| <seg id="147"> What happened , what was said where is the person , why and how are they interacting . </seg> |
| <seg id="148"> All of these things are important . </seg> |
| <seg id="149"> So in our laboratory here in Karlsruhe as well as at Carnegie Mellon as well as many other partners in the Chil project at IBM , at Irst in Italy , AIT in Greece and UPC in Spain . </seg> |
| <seg id="150"> They all build rooms like this that are equipped with many cameras and many microphones . </seg> |
| <seg id="151"> And they can record an event that is happening in this room and we have recordings of these events seminars and they are basically now available in a European organization that distributes data and it's become a benchmark based on which we can now test out these types of algorithms . </seg> |
| <seg id="152"> What is the data like ? </seg> |
| <seg id="153"> The data records seminars that are happening in our and in other laboratories within the Chil project . </seg> |
| <seg id="154"> And here you see one example of a seminar given in our laboratory . </seg> |
| <seg id="155"> And what interests us is to extract from it certain human communication information like who is there , what is this person pointing to , what does he say , to whom does he speak where is he going to , where is he , what is the environment , what is the what is the discourse situation et cetera , et cetera . </seg> |
| <seg id="156"> So how do we answer all this ? </seg> |
| <seg id="157"> We broke down in the Chil project all these questions into actually research agenda , into actual research problems and these research items like audiovisual person tracking , tracking hands and faces , animated social agents , far field speech recognition et cetera these are actual research projects that you can do a Diplomarbeit or a dissertation on . </seg> |
| <seg id="158"> Wach of them is so hard that we have a dozen different dissertations going on in different areas of this problem . </seg> |
| <seg id="159"> Now this is enormous and is of course not something that a single person can do . </seg> |
| <seg id="160"> We have many people working on these research problems also at the other laboratories that are participating in the Chil project . </seg> |
| <seg id="161"> So this is an effort by a large consortium of researchers and people . </seg> |
| <seg id="162"> Now these are some videos that I'm playing but we will actually see them in a moment in reality because we brought some of the demonstrations . </seg> |
| <seg id="163"> So what you see here for example on the upper right is face identification as being done in our laboratory when people come through the door a camera recognizes them and identifies who they are . </seg> |
| <seg id="164"> In the upper left you're seeing a person tracking algorithm . </seg> |
| <seg id="165"> The idea here is to always track the speaker in a seminar . </seg> |
| <seg id="166"> Down here you see focus of attention tracking , this is an intriguing thing as well because you want to know who somebody is talking to or who somebody is looking at . </seg> |
| <seg id="167"> So if I 'm looking over here , you wanna be able to know or derive that I'm talking in this direction or when I'm turning around and I'm looking in that direction . </seg> |
| <seg id="168"> So we do this by actually automatically visually processing people's face and head pose direction and then we put little arrows on their face to tell in which direction they 're actually looking and who they 're actually talking to . </seg> |
| <seg id="169"> So you all know the expression if looks could kill , wenn Blicke töten könnten . </seg> |
| <seg id="170"> So we could actually build systems that could do that , so be careful . </seg> |
| <seg id="171"> Just a joke . </seg> |
| <seg id="172"> We 're being recorded here so I don 't wanna be on record of having said something like that when it 's just a joke . </seg> |
| <seg id="173"> Anyways here is another another demonstration of something that combines these things . </seg> |
| <seg id="174"> Now if you 're looking at a particular light switch and says turn off that switch you have three pieces of information . </seg> |
| <seg id="175"> One is that I 'm saying something , the word switch and I say turn off that switch but I 'm also looking in that direction and I 'm pointing with my gesture . </seg> |
| <seg id="176"> We need all that information , we need to know where the tip of my finger is we need to know where my face is looking and what I'm saying and integrate these things in order to build such services . </seg> |
| <seg id="177"> So you see this here in this demonstration where in fact Kai who is sitting right over here is in fact turning on lights and turning them off and operating various things in our laboratory . </seg> |
| <seg id="178"> So I think there is some demonstration we could briefly interrupt and show . </seg> |
| <seg id="179"> I think we might wanna do that now . </seg> |
| <seg id="180"> So while they 're being started OK . </seg> |
| <seg id="181"> Brighton . </seg> |
| <seg id="182"> Obviously all of these different processing algorithms are also then combined in fuse . </seg> |
| <seg id="183"> So if you consider for example the problem of saying if we have a meeting and I want to know who the people in the meeting are you need to recognize their faces but you could also recognize their voice . </seg> |
| <seg id="184"> And obviously we can combine the two and not always can you know for sure because in an open meeting I may not say anything , then speaker identification is not helpful . </seg> |
| <seg id="185"> And I could in a meeting just do for example what our friends are doing here , is just putting their hand in front of their nose and then recognizing faces is difficult from a visual point of view . </seg> |
| <seg id="186"> So this opportunistic grabbing of information that is relevant is obviously one one problem . </seg> |
| <seg id="187"> Bei mir geht's nie so besonders gut , es hat immer Probleme mit der Glatze . </seg> |
| <seg id="188"> OK . </seg> |
| <seg id="189"> Ganz herzlichen Dank , auch hierfür . </seg> |
| <seg id="190"> So , die Frage stellt sich natürlich mit all den tollen technischen Sachen , die man hier machen kann . </seg> |
| <seg id="191"> All das funktioniert natürlich immer noch nicht perfekt und man sieht also wie aufwendig und wie schwierig das ist , etwas was der Mensch eigentlich mit mit überraschender Leichtigkeit als Kind schon lernt Personen zu identifizieren und so weiter , aber ich denke bei uns im Kopf laufen auch Lernalgorithmen ab die immer besser werden je älter man wird . </seg> |
| <seg id="192"> Und so ist das natürlich eine deutliche Herausforderung zu schauen , dass wir Algorithmen oder Programme haben , die das so ähnlich tun . </seg> |
| <seg id="193"> Lassen Sie mich noch eine weitere Sache hier zeigen . </seg> |
| <seg id="194"> Was nun interessant ist bei all diesen Herausforderungen , ist wie wir diese verschiedenen Verarbeitungsschritte zusammenziehen in weitere Systeme oder Komponenten , die auf mehreren solchen Informationsquellen aufsetzen . </seg> |
| <seg id="195"> Wenn ich zum Beispiel mit einer Kamera Bilder aufnehme , dann kann ich natürlich auch mit Mikrophonen den Sprecher versuchen zu identifizieren . </seg> |
| <seg id="196"> Ich kann Mikrophonen versuchen zum Beispiel , die Zähnezu beschreiben , es ist ja im Raum nicht nur ein Sprecher sondern das Telefon mag ja klingeln oder die Türe geht auf und zu , die TTüresschlägt, die TTürekann man auf und zugehen sehen . </seg> |
| <seg id="197"> Und all diese Information ist natürlich relevant um dann zum Beispiel Aktivitäten in einem Raum zu erkennen . </seg> |
| <seg id="198"> Nun auch das haben wir jetzt mit diesen Sachen machen können . </seg> |
| <seg id="199"> Das ist hier ein Video von einem activity analysis oder activity detection . </seg> |
| <seg id="200"> Was Sie hier sehen ist im Wesentlichen die Aktivitäten in mehrerer unserer Büros . </seg> |
| <seg id="201"> Then we switch to English so everyone can follow here . </seg> |
| <seg id="202"> So what you see here is the activity analysis that cameras and microphones perform in our offices in our building . </seg> |
| <seg id="203"> So several students agreed by the way , this was not done without their consent of course . </seg> |
| <seg id="204"> But they agreed that they had microphones and cameras in their office and these microphones and cameras would then detect whether people are busy , whether they 're holding a meeting , whether they 're doing desk work whether they're discussing with each other whether nobody's in the in the office and so forth . </seg> |
| <seg id="205"> So you see here for example someone at their desk , you see that the desk work is the most likely hypothesis at this point . </seg> |
| <seg id="206"> And then in a moment you'll see someone comes in the in the door and starts a discussion with a person and as you see , all the sudden , the likelihood of a meeting increases more than the desk work . </seg> |
| <seg id="207"> Likelihood as people are discussing . </seg> |
| <seg id="208"> And all of this is done by a combination of microphone and images that can be collected and detected here in the room . </seg> |
| <seg id="209"> So , of course , what do you need such things for ? </seg> |
| <seg id="210"> Having such devices that can tell can analyze activity can then be useful for example for robots when you're interacting with that robot or when you're simply speaking with other people and a robot or in meetings when you want to provide the helpful service . </seg> |
| <seg id="211"> Here's another integration of such capabilities into a robot that is being talked to by one of the researchers and the robot performs certain actions based on dialogue and recognition of the human pointing to certain objects for example and giving certain instructions . </seg> |
| <seg id="212"> Now , if we want to recognize speech one of the difficulties of speech recognition is enhanced by the fact that a a speech event such as a lecture is in fact very difficult to to recognize . </seg> |
| <seg id="213"> This is a database called the translingual English database TED or sometimes affectionately also referred to as the terrible English database because it was recorded at an international conference and everybody at that international conference spoke with a different accent . </seg> |
| <seg id="214"> And hence the sound quality is rather poor and the and the error rates and you can see the error rates are relatively high we can see that foreign accented speeches are really difficult particularly if the microphone situation is poor and the speaking style spontaneous . </seg> |
| <seg id="215"> So if you put this together , you see that speech recognition is actually particular difficult because of the level of spontaneity . </seg> |
| <seg id="216"> So if I speak for example a lecture , I speak that lecture spontaneously and in the midst of it I have hesitation uhs and uhms , ill-formed sentences . </seg> |
| <seg id="217"> And on top of it the topic of discussion is very special . </seg> |
| <seg id="218"> A lecture is always about a special technical topic and will not be covered by the typical vocabulary from broadcast news . </seg> |
| <seg id="219"> So hence one of the things to notice here is that if you have a broadcast news recognizer that recognizes the the news anchor in a broadcast news TV or radio program our error rates today are fairly good . </seg> |
| <seg id="220"> We can recognize TV programs with better than ten percent error rate , lower than ten percent error rate , maybe five percent error rate . </seg> |
| <seg id="221"> But if we go and record open meetings where anyone discusses with any other other person and we use microphones that are not close talking like the one I'm wearing here but maybe such tabletop microphone , the error rate is dramatically higher , as you can see . </seg> |
| <seg id="222"> In such a case your error rates might be as high as fifty percent . </seg> |
| <seg id="223"> So still there is a lot of open questions in speech recognition today in terms of how to do this successfully . </seg> |
| <seg id="224"> Now , why is it ? </seg> |
| <seg id="225"> Part of the reason is of course the special vocabularies in special lectures but the other one is that also lectures and spontaneous speech is rather ill-formed and sloppily spoken . </seg> |
| <seg id="226"> So if you have for example a recording of a meeting . </seg> |
| <seg id="227"> So this was from a real meeting at Carnegie Mellon where the speaker said I think you were saying that they tried to influence but if you do the recognition by a machine , you get rather poor results . </seg> |
| <seg id="228"> Now , if you give the same speaker who said that sentence in the meeting the same microphone and a manual transcript of what she said in the meeting and you ask that speaker to reread her own sentence , so you ask her to read the sentence she spoke in the meeting it sounds like this and you can see actually that the said automatically recognized sentence below is much better than above here . </seg> |
| <seg id="229"> The error rate over lots of data of such reread spontaneous discourse is only half of the error rate of the actual spontaneous discourse . </seg> |
| <seg id="230"> So reread speech and spontaneous speech are dramatically different in terms of difficulty in speech recognition , and the reason of course is that when people speak spontaneously they swallow a lot of words or they skip a lot of words . </seg> |
| <seg id="231"> Now distant microphones is another challenge . </seg> |
| <seg id="232"> If I speak into a close-speaking microphone a recognition both of speakers as well as speech is much easier than if I have a remote microphone on the table or on the wall . </seg> |
| <seg id="233"> Now , back to the question we raised initially . </seg> |
| <seg id="234"> If we have a system that tries to do all of this recognition of faces , recognition of focus of attention , recognition of speech , recognition of speaker et cetera , et cetera we need to make progress based on solid benchmark performance . </seg> |
| <seg id="235"> One way that is being done is by having these Olympic Games of speech recognition or visual perception et cetera . </seg> |
| <seg id="236"> And this is being done now in two workshops that take place every year where people get a unknown or a secret test sequence from an actual meeting . </seg> |
| <seg id="237"> And different groups then try to do the speech recognition or the focus of attention tracking and so on . </seg> |
| <seg id="238"> And the algorithms of different research teams is being evaluated and then at the workshop the results are being discussed and the winner of the contest is then of course shown and that has provided tremendous speed up in performance or improvements so in some sense that we now can actually answer these questions , who was there , where are they , where are they going to , with actual numbers and actual performance numbers where we know seventy-six percent of the time or eighty percent of the time we can tell where the person is or who the person is . </seg> |
| <seg id="239"> And we can also track progress and know how much better these algorithms are getting as we go . </seg> |
| <seg id="240"> Now many of these things are obviously difficult so each of these individual processing algorithms are very hard to develop and to get right and to get good performance and one of the things I 'd like to stress with this slide is if we're actually looking at open environments like lectures and seminars we're actually also looking at realistic real data . </seg> |
| <seg id="241"> Many of the databases in the past have been artificial . </seg> |
| <seg id="242"> -- people sit directly in front of the camera and then you get the face photographed or something . </seg> |
| <seg id="243"> But in this data that we're looking at it's much harder because people turn their head they put their hand in front of their faces they talk sloppily , it is real data as humans talk with other humans . </seg> |
| <seg id="244"> Now let 's turn back to the question of Chil services . </seg> |
| <seg id="245"> Whatat can we do with all of that ? ? ? ? ? ? </seg> |
| <seg id="246"> What kind of services can we build ? </seg> |
| <seg id="247"> And I think you will see that that is actually a lot of fun to build these interesting systems . </seg> |
| <seg id="248"> So we talked about the connector and for this I wanna show you a video . </seg> |
| <seg id="249"> OK , this is the way the world is now , but now let's see how it could be . </seg> |
| <seg id="250"> Also Sie sehen , you see how a connector service could help in social relationships . </seg> |
| <seg id="251"> Well let me turn to yet one more issue that's interesting us , which is of course also the information delivery . </seg> |
| <seg id="252"> So what you've seen here is human-human support or computer support for human-human interaction in a situation where it is basic devices that we have but we can also be more creative and come up with devices that can be delivered only in a very private way . </seg> |
| <seg id="253"> So if you want to for example have delivery of personalized information , we could for example have steerable projectors beaming information in front of someone . </seg> |
| <seg id="254"> Or this in an intriguing device , it's a heads-up display in glasses . </seg> |
| <seg id="255"> So you could for example if it 's embarrassing to be reminded of the name of someone whose name you forgot you could just have it discretely beamed into the glasses saying you know this is Muntz and Colds and he works for you and so you should remember his name . </seg> |
| <seg id="256"> Or you know this is a very good friend of mine , who you met just yesterday and your senility has caused you to forget . </seg> |
| <seg id="257"> So all of this would be really nice if it's done discretely , so having for example private information delivery in your glasses or a targeted audio device that beams an audio beam to your ear so that only you can hear it and we will show that to you in a moment in the context of the translation services . </seg> |
| <seg id="258"> Now yet one more thing I'd like to show you is something rather wild . </seg> |
| <seg id="259"> Let's suppose you're in a meeting and a phone call comes in and it is really urgent that you communicate with that person on the er end but you cannot speak because it disturbs the neighbors and it may be also a personal type thing you want to say . </seg> |
| <seg id="260"> Then wouldn't it be nice if you could speak over the telephone silently or quietly so that the other people don't hear it ? </seg> |
| <seg id="261"> So how could that be done ? </seg> |
| <seg id="262"> That should be like science fiction . </seg> |
| <seg id="263"> Well , we 're working on some electromyographic speech recognition , this is now actually work that is being taken over by Tanja Schultz who has just started this summer semester here at our faculty . </seg> |
| <seg id="264"> It is work where the idea is to recognize the muscle movement of the cheek as someone formulates sentences without speaking them out aloud . </seg> |
| <seg id="265"> So if I move my mouth I'm moving my mouth but if you're not a good lipreader you can't tell . </seg> |
| <seg id="266"> What I said and it's certainly not loud . </seg> |
| <seg id="267"> But with electromyographic signals we can actually recognize what the person has said and then produce a synthetic nsmission over the telephone line . </seg> |
| <seg id="268"> Something wrong with the video . </seg> |
| <seg id="269"> There was a problem with the video . </seg> |
| <seg id="270"> But you get the idea , he's moving his mouth and that motion is being recognized in terms of the words that were spoken and the words are being transmitted over the telephone channel in terms of voice so that the other person on the other end actually hears the voice of the sentence that was spoken . </seg> |
| <seg id="271"> So it is possible , potentially , in the future to be able to sit in a meeting and carry on a conversation without anyone else hearing it . </seg> |
| <seg id="272"> That will be terrible because then I will have here an audience full of people who talk to their friends and well whatever . </seg> |
| <seg id="273"> -- the next and last issue that I 'd like to show you is language support across languages . . . . . . . . . . . . . . </seg> |
| <seg id="274"> So whatat can we do if we want to bridge the linguistic divide ? ? ? ? ? ? ? ? ? ? ? </seg> |
| <seg id="275"> Now this is of course a very interesting issue today because what we have is in today's world many languages being spoken , people participate in different activities in trading and lecturing and in many they work with partners in other countries and obviously that's important because we do increasingly communicate and work with people in other countries but at the same time we have to then speak some language that we can't understand . </seg> |
| <seg id="276"> So the common solution today is of course everyone learns English and everybody speaks English to each other but that's a problem in another respect is the cultural diversity and then also the the detail and the goodness of communication can suffer and so what you 'd like to be able to do is have something that can bridge this language divide and allow people to communicate with each other in their own language without actually losing that language identity . </seg> |
| <seg id="277"> And the question is can we do that ? </seg> |
| <seg id="278"> Can we do this with technology ? </seg> |
| <seg id="279"> Now if we ask the question why is this hard ? </seg> |
| <seg id="280"> Why is in fact language translation or language communication difficult ? </seg> |
| <seg id="281"> Then it becomes clear that it is ambiguity that's causing it . </seg> |
| <seg id="282"> And we have already seen in the speech recognition lecture that speech recognition is difficult but translation is difficult as well and you see up here many of the typical jokes that people have been telling about illustrating how hard it is to recognize , to translate and how to process language . </seg> |
| <seg id="283"> So if you for example wanted to translate the spirit is willing but the flesh is weak it could be misunderstood or mistranslated as the vodka is good but the meat is rotten . </seg> |
| <seg id="284"> Syntactically time flies like an arrow has six different interpretations or six different parses . </seg> |
| <seg id="285"> Stunning because we would only think of one . </seg> |
| <seg id="286"> Phonetics can be highly ambiguous , give me a new display could be give me a nudist play . </seg> |
| <seg id="287"> It 's the same acoustic phonetic string but it 's a different word sequence . </seg> |
| <seg id="288"> And you've seen already that we're dealing with some of that obviously with statistical language model models or with statistical models that assign a likelihood to each of these different hypotheses and bringing them out as the better solution . </seg> |
| <seg id="289"> And so this is how we deal with it because each of these modules obviously would be compounding the errors if we 're stacking them behind one another . </seg> |
| <seg id="290"> And so a successful model always has to involve probabilistic uncertainty or deal with probabilistic with uncertainty in a probabilistic way . </seg> |
| <seg id="291"> And it has to work with a variety of hypotheses and pull out the correct one by using subsequent knowledge sources as it goes down the chain of processing . </seg> |
| <seg id="292"> Now having said that , obviously , that provides possibilities and I'd like to describe some of them in the remaining time that we have and also alert you to some of the scientific research challenges that are still ahead that we're working on . </seg> |
| <seg id="293"> First of all , conversational speech . </seg> |
| <seg id="294"> How do you translate a sentence like this ? </seg> |
| <seg id="295"> This is an actual recording we made in our laboratory of somebody speaking German in the context of a appointment scheduling . </seg> |
| <seg id="296"> The first thing that is noticeable is there are these uhs and uhm and Schmatzen et cetera in between , the recognizer has to deal with these sounds . </seg> |
| <seg id="297"> Second thing you will notice is there 's no punctuation because people don't say in the middle of the sentence comma , period question mark . </seg> |
| <seg id="298"> But they will simply speak continuously and naturally as they do . </seg> |
| <seg id="299"> Now if you do put punctuation in it artificially and you remove the uhs and uhms and you feed it to traditional , classical translation systems you'll see that the output sometimes still is garbage . </seg> |
| <seg id="300"> It is a highly disfluent thing that needs to be suitably interpreted and often what people wanted to say was actually rather simple and the way they expressed it was very complicated and confuse . </seg> |
| <seg id="301"> So this is the nature of human interaction and hence our solutions have to take take care or take account of that . </seg> |
| <seg id="302"> Just briefly we have a consortium for speech translation advanced research that was founded in nineteen ninety one . </seg> |
| <seg id="303"> It's a consortium that has now many partners around the world that are working on speech translation and speech translation now has become from we were the first laboratory in Germany to actually ever build a speech translator and show it to the public and our partner lab in Pittsburgh was the first one to do that in the US . </seg> |
| <seg id="304"> But now it is the largest funded research area in this whole area of speech and language translation speech translation , speech to speech translation . </seg> |
| <seg id="305"> So what do we need to do ? </seg> |
| <seg id="306"> First of all , we need to realize that there are certain challenges namely how do I deliver a translation capability how do I overcome performance limitations in face of noise , errors and disfluencies ? </seg> |
| <seg id="307"> And then how do I deal with large domains and scope and many different languages ? </seg> |
| <seg id="308"> Two approaches are have been proposed in the past , there is the so-called interlingua approach which which works by analyzing a sentence and decoding it in terms of some semantic representation and producing an output or a statistical translation approach where the mapping is done directly with a source channel model , very similar to the kind of models that we have seen in this lecture for speech recognition . </seg> |
| <seg id="309"> So in the history of speech translation work started in the late eighties , early nineties and lead up to this first demonstration of speech to speech translation systems . </seg> |
| <seg id="310"> They were rather limited in vocabulary and speaking style so that the next ten years were concerned with translating spontaneous speech , so that you could say a sentence fluently the way people say it but still with a limitation of domain . </seg> |
| <seg id="311"> So these were systems that would then recognize sentences that you speak into a system in a particular domain , so if it is for example at the doctor's office you can say I have a headache or something but you cannot discuss seventeenth century French literature . </seg> |
| <seg id="312"> So it would be something that you then do only in a particular domain for discussion with people in the field . </seg> |
| <seg id="313"> That was work done in the nineties all the way to two thousand , two thousand one and it is these kinds of systems that are now being expanded in actual fieldable systems that are being deployed either for tourists or humanitarian humanitarian services or situations or government peacekeeping police and military so these are all situations where a simple or limited domain is sufficient . </seg> |
| <seg id="314"> You don't want to discuss everything possible but you want to still be able to carry out a dialogue naturally . </seg> |
| <seg id="315"> These systems have become smaller and smaller , I can show that to you later if you come up here . </seg> |
| <seg id="316"> We have these systems now on PDAs and you can say a certain sentence for a tourist in one language and it comes out by speech in another . </seg> |
| <seg id="317"> Now we can combine all of this into what we call wearable language assistance . </seg> |
| <seg id="318"> So this would involve the navigation , information access document translation and dialogue translation , so again if a human is in a foreign situation , it is many challenges that come together , recognizing the road signs , recognizing the getting navigation information getting information about the locale and translating dialogue . </seg> |
| <seg id="319"> We have a video for that but I don't have time so let me skip this one and if we have time , I can show it to you later . </seg> |
| <seg id="320"> This is something that I had a chance to participate on . </seg> |
| <seg id="321"> Some of our systems in health care translation , so it's basically the laptops and PDAs that translate doctor-patient dialogues . </seg> |
| <seg id="322"> And we could actually use them in a exercise that took place in May this year where there was a coalition forces exercise done in villages and jungle of Thailand where the US government Singapore , Japan Thailand and some participation from Indonesia teamed up to provide health care for people in remote villages just on a single day , so it was advertised and thousand people would come in the morning and by the nd of the people had their teeth taken care of and got new eye glasses et cetera . </seg> |
| <seg id="323"> It was a remarkable exercise in very short time to bring health care to so many people and obviously in this situation language is a problem in how do you communicate between a local villager in Thailand and the and the doctor , American doctor or English speaking doctor and this is what was done here . </seg> |
| <seg id="324"> Another thing that's interesting is sign translation . </seg> |
| <seg id="325"> Here , too , we developed devices that work on PDAs with cameras on top where you can go to China and take a picture of a Chinese road sign and then see the image of the road sign or the image the program would extract the text from the road sign and do the OCR , take a character recognition and then translate that sentence into English and put it in the image so that you can tell what the road sign says . </seg> |
| <seg id="326"> I actually myself collected this database in China , we took thousands of pictures of road signs in China and when we came back we actually found some curious and funny examples where translation really would have been helpful like this one in the middle where you can see that the Chinese sign actually said no entry for tourists . </seg> |
| <seg id="327"> So here I was in China and couldn't read the sign -- that was forbidding me to enter . </seg> |
| <seg id="328"> Now the last challenge is domain unlimited speech translation . </seg> |
| <seg id="329"> These devices still are limited to particular domain . </seg> |
| <seg id="330"> You can say it anywhere you like but it's only health care or only tourism . </seg> |
| <seg id="331"> But what if I want to translate my lecture to you from English into Spanish ? </seg> |
| <seg id="332"> There's a number of applications like that , translation of radio broadcast , translation of lectures and speeches translational parliamentary speeches telephone conversations , meetings they're all domain unlimited , we cannot limit the systems here in domain . </seg> |
| <seg id="333"> And we have to make sure that a delivery is found that is suitable for the situation . </seg> |
| <seg id="334"> So if you want to have translation of this lecture I need to be able to do domain unlimited speech translation and you need to have it somehow delivered privately because if only you don't understand English you want to have your personal translator into Spanish or Chinese . </seg> |
| <seg id="335"> But everybody else wants to hear it in English , let's say and in this case you don't want to disturb everybody with a loudspeaker that speaks Chinese into the room . </seg> |
| <seg id="336"> But you want to have something selective . </seg> |
| <seg id="337"> Now can we do this ? </seg> |
| <seg id="338"> There's speech recognition for different genres , it turns out lectures if they are in general , as we've seen before , it 's very hard . </seg> |
| <seg id="339"> Word error rates , are still around thirty percent speaker independent meetings is very hard but if it's a domain if it's domain-unmlimited , however speaker adapted to a particular speaker we can actually get reasonable recognition error rates for this task . </seg> |
| <seg id="340"> So in one other EC project called TC Star we worked on this issue where the idea was to translate parliamentary speeches in the European Parliament from English into Spanish and German and so forth with rather surprisingly good success . </seg> |
| <seg id="341"> These are some translation results measured in terms of so-called Bleu-scores where translation is being done on the actual transcripts of these parliamentary speeches . </seg> |
| <seg id="342"> And a good performance could be obtained . </seg> |
| <seg id="343"> Now . </seg> |
| <seg id="344"> Here , something worthy to note is that these statistical systems that have been developed in this context are already substantially better than some of the commercial translation systems that you can buy outside . </seg> |
| <seg id="345"> OK . </seg> |
| <seg id="346"> Now , how good do these systems work ? </seg> |
| <seg id="347"> Let me get to that actually after we show you the actual demonstration . </seg> |
| <seg id="348"> Lecture translation is of course an extension of something like parliamentary speeches , parliamentary speeches are still a rather general topic of discussion . </seg> |
| <seg id="349"> But if I give a lecture on a particular technical topic it is of course much more specific . </seg> |
| <seg id="350"> And we want to show that to you here on that screen over there if you would just engage our lecture translator while I'm giving the lecture . </seg> |
| <seg id="351"> Then we should be able to see up there the automatic recognition of what I'm saying in the upper screen while at the screen below the automatic translation into Spanish . </seg> |
| <seg id="352"> Now notice that this is now no longer limited to a particular domain , but we have full translation of open domain speech the way I produce it here in front of you . </seg> |
| <seg id="353"> Now these lecture translation systems were applied to talks in the European Community or in the European Parliament but in order to apply them to this type of lecture we obviously had to introduce also special vocabularies and the system had to be adapted to the typical kind of lecture that you might see in a technical lecture at a university . </seg> |
| <seg id="354"> Now the university lectures that we're trying to translate here are obviously now done into Spanish , but we're working on versions in German and in Chinese . </seg> |
| <seg id="355"> And the goal is at some point to provide you as students automatic translation of lectures done here at the University of Karlsruhe or at the other universities in our Interact joined center . </seg> |
| <seg id="356"> So future semesters of this class may potentially get simultaneous translation services into multiple languages , either English or German and perhaps Chinese and find it perhaps a useful addition to the the lecture presentation . </seg> |
| <seg id="357"> Now how is this done ? </seg> |
| <seg id="358"> Again it's open domain , open vocabulary speech translation but we have to deal with spontaneous speech disfluencies . </seg> |
| <seg id="359"> And that can only be done by applying statistical learning algorithms much the way you have learnt in class in the speech recognition and language processing lectures of this of this class . </seg> |
| <seg id="360"> Now another thing we have to worry about is of course delivery . </seg> |
| <seg id="361"> For example right now I find it disturbing that you're all watching over there and I cannot show my slides , no one is paying attention to my lecture but everybody is paying attention to that text . </seg> |
| <seg id="362"> And that's obviously not a good idea for a lecture when the whole point of the lecture is to transmit an idea and not to wow you or to impress you with particular displays on another screen . </seg> |
| <seg id="363"> So how can we do that ? </seg> |
| <seg id="364"> We already mentioned the targeted audio device . </seg> |
| <seg id="365"> That is a device that is a audio speaker that produces a rather straight or narrow targeted beam of audio in a particular direction that it's pointing to . </seg> |
| <seg id="366"> Like the Spanish translation of this lecture right now you should be hearing out of this loudspeaker . </seg> |
| <seg id="367"> I cannot hear anything here . </seg> |
| <seg id="368"> So I'm not sure if it's working . </seg> |
| <seg id="369"> Is it working ? </seg> |
| <seg id="370"> Can you hear it ? </seg> |
| <seg id="371"> So it is really a remarkable piece of technology that was developed by Daimler Chrysler in the context of the Chil project which delivers a beam of audio only in a particular direction in the audience or in the room . </seg> |
| <seg id="372"> I cannot hear it for example right now . </seg> |
| <seg id="373"> But as the loudspeaker comes around and gets to you you will hear in your ear their talking and giving you the Spanish translation of this lecture directly to you . </seg> |
| <seg id="374"> So you can imagine because if the loudspeaker goes away you don't hear it you'd hear just a regular lecture so in future it may be possible to have the Spanish section over here , the Chinese section over here , the Germans here and the English speakers here . </seg> |
| <seg id="375"> And you all get an individual acoustic presentation of the lecture as the loudspeaker is beaming into different parts of that room . </seg> |
| <seg id="376"> So that's exciting . </seg> |
| <seg id="377"> We can actually do simultaneous translation into different languages in an audience and have several people in the audience hear the lecture in their own language without actually speaking the or understanding the lecture of the lecturer . </seg> |
| <seg id="378"> The other possibility is of course these translation goggles , these again are goggles with the translation text or the text of the translation being displayed into your personal goggles and you put them on and while you're listening to a lecture in one language , you see translations much like captions in the movie in your own goggles to follow the lecture . </seg> |
| <seg id="379"> We have such a device , we didn't bring it today but we're experimenting with these types of devices as well . </seg> |
| <seg id="380"> Well last not least , of course one of the things we can also do is to try to imagine a world in which you produce speech in a foreign language without speaking it . </seg> |
| <seg id="381"> So one set of experimentations that we have done is to combine it for example with the ENG recognition , so if you could recognize speech just by moving your lips then maybe we can also produce output in another language . </seg> |
| <seg id="382"> So in the future maybe you can travel to another country move your lips in German and out comes Chinese and if you believe that at some point it could be perhaps implanted in your cheek or an earring or something then you might be able to simply turn your mouth into foreign language mode and produce speech in another language . </seg> |
| <seg id="383"> So this in another one from this is from discovery channel , by the way . </seg> |
| <seg id="384"> OK . </seg> |
| <seg id="385"> So this was very entertaining when they came and visited us there's is a couple of TV shows like that , there was one also done by German channel but I don't have time to show them to you all . </seg> |
| <seg id="386"> Let me leave you with a last thought that I think is also exciting and interesting and opens up possibilities for lots of interesting research . </seg> |
| <seg id="387"> Much of the research in some sense we're at interesting time where this type of translation across language boundaries is starting to work . </seg> |
| <seg id="388"> It's starting to work in a number of different situations that people encounter if they're going to foreign countries . </seg> |
| <seg id="389"> And we can now actually even get to domain unlimited speech and translate it . </seg> |
| <seg id="390"> So this is obviously remarkable and wonderful because we can really begin to communicate freely with people speaking different languages . </seg> |
| <seg id="391"> However there's one big challenge still remaining which is the large number of languages in the world . </seg> |
| <seg id="392"> Most of the translation systems and systems that you've seen here were all developed in basically about five languages . </seg> |
| <seg id="393"> And these languages are either very populous many people living in those countries very rich countries that can afford big research programs or considered dangerous , so research is being being done and so there's only four five or six languages that are being being researched Chinese for example , Spanish , English German , Japanese , Korean , French in those languages there's very active and vibrant language and speech processing programs under way where these types of technologies become reality . </seg> |
| <seg id="394"> What about the rest of the world ? </seg> |
| <seg id="395"> There are six thousand languages in the world and with five languages or ten languages we're not going to cover much of that wealth of languages . </seg> |
| <seg id="396"> So one important research direction also that we're doing at both Karlsruhe as well as Carnegie Mellon is to look at this long tail of language . </seg> |
| <seg id="397"> How can we in fact take these technologies and lower the cost and lower the barrier of entry and essentially develop translation and speech processing technology faster for these other languages . </seg> |
| <seg id="398"> And there are a number of ideas that can be proposed to this . </seg> |
| <seg id="399"> Several research themes , several we have a couple of PHD theses and Diplomarbeiten that are addressing this problem of how to lower the cost of of developing systems in those languages and how to in fact produce technology with much much fewer resources in those other languages . </seg> |
| <seg id="400"> So unfortunately we don't have time to go into all of these and again the whole goal of this lecture was not to give you a detail of all the possible research directions that come from all of the things we've seen today but to give you an impression of what wealth of possibilities there are if we think about these cognitively aware and cognitive processing systems that begin to process and observe and interpret the world that we're living in as well as the interaction between human beings . </seg> |
| <seg id="401"> So there is a large number of really interesting potential services that become possible if we have such systems around us . </seg> |
| <seg id="402"> So needless to say , there is lots of possibilities for Diplomarbeiten , Studienarbeiten , dissertations and all of this each one of these corners of the things we touched on today really provides a number of potential research projects to do . </seg> |
| <seg id="403"> So I hope that one or the other people among you might be interested in the future to work with us . </seg> |
| <seg id="404"> If you 're interested in any of these projects come talk to us so that we can can so that you can participate in these interesting activities . </seg> |
| <seg id="405"> Before I go let me point out that there is some information material that you're welcome to take along . </seg> |
| <seg id="406"> We have on Thursday a so-called Chil technology day . </seg> |
| <seg id="407"> Many of the systems you've seen here today will be demonstrated to visitors from industry as well as visitors from the European Community this Thursday in the EATB as well as in our building at Fasanengarten and I think there should be a program here or programs , yes , we have leaflets of programs that you can get information on this Chil technology day as well information about our Interact center if you might be interested in one of the exchanges . </seg> |
| <seg id="408"> So with that I hope we have given you a little bit of overview I hope you will do well in the Kognitive-Systeme-Klausur . </seg> |
| <seg id="409"> My advice to you for the Klausur is do the Übungen , do the homeworks . </seg> |
| <seg id="410"> If you haven't done them yet start them now because there is a high correlation between failure and not doing the Übungen . </seg> |
| <seg id="411"> So those people who do the Übungen do well in the Klausur and those who don't don't . </seg> |
| <seg id="412"> So please , please , please do the Übungen do the exercises . </seg> |
| <seg id="413"> And I hope that you'll all manage to do well in the final exam . </seg> |
| <seg id="414"> Thanks very much . </seg> |
| </doc> |
| <doc docid="lecture0002" genre="Lecture"> |
| <seg id="1"> Okay , thank you very much . </seg> |
| <seg id="2"> So , I'm going to present some work that I did on porting phoneme based speech recognition systems to new languages supported by articulatory feature models . </seg> |
| <seg id="3"> So just another to motivate the work in porting to new languages in general . </seg> |
| <seg id="4"> If we look at the languages that exist in the world today there are about five to seven thousand different languages which are still living and are being used in today's world and that is , of course , a large number and one of the interesting facts about that large number is that the vast majority of these languages are only spoken by a very small population . </seg> |
| <seg id="5"> If you look at the languages with one million speakers or above . </seg> |
| <seg id="6"> They are spoken by about ninety-six percent of the population . </seg> |
| <seg id="7"> So , roughly three hundred fifty to four hundred fifty languages are spoken by ninety-six percent of the population . </seg> |
| <seg id="8"> All the other remaining ninety-five percent of the languages are only spoken by six percent of the population . </seg> |
| <seg id="9"> Currently we are experiencing some of the fact that languages are frequently dying . </seg> |
| <seg id="10"> It is a trend that has started in the past , so over the past two hundred years linguists were able to show that languages have started to die but it seems that this general trend is increasing and linguists estimate a very pessimistic estimates that within a few generations up to ninety percent of all of the living languages today might have died out on the technical side , if we look at automatic speech recognition systems or natural language processing systems in general they only exist with only a fraction of these languages in the world . </seg> |
| <seg id="11"> So common saying nowadays is languages which were addressed are rich with a large number of speakers are dangerous . </seg> |
| <seg id="12"> So I put it as politically irrelevant , which is a little bit nice , I would say but these are mainly the languages that natural language processing systems have been developed for and interestingly if you look at the work of linguists linguists themselves from a non-technical point of view have also mainly worked on the main languages but very little work has been done on exploring many of the minority languages that exist in the in the world . </seg> |
| <seg id="13"> So , when we look at the technical side of training an ASR system training it requires large amount of datas . </seg> |
| <seg id="14"> Statistical methods just try to process as much preferably manually annotated audio data in order to gather necessary statistics to estimate the statistical models . </seg> |
| <seg id="15"> And , at the same time , ASR system now for actually a couple of years performed so well that they are being used in real-life applications . </seg> |
| <seg id="16"> If you look at this project , the industry partners are starting to use ASR systems even now in their systems and are thinking about how to use them . </seg> |
| <seg id="17"> If you look at the market there are many products now out there that make use of automatic speech recognition system and nowadays also in combination with translation systems but these systems only exist for the large languages in the world . </seg> |
| <seg id="18"> So , what we see here is that we are actually in danger of creating a digital divide just like that say the access to the Internet that is not available to everybody might create a digital divide on the information access size side , here we are in danger of creating a digital divide when it comes to accessing different or other languages in an automatic way . </seg> |
| <seg id="19"> For example , if you look at translation systems , we are currently running into the danger that translation systems are only available for the major languages and one of the reasons why languages are actually dying out at such a rapid rate is the fact that many speakers switch to languages which seem to them more advantageous so have economic advantages , advantages of political or social status and these are not only languages which are spoken in one remote village in the jungle , these are actually well known languages just as Gaelic or even nowadays Irish is considered to be on the path to extinction because people switch to English because we think it advantages to them . </seg> |
| <seg id="20"> So , the idea is it would be nice to keep up the language diversity in the world to have many different languages just similar , let's say , to biological diversity in order to have an healthy environment where languages can evolve and evolve well and , with them , the cultures of the people , which are closely linked to languages , can also evolve very well . </seg> |
| <seg id="21"> So , if we look at the way we said we need large amounts of annotated audio data in order to train the acoustic models of ASR systems if we want to address all languages in the world , or at least very many languages in the world this traditional approach most likely is not possible . </seg> |
| <seg id="22"> It is too expensive too time consuming when it comes to developing these systems . </seg> |
| <seg id="23"> So people now for quite a long time have started to look at how can we equate ASR systems in new languages in a cheaper way which can be applied to many more languages than it is done today . </seg> |
| <seg id="24"> So , one of the work by Tanya and Alex was the use of multilingual acoustic models to order to address this task of porting speech recognition systems to new languages with possibly little overhead or in a very fast and cheap way and cheap in that sense that we don't need much time and money and they define multilingual automatic speech recognition systems and systems that are able to recognize many languages simultaneously that was seen during training and then , as a next step these acoustic models could then be applied to new languages and one of the techniques developed by Tanya is this technique multilingual mix where you train acoustic models on multiple languages and you share the acoustic models across languages based on phoneme identity . </seg> |
| <seg id="25"> So , the idea behind that is , you have an annotation scheme , for example Ipa which notes all phonemes in the world in different languages in the same way and you can say that two phonemes in different languages which are represented by the same symbol sort of almost sound similar . </seg> |
| <seg id="26"> So you can train common models with them by sharing the training data from all the languages . </seg> |
| <seg id="27"> And the hope is if you have such an multilingual model if you include many languages in the training of course the well known ones that you already have ASR systems for that you then get an acoustic model which almost or even completely covers the acoustics of a new language . </seg> |
| <seg id="28"> In reality , there is still a clear drop in performance if you do multilingual modeling on the training language , and , also , if you apply it to a new language but , it is a really good starting point for initializing a acoustic model in a new language that you then can adapt with only few adaptation material in order to get a usable ASR system . </seg> |
| <seg id="29"> So , instead of collecting large amounts of data you now only need to collect fewer amounts of data in order to reach a similar performance . </seg> |
| <seg id="30"> And , in that way , that is the way multilingual acoustic modeling can be used for porting speech recognition systems to new languages with less effort . </seg> |
| <seg id="31"> So , this picture just illustrates the ML-Mix notion . </seg> |
| <seg id="32"> So , on the left side you have the traditional monolingual recognizer with the models for every language . </seg> |
| <seg id="33"> They have separate models here . </seg> |
| <seg id="34"> This is the model for the middle state of an M phoneme , so all the four languages have their separate models and then for ML-Mix , you pull the training data from all languages , and you train one one single Gaussian-Mixture-Models for these languages because you claim that an German M , Japanese M , etcetera they all basically sound the same . </seg> |
| <seg id="35"> Or at least similar enough so that they can be modeled by one single model . </seg> |
| <seg id="36"> So , also , in the past people have started to look at acoustic models which are different from phonemes . </seg> |
| <seg id="37"> So , and because people felt that phonemes are not necessarily possible to really capture all the effects that you have in speech when it comes to acoustic modeling especially researchers felt that for spontaneous speech the strict phoneme sequence that you use in order to describe the pronunciation of words does not account for all effects that you have in spontaneous speech such as elision of phonemes or the fact that phonemes who consist of different articulatory features not every articulatory feature is reached at the same level of accuracy , depending on whether you speak sloppy or depending on the context that a phoneme may occur . </seg> |
| <seg id="38"> So , one of alternative models that people have looked at is the use of articulatory features . </seg> |
| <seg id="39"> So , an articulatory feature as we are uses in this work is sort of a description of the articulatory targets that are being reached by the articulators during the articulation process . </seg> |
| <seg id="40"> So , for every phoneme , for example Ipa describes certain targets , so whether a sound is voiced or unvoiced , whether it is a vowel or consonant whether it is for example a plosive or not a plosive whether the dorsum of the tongue reaches a certain position during articulation . </seg> |
| <seg id="41"> These are the kind of articulatory features that we use when we hear talk of them , so place and manner of articulation . </seg> |
| <seg id="42"> And , Florian-Metze has done work on that in the past , where he showed in the monolingual case , so if you work on a well-known languages and you combine phonemes and models for these articulatory features and he showed improvements when doing this kind of combination . </seg> |
| <seg id="43"> And , in order to do that you need models for your articulatory features and , what he used were binary features . </seg> |
| <seg id="44"> So , for every articulatory feature that you define you train two models , one for detecting its absence , and one for detecting its presence . </seg> |
| <seg id="45"> So , for the feature voiceness , you have one model says whether sound is voiced and you have another model that says whether feature is unvoiced and we used Gaussian-Mixture-Models for that , with one hundred twenty-eight Gaussian components . </seg> |
| <seg id="46"> So , if you want to do frame-wise classification , you can easily build a Naive-Bayesian classifier using these two models but when incorporating that now in continuous speech recognition Florian used a stream-based setup where you combine the phoneme models and the articulatory feature models at the stage where you calculate the emission probability . </seg> |
| <seg id="47"> So here just is an overview of the Ipa table . </seg> |
| <seg id="48"> So , what you can clearly see now , for example that every phoneme in that table is sort of described as a combination of different articulatory features . </seg> |
| <seg id="49"> So , a phoneme actually is only a shortened for such a bundle of articulatory features , such as it is a fricative , it is labiodental and voiced , for example . </seg> |
| <seg id="50"> That would give you that would these bundle or vector of articulatory features you would then abbreviate as a phoneme . </seg> |
| <seg id="51"> And , when it comes to the stream based setup this figure illustrates the stream based setup . </seg> |
| <seg id="52"> So stream zero usually is our phoneme models just like you have in the phoneme based recognition and when you now want to calculate the emission probability of a state in your HMM it is not just anymore the emission probability from the Gaussian-Mixture-Model of the phoneme model but we do in the log probability domain a linear combination of this phoneme model with all the corresponding articulatory feature models that correspond to this phoneme . </seg> |
| <seg id="53"> So , let's say if you are calculating the score for a P then you would take the phoneme model score for the P then you would say it is a plosive , so you take the phoneme model for plosive and what is it ? </seg> |
| <seg id="54"> It is unvoiced , so you take the articulatory feature model for unvoiced and then you would calculate the scores , you sum them up and you assign them weights . </seg> |
| <seg id="55"> So , it is a log-linear combination , so you just don't sum up the values , but for numerical reasons and also for reasons how value are able to detect a certain feature you give different weights to the single probabilities or log probabilities in the sum . </seg> |
| <seg id="56"> So , since you need these weights what you need is actually a good way of selecting these weights in a good manner and in the past I've worked with two methods one is a heuristic simple heuristic , and the other one is a discriminative training method for finding these feature weights . </seg> |
| <seg id="57"> I'll explain them a little bit more in detail later . </seg> |
| <seg id="58"> So , what we also did in the past was work on examining whether articulatory features can also be modeled in a multilingual way and can be applied in a cross-lingual way and what we found out is that articulatory features actually can quite robustly be recognized across languages . </seg> |
| <seg id="59"> So , if you take a model for voiceness and it was trained in English and you apply it to German you are actually pretty much able to detect in German voice sound and distinguish them from unvoiced sound using this model that was only trained on English . </seg> |
| <seg id="60"> And , also , what you can do is just as you do it for phonemes you can train multilingual articulatory feature models by pooling the training data from all the different languages . </seg> |
| <seg id="61"> In her work , Tanya has introduced a measure called the share factor with which she measures that when you have an multilingual model , and you apply it to a new language you measure how well do you already cover the phonemes of the new target language by this multilingual model . </seg> |
| <seg id="62"> So , how many phonemes do they have in common and when you do the same for the articulatory features what you find out is that the share factor for articulatory features actually is higher than for phonemes , so the overlap between features in different languages seems to be in general higher than for phonemes . </seg> |
| <seg id="63"> So that makes them very interesting for multilingual and cross-lingual application because you are able to cover many of the features in the target language without seeing the target language . </seg> |
| <seg id="64"> And , in the past , so what we've looked at is at combining cross-lingual and multilingual articulatory features with monolingual phoneme models . </seg> |
| <seg id="65"> So , we always took phoneme models which were trained on the target language and we combined them with articulatory features from many languages or with multilingual articulatory features and then we tested on the training language of the phoneme models and we found improvements . </seg> |
| <seg id="66"> So , the question now that we ask ourselves is what if we do have the phoneme model also to be multilingual model or monolingual model which is different from the testing language . </seg> |
| <seg id="67"> So , if you have a new language which we have not seen , neither in the articulatory feature models nor the phoneme models and we now combine phoneme models multilingual ones , monolingual ones with articulatory features and applies them to the new language will we see improvements over just using phonemes ? </seg> |
| <seg id="68"> So , as I said before , we need to select stream rates and we have two ways of doing that . </seg> |
| <seg id="69"> One one thing we used was a heuristic . </seg> |
| <seg id="70"> So , that was a simple one . </seg> |
| <seg id="71"> We just selected a fixed stream rate for every articulatory feature that we would add to the stream based setup and then the weight for the phoneme based models would simply be the weight that makes them sum up to one and then we looked at the classification accuracies of the articulatory feature models and we just started to add them one by one in the order of their classification accuracy and we test the word error rate on a development set until you reach a maximum performance and then that is your setup which you apply later to the evaluation set . </seg> |
| <seg id="72"> The other way was actually a way of training the weights using a method called discriminative model combination . </seg> |
| <seg id="73"> It is a something developed by Peter-Beyerlein for actually the same stream based setup that we use . </seg> |
| <seg id="74"> So , what he actually did is he for example used it for training the the weight of different language models . </seg> |
| <seg id="75"> Which are also combined in a log-linear way in with the models from the acoustic model and just like that is exactly the same setup that we now have with the phoneme models and the articulatory feature models . </seg> |
| <seg id="76"> So , we used that in order to discriminatively train the weights that we have for the stream based setup and what this DMC does , it just or it implements a gradient descent on a smoothed word error rate function . </seg> |
| <seg id="77"> And , the way that the word error rate function is made smooth so that you can actually do a gradient descent is done in such a way that it needs an approximation of the probability of the whole hypothesis space and since that of course in reality is not possible we used as an approximation for that an N-best list . </seg> |
| <seg id="78"> So the experiments in this work were conducted on the Globalphone corpus , or languages from the Globalphone corpus . </seg> |
| <seg id="79"> Globalphone collected by Tanya and under Tanya's supervision is a corpus of read newspaper articles from many languages , I think eighteen and number is still growing , so that might already be outnumbered that number and these articles are all collected in a very similar manner , or basically in the same manner like close talking microphones , same recording quality newspaper articles are read by native speakers , normally within the country where they live and since it is such a uniform collection for many , many languages for an LVCSR task it is very well suited for doing research when porting to new languages or when comparing the performance among languages or doing multilingual modeling . </seg> |
| <seg id="80"> So , for our experiments , we selected the languages German , English , Russian and Spanish and we had mainly three sets from the corpus , one for training one for development work such as finding the correct stream rates and language model weights , etcetera and then once we found the optimal combination , we did the evaluation on a separate , held-out set . </seg> |
| <seg id="81"> So , this just gives an overview about the size for the four different languages , of the training , development and evaluation set . </seg> |
| <seg id="82"> So , in order to get , of course , feeling for how well your porting to your new language works , and how well the different languages perform as a baseline we trained monolingual recognizers on the languages that we selected and and just a standard setup as you know it with MFCC front end . </seg> |
| <seg id="83"> Left to right continuous HMM , we have context independent and context dependent models the context dependent models have three thousand models , and are phonetically tied using a classification and regression tree and then we also trained a multilingual model on the languages English , Russian and Spanish . </seg> |
| <seg id="84"> And we also of course trained the corresponding articulatory feature detectors for the languages English , Russian and Spanish and from that you can already guess German was our main target language , so we pretended that we don't know anything about German and did the porting to German and then you can compare against what a full-blown German system , if you had enough training material , actually would look like . </seg> |
| <seg id="85"> So , this just gives an overview of these baseline systems for the context independent and context dependent case for the different evaluation for the development and evaluation set and you can see that the numbers vary from language to language which usually hints at the difference in difficulty of the different languages when creating a speech recognition system for them and what you probably will notice right away Russian sort of sticks out having very high word error rates and the reason for Russian is Russian is from a linguistic point of view very complex . </seg> |
| <seg id="86"> First , it is highly inflecting and second , which really makes it difficult for N-gram language modeling . </seg> |
| <seg id="87"> It has a very loose word order . </seg> |
| <seg id="88"> Basically everything is possible . </seg> |
| <seg id="89"> The way you change words just give different intonations or different connotations to the different sentences , but you can very freely arrange the words , which leads to the fact that the Russian language model has a very high perplexity one thousand and higher and that is why we have such high word error rates for Russian and that is still an unsolved problem , but a different research problem for now . </seg> |
| <seg id="90"> So , if you now train a multilingual model on the languages English , Russian and Spanish get these numbers , and what you will notice is that these numbers are sort of somewhat lower than the monolingual numbers and the reason for that is even though a phoneme that is noted by the same Ipa symbol sounds very similar in the different languages it is in fact not completely the same but might have different variations . </seg> |
| <seg id="91"> Also , this multilingual model that covers three languages has the same amount of models as recognizer for only one language in the monolingual case . </seg> |
| <seg id="92"> So , if you do the experiments and train this multilingual model with let's say nine thousand models you will see that these numbers actually go down somewhat . </seg> |
| <seg id="93"> You don't completely reach the monolingual word error rates , but it is an indication that if you have the same amount of models for more than one language you lose somewhat . </seg> |
| <seg id="94"> You're not able to capture the context dependency in the context dependent tree as good as if you are only working on one language . </seg> |
| <seg id="95"> So , the first experiment was just a monolingual porting . </seg> |
| <seg id="96"> So we took the English recognizer and applied it to the German set and if you just take the phoneme models you see these numbers . </seg> |
| <seg id="97"> So , these are word error rates , and they are comparatively high because the English acoustic is different from the German acoustic and the word error rates , if you don't , and have any German data , and you just apply the English acoustic model to German is comparatively high . </seg> |
| <seg id="98"> So , when you now start to add articulatory feature models to these phoneme models . </seg> |
| <seg id="99"> You see that you get a drop in word error rate and , what is important to note , so we used both the heuristic and the discriminative model combination and what we did is that we actually determined the weights for the combination , the stream based setup on the English development set . </seg> |
| <seg id="100"> We then applied that to German . </seg> |
| <seg id="101"> So these numbers haven't seen any German data not even the German development set for finding the correct stream based rates and so it is actually interesting to note that these weights that we find on the English set actually seemed to somehow do something good on the German set , so they are sort of not completely language independent but they generalize to a new language in a reasonable way . </seg> |
| <seg id="102"> What we can also see is that when we add only the English articulatory features with the DMC we get a lower word error rate as if we use all articulatory features from the DMC . </seg> |
| <seg id="103"> And this must come from the fact that there is actually a mismatch between the set way determine the stream based rates and a set and the set that it tests on . </seg> |
| <seg id="104"> We will later see that if you actually calculate your stream based rates on the target language , you will get better numbers . </seg> |
| <seg id="105"> So , now instead of using the English phoneme based model and applying it to German we use now the multilingual model and apply that to German and what you can see is now that the word error rates start to drop . </seg> |
| <seg id="106"> So , as we know from Tanya's work if you have a multilingual model that has seen data from many languages you benefit from it when you apply it to a new language . </seg> |
| <seg id="107"> You are better able to capture it and now we started to add the articulatory feature models , first only the English ones then the multilingual ones at the end all monolingual ones and what you can see is that the word error rates in general drop again for the DMC . </seg> |
| <seg id="108"> You see , the problem that the phoneme based models and the English articulatory feature models don't hurt , but also don't give you a large gain but , if you lose all the models , you actually start to see some moderate gains . </seg> |
| <seg id="109"> They are not huge , but they are consistent under all different combi nations of adding articulatory feature models and calculating the stream based weights , so there is a clearly visible trend . </seg> |
| <seg id="110"> So , what we now did is we started to use adaptation material in order to also adapt the phoneme based model . </seg> |
| <seg id="111"> So , we pretended that we had fifteen minutes of adaptation data in the German language and we started to adapt the phoneme models with these fifteen minutes and you can see that the word error rate already goes down significantly and now when we add the articulatory features again , the word error rate goes down and , as we have seen , adding all articulatory features seem to be the best so , we did it this time with all the articulatory features and use the DMC in order to find the stream base and stream weights in the setup . </seg> |
| <seg id="112"> So in order to conclude this work already gave an indication that articulatory features are suited for supporting porting phoneme based models to new languages . </seg> |
| <seg id="113"> And , if you have stream weights that are estimated on the development set of a different language and then you test on they still give you improvements when you test on the new languages , so they seem to generalize somewhat across languages . </seg> |
| <seg id="114"> And , if you then combine the adapted phoneme models , so you adapt the phoneme models to German and you combine them with all articulatory features you also see improvements , not just when only adding unmatched phoneme models . </seg> |
| <seg id="115"> So , for future work , one thing that is still missing is we've been adapting the phoneme models , but so far we have not adapted the articulatory feature models . </seg> |
| <seg id="116"> So that would be one of the future experiments to do , what happens if you now also adapt the articulatory feature models will you seen will you see higher gains from adding the articulatory feature models . </seg> |
| </doc> |
| <doc docid="lecture0003" genre="Lecture"> |
| <seg id="1"> Hi . </seg> |
| <seg id="2"> So , for those of you who don't know me , I'm Kevin-Kilgour from Karlsruhe and I'll be talking to you today about language model adaptation in particular , using interlinked semantic data . </seg> |
| <seg id="3"> The just a quick overview of what we actually need language models for you probably already all know this , but language models are a mathematical representation of natural language and we need them whenever machines encounter natural language . </seg> |
| <seg id="4"> The this this language model here was built in particular for automatic speech recognition . </seg> |
| <seg id="5"> A common phenomenon when building language models is that you can train a language model in one domain and it is very good in that domain . </seg> |
| <seg id="6"> But if you try and use it for a different domain , it just feels and if you build a language model that is general enough for all domains , it is just not as good in one particular domain . </seg> |
| <seg id="7"> So , to do that people have been trying to adapt the language models by taking the output so the ASR system transcribes some text . </seg> |
| <seg id="8"> You take this output , you analyze it and you adapt your language model depending upon this output . </seg> |
| <seg id="9"> It is in this field for my adaptive language model . </seg> |
| <seg id="10"> Comes in , and I want to propose a a suggested suggest a a language model to fit this need . </seg> |
| <seg id="11"> Now the goals I set for myself was I wanted a language model that is domain independent . </seg> |
| <seg id="12"> I don't want to have to build it to a particular domain that should be usable straightaway in any domain should be able to generalize whenever you are getting to the specifics of something , like in the previous slide it started off with actually sorry . </seg> |
| <seg id="13"> It started off with the teacher thing good morning class and then start talking about history so , at first the language model might detect , ah , we're in a class environment but , when more information comes in , it will specialize to , ah , we're in a history environment . </seg> |
| <seg id="14"> We need to be prepared for words that have to do with history . </seg> |
| <seg id="15"> So , that is the generalization and specialization capabilities and , it would also be nice for further processing if we could get some semantic information out of it as well . </seg> |
| <seg id="16"> The this is just a a rough overview of how such a language model could look . </seg> |
| <seg id="17"> Like we've got ASR system down at the bottom left and to build the language model we need data . </seg> |
| <seg id="18"> We always require data . </seg> |
| <seg id="19"> And I'll go into that in a second . </seg> |
| <seg id="20"> Okay , take from stuff out of your data , and you build lots of language models and , then , while decoding you have to find some way of mixing these language models depending upon what you've detected . </seg> |
| <seg id="21"> Let's go into the requirements on the data sources . </seg> |
| <seg id="22"> A large amounts of text . </seg> |
| <seg id="23"> You always need lots of text and it has to cover multiple domains and you need to have enough text to be able to build a language model for the domain if you only have a couple hundred words , or even a couple thousand , you can't build a good enough language model . </seg> |
| <seg id="24"> So each individual domain also has to have a lot of text . </seg> |
| <seg id="25"> And your data source should be able to you should be able to extract domains from your data source and , also associate text with one data source that I found is Open-Directory project which is a large directory of websites and links , a bit like Yahoo , not the search engine but the directory where you can click on on topics and go deeper down into it . </seg> |
| <seg id="26"> It contains over four million links and it is freely available and the links have been sorted out into almost six hundred thousand categories which we can use as concepts . </seg> |
| <seg id="27"> If you're utilizing the category hierarchy , you can extract texts and associate them with the categories and also the concepts because of our requirement that we need a lot of text only about ninety thousand of those categories contain , well , usable as concepts . </seg> |
| <seg id="28"> And , to get that much text what you can do is you can follow down the hierarchy and then constantly add more data . </seg> |
| <seg id="29"> So , if this is how your hierarchy looks like and you want to build a language model that programming you recursively add all the texts and the whole sub-tree . </seg> |
| <seg id="30"> This a more a better look at one entry . </seg> |
| <seg id="31"> Each entry in the Open-Directory project contains links to websites associated with that concept or category and links to sub-concepts and they in themselves also contain lots of links . </seg> |
| <seg id="32"> So whenever you're building it whenever you're building your or whenever you're associating a text with your concepts this is a schematic view of the Open-Directory project . </seg> |
| <seg id="33"> And instead of just taking one node and associating the links with it , you follow the whole sub-tree down . </seg> |
| <seg id="34"> So , the higher up you are the larger your language model but also the more general it is and the lower down the more specialized it becomes . </seg> |
| <seg id="35"> Okay . </seg> |
| <seg id="36"> Once you've found out what texts and what concepts you want to associate with each other be the mechanism for choosing which language model you want to use when . </seg> |
| <seg id="37"> To do that , you look at you look at your whole system and you find at some point you'll have some text and we need to adapt to this text . </seg> |
| <seg id="38"> The I decided to build attribute vectors from the text associated with each concept just using simple TFIDFs and storing them as sparse vectors . </seg> |
| <seg id="39"> In a selector component . </seg> |
| <seg id="40"> The language models you also build from the same text . </seg> |
| <seg id="41"> So , you have language models , and each language model has a is associated with a concept and an attribute vector . </seg> |
| <seg id="42"> Well , now how you've got a query to this selector component , which will be a sequence of words you build an attribute vector out of it and compare it with the attribute vectors already in the selector and then you can find which language models closest or most relevant for your current text . </seg> |
| <seg id="43"> Here is the TFIDFs , again and the metric I used to measure this closeness was just a simple cosine metric which was fast . </seg> |
| <seg id="44"> Because in principle all we have to do is do the dot product of two sparse vectors and you can do that quite fast if you've normalized them both . </seg> |
| <seg id="45"> So , you return the top however many concepts you want . </seg> |
| <seg id="46"> The only new limit is how many you can work with and you use these similarity measurements . </seg> |
| <seg id="47"> Introduce them as weights whenever your data interpolate them in the in the ASR system . </seg> |
| <seg id="48"> So , just a quick example for what the cosine metric does . </seg> |
| <seg id="49"> You choose those language models that are closest vector wise or angle wise . </seg> |
| <seg id="50"> So here is an example . </seg> |
| <seg id="51"> And , this is in the output from that selector component when it is queried with a sentence detected by just a standard , ordinary language model and it goes into the selector and it becomes those language models that it found to be best . </seg> |
| <seg id="52"> And you can see , here it found quite specific ones but it also found a more general one covering the whole topic . </seg> |
| <seg id="53"> And it also returned weights . </seg> |
| <seg id="54"> And these weights are used to interpolate the language models on the fly whenever you're running your your ASR system . </seg> |
| <seg id="55"> So so far , we've kept up our promise of being domain independent haven't used any domain knowledge who shows that it can generalize , and has specialization capabilities . </seg> |
| <seg id="56"> And this slide here also returns semantic information . </seg> |
| <seg id="57"> We get that pretty much for free . </seg> |
| <seg id="58"> Okay , that is the how it then works to guess our ASR system . </seg> |
| <seg id="59"> So far we've just got a language model and we haven't integrated it into anything . </seg> |
| <seg id="60"> At Karlsruhe , we use a Janus language model Janus recognition toolkit . </seg> |
| <seg id="61"> And it is decoder is Ibis decoder . </seg> |
| <seg id="62"> And , interesting for language models is that bottom , right hand corner . </seg> |
| <seg id="63"> I augmented it by adding a selector component between the the linguistic knowledge source and then a set of one language model lots of language models . </seg> |
| <seg id="64"> This component is in here and this component communicates with the previously built selector component and queries the selector with a particular word history that normally whenever we talk about word history with language models we are thinking of the past two words , the past five words , perhaps . </seg> |
| <seg id="65"> But here with just the past two words I'm sending the the past whole hypothesis of the previous sentence perhaps even of the previous five or six or ten sentences . </seg> |
| <seg id="66"> Because you want to get a the bigger picture . </seg> |
| <seg id="67"> And , for testing purposes is also interesting to use a base standard language model do a first pass and then use that decoded hypothesis to adapt the language model for a second pass . </seg> |
| <seg id="68"> And just a quick word about the base language model used I use two different base language models for different tests . </seg> |
| <seg id="69"> In one case I just took my whole data source and built a language model out of it a simple language model that is domain independent . </seg> |
| <seg id="70"> And because I evaluated this on the TC-Star data I also used the handmade language model that we built for the TC-Star evaluation that was domain dependent . </seg> |
| <seg id="71"> It was optimized for that domain . </seg> |
| <seg id="72"> So let's have some some more parameters . </seg> |
| <seg id="73"> We can't include all ninety thousand concepts . </seg> |
| <seg id="74"> That is not reasonable . </seg> |
| <seg id="75"> And we also want to evaluate was my idea of including all the texts in the sub-tree a good idea ? </seg> |
| <seg id="76"> Am I just adding junk to it ? </seg> |
| <seg id="77"> Need to test to make sure that my statement there was correct all that I have to go over word history to adapt to and how we should adapt ? </seg> |
| <seg id="78"> What interpolation weights to use ? </seg> |
| <seg id="79"> Especially interpolation weights between the base language model and the adaptive part . </seg> |
| <seg id="80"> So the the selector will return these concept language models , and these have to be interpolated and for more general parts , we also interpolate it with the base language model we also need to know what parameters to use . </seg> |
| <seg id="81"> Hm . </seg> |
| <seg id="82"> No . </seg> |
| <seg id="83"> Yes . </seg> |
| <seg id="84"> Did this come out ? </seg> |
| <seg id="85"> Okay , now everything has gone black . </seg> |
| <seg id="86"> You know what ? </seg> |
| <seg id="87"> Oh . </seg> |
| <seg id="88"> That turns on ? </seg> |
| <seg id="89"> I've got the PDF yes . </seg> |
| <seg id="90"> PDF file . </seg> |
| <seg id="91"> I still got battery , I just don't have a display . </seg> |
| <seg id="92"> Maybe just turn it turn it ? </seg> |
| <seg id="93"> Close it and reopen it ? </seg> |
| <seg id="94"> Sorry for the delay . </seg> |
| <seg id="95"> He did it intentionally . </seg> |
| <seg id="96"> Yeah . </seg> |
| <seg id="97"> Yeah . </seg> |
| <seg id="98"> I did I did that deliberately . </seg> |
| <seg id="99"> It what is the name ? </seg> |
| <seg id="100"> Name of the presentation . </seg> |
| <seg id="101"> And it should just be pres . </seg> |
| <seg id="102"> Pres or pres-X ? </seg> |
| <seg id="103"> X is the old one . </seg> |
| <seg id="104"> Pres is the one with the new logos . </seg> |
| <seg id="105"> Yeah . </seg> |
| <seg id="106"> Yeah . </seg> |
| <seg id="107"> Yeah . </seg> |
| <seg id="108"> Yeah . </seg> |
| <seg id="109"> Mhm . </seg> |
| <seg id="110"> Hm . </seg> |
| <seg id="111"> Okay . </seg> |
| <seg id="112"> While that is booting up , perhaps I can just tell you a bit about the evaluation data that I tested my language model on . </seg> |
| <seg id="113"> But , I tested it on the TC-Star development data which I split . </seg> |
| <seg id="114"> I used the first part for my own development , and the second part for my evaluation . </seg> |
| <seg id="115"> -- do we have something ? </seg> |
| <seg id="116"> Hm . </seg> |
| <seg id="117"> I actually had a second laptop . </seg> |
| <seg id="118"> It yeah perfect example of the oh just ignore all the videos . </seg> |
| <seg id="119"> Yeah . </seg> |
| <seg id="120"> Slide eighteen or nineteen no , keep going . </seg> |
| <seg id="121"> At top right hand corner . </seg> |
| <seg id="122"> Okay . </seg> |
| <seg id="123"> Okay . </seg> |
| <seg id="124"> Yeah . </seg> |
| <seg id="125"> One more . </seg> |
| <seg id="126"> Perhaps one more . </seg> |
| <seg id="127"> Okay , again , one more . </seg> |
| <seg id="128"> Okay , one more . </seg> |
| <seg id="129"> Okay . </seg> |
| <seg id="130"> Okay . </seg> |
| <seg id="131"> Well okay . </seg> |
| <seg id="132"> Thank you . </seg> |
| <seg id="133"> Sorry about the technical difficulties . </seg> |
| <seg id="134"> Okay , back to the presentation . </seg> |
| <seg id="135"> Now as I was saying before you can't load ninety thousand language models with our current computers . </seg> |
| <seg id="136"> In actual fact , at about one thousand was the limit and even that required over twenty gigabytes of ram and some utterances to decode them . </seg> |
| <seg id="137"> So , to reduce I just did some quick heuristics to reduce the amount of concepts loaded . </seg> |
| <seg id="138"> And I only use concepts from those two nodes , society government and regional Europe and even here , because there were ten thousand concepts here I sorted them by size and removed the top twenty and then used the largest remaining and how many of the largest remaining you can see in the next slide . </seg> |
| <seg id="139"> Hm . </seg> |
| <seg id="140"> Go forward . </seg> |
| <seg id="141"> I built several language models one using only ten extra concept language models overlapped to a thousand extra concept language models and compared these to see how adding more concepts helped . </seg> |
| <seg id="142"> Also , in this test I used the texts of the whole sub-tree and the history was just as I mentioned before as a base language model went through one pass and that was used to adapt to for the second pass . </seg> |
| <seg id="143"> The interpolation weights are for now set at fifty fifty . </seg> |
| <seg id="144"> And I used two different types of base language models . </seg> |
| <seg id="145"> So , if we go on we can see the results of that test as mentioned , this was tested on this was tested on the TC-Star development set . </seg> |
| <seg id="146"> This is just parameter tuning , so it only used the first part of that set you can see using this ODPLM , which is the more general base language model built using just my whole domain independent data set . </seg> |
| <seg id="147"> It got a word error rate of twenty-one point five percent and by the time I added one thousand concept language models I had reduced that to twenty point five percent . </seg> |
| <seg id="148"> So if go on to the next slide this method improves domain independent language models , so I already have won that point . </seg> |
| <seg id="149"> No we can go on to the next test which was to evaluate how using this whole sub-tree of text . </seg> |
| <seg id="150"> Did that help ? </seg> |
| <seg id="151"> Or was that a bad idea ? </seg> |
| <seg id="152"> All the other parameters are kept the same , and I kept the largest language model the the one that used the top one thousand concepts . </seg> |
| <seg id="153"> Now , here next slide . </seg> |
| <seg id="154"> Yeah . </seg> |
| <seg id="155"> Just to illustrate in one of these tests , the one with the one thousand X I only use the individual text of the node and in the standard one just the adaptive one thousand I used the whole sub-tree so if we go on to the next slide using just the text of the node was atrocious . </seg> |
| <seg id="156"> It just performed awful . </seg> |
| <seg id="157"> Whereas using the whole sub-tree text , we were able to get enough text to build a decent language model . </seg> |
| <seg id="158"> And just in case anybody was claiming that by choosing those two nodes to select my concept language models from that I helped it along . </seg> |
| <seg id="159"> I took all the text in all the concept language models and built a language model out of that and interpolated it again fifty-fifty with the base language model and tested this ODPLM mixed language model and it also wasn't as good as wasn't even as good as ODPLM language model was to begin with . </seg> |
| <seg id="160"> So , here we can see that using the selective method and interpolating based on the selector's weights actually did increase the performance . </seg> |
| <seg id="161"> We can go on to the next slide . </seg> |
| <seg id="162"> Okay . </seg> |
| <seg id="163"> The next thing to evaluate would be which history to adapt to ? </seg> |
| <seg id="164"> That so far it is just used the base language model , made the hypothesis , and adapted to that . </seg> |
| <seg id="165"> Now we'll keep that test and we'll also evaluate how it performs if we use the last hypothesis . </seg> |
| <seg id="166"> The well , the hypothesis decoded in in the previous step or for the previous utterance or for the previous however many utterances . </seg> |
| <seg id="167"> Now , we can go on to the next slide . </seg> |
| <seg id="168"> And here we can see that well adapting to the baseline hypothesis which is whenever there is a plus-HB , that means hypothesis as a baseline . </seg> |
| <seg id="169"> That performs better than adapting to ah . </seg> |
| <seg id="170"> Thank you . </seg> |
| <seg id="171"> That performs better than adapting to the previous hypothesis which is whenever there is an H-one . </seg> |
| <seg id="172"> And unfortunately it didn't this one here didn't perform better than just the baseline . </seg> |
| <seg id="173"> So we need to tune it a bit more . </seg> |
| <seg id="174"> We go on to the next slide . </seg> |
| <seg id="175"> We can see a similar situation here . </seg> |
| <seg id="176"> Adding the adaptive one hundred or one thousand language model with different history length is this here is without the base language model . </seg> |
| <seg id="177"> This is one pass . </seg> |
| <seg id="178"> And , computing the perplexity it goes down in this one but hardly goes down using the language model previously built by hand . </seg> |
| <seg id="179"> So , we can improve on our on a domain independent language model so far , but not very much on a domain dependent language model . </seg> |
| <seg id="180"> Now , if we go on to the next slide ? </seg> |
| <seg id="181"> Perhaps we're just giving this domain independent part too much weight . </seg> |
| <seg id="182"> So I tried increasing the weight of the optimized domain dependent language model , to see if we can get some to see if we can get some improvements that way . </seg> |
| <seg id="183"> And on the next slide , again , keeping I kept all the parameters the same as in the previous test and we're back to using the two pass method using the hypothesis of the base language model to adapt to . </seg> |
| <seg id="184"> Here you can see that whenever you really increase the weight and really turn it down you can get a slight improvement and it appears that this language model is already as adapted to the domain as it can get , pretty much . </seg> |
| <seg id="185"> So adding for some more adaptive parts to it didn't improve the score that much . </seg> |
| <seg id="186"> But after having evaluated these parameters I did run can we go on to the next slide ? </seg> |
| <seg id="187"> I did run an evaluation on the remaining data in the development set using a different weight and again this is a domain dependent language model . </seg> |
| <seg id="188"> I'm trying to see if I can improve on the domain dependent language model . </seg> |
| <seg id="189"> And , if we can see the scores oh go back . </seg> |
| <seg id="190"> Can we go back a bit ? </seg> |
| <seg id="191"> Okay . </seg> |
| <seg id="192"> Thank you . </seg> |
| <seg id="193"> We can see the scores , and , unfortunately it hasn't improved on the domain dependent language model just yet , but this is still just the first ration of it . </seg> |
| <seg id="194"> So , can we go on to the next slide ? </seg> |
| <seg id="195"> Mhm . </seg> |
| <seg id="196"> So , in conclusion that the good news is we have been able to improve on domain independent language models increasing the score of of one of them by one absolute percent and we found that using the two pass method is is so far the best method . </seg> |
| <seg id="197"> I would like to do more tests on the history . </seg> |
| <seg id="198"> It just should be noted that it is quite slow right now . </seg> |
| <seg id="199"> Decoding that test set increased the decoding time from by six hours using just the base language model to about nineteen hours using well , I think one thousand concept language models . </seg> |
| <seg id="200"> So you can't just test something you have to choose your test carefully . </seg> |
| <seg id="201"> We found better interpolation weights and we haven't used them yet for anything , but it does give you some semantic tags for utterances and also weights to those tags . </seg> |
| <seg id="202"> So other development set , we did get an improvement over the domain dependent language model , but that didn't result in an improvement on the evaluation set . </seg> |
| <seg id="203"> We go on to the next slide . </seg> |
| <seg id="204"> This is just a first draft of the language model there is lots of work that can still be done on it . </seg> |
| <seg id="205"> In particular well , speeding it up and what would be very interesting is a dynamic vocabulary . </seg> |
| <seg id="206"> So you can dynamically adjust your based on the concepts you find . </seg> |
| <seg id="207"> I'm very interested in how that will turn out . </seg> |
| <seg id="208"> And it doesn't have to be in automatic speech recognition . </seg> |
| <seg id="209"> You can also use it in machine translation . </seg> |
| <seg id="210"> Especially if you were to build build it from concepts where you had the same concept in two different languages . </seg> |
| <seg id="211"> So thanks for for staying with me through all the technical problems . </seg> |
| <seg id="212"> And thank you for your attention . </seg> |
| <seg id="213"> Are there any questions ? </seg> |
| </doc> |
| <doc docid="lecture0004" genre="Lecture"> |
| <seg id="1"> Okay , thank you . </seg> |
| <seg id="2"> Good afternoon . </seg> |
| <seg id="3"> I'll be presenting my work on on big corpora and show how we filtered noise data from the Giga corpus and how we could speed up the processing time . </seg> |
| <seg id="4"> So this will be shown on two aspects , filtering the first aspect is the filtering and then we go to parallelizing the phrase scoring . </seg> |
| <seg id="5"> So , first of all the parallel corpora we all know as in machine translation they are very important and not only in machine translation but also in other NLP tasks . </seg> |
| <seg id="6"> And they can be manually created like the EPPS corpus or UN corpus . </seg> |
| <seg id="7"> And these kind of corpora have the better quality and but they are very restricted in terms of size and types . </seg> |
| <seg id="8"> But we can also automatically collect the data from the web and this kind of corpora have have high availability , but they have restrictions in the quality . </seg> |
| <seg id="9"> And the Giga corpus is one of these of these web crawled corpora was collected by Chris Callison-Burch in Two-Thousand-Nine and is still too noisy even after some heuristic cleaning by the author . </seg> |
| <seg id="10"> And just for comparison I put the number of sentences in the Giga corpus which is twenty-two point five million sentences and you can compare to the EPPS corpus which was collected on fourteen years of European Parliament proceedings which is like five percent of this size . </seg> |
| <seg id="11"> And what are the problems we face with the Giga corpus ? </seg> |
| <seg id="12"> Yeah , one of the problems we face is as you see here , junk portions in the in the corpus . </seg> |
| <seg id="13"> We can also see broken lines or broken sentences with scores sentence alignment errors like you can see here . </seg> |
| <seg id="14"> Or even some pairs from other languages . </seg> |
| <seg id="15"> And due to its size it might also take even days to finish training . </seg> |
| <seg id="16"> So the first acts we we treated this data is by filtering it from noise . </seg> |
| <seg id="17"> And we have several approaches for that . </seg> |
| <seg id="18"> So , we tried to automatically denoise this data using only lexical features . </seg> |
| <seg id="19"> And for that we created a training set and a test set from clean data available from previous evaluations namely the NC dev Two-Thousand-Seven and NC devtest Two-Thousand-Seven , for people who are have worked already in the evaluation , they know these datasets . </seg> |
| <seg id="20"> And so , to create the false examples we switch it thirty per cent of the of the source side switched positions for the source side so that they form false examples . </seg> |
| <seg id="21"> And we also needed lexical dictionaries and recreated them from the clean data EPPS and NC . </seg> |
| <seg id="22"> So the first approach , we call it naive approach , so we just thought that a lexical score alone would be sufficient to distinguish good pairs and bad pairs and this turns out not to work as we expected . </seg> |
| <seg id="23"> So the scoring formula is takes into consideration the lexical scores and by the constant which is multiplied outside the multiplication we give more chance to the longer sentences to pass the filter if if they could . </seg> |
| <seg id="24"> And with this approach we got very bad F-score . </seg> |
| <seg id="25"> Like fifty-eight percent . </seg> |
| <seg id="26"> And then we moved to discriminative approaches . </seg> |
| <seg id="27"> So for discriminative approach we have two classes . </seg> |
| <seg id="28"> Either we reject a pair which is with value zero , or we keep the pair value one . </seg> |
| <seg id="29"> And the features we used are the difference in number of words between source side and target side . </seg> |
| <seg id="30"> And we expect that the lower the number the lower the difference the better the the better the correspondence between source and target the IBM one score and we expect that the higher the better . </seg> |
| <seg id="31"> And the number of unaligned words between source and target and we expect for this that the more unaligned words the worse pair . </seg> |
| <seg id="32"> And the maximum number of words a given word is aligned , too , which is called the fertility . </seg> |
| <seg id="33"> And the maximum the fertility that should be the worse the the worse the pair . </seg> |
| <seg id="34"> And the first approach we tried is regression . </seg> |
| <seg id="35"> And by a linear combination of of scores of the features we optimized the lambdas using the Powell search against the sum of squared errors and we got an F-score of ninety per cent . </seg> |
| <seg id="36"> It is bad yeah . </seg> |
| <seg id="37"> And the next approach we tried is the logistic regression . </seg> |
| <seg id="38"> And we optimized it with the BFGS algorithm . </seg> |
| <seg id="39"> And to maximize the likelihood to the training data . </seg> |
| <seg id="40"> And we got much better in recall and much better on precision as well which gave us ninety-four or almost ninety-five percent of F-score . </seg> |
| <seg id="41"> We also tried the maximum in entropy classifier trained with the Mega-m package . </seg> |
| <seg id="42"> And we it did slightly better on precision , but worse on recall and then it gave a worse F-score . </seg> |
| <seg id="43"> The last technique we tried also is the SVM classifier which was trained by the SVM light package . </seg> |
| <seg id="44"> And it gave much better on precision and much better recall . </seg> |
| <seg id="45"> And this gave us ninety-seven percent of F-score . </seg> |
| <seg id="46"> And the results . </seg> |
| <seg id="47"> From the twenty-two point five million sentences we selected sixteen point eight and which lead us to throw like twenty-two percent of the corpus . </seg> |
| <seg id="48"> And we used these training data in our systems for the two last evaluations WMT and IWSLT . </seg> |
| <seg id="49"> And you can see the gain we got for French English . </seg> |
| <seg id="50"> In WMT it is around point seven for on development and test sets . </seg> |
| <seg id="51"> And in IWSLT it is even better . </seg> |
| <seg id="52"> And it is around one Bleu point for both Dev and and test . </seg> |
| <seg id="53"> That's for the filtering part . </seg> |
| <seg id="54"> Now let's move to the parallelizing part and as mentioned in the morning by Alexander the phrase scoring is or the the standard phrase scoring is just one step in building the translation model . </seg> |
| <seg id="55"> It comes after extracting the phrases and in which we calculate the corresponding probabilities to phrases like the one shown here . </seg> |
| <seg id="56"> And for that we need to count the source and target sentences . </seg> |
| <seg id="57"> And therefore we need the similar pairs or similar sentences to be together in order to count the number of occurrences . </seg> |
| <seg id="58"> And then we need a sorted list of the extracted of the extracted phrases . </seg> |
| <seg id="59"> Moses does it by the standard sort . </seg> |
| <seg id="60"> Unix command . </seg> |
| <seg id="61"> And here are a sample of times according to the corpora . </seg> |
| <seg id="62"> You can see that it can go until several days for for the big corpus which contains all the data . </seg> |
| <seg id="63"> So we implemented two different approaches one for shared memory architectures in which we used the STXXL library which is a an external memory container . </seg> |
| <seg id="64"> And and the process is is as follows . </seg> |
| <seg id="65"> So , we have an SMP machine , which has multiple cores , so for every core we have a thread and every thread does the processing locally , which is the sort , I mean , by processing the sort . </seg> |
| <seg id="66"> So every threat sorts its local data and then the aggregation or the merging is done globally which is the calculating or computing of the of the corresponding probabilities . </seg> |
| <seg id="67"> So , once for target and once for source . </seg> |
| <seg id="68"> And afterwards we tried a hybrid approach a hybrid approach and for that we used DEM-sort , which is distributed external memory sort algorithm , which is itself based on the STXXL containers and and we used also the MPI library . </seg> |
| <seg id="69"> And and the process is as follows , so for every node or for every process it has some local data , so it sorts and then it aggregates locally , but before doing the aggregation locally we need to ensure that every process has the right range of data . </seg> |
| <seg id="70"> For that we have an all to all operation after immediately after the sort . </seg> |
| <seg id="71"> And the problem here that in the aggregation operation some nodes could just finish way faster than the others , so the variation in time is too high . </seg> |
| <seg id="72"> So for that we might need some between the nodes . </seg> |
| <seg id="73"> I show here a comparison to the previously mentioned Moses times and our times . </seg> |
| <seg id="74"> So it cuts the time to the half at least with sixteen cores . </seg> |
| <seg id="75"> And finally a comparison between all the implemented methods so the distributed , unbalanced could cut the time until like ninety percent of speed-up . </seg> |
| <seg id="76"> And even we got more speed-up by balancing the load between the nodes . </seg> |
| <seg id="77"> And that's the last point I want to talk about . </seg> |
| </doc> |
| <doc docid="2507" genre="lectures"> |
| <url>http://www.ted.com/talks/trevor_timm_how_free_is_our_freedom_of_the_press</url> |
| <description>TED Talk Subtitles and Transcript: In the US, the press has a right to publish secret information the public needs to know, protected by the First Amendment. Government surveillance has made it increasingly more dangerous for whistleblowers, the source of virtually every important story about national security since 9/11, to share information. In this concise, informative talk, Freedom of the Press Foundation co-founder and TED Fellow Trevor Timm traces the recent history of government action against individuals who expose crime and injustice and advocates for technology that can help them do it safely and anonymously.</description> |
| <keywords>talks, Internet, TED Fellows, corruption, crime, government</keywords> |
| <talkid>2507</talkid> |
| <title>Trevor Timm: How free is our freedom of the press?</title> |
| <reviewer></reviewer> |
| <translator></translator> |
| <seg id="1"> So this is James Risen. </seg> |
| <seg id="2"> You may know him as the Pulitzer Prize-winning reporter for The New York Times. </seg> |
| <seg id="3"> Long before anybody knew Edward Snowden's name, Risen wrote a book in which he famously exposed that the NSA was illegally wiretapping the phone calls of Americans. </seg> |
| <seg id="4"> But it's another chapter in that book that may have an even more lasting impact. </seg> |
| <seg id="5"> In it, he describes a catastrophic US intelligence operation in which the CIA quite literally handed over blueprints of a nuclear bomb to Iran. </seg> |
| <seg id="6"> If that sounds crazy, go read it. </seg> |
| <seg id="7"> It's an incredible story. </seg> |
| <seg id="8"> But you know who didn't like that chapter? </seg> |
| <seg id="9"> The US government. </seg> |
| <seg id="10"> For nearly a decade afterwards, Risen was the subject of a US government investigation in which prosecutors demanded that he testify against one of his alleged sources. </seg> |
| <seg id="11"> And along the way, he became the face for the US government's recent pattern of prosecuting whistleblowers and spying on journalists. </seg> |
| <seg id="12"> You see, under the First Amendment, the press has the right to publish secret information in the public interest. </seg> |
| <seg id="13"> But it's impossible to exercise that right if the media can't also gather that news and protect the identities of the brave men and women who get it to them. </seg> |
| <seg id="14"> So when the government came knocking, Risen did what many brave reporters have done before him: he refused and said he'd rather go to jail. </seg> |
| <seg id="15"> So from 2007 to 2015, Risen lived under the specter of going to federal prison. </seg> |
| <seg id="16"> That is, until just days before the trial, when a curious thing happened. </seg> |
| <seg id="17"> Suddenly, after years of claiming it was vital to their case, the government dropped their demands to Risen altogether. </seg> |
| <seg id="18"> It turns out, in the age of electronic surveillance, there are very few places reporters and sources can hide. </seg> |
| <seg id="19"> And instead of trying and failing to have Risen testify, they could have his digital trail testify against him instead. </seg> |
| <seg id="20"> So completely in secret and without his consent, prosecutors got Risen's phone records. </seg> |
| <seg id="21"> They got his email records, his financial and banking information, his credit reports, even travel records with a list of flights he had taken. </seg> |
| <seg id="22"> And it was among this information that they used to convict Jeffrey Sterling, Risen's alleged source and CIA whistleblower. </seg> |
| <seg id="23"> Sadly, this is only one case of many. </seg> |
| <seg id="24"> President Obama ran on a promise to protect whistleblowers, and instead, his Justice Department has prosecuted more than all other administrations combined. </seg> |
| <seg id="25"> Now, you can see how this could be a problem, especially because the government considers so much of what it does secret. </seg> |
| <seg id="26"> Since 9/11, virtually every important story about national security has been the result of a whistleblower coming to a journalist. </seg> |
| <seg id="27"> So we risk seeing the press unable to do their job that the First Amendment is supposed to protect because of the government's expanded ability to spy on everyone. </seg> |
| <seg id="28"> But just as technology has allowed the government to circumvent reporters' rights, the press can also use technology to protect their sources even better than before. </seg> |
| <seg id="29"> And they can start from the moment they begin speaking with them, rather than on the witness stand after the fact. </seg> |
| <seg id="30"> Communications software now exists that wasn't available when Risen was writing his book, and is much more surveillance-resistant than regular emails or phone calls. </seg> |
| <seg id="31"> For example, one such tool is SecureDrop, an open-source whistleblower submission system that was originally created by the late Internet luminary Aaron Swartz, and is now developed at the non-profit where I work, Freedom of the Press Foundation. </seg> |
| <seg id="32"> Instead of sending an email, you go to a news organization's website, like this one here on The Washington Post. </seg> |
| <seg id="33"> From there, you can upload a document or send information much like you would on any other contact form. </seg> |
| <seg id="34"> It'll then be encrypted and stored on a server that only the news organization has access to. </seg> |
| <seg id="35"> So the government can no longer secretly demand the information, and much of the information they would demand wouldn't be available in the first place. </seg> |
| <seg id="36"> SecureDrop, though, is really only a small part of the puzzle for protecting press freedom in the 21st century. </seg> |
| <seg id="37"> Unfortunately, governments all over the world are constantly developing new spying techniques that put us all at risk. </seg> |
| <seg id="38"> And it's up to us going forward to make sure that it's not just the tech-savvy whistleblowers, like Edward Snowden, who have an avenue for exposing wrongdoing. </seg> |
| <seg id="39"> It's just as vital that we protect the next veteran's health care whistleblower alerting us to overcrowded hospitals, or the next environmental worker sounding the alarm about Flint's dirty water, or a Wall Street insider warning us of the next financial crisis. </seg> |
| <seg id="40"> After all, these tools weren't just built to help the brave men and women who expose crimes, but are meant to protect all of our rights under the Constitution. </seg> |
| <seg id="41"> Thank you. </seg> |
| </doc> |
| <doc docid="2478" genre="lectures"> |
| <url>http://www.ted.com/talks/robert_palmer_the_panama_papers_exposed_a_huge_global_problem_what_s_next</url> |
| <description>TED Talk Subtitles and Transcript: On April 3, 2016 we saw the largest data leak in history. The Panama Papers exposed rich and powerful people hiding vast amounts of money in offshore accounts. But what does it all mean? We called Robert Palmer of Global Witness to find out.</description> |
| <keywords>talks, activism, big problems, business, corruption, economics, global issues, government, identity, inequality, investment, law, money, news, poverty</keywords> |
| <talkid>2478</talkid> |
| <title>Robert Palmer: The Panama Papers exposed a huge global problem. What's next?</title> |
| <reviewer></reviewer> |
| <translator></translator> |
| <seg id="1"> [On April 3, 2016 we saw the largest data leak in history.] [The Panama Papers exposed rich and powerful people] [hiding vast amounts of money in offshore accounts.] [What does this mean?] [We called Robert Palmer of Global Witness to explain.] This week, there have been a whole slew and deluge of stories coming out from the leak of 11 million documents from a Panamanian-based law firm called Mossack Fonseca. </seg> |
| <seg id="2"> The release of these papers from Panama lifts the veil on a tiny piece of the secretive offshore world. </seg> |
| <seg id="3"> We get an insight into how clients and banks and lawyers go to companies like Mossack Fonseca and say, "OK, we want an anonymous company, can you give us one?" </seg> |
| <seg id="4"> So you actually get to see the emails, you get to see the exchanges of messages, you get to see the mechanics of how this works, how this operates. </seg> |
| <seg id="5"> Now, this has already started to have pretty immediate repercussions. </seg> |
| <seg id="6"> The Prime Minister of Iceland has resigned. </seg> |
| <seg id="7"> We've also had news that an ally of the brutal Syrian dictator Bashar Al-Assad has also got offshore companies. </seg> |
| <seg id="8"> There's been allegations of a $2 billion money trail that leads back to President Vladimir Putin of Russia via his close childhood friend, who happens to be a top cellist. </seg> |
| <seg id="9"> And there will be a lot of rich individuals out there and others who will be nervous about the next set of stories and the next set of leaked documents. </seg> |
| <seg id="10"> Now, this sounds like the plot of a spy thriller or a John Grisham novel. </seg> |
| <seg id="11"> It seems very distant from you, me, ordinary people. </seg> |
| <seg id="12"> Why should we care about this? </seg> |
| <seg id="13"> But the truth is that if rich and powerful individuals are able to keep their money offshore and not pay the taxes that they should, it means that there is less money for vital public services like healthcare, education, roads. </seg> |
| <seg id="14"> And that affects all of us. </seg> |
| <seg id="15"> Now, for my organization Global Witness, this exposé has been phenomenal. </seg> |
| <seg id="16"> We have the world's media and political leaders talking about how individuals can use offshore secrecy to hide and disguise their assets -- something we have been talking about and exposing for a decade. </seg> |
| <seg id="17"> Now, I think a lot of people find this entire world baffling and confusing, and hard to understand how this sort of offshore world works. </seg> |
| <seg id="18"> I like to think of it a bit like a Russian doll. </seg> |
| <seg id="19"> So you can have one company stacked inside another company, stacked inside another company, making it almost impossible to really understand who is behind these structures. </seg> |
| <seg id="20"> It can be very difficult for law enforcement or tax authorities, journalists, civil society to really understand what's going on. </seg> |
| <seg id="21"> I also think it's interesting that there's been less coverage of this issue in the United States. </seg> |
| <seg id="22"> And that's perhaps because some prominent US people just haven't figured in this exposé, in this scandal. </seg> |
| <seg id="23"> Now, that's not because there are no rich Americans who are stashing their assets offshore. </seg> |
| <seg id="24"> It's just because of the way in which offshore works, Mossack Fonseca has fewer American clients. </seg> |
| <seg id="25"> I think if we saw leaks from the Cayman Islands or even from Delaware or Wyoming or Nevada, you would see many more cases and examples linking back to Americans. </seg> |
| <seg id="26"> In fact, in a number of US states you need less information, you need to provide less information to get a company than you do to get a library card. </seg> |
| <seg id="27"> That sort of secrecy in America has allowed employees of school districts to rip off schoolchildren. </seg> |
| <seg id="28"> It has allowed scammers to rip off vulnerable investors. </seg> |
| <seg id="29"> This is the sort of behavior that affects all of us. </seg> |
| <seg id="30"> Now, at Global Witness, we wanted to see what this actually looked like in practice. </seg> |
| <seg id="31"> How does this actually work? </seg> |
| <seg id="32"> So what we did is we sent in an undercover investigator to 13 Manhattan law firms. </seg> |
| <seg id="33"> Our investigator posed as an African minister who wanted to move suspect funds into the United States to buy a house, a yacht, a jet. </seg> |
| <seg id="34"> Now, what was truly shocking was that all but one of those lawyers provided our investigator with suggestions on how to move those suspect funds. </seg> |
| <seg id="35"> These were all preliminary meetings, and none of the lawyers took us on as a client and of course no money moved hands, but it really shows the problem with the system. </seg> |
| <seg id="36"> It's also important to not just think about this as individual cases. </seg> |
| <seg id="37"> This is not just about an individual lawyer who's spoken to our undercover investigator and provided suggestions. </seg> |
| <seg id="38"> It's not just about a particular senior politician who's been caught up in a scandal. </seg> |
| <seg id="39"> This is about how a system works, that entrenches corruption, tax evasion, poverty and instability. </seg> |
| <seg id="40"> And in order to tackle this, we need to change the game. </seg> |
| <seg id="41"> We need to change the rules of the game to make this sort of behavior harder. </seg> |
| <seg id="42"> This may seem like doom and gloom, like there's nothing we can do about it, like nothing has ever changed, like there will always be rich and powerful individuals. </seg> |
| <seg id="43"> But as a natural optimist, I do see that we are starting to get some change. </seg> |
| <seg id="44"> Over the last couple of years, we've seen a real push towards greater transparency when it comes to company ownership. </seg> |
| <seg id="45"> This issue was put on the political agenda by the UK Prime Minister David Cameron at a big G8 Summit that was held in Northern Ireland in 2013. </seg> |
| <seg id="46"> And since then, the European Union is going to be creating central registers at a national level of who really owns and controls companies across Europe. </seg> |
| <seg id="47"> One of the things that is sad is that, actually, the US is lagging behind. </seg> |
| <seg id="48"> There's bipartisan legislation that had been introduced in the House and the Senate, but it isn't making as much progress as we'd like to see. </seg> |
| <seg id="49"> So we'd really want to see the Panama leaks, this huge peek into the offshore world, be used as a way of opening up in the US and around the world. </seg> |
| <seg id="50"> For us at Global Witness, this is a moment for change. </seg> |
| <seg id="51"> We need ordinary people to get angry at the way in which people can hide their identity behind secret companies. </seg> |
| <seg id="52"> We need business leaders to stand up and say, "Secrecy like this is not good for business." </seg> |
| <seg id="53"> We need political leaders to recognize the problem, and to commit to changing the law to open up this sort of secrecy. </seg> |
| <seg id="54"> Together, we can end the secrecy that is currently allowing tax evasion, corruption, money laundering to flourish. </seg> |
| </doc> |
| <doc docid="2447" genre="lectures"> |
| <url>http://www.ted.com/talks/joe_gebbia_how_airbnb_designs_for_trust</url> |
| <description>TED Talk Subtitles and Transcript: Joe Gebbia, the co-founder of Airbnb, bet his whole company on the belief that people can trust each other enough to stay in one another's homes. How did he overcome the stranger-danger bias? Through good design. Now, 123 million hosted nights later, Gebbia sets out his dream for a culture of sharing in which design helps foster community and connection instead of isolation and separation.</description> |
| <keywords>talks, behavioral economics, business, collaboration, community, culture, design, economics, entrepreneur, future, innovation, potential, privacy, product design, relationships, social change, technology, urban planning</keywords> |
| <talkid>2447</talkid> |
| <title>Joe Gebbia: How Airbnb designs for trust</title> |
| <reviewer></reviewer> |
| <translator></translator> |
| <seg id="1"> I want to tell you the story about the time I almost got kidnapped in the trunk of a red Mazda Miata. </seg> |
| <seg id="2"> It's the day after graduating from design school and I'm having a yard sale. </seg> |
| <seg id="3"> And this guy pulls up in this red Mazda and he starts looking through my stuff. </seg> |
| <seg id="4"> And he buys a piece of art that I made. </seg> |
| <seg id="5"> And it turns out he's alone in town for the night, driving cross-country on a road trip before he goes into the Peace Corps. </seg> |
| <seg id="6"> So I invite him out for a beer and he tells me all about his passion for making a difference in the world. </seg> |
| <seg id="7"> Now it's starting to get late, and I'm getting pretty tired. </seg> |
| <seg id="8"> As I motion for the tab, I make the mistake of asking him, "So where are you staying tonight?" </seg> |
| <seg id="9"> And he makes it worse by saying, "Actually, I don't have a place." </seg> |
| <seg id="10"> And I'm thinking, "Oh, man!" </seg> |
| <seg id="11"> What do you do? </seg> |
| <seg id="12"> We've all been there, right? </seg> |
| <seg id="13"> Do I offer to host this guy? </seg> |
| <seg id="14"> But, I just met him -- I mean, he says he's going to the Peace Corps, but I don't really know if he's going to the Peace Corps and I don't want to end up kidnapped in the trunk of a Miata. </seg> |
| <seg id="15"> That's a small trunk! </seg> |
| <seg id="16"> So then I hear myself saying, "Hey, I have an airbed you can stay on in my living room." </seg> |
| <seg id="17"> And the voice in my head goes, "Wait, what?" </seg> |
| <seg id="18"> That night, I'm laying in bed, I'm staring at the ceiling and thinking, "Oh my god, what have I done? </seg> |
| <seg id="19"> There's a complete stranger sleeping in my living room. </seg> |
| <seg id="20"> What if he's psychotic?" </seg> |
| <seg id="21"> My anxiety grows so much, I leap out of bed, I sneak on my tiptoes to the door, and I lock the bedroom door. </seg> |
| <seg id="22"> It turns out he was not psychotic. </seg> |
| <seg id="23"> We've kept in touch ever since. </seg> |
| <seg id="24"> And the piece of art he bought at the yard sale is hanging in his classroom; he's a teacher now. </seg> |
| <seg id="25"> This was my first hosting experience, and it completely changed my perspective. </seg> |
| <seg id="26"> Maybe the people that my childhood taught me to label as strangers were actually friends waiting to be discovered. </seg> |
| <seg id="27"> The idea of hosting people on airbeds gradually became natural to me and when I moved to San Francisco, I brought the airbed with me. </seg> |
| <seg id="28"> So now it's two years later. </seg> |
| <seg id="29"> I'm unemployed, I'm almost broke, my roommate moves out, and then the rent goes up. </seg> |
| <seg id="30"> And then I learn there's a design conference coming to town, and all the hotels are sold out. </seg> |
| <seg id="31"> And I've always believed that turning fear into fun is the gift of creativity. </seg> |
| <seg id="32"> So here's what I pitch my best friend and my new roommate Brian Chesky: "Brian, thought of a way to make a few bucks -- turning our place into 'designers bed and breakfast,' offering young designers who come to town a place to crash, complete with wireless Internet, a small desk space, sleeping mat, and breakfast each morning. </seg> |
| <seg id="33"> Ha!" </seg> |
| <seg id="34"> We built a basic website and Airbed and Breakfast was born. </seg> |
| <seg id="35"> Three lucky guests got to stay on a 20-dollar airbed on the hardwood floor. </seg> |
| <seg id="36"> But they loved it, and so did we. </seg> |
| <seg id="37"> I swear, the ham and Swiss cheese omelets we made tasted totally different because we made them for our guests. </seg> |
| <seg id="38"> We took them on adventures around the city, and when we said goodbye to the last guest, the door latch clicked, Brian and I just stared at each other. </seg> |
| <seg id="39"> Did we just discover it was possible to make friends while also making rent? </seg> |
| <seg id="40"> The wheels had started to turn. </seg> |
| <seg id="41"> My old roommate, Nate Blecharczyk, joined as engineering co-founder. </seg> |
| <seg id="42"> And we buckled down to see if we could turn this into a business. </seg> |
| <seg id="43"> Here's what we pitched investors: "We want to build a website where people publicly post pictures of their most intimate spaces, their bedrooms, the bathrooms -- the kinds of rooms you usually keep closed when people come over. </seg> |
| <seg id="44"> And then, over the Internet, they're going to invite complete strangers to come sleep in their homes. </seg> |
| <seg id="45"> It's going to be huge!" </seg> |
| <seg id="46"> We sat back, and we waited for the rocket ship to blast off. </seg> |
| <seg id="47"> It did not. </seg> |
| <seg id="48"> No one in their right minds would invest in a service that allows strangers to sleep in people's homes. </seg> |
| <seg id="49"> Why? </seg> |
| <seg id="50"> Because we've all been taught as kids, strangers equal danger. </seg> |
| <seg id="51"> Now, when you're faced with a problem, you fall back on what you know, and all we really knew was design. </seg> |
| <seg id="52"> In art school, you learn that design is much more than the look and feel of something -- it's the whole experience. </seg> |
| <seg id="53"> We learned to do that for objects, but here, we were aiming to build Olympic trust between people who had never met. </seg> |
| <seg id="54"> Could design make that happen? </seg> |
| <seg id="55"> Is it possible to design for trust? </seg> |
| <seg id="56"> I want to give you a sense of the flavor of trust that we were aiming to achieve. </seg> |
| <seg id="57"> I've got a 30-second experiment that will push you past your comfort zone. </seg> |
| <seg id="58"> If you're up for it, give me a thumbs-up. </seg> |
| <seg id="59"> OK, I need you to take out your phones. </seg> |
| <seg id="60"> Now that you have your phone out, I'd like you to unlock your phone. </seg> |
| <seg id="61"> Now hand your unlocked phone to the person on your left. </seg> |
| <seg id="62"> That tiny sense of panic you're feeling right now -- is exactly how hosts feel the first time they open their home. </seg> |
| <seg id="63"> Because the only thing more personal than your phone is your home. </seg> |
| <seg id="64"> People don't just see your messages, they see your bedroom, your kitchen, your toilet. </seg> |
| <seg id="65"> Now, how does it feel holding someone's unlocked phone? </seg> |
| <seg id="66"> Most of us feel really responsible. </seg> |
| <seg id="67"> That's how most guests feel when they stay in a home. </seg> |
| <seg id="68"> And it's because of this that our company can even exist. </seg> |
| <seg id="69"> By the way, who's holding Al Gore's phone? </seg> |
| <seg id="70"> Would you tell Twitter he's running for President? </seg> |
| <seg id="71"> OK, you can hand your phones back now. </seg> |
| <seg id="72"> So now that you've experienced the kind of trust challenge we were facing, I'd love to share a few discoveries we've made along the way. </seg> |
| <seg id="73"> What if we changed one small thing about the design of that experiment? </seg> |
| <seg id="74"> What if your neighbor had introduced themselves first, with their name, where they're from, the name of their kids or their dog? </seg> |
| <seg id="75"> Imagine that they had 150 reviews of people saying, "They're great at holding unlocked phones!" </seg> |
| <seg id="76"> Now how would you feel about handing your phone over? </seg> |
| <seg id="77"> a well-designed reputation system is key for building trust. </seg> |
| <seg id="78"> And we didn't actually get it right the first time. </seg> |
| <seg id="79"> It's hard for people to leave bad reviews. </seg> |
| <seg id="80"> Eventually, we learned to wait until both guests and hosts left the review before we reveal them. </seg> |
| <seg id="81"> Now, here's a discovery we made just last week. </seg> |
| <seg id="82"> We did a joint study with Stanford, where we looked at people's willingness to trust someone based on how similar they are in age, location and geography. </seg> |
| <seg id="83"> The research showed, not surprisingly, we prefer people who are like us. </seg> |
| <seg id="84"> The more different somebody is, the less we trust them. </seg> |
| <seg id="85"> Now, that's a natural social bias. </seg> |
| <seg id="86"> But what's interesting is what happens when you add reputation into the mix, in this case, with reviews. </seg> |
| <seg id="87"> Now, if you've got less than three reviews, nothing changes. </seg> |
| <seg id="88"> But if you've got more than 10, everything changes. </seg> |
| <seg id="89"> High reputation beats high similarity. </seg> |
| <seg id="90"> The right design can actually help us overcome one of our most deeply rooted biases. </seg> |
| <seg id="91"> Now we also learned that building the right amount of trust takes the right amount of disclosure. </seg> |
| <seg id="92"> This is what happens when a guest first messages a host. </seg> |
| <seg id="93"> If you share too little, like, "Yo," acceptance rates go down. </seg> |
| <seg id="94"> And if you share too much, like, "I'm having issues with my mother," acceptance rates also go down. </seg> |
| <seg id="95"> But there's a zone that's just right, like, "Love the artwork in your place. Coming for vacation with my family." </seg> |
| <seg id="96"> So how do we design for just the right amount of disclosure? </seg> |
| <seg id="97"> We use the size of the box to suggest the right length, and we guide them with prompts to encourage sharing. </seg> |
| <seg id="98"> We bet our whole company on the hope that, with the right design, people would be willing to overcome the stranger-danger bias. </seg> |
| <seg id="99"> What we didn't realize is just how many people were ready and waiting to put the bias aside. </seg> |
| <seg id="100"> This is a graph that shows our rate of adoption. </seg> |
| <seg id="101"> There's three things happening here. </seg> |
| <seg id="102"> The first, an unbelievable amount of luck. </seg> |
| <seg id="103"> The second is the efforts of our team. </seg> |
| <seg id="104"> And third is the existence of a previously unsatisfied need. </seg> |
| <seg id="105"> Now, things have been going pretty well. </seg> |
| <seg id="106"> Obviously, there are times when things don't work out. </seg> |
| <seg id="107"> Guests have thrown unauthorized parties and trashed homes. </seg> |
| <seg id="108"> Hosts have left guests stranded in the rain. </seg> |
| <seg id="109"> In the early days, I was customer service, and those calls came right to my cell phone. </seg> |
| <seg id="110"> I was at the front lines of trust breaking. </seg> |
| <seg id="111"> And there's nothing worse than those calls, it hurts to even think about them. </seg> |
| <seg id="112"> And the disappointment in the sound of someone's voice was and, I would say, still is our single greatest motivator to keep improving. </seg> |
| <seg id="113"> Thankfully, out of the 123 million nights we've ever hosted, less than a fraction of a percent have been problematic. </seg> |
| <seg id="114"> Turns out, people are justified in their trust. </seg> |
| <seg id="115"> And when trust works out right, it can be absolutely magical. </seg> |
| <seg id="116"> We had a guest stay with a host in Uruguay, and he suffered a heart attack. </seg> |
| <seg id="117"> The host rushed him to the hospital. </seg> |
| <seg id="118"> They donated their own blood for his operation. </seg> |
| <seg id="119"> Let me read you his review. </seg> |
| <seg id="120"> "Excellent house for sedentary travelers prone to myocardial infarctions. </seg> |
| <seg id="121"> The area is beautiful and has direct access to the best hospitals. </seg> |
| <seg id="122"> Javier and Alejandra instantly become guardian angels who will save your life without even knowing you. </seg> |
| <seg id="123"> They will rush you to the hospital in their own car while you're dying and stay in the waiting room while the doctors give you a bypass. </seg> |
| <seg id="124"> They don't want you to feel lonely, they bring you books to read. </seg> |
| <seg id="125"> And they let you stay at their house extra nights without charging you. </seg> |
| <seg id="126"> Highly recommended!" </seg> |
| <seg id="127"> Of course, not every stay is like that. </seg> |
| <seg id="128"> But this connection beyond the transaction is exactly what the sharing economy is aiming for. </seg> |
| <seg id="129"> Now, when I heard that term, I have to admit, it tripped me up. </seg> |
| <seg id="130"> How do sharing and transactions go together? </seg> |
| <seg id="131"> So let's be clear; it is about commerce. </seg> |
| <seg id="132"> But if you just called it the rental economy, it would be incomplete. </seg> |
| <seg id="133"> The sharing economy is commerce with the promise of human connection. </seg> |
| <seg id="134"> People share a part of themselves, and that changes everything. </seg> |
| <seg id="135"> You know how most travel today is, like, I think of it like fast food -- it's efficient and consistent, at the cost of local and authentic. </seg> |
| <seg id="136"> What if travel were like a magnificent buffet of local experiences? </seg> |
| <seg id="137"> What if anywhere you visited, there was a central marketplace of locals offering to get you thoroughly drunk on a pub crawl in neighborhoods you didn't even know existed. </seg> |
| <seg id="138"> Or learning to cook from the chef of a five-star restaurant? </seg> |
| <seg id="139"> Today, homes are designed around the idea of privacy and separation. </seg> |
| <seg id="140"> What if homes were designed to be shared from the ground up? </seg> |
| <seg id="141"> What would that look like? </seg> |
| <seg id="142"> What if cities embraced a culture of sharing? </seg> |
| <seg id="143"> I see a future of shared cities that bring us community and connection instead of isolation and separation. </seg> |
| <seg id="144"> In South Korea, in the city of Seoul, they've actually even started this. </seg> |
| <seg id="145"> They've repurposed hundreds of government parking spots to be shared by residents. </seg> |
| <seg id="146"> They're connecting students who need a place to live with empty-nesters who have extra rooms. </seg> |
| <seg id="147"> And they've started an incubator to help fund the next generation Tonight, just on our service, 785,000 people in 191 countries will either stay in a stranger's home or welcome one into theirs. </seg> |
| <seg id="148"> Clearly, it's not as crazy as we were taught. </seg> |
| <seg id="149"> We didn't invent anything new. </seg> |
| <seg id="150"> Hospitality has been around forever. </seg> |
| <seg id="151"> There's been many other websites like ours. </seg> |
| <seg id="152"> So, why did ours eventually take off? </seg> |
| <seg id="153"> Luck and timing aside, I've learned that you can take the components of trust, and you can design for that. </seg> |
| <seg id="154"> Design can overcome our most deeply rooted stranger-danger bias. </seg> |
| <seg id="155"> And that's amazing to me. </seg> |
| <seg id="156"> It blows my mind. </seg> |
| <seg id="157"> I think about this every time I see a red Miata go by. </seg> |
| <seg id="158"> Now, we know design won't solve all the world's problems. </seg> |
| <seg id="159"> But if it can help out with this one, if it can make a dent in this, it makes me wonder, what else can we design for next? </seg> |
| <seg id="160"> Thank you. </seg> |
| </doc> |
| <doc docid="2442" genre="lectures"> |
| <url>http://www.ted.com/talks/dalia_mogahed_what_do_you_think_when_you_look_at_me</url> |
| <description>TED Talk Subtitles and Transcript: When you look at Muslim scholar Dalia Mogahed, what do you see: a woman of faith? a scholar, a mom, a sister? or an oppressed, brainwashed, potential terrorist? In this personal, powerful talk, Mogahed asks us, in this polarizing time, to fight negative perceptions of her faith in the media -- and to choose empathy over prejudice.</description> |
| <keywords>talks, Islam, United States, culture, faith, politics</keywords> |
| <talkid>2442</talkid> |
| <title>Dalia Mogahed: What do you think when you look at me?</title> |
| <reviewer></reviewer> |
| <translator></translator> |
| <seg id="1"> What do you think when you look at me? </seg> |
| <seg id="2"> A woman of faith? An expert? </seg> |
| <seg id="3"> Maybe even a sister. </seg> |
| <seg id="4"> Or oppressed, brainwashed, a terrorist. </seg> |
| <seg id="5"> Or just an airport security line delay. </seg> |
| <seg id="6"> That one's actually true. </seg> |
| <seg id="7"> If some of your perceptions were negative, I don't really blame you. </seg> |
| <seg id="8"> That's just how the media has been portraying people who look like me. </seg> |
| <seg id="9"> One study found that 80 percent of news coverage about Islam and Muslims is negative. </seg> |
| <seg id="10"> And studies show that Americans say that most don't know a Muslim. </seg> |
| <seg id="11"> I guess people don't talk to their Uber drivers. </seg> |
| <seg id="12"> Well, for those of you who have never met a Muslim, it's great to meet you. </seg> |
| <seg id="13"> Let me tell you who I am. </seg> |
| <seg id="14"> I'm a mom, a coffee lover -- double espresso, cream on the side. </seg> |
| <seg id="15"> I'm an introvert. </seg> |
| <seg id="16"> I'm a wannabe fitness fanatic. </seg> |
| <seg id="17"> And I'm a practicing, spiritual Muslim. </seg> |
| <seg id="18"> But not like Lady Gaga says, because baby, I wasn't born this way. </seg> |
| <seg id="19"> It was a choice. </seg> |
| <seg id="20"> When I was 17, I decided to come out. </seg> |
| <seg id="21"> No, not as a gay person like some of my friends, but as a Muslim, and decided to start wearing the hijab, my head covering. </seg> |
| <seg id="22"> My feminist friends were aghast: "Why are you oppressing yourself?" </seg> |
| <seg id="23"> The funny thing was, it was actually at that time a feminist declaration of independence from the pressure I felt as a 17-year-old, to conform to a perfect and unattainable standard of beauty. </seg> |
| <seg id="24"> I didn't just passively accept the faith of my parents. </seg> |
| <seg id="25"> I wrestled with the Quran. </seg> |
| <seg id="26"> I read and reflected and questioned and doubted and, ultimately, believed. </seg> |
| <seg id="27"> My relationship with God -- it was not love at first sight. </seg> |
| <seg id="28"> It was a trust and a slow surrender that deepened with every reading of the Quran. </seg> |
| <seg id="29"> Its rhythmic beauty sometimes moves me to tears. </seg> |
| <seg id="30"> I see myself in it. I feel that God knows me. </seg> |
| <seg id="31"> Have you ever felt like someone sees you, completely understands you and yet loves you anyway? </seg> |
| <seg id="32"> That's how it feels. </seg> |
| <seg id="33"> And so later, I got married, and like all good Egyptians, started my career as an engineer. </seg> |
| <seg id="34"> I later had a child, after getting married, and I was living essentially the Egyptian-American dream. </seg> |
| <seg id="35"> And then that terrible morning of September, 2001. </seg> |
| <seg id="36"> I think a lot of you probably remember exactly where you were that morning. </seg> |
| <seg id="37"> I was sitting in my kitchen finishing breakfast, and I look up on the screen and see the words "Breaking News." </seg> |
| <seg id="38"> There was smoke, airplanes flying into buildings, people jumping out of buildings. </seg> |
| <seg id="39"> What was this? </seg> |
| <seg id="40"> An accident? </seg> |
| <seg id="41"> A malfunction? </seg> |
| <seg id="42"> My shock quickly turned to outrage. </seg> |
| <seg id="43"> Who would do this? </seg> |
| <seg id="44"> And I switch the channel and I hear, "... Muslim terrorist ...," "... in the name of Islam ...," "... Middle-Eastern descent ...," "... jihad ...," "... we should bomb Mecca." </seg> |
| <seg id="45"> Oh my God. </seg> |
| <seg id="46"> Not only had my country been attacked, but in a flash, somebody else's actions had turned me from a citizen to a suspect. </seg> |
| <seg id="47"> That same day, we had to drive across Middle America to move to a new city to start grad school. </seg> |
| <seg id="48"> And I remember sitting in the passenger seat as we drove in silence, crouched as low as I could go in my seat, for the first time in my life, afraid for anyone to know I was a Muslim. </seg> |
| <seg id="49"> We moved into our apartment that night in a new town in what felt like a completely different world. </seg> |
| <seg id="50"> And then I was hearing and seeing and reading warnings from national Muslim organizations saying things like, "Be alert," "Be aware," "Stay in well-lit areas," "Don't congregate." </seg> |
| <seg id="51"> I stayed inside all week. </seg> |
| <seg id="52"> And then it was Friday that same week, the day that Muslims congregate for worship. </seg> |
| <seg id="53"> And again the warnings were, "Don't go that first Friday, it could be a target." </seg> |
| <seg id="54"> And I was watching the news, wall-to-wall coverage. </seg> |
| <seg id="55"> Emotions were so raw, understandably, and I was also hearing about attacks on Muslims, or people who were perceived to be Muslim, being pulled out and beaten in the street. </seg> |
| <seg id="56"> Mosques were actually firebombed. </seg> |
| <seg id="57"> And I thought, we should just stay home. </seg> |
| <seg id="58"> And yet, something didn't feel right. </seg> |
| <seg id="59"> Because those people who attacked our country attacked our country. </seg> |
| <seg id="60"> I get it that people were angry at the terrorists. </seg> |
| <seg id="61"> Guess what? So was I. </seg> |
| <seg id="62"> And so to have to explain yourself all the time isn't easy. </seg> |
| <seg id="63"> I don't mind questions. I love questions. </seg> |
| <seg id="64"> It's the accusations that are tough. </seg> |
| <seg id="65"> Today we hear people actually saying things like, "There's a problem in this country, and it's called Muslims. </seg> |
| <seg id="66"> When are we going to get rid of them?" </seg> |
| <seg id="67"> So, some people want to ban Muslims and close down mosques. </seg> |
| <seg id="68"> They talk about my community kind of like we're a tumor in the body of America. </seg> |
| <seg id="69"> And the only question is, are we malignant or benign? </seg> |
| <seg id="70"> You know, a malignant tumor you extract altogether, and a benign tumor you just keep under surveillance. </seg> |
| <seg id="71"> The choices don't make sense, because it's the wrong question. </seg> |
| <seg id="72"> Muslims, like all other Americans, aren't a tumor in the body of America, we're a vital organ. </seg> |
| <seg id="73"> Thank you. </seg> |
| <seg id="74"> Muslims are inventors and teachers, first responders and Olympic athletes. </seg> |
| <seg id="75"> Now, is closing down mosques going to make America safer? </seg> |
| <seg id="76"> It might free up some parking spots, but it will not end terrorism. </seg> |
| <seg id="77"> Going to a mosque regularly is actually linked to having more tolerant views of people of other faiths and greater civic engagement. </seg> |
| <seg id="78"> And as one police chief in the Washington, DC area recently told me, people don't actually get radicalized at mosques. </seg> |
| <seg id="79"> They get radicalized in their basement or bedroom, in front of a computer. </seg> |
| <seg id="80"> And what you find about the radicalization process is it starts online, is the person gets cut off from their community, from even their family, so that the extremist group can brainwash them into believing that they, the terrorists, are the true Muslims, and everyone else who abhors their behavior and ideology are sellouts or apostates. </seg> |
| <seg id="81"> So if we want to prevent radicalization, we have to keep people going to the mosque. </seg> |
| <seg id="82"> Now, some will still argue Islam is a violent religion. </seg> |
| <seg id="83"> After all, a group like ISIS bases its brutality on the Quran. </seg> |
| <seg id="84"> Now, as a Muslim, as a mother, as a human being, I think we need to do everything we can to stop a group like ISIS. </seg> |
| <seg id="85"> But we would be giving in to their narrative if we cast them as representatives of a faith of 1.6 billion people. </seg> |
| <seg id="86"> Thank you. </seg> |
| <seg id="87"> ISIS has as much to do with Islam as the Ku Klux Klan has to do with Christianity. </seg> |
| <seg id="88"> Both groups claim to base their ideology on their holy book. </seg> |
| <seg id="89"> But when you look at them, they're not motivated by what they read in their holy book. </seg> |
| <seg id="90"> It's their brutality that makes them read these things into the scripture. </seg> |
| <seg id="91"> Recently, a prominent imam told me a story that really took me aback. </seg> |
| <seg id="92"> He said that a girl came to him because she was thinking of going to join ISIS. </seg> |
| <seg id="93"> And I was really surprised and asked him, had she been in contact with a radical religious leader? </seg> |
| <seg id="94"> And he said the problem was quite the opposite, that every cleric that she had talked to had shut her down and said that her rage, her sense of injustice in the world, was just going to get her in trouble. </seg> |
| <seg id="95"> And so with nowhere to channel and make sense of this anger, she was a prime target to be exploited by extremists promising her a solution. </seg> |
| <seg id="96"> What this imam did was to connect her back to God and to her community. </seg> |
| <seg id="97"> He didn't shame her for her rage -- instead, he gave her constructive ways to make real change in the world. </seg> |
| <seg id="98"> What she learned at that mosque prevented her from going to join ISIS. </seg> |
| <seg id="99"> I've told you a little bit about how Islamophobia affects me and my family. </seg> |
| <seg id="100"> But how does it impact ordinary Americans? </seg> |
| <seg id="101"> How does it impact everyone else? </seg> |
| <seg id="102"> How does consuming fear 24 hours a day affect the health of our democracy, the health of our free thought? </seg> |
| <seg id="103"> Well, one study -- actually, several studies in neuroscience -- show that when we're afraid, at least three things happen. </seg> |
| <seg id="104"> We become more accepting of authoritarianism, conformity and prejudice. </seg> |
| <seg id="105"> One study showed that when subjects were exposed to news stories that were negative about Muslims, they became more accepting of military attacks on Muslim countries and policies that curtail the rights of American Muslims. </seg> |
| <seg id="106"> Now, this isn't just academic. </seg> |
| <seg id="107"> When you look at when anti-Muslim sentiment spiked between 2001 and 2013, it happened three times, but it wasn't around terrorist attacks. </seg> |
| <seg id="108"> It was in the run up to the Iraq War and during two election cycles. </seg> |
| <seg id="109"> So Islamophobia isn't just the natural response to Muslim terrorism as I would have expected. </seg> |
| <seg id="110"> It can actually be a tool of public manipulation, eroding the very foundation of a free society, which is rational and well-informed citizens. </seg> |
| <seg id="111"> Muslims are like canaries in the coal mine. </seg> |
| <seg id="112"> We might be the first to feel it, but the toxic air of fear is harming us all. </seg> |
| <seg id="113"> And assigning collective guilt isn't just about having to explain yourself all the time. </seg> |
| <seg id="114"> Deah and his wife Yusor were a young married couple living in Chapel Hill, North Carolina, where they both went to school. </seg> |
| <seg id="115"> Deah was an athlete. </seg> |
| <seg id="116"> He was in dental school, talented, promising ... </seg> |
| <seg id="117"> And his sister would tell me that he was the sweetest, most generous human being she knew. </seg> |
| <seg id="118"> She was visiting him there and he showed her his resume, and she was amazed. </seg> |
| <seg id="119"> She said, "When did my baby brother become such an accomplished young man?" </seg> |
| <seg id="120"> Just a few weeks after Suzanne's visit to her brother and his new wife, their neighbor, Craig Stephen Hicks, murdered them, as well as Yusor's sister, Razan, who was visiting for the afternoon, in their apartment, execution style, after posting anti-Muslim statements on his Facebook page. </seg> |
| <seg id="121"> He shot Deah eight times. </seg> |
| <seg id="122"> So bigotry isn't just immoral, it can even be lethal. </seg> |
| <seg id="123"> So, back to my story. </seg> |
| <seg id="124"> What happened after 9/11? </seg> |
| <seg id="125"> Did we go to the mosque or did we play it safe and stay home? </seg> |
| <seg id="126"> Well, we talked it over, and it might seem like a small decision, but to us, it was about what kind of America we wanted to leave for our kids: one that would control us by fear or one where we were practicing our religion freely. </seg> |
| <seg id="127"> So we decided to go to the mosque. </seg> |
| <seg id="128"> And we put my son in his car seat, buckled him in, and we drove silently, intensely, to the mosque. </seg> |
| <seg id="129"> I took him out, I took off my shoes, I walked into the prayer hall and what I saw made me stop. </seg> |
| <seg id="130"> The place was completely full. </seg> |
| <seg id="131"> And then the imam made an announcement, thanking and welcoming our guests, because half the congregation were Christians, Jews, Buddhists, atheists, people of faith and no faith, who had come not to attack us, but to stand in solidarity with us. </seg> |
| <seg id="132"> I just break down at this time. </seg> |
| <seg id="133"> These people were there because they chose courage and compassion over panic and prejudice. </seg> |
| <seg id="134"> What will you choose? </seg> |
| <seg id="135"> What will you choose at this time of fear and bigotry? </seg> |
| <seg id="136"> Will you play it safe? </seg> |
| <seg id="137"> Or will you join those who say we are better than that? </seg> |
| <seg id="138"> Thank you. </seg> |
| <seg id="139"> Thank you so much. </seg> |
| <seg id="140"> Helen Walters: So Dalia, you seem to have struck a chord. </seg> |
| <seg id="141"> But I wonder, what would you say to those who might argue that you're giving a TED Talk, you're clearly a deep thinker, you work at a fancy think tank, you're an exception, you're not the rule. </seg> |
| <seg id="142"> What would you say to those people? </seg> |
| <seg id="143"> Dalia Mogahed: I would say, don't let this stage distract you, I'm completely ordinary. </seg> |
| <seg id="144"> I'm not an exception. </seg> |
| <seg id="145"> My story is not unusual. </seg> |
| <seg id="146"> I am as ordinary as they come. </seg> |
| <seg id="147"> When you look at Muslims around the world -- and I've done this, I've done the largest study ever done on Muslims around the world -- people want ordinary things. </seg> |
| <seg id="148"> They want prosperity for their family, they want jobs and they want to live in peace. </seg> |
| <seg id="149"> So I am not in any way an exception. </seg> |
| <seg id="150"> When you meet people who seem like an exception to the rule, oftentimes it's that the rule is broken, not that they're an exception to it. </seg> |
| <seg id="151"> HW: Thank you so much. Dalia Mogahed. </seg> |
| </doc> |
| <doc docid="2440" genre="lectures"> |
| <url>http://www.ted.com/talks/raffaello_d_andrea_meet_the_dazzling_flying_machines_of_the_future</url> |
| <description>TED Talk Subtitles and Transcript: When you hear the word "drone," you probably think of something either very useful or very scary. But could they have aesthetic value? Autonomous systems expert Raffaello D'Andrea develops flying machines, and his latest projects are pushing the boundaries of autonomous flight -- from a flying wing that can hover and recover from disturbance to an eight-propeller craft that's ambivalent to orientation ... to a swarm of tiny coordinated micro-quadcopters. Prepare to be dazzled by a dreamy, swirling array of flying machines as they dance like fireflies above the TED stage.</description> |
| <keywords>talks, beauty, creativity, demo, design, drones, flight, future, invention, technology</keywords> |
| <talkid>2440</talkid> |
| <title>Raffaello D'Andrea: Meet the dazzling flying machines of the future</title> |
| <reviewer></reviewer> |
| <translator></translator> |
| <seg id="1"> What started as a platform for hobbyists is poised to become a multibillion-dollar industry. </seg> |
| <seg id="2"> Inspection, environmental monitoring, photography and film and journalism: these are some of the potential applications for commercial drones, and their enablers are the capabilities being developed at research facilities around the world. </seg> |
| <seg id="3"> For example, before aerial package delivery entered our social consciousness, an autonomous fleet of flying machines built a six-meter-tall tower composed of 1,500 bricks in front of a live audience at the FRAC Centre in France, and several years ago, they started to fly with ropes. </seg> |
| <seg id="4"> By tethering flying machines, they can achieve high speeds and accelerations in very tight spaces. </seg> |
| <seg id="5"> They can also autonomously build tensile structures. </seg> |
| <seg id="6"> Skills learned include how to carry loads, how to cope with disturbances, and in general, how to interact with the physical world. </seg> |
| <seg id="7"> Today we want to show you some new projects that we've been working on. </seg> |
| <seg id="8"> Their aim is to push the boundary of what can be achieved with autonomous flight. </seg> |
| <seg id="9"> Now, for a system to function autonomously, it must collectively know the location of its mobile objects in space. </seg> |
| <seg id="10"> Back at our lab at ETH Zurich, we often use external cameras to locate objects, which then allows us to focus our efforts on the rapid development of highly dynamic tasks. </seg> |
| <seg id="11"> For the demos you will see today, however, we will use new localization technology developed by Verity Studios, a spin-off from our lab. </seg> |
| <seg id="12"> There are no external cameras. </seg> |
| <seg id="13"> Each flying machine uses onboard sensors to determine its location in space and onboard computation to determine what its actions should be. </seg> |
| <seg id="14"> The only external commands are high-level ones such as "take off" and "land." </seg> |
| <seg id="15"> This is a so-called tail-sitter. </seg> |
| <seg id="16"> It's an aircraft that tries to have its cake and eat it. </seg> |
| <seg id="17"> Like other fixed-wing aircraft, it is efficient in forward flight, much more so than helicopters and variations thereof. </seg> |
| <seg id="18"> Unlike most other fixed-wing aircraft, however, it is capable of hovering, which has huge advantages for takeoff, landing and general versatility. </seg> |
| <seg id="19"> There is no free lunch, unfortunately. </seg> |
| <seg id="20"> One of the limitations with tail-sitters is that they're susceptible to disturbances such as wind gusts. </seg> |
| <seg id="21"> We're developing new control architectures and algorithms that address this limitation. </seg> |
| <seg id="22"> The idea is for the aircraft to recover no matter what state it finds itself in, and through practice, improve its performance over time. </seg> |
| <seg id="23"> OK. </seg> |
| <seg id="24"> When doing research, we often ask ourselves fundamental abstract questions that try to get at the heart of a matter. </seg> |
| <seg id="25"> For example, one such question would be, what is the minimum number of moving parts needed for controlled flight? </seg> |
| <seg id="26"> Now, there are practical reasons why you may want to know the answer to such a question. </seg> |
| <seg id="27"> Helicopters, for example, are affectionately known as machines with a thousand moving parts all conspiring to do you bodily harm. </seg> |
| <seg id="28"> It turns out that decades ago, skilled pilots were able to fly remote-controlled aircraft that had only two moving parts: a propeller and a tail rudder. </seg> |
| <seg id="29"> We recently discovered that it could be done with just one. </seg> |
| <seg id="30"> This is the monospinner, the world's mechanically simplest controllable flying machine, invented just a few months ago. </seg> |
| <seg id="31"> It has only one moving part, a propeller. </seg> |
| <seg id="32"> It has no flaps, no hinges, no ailerons, no other actuators, no other control surfaces, just a simple propeller. </seg> |
| <seg id="33"> Even though it's mechanically simple, there's a lot going on in its little electronic brain to allow it to fly in a stable fashion and to move anywhere it wants in space. </seg> |
| <seg id="34"> Even so, it doesn't yet have the sophisticated algorithms of the tail-sitter, which means that in order to get it to fly, I have to throw it just right. </seg> |
| <seg id="35"> And because the probability of me throwing it just right is very low, given everybody watching me, what we're going to do instead is show you a video that we shot last night. </seg> |
| <seg id="36"> If the monospinner is an exercise in frugality, this machine here, the omnicopter, with its eight propellers, is an exercise in excess. </seg> |
| <seg id="37"> What can you do with all this surplus? </seg> |
| <seg id="38"> The thing to notice is that it is highly symmetric. </seg> |
| <seg id="39"> As a result, it is ambivalent to orientation. </seg> |
| <seg id="40"> This gives it an extraordinary capability. </seg> |
| <seg id="41"> It can move anywhere it wants in space irrespective of where it is facing and even of how it is rotating. </seg> |
| <seg id="42"> It has its own complexities, mainly having to do with the interacting flows from its eight propellers. </seg> |
| <seg id="43"> Some of this can be modeled, while the rest can be learned on the fly. </seg> |
| <seg id="44"> Let's take a look. </seg> |
| <seg id="45"> If flying machines are going to enter part of our daily lives, they will need to become extremely safe and reliable. </seg> |
| <seg id="46"> This machine over here is actually two separate two-propeller flying machines. </seg> |
| <seg id="47"> This one wants to spin clockwise. </seg> |
| <seg id="48"> This other one wants to spin counterclockwise. </seg> |
| <seg id="49"> When you put them together, they behave like one high-performance quadrocopter. </seg> |
| <seg id="50"> If anything goes wrong, however -- a motor fails, a propeller fails, electronics, even a battery pack -- the machine can still fly, albeit in a degraded fashion. </seg> |
| <seg id="51"> We're going to demonstrate this to you now by disabling one of its halves. </seg> |
| <seg id="52"> This last demonstration is an exploration of synthetic swarms. </seg> |
| <seg id="53"> The large number of autonomous, coordinated entities offers a new palette for aesthetic expression. </seg> |
| <seg id="54"> We've taken commercially available micro quadcopters, each weighing less than a slice of bread, by the way, and outfitted them with our localization technology and custom algorithms. </seg> |
| <seg id="55"> Because each unit knows where it is in space and is self-controlled, there is really no limit to their number. </seg> |
| <seg id="56"> Hopefully, these demonstrations will motivate you to dream up new revolutionary roles for flying machines. </seg> |
| <seg id="57"> That ultrasafe one over there for example has aspirations to become a flying lampshade on Broadway. </seg> |
| <seg id="58"> The reality is that it is difficult to predict the impact of nascent technology. </seg> |
| <seg id="59"> And for folks like us, the real reward is the journey and the act of creation. </seg> |
| <seg id="60"> It's a continual reminder of how wonderful and magical the universe we live in is, that it allows creative, clever creatures to sculpt it in such spectacular ways. </seg> |
| <seg id="61"> The fact that this technology has such huge commercial and economic potential is just icing on the cake. </seg> |
| <seg id="62"> Thank you. </seg> |
| </doc> |
| <doc docid="2439" genre="lectures"> |
| <url>http://www.ted.com/talks/allan_adams_what_the_discovery_of_gravitational_waves_means</url> |
| <description>TED Talk Subtitles and Transcript: More than a billion years ago, two black holes in a distant galaxy locked into a spiral, falling inexorably toward each other, and collided. "All that energy was pumped into the fabric of time and space itself," says theoretical physicist Allan Adams, "making the universe explode in roiling waves of gravity." About 25 years ago, a group of scientists built a giant laser detector called LIGO to search for these kinds of waves, which had been predicted but never observed. In this mind-bending talk, Adams breaks down what happened when, in September 2015, LIGO detected an unthinkably small anomaly, leading to one of the most exciting discoveries in the history of physics.</description> |
| <keywords>talks, astronomy, cosmos, curiosity, exploration, nature, physics, science, space, technology, universe</keywords> |
| <talkid>2439</talkid> |
| <title>Allan Adams: What the discovery of gravitational waves means</title> |
| <reviewer></reviewer> |
| <translator></translator> |
| <seg id="1"> 1.3 billion years ago, in a distant, distant galaxy, two black holes locked into a spiral, converting three Suns' worth of stuff into pure energy in a tenth of a second. </seg> |
| <seg id="2"> For that brief moment in time, the glow was brighter than all the stars in all the galaxies in all of the known Universe. </seg> |
| <seg id="3"> It was a very big bang. </seg> |
| <seg id="4"> But they didn't release their energy in light. </seg> |
| <seg id="5"> I mean, you know, they're black holes. </seg> |
| <seg id="6"> All that energy was pumped into the fabric of space and time itself, making the Universe explode in gravitational waves. </seg> |
| <seg id="7"> Let me give you a sense of the timescale at work here. </seg> |
| <seg id="8"> 1.3 billion years ago, Earth had just managed to evolve multicellular life. </seg> |
| <seg id="9"> Since then, Earth has made and evolved corals, fish, plants, dinosaurs, people and even -- God save us -- the Internet. </seg> |
| <seg id="10"> And about 25 years ago, a particularly audacious set of people -- Rai Weiss at MIT, Kip Thorne and Ronald Drever at Caltech -- to build a giant laser detector with which to search for the gravitational waves from things like colliding black holes. </seg> |
| <seg id="11"> Now, most people thought they were nuts. </seg> |
| <seg id="12"> But enough people realized that they were brilliant nuts that the US National Science Foundation decided to fund their crazy idea. </seg> |
| <seg id="13"> So after decades of development, construction and imagination and a breathtaking amount of hard work, they built their detector, called LIGO: The Laser Interferometer Gravitational-Wave Observatory. </seg> |
| <seg id="14"> For the last several years, LIGO's been undergoing a huge expansion in its accuracy, a tremendous improvement in its detection ability. </seg> |
| <seg id="15"> It's now called Advanced LIGO as a result. </seg> |
| <seg id="16"> In early September of 2015, LIGO turned on for a final test run while they sorted out a few lingering details. </seg> |
| <seg id="17"> And on September 14 of 2015, just days after the detector had gone live, the gravitational waves from those colliding black holes passed through the Earth. </seg> |
| <seg id="18"> And they passed through you and me. </seg> |
| <seg id="19"> And they passed through the detector. </seg> |
| <seg id="20"> Scott Hughes: There's two moments in my life more emotionally intense than that. </seg> |
| <seg id="21"> One is the birth of my daughter. </seg> |
| <seg id="22"> The other is when I had to say goodbye to my father when he was terminally ill. </seg> |
| <seg id="23"> You know, it was the payoff of my career, basically. </seg> |
| <seg id="24"> Everything I'd been working on -- it's no longer science fiction! Allan Adams: So that's my very good friend and collaborator, Scott Hughes, a theoretical physicist at MIT, who has been studying gravitational waves from black holes and the signals that they could impart on observatories like LIGO, So let me take a moment to tell you what I mean by a gravitational wave. </seg> |
| <seg id="25"> A gravitational wave is a ripple in the shape of space and time. </seg> |
| <seg id="26"> As the wave passes by, it stretches space and everything in it in one direction, and compresses it in the other. </seg> |
| <seg id="27"> This has led to countless instructors of general relativity doing a really silly dance to demonstrate in their classes on general relativity. </seg> |
| <seg id="28"> "It stretches and expands, it stretches and expands." </seg> |
| <seg id="29"> So the trouble with gravitational waves is that they're very weak; they're preposterously weak. </seg> |
| <seg id="30"> For example, the waves that hit us on September 14 -- and yes, every single one of you stretched and compressed under the action of that wave -- when the waves hit, they stretched the average person by one part in 10 to the 21. </seg> |
| <seg id="31"> That's a decimal place, 20 zeroes, That's why everyone thought the LIGO people were nuts. </seg> |
| <seg id="32"> Even with a laser detector five kilometers long -- and that's already crazy -- they would have to measure the length of those detectors to less than one thousandth of the radius of the nucleus of an atom. </seg> |
| <seg id="33"> And that's preposterous. </seg> |
| <seg id="34"> So towards the end of his classic text on gravity, described the hunt for gravitational waves as follows: He said, "The technical difficulties to be surmounted in constructing such detectors are enormous. </seg> |
| <seg id="35"> But physicists are ingenious, and with the support of a broad lay public, all obstacles will surely be overcome." </seg> |
| <seg id="36"> Thorne published that in 1973, 42 years before he succeeded. </seg> |
| <seg id="37"> Now, coming back to LIGO, Scott likes to say that LIGO acts like an ear more than it does like an eye. </seg> |
| <seg id="38"> I want to explain what that means. </seg> |
| <seg id="39"> Visible light has a wavelength, a size, that's much smaller than the things around you, the features on people's faces, the size of your cell phone. </seg> |
| <seg id="40"> And that's really useful, because it lets you make an image or a map of the things around you, by looking at the light coming from different spots in the scene about you. </seg> |
| <seg id="41"> Sound is different. </seg> |
| <seg id="42"> Audible sound has a wavelength that can be up to 50 feet long. </seg> |
| <seg id="43"> And that makes it really difficult -- in fact, in practical purposes, impossible -- to make an image of something you really care about. </seg> |
| <seg id="44"> Your child's face. </seg> |
| <seg id="45"> Instead, we use sound to listen for features like pitch and tone and rhythm and volume to infer a story behind the sounds. </seg> |
| <seg id="46"> That's Alice talking. </seg> |
| <seg id="47"> That's Bob interrupting. </seg> |
| <seg id="48"> Silly Bob. </seg> |
| <seg id="49"> So, the same is true of gravitational waves. </seg> |
| <seg id="50"> We can't use them to make simple images of things out in the Universe. </seg> |
| <seg id="51"> But by listening to changes in the amplitude and frequency of those waves, we can hear the story that those waves are telling. </seg> |
| <seg id="52"> And at least for LIGO, the frequencies that it can hear are in the audio band. </seg> |
| <seg id="53"> So if we convert the wave patterns into pressure waves and air, into sound, we can literally hear the Universe speaking to us. </seg> |
| <seg id="54"> For example, listening to gravity, just in this way, can tell us a lot about the collision of two black holes, something my colleague Scott has spent an awful lot of time thinking about. </seg> |
| <seg id="55"> SH: If the two black holes are non-spinning, you get a very simple chirp: whoop! </seg> |
| <seg id="56"> If the two bodies are spinning very rapidly, I have that same chirp, but with a modulation on top of it, so it kind of goes: whir, whir, whir! </seg> |
| <seg id="57"> It's sort of the vocabulary of spin imprinted on this waveform. </seg> |
| <seg id="58"> AA: So on September 14, 2015, a date that's definitely going to live in my memory, LIGO heard this: [Whirring sound] So if you know how to listen, that is the sound of -- SH: ... two black holes, each of about 30 solar masses, that were whirling around at a rate comparable to what goes on in your blender. </seg> |
| <seg id="59"> AA: It's worth pausing here to think about what that means. </seg> |
| <seg id="60"> Two black holes, the densest thing in the Universe, one with a mass of 29 Suns and one with a mass of 36 Suns, whirling around each other 100 times per second before they collide. </seg> |
| <seg id="61"> Just imagine the power of that. </seg> |
| <seg id="62"> It's fantastic. </seg> |
| <seg id="63"> And we know it because we heard it. </seg> |
| <seg id="64"> That's the lasting importance of LIGO. </seg> |
| <seg id="65"> It's an entirely new way to observe the Universe that we've never had before. </seg> |
| <seg id="66"> It's a way that lets us hear the Universe and hear the invisible. </seg> |
| <seg id="67"> And there's a lot out there that we can't see -- in practice or even in principle. </seg> |
| <seg id="68"> So supernova, for example: I would love to know why very massive stars explode in supernovae. </seg> |
| <seg id="69"> They're very useful; we've learned a lot about the Universe from them. </seg> |
| <seg id="70"> The problem is, all the interesting physics happens in the core, and the core is hidden behind thousands of kilometers of iron and carbon and silicon. </seg> |
| <seg id="71"> We'll never see through it, it's opaque to light. </seg> |
| <seg id="72"> Gravitational waves go through iron as if it were glass -- The Big Bang: I would love to be able to explore the first few moments of the Universe, but we'll never see them, because the Big Bang itself is obscured by its own afterglow. </seg> |
| <seg id="73"> With gravitational waves, we should be able to see all the way back to the beginning. </seg> |
| <seg id="74"> Perhaps most importantly, I'm positive that there are things out there that we've never seen that we may never be able to see and that we haven't even imagined -- things that we'll only discover by listening. </seg> |
| <seg id="75"> And in fact, even in that very first event, LIGO found things that we didn't expect. </seg> |
| <seg id="76"> Here's my colleague and one of the key members of the LIGO collaboration, Matt Evans, my colleague at MIT, addressing exactly that: Matt Evans: The kinds of stars which produce the black holes that we observed here are the dinosaurs of the Universe. </seg> |
| <seg id="77"> They're these massive things that are old, from prehistoric times, and the black holes are kind of like the dinosaur bones with which we do this archeology. </seg> |
| <seg id="78"> So it lets us really get a whole nother angle on what's out there in the Universe and how the stars came to be, and in the end, of course, how we came to be out of this whole mess. </seg> |
| <seg id="79"> AA: Our challenge now is to be as audacious as possible. </seg> |
| <seg id="80"> Thanks to LIGO, we know how to build exquisite detectors that can listen to the Universe, to the rustle and the chirp of the cosmos. </seg> |
| <seg id="81"> Our job is to dream up and build new observatories -- a whole new generation of observatories -- on the ground, in space. </seg> |
| <seg id="82"> I mean, what could be more glorious than listening to the Big Bang itself? </seg> |
| <seg id="83"> Our job now is to dream big. </seg> |
| <seg id="84"> Dream with us. </seg> |
| <seg id="85"> Thank you. </seg> |
| </doc> |
| <doc docid="2438" genre="lectures"> |
| <url>http://www.ted.com/talks/shonda_rhimes_my_year_of_saying_yes_to_everything</url> |
| <description>TED Talk Subtitles and Transcript: Shonda Rhimes, the titan behind Grey's Anatomy, Scandal and How to Get Away With Murder, is responsible for some 70 hours of television per season, and she loves to work. "When I am hard at work, when I am deep in it, there is no other feeling," she says. She has a name for this feeling: The hum. The hum is a drug, the hum is music, the hum is God's whisper in her ear. But what happens when it stops? Is she anything besides the hum? In this moving talk, join Rhimes on a journey through her "year of yes" and find out how she got her hum back.</description> |
| <keywords>talks, children, creativity, culture, decision-making, family, identity, motivation, parenting, personal growth, television, work, work-life balance, writing</keywords> |
| <talkid>2438</talkid> |
| <title>Shonda Rhimes: My year of saying yes to everything</title> |
| <reviewer></reviewer> |
| <translator></translator> |
| <seg id="1"> So a while ago, I tried an experiment. </seg> |
| <seg id="2"> For one year, I would say yes to all the things that scared me. </seg> |
| <seg id="3"> Anything that made me nervous, took me out of my comfort zone, I forced myself to say yes to. </seg> |
| <seg id="4"> Did I want to speak in public? </seg> |
| <seg id="5"> No, but yes. </seg> |
| <seg id="6"> Did I want to be on live TV? </seg> |
| <seg id="7"> No, but yes. </seg> |
| <seg id="8"> Did I want to try acting? </seg> |
| <seg id="9"> No, no, no, but yes, yes, yes. </seg> |
| <seg id="10"> And a crazy thing happened: the very act of doing the thing that scared me made it not scary. </seg> |
| <seg id="11"> My fear of public speaking, my social anxiety, poof, gone. </seg> |
| <seg id="12"> It's amazing, the power of one word. </seg> |
| <seg id="13"> "Yes" changed my life. </seg> |
| <seg id="14"> "Yes" changed me. </seg> |
| <seg id="15"> But there was one particular yes that affected my life in the most profound way, in a way I never imagined, and it started with a question from my toddler. </seg> |
| <seg id="16"> I have these three amazing daughters, Harper, Beckett and Emerson, and Emerson is a toddler who inexplicably refers to everyone as "honey." </seg> |
| <seg id="17"> as though she's a Southern waitress. </seg> |
| <seg id="18"> "Honey, I'm gonna need some milk for my sippy cup." </seg> |
| <seg id="19"> The Southern waitress asked me to play with her one evening when I was on my way somewhere, and I said, "Yes." </seg> |
| <seg id="20"> And that yes was the beginning of a new way of life for my family. </seg> |
| <seg id="21"> I made a vow that from now on, every time one of my children asks me to play, no matter what I'm doing or where I'm going, I say yes, every single time. </seg> |
| <seg id="22"> Almost. I'm not perfect at it, but I try hard to practice it. </seg> |
| <seg id="23"> And it's had a magical effect on me, on my children, on our family. </seg> |
| <seg id="24"> But it's also had a stunning side effect, and it wasn't until recently that I fully understood it, that I understood that saying yes to playing with my children likely saved my career. </seg> |
| <seg id="25"> See, I have what most people would call a dream job. </seg> |
| <seg id="26"> I'm a writer. I imagine. I make stuff up for a living. </seg> |
| <seg id="27"> Dream job. </seg> |
| <seg id="28"> No. </seg> |
| <seg id="29"> I'm a titan. </seg> |
| <seg id="30"> Dream job. </seg> |
| <seg id="31"> I create television. I executive produce television. </seg> |
| <seg id="32"> I make television, a great deal of television. </seg> |
| <seg id="33"> In one way or another, this TV season, I'm responsible for bringing about 70 hours of programming to the world. </seg> |
| <seg id="34"> Four television programs, 70 hours of TV -- Three shows in production at a time, sometimes four. </seg> |
| <seg id="35"> Each show creates hundreds of jobs that didn't exist before. </seg> |
| <seg id="36"> The budget for one episode of network television can be anywhere from three to six million dollars. </seg> |
| <seg id="37"> Let's just say five. </seg> |
| <seg id="38"> A new episode made every nine days times four shows, so every nine days that's 20 million dollars worth of television, four television programs, 70 hours of TV, three shows in production at a time, sometimes four, 16 episodes going on at all times: 24 episodes of "Grey's," 21 episodes of "Scandal," 15 episodes of "How To Get Away With Murder," 10 episodes of "The Catch," that's 70 hours of TV, </seg> |
| <seg id="39"> that's 350 million dollars for a season. </seg> |
| <seg id="40"> In America, my television shows are back to back to back on Thursday night. </seg> |
| <seg id="41"> Around the world, my shows air in 256 territories in 67 languages for an audience of 30 million people. </seg> |
| <seg id="42"> My brain is global, and 45 hours of that 70 hours of TV are shows I personally created and not just produced, so on top of everything else, I need to find time, real quiet, creative time, to gather my fans around the campfire and tell my stories. </seg> |
| <seg id="43"> Four television programs, 70 hours of TV, three shows in production at a time, sometimes four, 350 million dollars, campfires burning all over the world. </seg> |
| <seg id="44"> You know who else is doing that? </seg> |
| <seg id="45"> Nobody, so like I said, I'm a titan. </seg> |
| <seg id="46"> Dream job. </seg> |
| <seg id="47"> Now, I don't tell you this to impress you. </seg> |
| <seg id="48"> I tell you this because I know what you think of when you hear the word "writer." </seg> |
| <seg id="49"> I tell you this so that all of you out there who work so hard, whether you run a company or a country or a classroom or a store or a home, take me seriously when I talk about working, so you'll get that I don't peck at a computer and imagine all day, so you'll hear me when I say that I understand that a dream job is not about dreaming. </seg> |
| <seg id="50"> It's all job, all work, all reality, all blood, all sweat, no tears. </seg> |
| <seg id="51"> I work a lot, very hard, and I love it. </seg> |
| <seg id="52"> When I'm hard at work, when I'm deep in it, there is no other feeling. </seg> |
| <seg id="53"> For me, my work is at all times building a nation out of thin air. </seg> |
| <seg id="54"> It is manning the troops. It is painting a canvas. </seg> |
| <seg id="55"> It is hitting every high note. It is running a marathon. </seg> |
| <seg id="56"> It is being Beyoncé. </seg> |
| <seg id="57"> And it is all of those things at the same time. </seg> |
| <seg id="58"> I love working. </seg> |
| <seg id="59"> It is creative and mechanical and exhausting and exhilarating and hilarious and disturbing and clinical and maternal and cruel and judicious, and what makes it all so good is the hum. </seg> |
| <seg id="60"> There is some kind of shift inside me when the work gets good. </seg> |
| <seg id="61"> A hum begins in my brain, and it grows and it grows and that hum sounds like the open road, and I could drive it forever. </seg> |
| <seg id="62"> And a lot of people, when I try to explain the hum, they assume that I'm talking about the writing, that my writing brings me joy. </seg> |
| <seg id="63"> And don't get me wrong, it does. </seg> |
| <seg id="64"> But the hum -- it wasn't until I started making television that I started working, working and making and building and creating and collaborating, that I discovered this thing, this buzz, this rush, this hum. </seg> |
| <seg id="65"> The hum is more than writing. </seg> |
| <seg id="66"> The hum is action and activity. The hum is a drug. </seg> |
| <seg id="67"> The hum is music. The hum is light and air. </seg> |
| <seg id="68"> The hum is God's whisper right in my ear. </seg> |
| <seg id="69"> And when you have a hum like that, you can't help but strive for greatness. </seg> |
| <seg id="70"> That feeling, you can't help but strive for greatness at any cost. </seg> |
| <seg id="71"> That's called the hum. </seg> |
| <seg id="72"> Or, maybe it's called being a workaholic. </seg> |
| <seg id="73"> Maybe it's called genius. </seg> |
| <seg id="74"> Maybe it's called ego. </seg> |
| <seg id="75"> Maybe it's just fear of failure. </seg> |
| <seg id="76"> I don't know. </seg> |
| <seg id="77"> I just know that I'm not built for failure, and I just know that I love the hum. </seg> |
| <seg id="78"> I just know that I want to tell you I'm a titan, and I know that I don't want to question it. </seg> |
| <seg id="79"> But here's the thing: the more successful I become, the more shows, the more episodes, the more barriers broken, the more work there is to do, the more balls in the air, the more eyes on me, the more history stares, the more expectations there are. </seg> |
| <seg id="80"> The more I work to be successful, the more I need to work. </seg> |
| <seg id="81"> And what did I say about work? </seg> |
| <seg id="82"> I love working, right? </seg> |
| <seg id="83"> The nation I'm building, the marathon I'm running, the troops, the canvas, the high note, the hum, the hum, the hum. </seg> |
| <seg id="84"> I like that hum. I love that hum. </seg> |
| <seg id="85"> I need that hum. I am that hum. </seg> |
| <seg id="86"> Am I nothing but that hum? </seg> |
| <seg id="87"> And then the hum stopped. </seg> |
| <seg id="88"> Overworked, overused, overdone, burned out. </seg> |
| <seg id="89"> The hum stopped. </seg> |
| <seg id="90"> Now, my three daughters are used to the truth that their mother is a single working titan. </seg> |
| <seg id="91"> Harper tells people, "My mom won't be there, but you can text my nanny." </seg> |
| <seg id="92"> And Emerson says, "Honey, I'm wanting to go to ShondaLand." </seg> |
| <seg id="93"> They're children of a titan. </seg> |
| <seg id="94"> They're baby titans. </seg> |
| <seg id="95"> They were 12, 3, and 1 when the hum stopped. </seg> |
| <seg id="96"> The hum of the engine died. </seg> |
| <seg id="97"> I stopped loving work. I couldn't restart the engine. </seg> |
| <seg id="98"> The hum would not come back. </seg> |
| <seg id="99"> My hum was broken. </seg> |
| <seg id="100"> I was doing the same things I always did, all the same titan work, 15-hour days, working straight through the weekends, no regrets, never surrender, a titan never sleeps, a titan never quits, full hearts, clear eyes, yada, whatever. </seg> |
| <seg id="101"> But there was no hum. </seg> |
| <seg id="102"> Inside me was silence. </seg> |
| <seg id="103"> Four television programs, 70 hours of TV, three shows in production at a time, sometimes four. </seg> |
| <seg id="104"> Four television programs, 70 hours of TV, three shows in production at a time ... </seg> |
| <seg id="105"> I was the perfect titan. </seg> |
| <seg id="106"> I was a titan you could take home to your mother. </seg> |
| <seg id="107"> All the colors were the same, and I was no longer having any fun. </seg> |
| <seg id="108"> And it was my life. </seg> |
| <seg id="109"> It was all I did. </seg> |
| <seg id="110"> I was the hum, and the hum was me. </seg> |
| <seg id="111"> So what do you do when the thing you do, the work you love, starts to taste like dust? </seg> |
| <seg id="112"> Now, I know somebody's out there thinking, "Cry me a river, stupid writer titan lady." </seg> |
| <seg id="113"> But you know, you do, if you make, if you work, if you love what you do, being a teacher, being a banker, being a mother, being a painter, being Bill Gates, if you simply love another person and that gives you the hum, if you know the hum, if you know what the hum feels like, if you have been to the hum, when the hum stops, who are you? </seg> |
| <seg id="114"> What are you? </seg> |
| <seg id="115"> What am I? </seg> |
| <seg id="116"> Am I still a titan? </seg> |
| <seg id="117"> If the song of my heart ceases to play, can I survive in the silence? </seg> |
| <seg id="118"> And then my Southern waitress toddler asks me a question. </seg> |
| <seg id="119"> I'm on my way out the door, I'm late, and she says, "Momma, wanna play?" </seg> |
| <seg id="120"> And I'm just about to say no, when I realize two things. </seg> |
| <seg id="121"> One, I'm supposed to say yes to everything, and two, my Southern waitress didn't call me "honey." </seg> |
| <seg id="122"> She's not calling everyone "honey" anymore. </seg> |
| <seg id="123"> When did that happen? </seg> |
| <seg id="124"> I'm missing it, being a titan and mourning my hum, and here she is changing right before my eyes. </seg> |
| <seg id="125"> And so she says, "Momma, wanna play?" </seg> |
| <seg id="126"> And I say, "Yes." </seg> |
| <seg id="127"> There's nothing special about it. </seg> |
| <seg id="128"> We play, and we're joined by her sisters, and there's a lot of laughing, and I give a dramatic reading from the book Everybody Poops. </seg> |
| <seg id="129"> Nothing out of the ordinary. </seg> |
| <seg id="130"> And yet, it is extraordinary, because in my pain and my panic, in the homelessness of my humlessness, I have nothing to do but pay attention. </seg> |
| <seg id="131"> I focus. </seg> |
| <seg id="132"> I am still. </seg> |
| <seg id="133"> The nation I'm building, the marathon I'm running, the troops, the canvas, the high note does not exist. </seg> |
| <seg id="134"> All that exists are sticky fingers and gooey kisses and tiny voices and crayons and that song about letting go of whatever it is that Frozen girl needs to let go of. </seg> |
| <seg id="135"> It's all peace and simplicity. </seg> |
| <seg id="136"> The air is so rare in this place for me that I can barely breathe. </seg> |
| <seg id="137"> I can barely believe I'm breathing. </seg> |
| <seg id="138"> Play is the opposite of work. </seg> |
| <seg id="139"> And I am happy. </seg> |
| <seg id="140"> Something in me loosens. </seg> |
| <seg id="141"> A door in my brain swings open, and a rush of energy comes. </seg> |
| <seg id="142"> And it's not instantaneous, but it happens, it does happen. </seg> |
| <seg id="143"> I feel it. </seg> |
| <seg id="144"> A hum creeps back. </seg> |
| <seg id="145"> Not at full volume, barely there, it's quiet, and I have to stay very still to hear it, but it is there. </seg> |
| <seg id="146"> Not the hum, but a hum. </seg> |
| <seg id="147"> And now I feel like I know a very magical secret. </seg> |
| <seg id="148"> Well, let's not get carried away. </seg> |
| <seg id="149"> It's just love. That's all it is. </seg> |
| <seg id="150"> No magic. No secret. It's just love. </seg> |
| <seg id="151"> It's just something we forgot. </seg> |
| <seg id="152"> The hum, the work hum, the hum of the titan, that's just a replacement. </seg> |
| <seg id="153"> If I have to ask you who I am, if I have to tell you who I am, if I describe myself in terms of shows and hours of television and how globally badass my brain is, I have forgotten what the real hum is. </seg> |
| <seg id="154"> The hum is not power and the hum is not work-specific. </seg> |
| <seg id="155"> The hum is joy-specific. </seg> |
| <seg id="156"> The real hum is love-specific. </seg> |
| <seg id="157"> The hum is the electricity that comes from being excited by life. </seg> |
| <seg id="158"> The real hum is confidence and peace. </seg> |
| <seg id="159"> The real hum ignores the stare of history, and the balls in the air, and the expectation, and the pressure. </seg> |
| <seg id="160"> The real hum is singular and original. </seg> |
| <seg id="161"> The real hum is God's whisper in my ear, but maybe God was whispering the wrong words, because which one of the gods was telling me I was the titan? </seg> |
| <seg id="162"> It's just love. </seg> |
| <seg id="163"> We could all use a little more love, a lot more love. </seg> |
| <seg id="164"> Any time my child asks me to play, I will say yes. </seg> |
| <seg id="165"> I make it a firm rule for one reason, to give myself permission, to free me from all of my workaholic guilt. </seg> |
| <seg id="166"> It's a law, so I don't have a choice, and I don't have a choice, not if I want to feel the hum. </seg> |
| <seg id="167"> I wish it were that easy, but I'm not good at playing. </seg> |
| <seg id="168"> I'm not interested in doing it the way I'm interested in doing work. </seg> |
| <seg id="169"> The truth is incredibly humbling and humiliating to face. </seg> |
| <seg id="170"> I don't like playing. </seg> |
| <seg id="171"> I work all the time because I like working. </seg> |
| <seg id="172"> I like working more than I like being at home. </seg> |
| <seg id="173"> Facing that fact is incredibly difficult to handle, because what kind of person likes working more than being at home? </seg> |
| <seg id="174"> Well, me. </seg> |
| <seg id="175"> I mean, let's be honest, I call myself a titan. </seg> |
| <seg id="176"> I've got issues. </seg> |
| <seg id="177"> And one of those issues isn't that I am too relaxed. </seg> |
| <seg id="178"> We run around the yard, up and back and up and back. </seg> |
| <seg id="179"> We have 30-second dance parties. </seg> |
| <seg id="180"> We sing show tunes. We play with balls. </seg> |
| <seg id="181"> I blow bubbles and they pop them. </seg> |
| <seg id="182"> And I feel stiff and delirious and confused most of the time. </seg> |
| <seg id="183"> I itch for my cell phone always. </seg> |
| <seg id="184"> But it is OK. </seg> |
| <seg id="185"> My tiny humans show me how to live and the hum of the universe fills me up. </seg> |
| <seg id="186"> I play and I play until I begin to wonder why we ever stop playing in the first place. </seg> |
| <seg id="187"> You can do it too, say yes every time your child asks you to play. </seg> |
| <seg id="188"> Are you thinking that maybe I'm an idiot in diamond shoes? </seg> |
| <seg id="189"> You're right, but you can still do this. </seg> |
| <seg id="190"> You have time. </seg> |
| <seg id="191"> You know why? Because you're not Rihanna and you're not a Muppet. </seg> |
| <seg id="192"> Your child does not think you're that interesting. </seg> |
| <seg id="193"> You only need 15 minutes. </seg> |
| <seg id="194"> My two- and four-year-old only ever want to play with me for about 15 minutes or so before they think to themselves they want to do something else. </seg> |
| <seg id="195"> It's an amazing 15 minutes, but it's 15 minutes. </seg> |
| <seg id="196"> If I'm not a ladybug or a piece of candy, I'm invisible after 15 minutes. </seg> |
| <seg id="197"> And my 13-year-old, if I can get a 13-year-old to talk to me for 15 minutes I'm Parent of the Year. </seg> |
| <seg id="198"> 15 minutes is all you need. </seg> |
| <seg id="199"> I can totally pull off 15 minutes of uninterrupted time on my worst day. </seg> |
| <seg id="200"> Uninterrupted is the key. </seg> |
| <seg id="201"> No cell phone, no laundry, no anything. </seg> |
| <seg id="202"> You have a busy life. You have to get dinner on the table. </seg> |
| <seg id="203"> You have to force them to bathe. But you can do 15 minutes. </seg> |
| <seg id="204"> My kids are my happy place, they're my world, but it doesn't have to be your kids, the fuel that feeds your hum, the place where life feels more good than not good. </seg> |
| <seg id="205"> It's not about playing with your kids, it's about joy. </seg> |
| <seg id="206"> It's about playing in general. </seg> |
| <seg id="207"> Give yourself the 15 minutes. </seg> |
| <seg id="208"> Find what makes you feel good. </seg> |
| <seg id="209"> Just figure it out and play in that arena. </seg> |
| <seg id="210"> I'm not perfect at it. In fact, I fail as often as I succeed, seeing friends, reading books, staring into space. </seg> |
| <seg id="211"> "Wanna play?" starts to become shorthand for indulging myself in ways I'd given up on right around the time I got my first TV show, right around the time I became a titan-in-training, right around the time I started competing with myself for ways unknown. </seg> |
| <seg id="212"> 15 minutes? What could be wrong with giving myself my full attention for 15 minutes? </seg> |
| <seg id="213"> Turns out, nothing. </seg> |
| <seg id="214"> The very act of not working has made it possible for the hum to return, as if the hum's engine could only refuel while I was away. </seg> |
| <seg id="215"> Work doesn't work without play. </seg> |
| <seg id="216"> It takes a little time, but after a few months, one day the floodgates open and there's a rush, and I find myself standing in my office filled with an unfamiliar melody, full on groove inside me, and around me, and it sends me spinning with ideas, and the humming road is open, and I can drive it and drive it, and I love working again. </seg> |
| <seg id="217"> But now, I like that hum, but I don't love that hum. </seg> |
| <seg id="218"> I don't need that hum. </seg> |
| <seg id="219"> I am not that hum. That hum is not me, not anymore. </seg> |
| <seg id="220"> I am bubbles and sticky fingers and dinners with friends. </seg> |
| <seg id="221"> I am that hum. </seg> |
| <seg id="222"> Life's hum. </seg> |
| <seg id="223"> Love's hum. </seg> |
| <seg id="224"> Work's hum is still a piece of me, it is just no longer all of me, and I am so grateful. </seg> |
| <seg id="225"> And I don't give a crap about being a titan, because I have never once seen a titan play Red Rover, Red Rover. </seg> |
| <seg id="226"> I said yes to less work and more play, and somehow I still run my world. </seg> |
| <seg id="227"> My brain is still global. My campfires still burn. </seg> |
| <seg id="228"> The more I play, the happier I am, and the happier my kids are. </seg> |
| <seg id="229"> The more I play, the more I feel like a good mother. </seg> |
| <seg id="230"> The more I play, the freer my mind becomes. </seg> |
| <seg id="231"> The more I play, the better I work. </seg> |
| <seg id="232"> The more I play, the more I feel the hum, the nation I'm building, the marathon I'm running, the troops, the canvas, the high note, the hum, the hum, the other hum, the real hum, life's hum. </seg> |
| <seg id="233"> The more I feel that hum, the more this strange, quivering, uncocooned, awkward, brand new, alive non-titan feels like me. </seg> |
| <seg id="234"> The more I feel that hum, the more I know who I am. </seg> |
| <seg id="235"> I'm a writer, I make stuff up, I imagine. </seg> |
| <seg id="236"> That part of the job, that's living the dream. </seg> |
| <seg id="237"> That's the dream of the job. </seg> |
| <seg id="238"> Because a dream job should be a little bit dreamy. </seg> |
| <seg id="239"> I said yes to less work and more play. </seg> |
| <seg id="240"> Titans need not apply. </seg> |
| <seg id="241"> Wanna play? </seg> |
| <seg id="242"> Thank you. </seg> |
| </doc> |
| <doc docid="2429" genre="lectures"> |
| <url>http://www.ted.com/talks/jocelyne_bloch_the_brain_may_be_able_to_repair_itself_with_help</url> |
| <description>TED Talk Subtitles and Transcript: Through treating everything from strokes to car accident traumas, neurosurgeon Jocelyne Bloch knows the brain's inability to repair itself all too well. But now, she suggests, she and her colleagues may have found the key to neural repair: Doublecortin-positive cells. Similar to stem cells, they are extremely adaptable and, when extracted from a brain, cultured and then re-injected in a lesioned area of the same brain, they can help repair and rebuild it. "With a little help," Bloch says, "the brain may be able to help itself."</description> |
| <keywords>talks, Surgery, brain, health, medical research, medicine, mind, neuroscience, science</keywords> |
| <talkid>2429</talkid> |
| <title>Jocelyne Bloch: The brain may be able to repair itself -- with help</title> |
| <reviewer></reviewer> |
| <translator></translator> |
| <seg id="1"> So I'm a neurosurgeon. </seg> |
| <seg id="2"> And like most of my colleagues, I have to deal, every day, with human tragedies. </seg> |
| <seg id="3"> I realize how your life can change from one second to the other after a major stroke or after a car accident. </seg> |
| <seg id="4"> And what is very frustrating for us neurosurgeons is to realize that unlike other organs of the body, the brain has very little ability for self-repair. </seg> |
| <seg id="5"> And after a major injury of your central nervous system, the patients often remain with a severe handicap. </seg> |
| <seg id="6"> And that's probably the reason why I've chosen to be a functional neurosurgeon. </seg> |
| <seg id="7"> What is a functional neurosurgeon? </seg> |
| <seg id="8"> It's a doctor who is trying to improve a neurological function through different surgical strategies. </seg> |
| <seg id="9"> You've certainly heard of one of the famous ones called deep brain stimulation, where you implant an electrode in the depths of the brain in order to modulate a circuit of neurons to improve a neurological function. </seg> |
| <seg id="10"> It's really an amazing technology in that it has improved the destiny of patients with Parkinson's disease, with severe tremor, with severe pain. </seg> |
| <seg id="11"> However, neuromodulation does not mean neuro-repair. </seg> |
| <seg id="12"> And the dream of functional neurosurgeons is to repair the brain. </seg> |
| <seg id="13"> I think that we are approaching this dream. </seg> |
| <seg id="14"> And I would like to show you that we are very close to this. </seg> |
| <seg id="15"> And that with a little bit of help, the brain is able to help itself. </seg> |
| <seg id="16"> So the story started 15 years ago. </seg> |
| <seg id="17"> At that time, I was a chief resident working days and nights in the emergency room. </seg> |
| <seg id="18"> I often had to take care of patients with head trauma. </seg> |
| <seg id="19"> You have to imagine that when a patient comes in with a severe head trauma, his brain is swelling and he's increasing his intracranial pressure. </seg> |
| <seg id="20"> And in order to save his life, you have to decrease this intracranial pressure. </seg> |
| <seg id="21"> you sometimes have to remove a piece of swollen brain. </seg> |
| <seg id="22"> So instead of throwing away these pieces of swollen brain, we decided with Jean-François Brunet, who is a colleague of mine, a biologist, to study them. </seg> |
| <seg id="23"> What do I mean by that? </seg> |
| <seg id="24"> We wanted to grow cells from these pieces of tissue. </seg> |
| <seg id="25"> It's not an easy task. </seg> |
| <seg id="26"> Growing cells from a piece of tissue is a bit the same as growing very small children out from their family. </seg> |
| <seg id="27"> So you need to find the right nutrients, the warmth, the humidity and all the nice environments to make them thrive. </seg> |
| <seg id="28"> So that's exactly what we had to do with these cells. </seg> |
| <seg id="29"> And after many attempts, Jean-François did it. </seg> |
| <seg id="30"> And that's what he saw under his microscope. </seg> |
| <seg id="31"> And that was, for us, a major surprise. </seg> |
| <seg id="32"> Why? </seg> |
| <seg id="33"> Because this looks exactly the same as a stem cell culture, with large green cells surrounding small, immature cells. </seg> |
| <seg id="34"> And you may remember from biology class that stem cells are immature cells, able to turn into any type of cell of the body. </seg> |
| <seg id="35"> The adult brain has stem cells, but they're very rare and they're located in deep and small niches in the depths of the brain. </seg> |
| <seg id="36"> So it was surprising to get this kind of stem cell culture from the superficial part of swollen brain we had in the operating theater. </seg> |
| <seg id="37"> And there was another intriguing observation: Regular stem cells are very active cells -- cells that divide, divide, divide very quickly. </seg> |
| <seg id="38"> And they never die, they're immortal cells. </seg> |
| <seg id="39"> But these cells behave differently. </seg> |
| <seg id="40"> They divide slowly, and after a few weeks of culture, they even died. </seg> |
| <seg id="41"> So we were in front of a strange new cell population that looked like stem cells but behaved differently. </seg> |
| <seg id="42"> And it took us a long time to understand where they came from. </seg> |
| <seg id="43"> They come from these cells. </seg> |
| <seg id="44"> These blue and red cells are called doublecortin-positive cells. </seg> |
| <seg id="45"> All of you have them in your brain. </seg> |
| <seg id="46"> They represent four percent of your cortical brain cells. </seg> |
| <seg id="47"> They have a very important role during the development stage. </seg> |
| <seg id="48"> When you were fetuses, they helped your brain to fold itself. </seg> |
| <seg id="49"> But why do they stay in your head? </seg> |
| <seg id="50"> This, we don't know. </seg> |
| <seg id="51"> We think that they may participate in brain repair because we find them in higher concentration close to brain lesions. </seg> |
| <seg id="52"> But it's not so sure. </seg> |
| <seg id="53"> But there is one clear thing -- that from these cells, we got our stem cell culture. </seg> |
| <seg id="54"> And we were in front of a potential new source of cells to repair the brain. </seg> |
| <seg id="55"> And we had to prove this. </seg> |
| <seg id="56"> we decided to design an experimental paradigm. </seg> |
| <seg id="57"> The idea was to biopsy a piece of brain in a non-eloquent area of the brain, and then to culture the cells exactly the way Jean-François did it in his lab. </seg> |
| <seg id="58"> And then label them, to put color in them in order to be able to track them in the brain. </seg> |
| <seg id="59"> And the last step was to re-implant them in the same individual. </seg> |
| <seg id="60"> We call these autologous grafts -- autografts. </seg> |
| <seg id="61"> So the first question we had, "What will happen if we re-implant these cells in a normal brain, and what will happen if we re-implant the same cells in a lesioned brain?" </seg> |
| <seg id="62"> Thanks to the help of professor Eric Rouiller, we worked with monkeys. </seg> |
| <seg id="63"> So in the first-case scenario, we re-implanted the cells in the normal brain and what we saw is that they completely disappeared after a few weeks, as if they were taken from the brain, they go back home, the space is already busy, they are not needed there, so they disappear. </seg> |
| <seg id="64"> In the second-case scenario, we performed the lesion, we re-implanted exactly the same cells, and in this case, the cells remained -- and they became mature neurons. </seg> |
| <seg id="65"> And that's the image of what we could observe under the microscope. </seg> |
| <seg id="66"> Those are the cells that were re-implanted. </seg> |
| <seg id="67"> And the proof they carry, these little spots, those are the cells that we've labeled in vitro, when they were in culture. </seg> |
| <seg id="68"> But we could not stop here, of course. </seg> |
| <seg id="69"> Do these cells also help a monkey to recover after a lesion? </seg> |
| <seg id="70"> So for that, we trained monkeys to perform a manual dexterity task. </seg> |
| <seg id="71"> They had to retrieve food pellets from a tray. </seg> |
| <seg id="72"> They were very good at it. </seg> |
| <seg id="73"> And when they had reached a plateau of performance, we did a lesion in the motor cortex corresponding to the hand motion. </seg> |
| <seg id="74"> So the monkeys were plegic, they could not move their hand anymore. </seg> |
| <seg id="75"> And exactly the same as humans would do, they spontaneously recovered to a certain extent, exactly the same as after a stroke. </seg> |
| <seg id="76"> Patients are completely plegic, and then they try to recover due to a brain plasticity mechanism, they recover to a certain extent, exactly the same for the monkey. </seg> |
| <seg id="77"> So when we were sure that the monkey had reached his plateau of spontaneous recovery, we implanted his own cells. </seg> |
| <seg id="78"> So on the left side, you see the monkey that has spontaneously recovered. </seg> |
| <seg id="79"> He's at about 40 to 50 percent of his previous performance before the lesion. </seg> |
| <seg id="80"> He's not so accurate, not so quick. </seg> |
| <seg id="81"> And look now, when we re-impant the cells: Two months after re-implantation, the same individual. </seg> |
| <seg id="82"> It was also very exciting results for us, I tell you. </seg> |
| <seg id="83"> Since that time, we've understood much more about these cells. </seg> |
| <seg id="84"> We know that we can cryopreserve them, we can use them later on. </seg> |
| <seg id="85"> We know that we can apply them in other neuropathological models, like Parkinson's disease, for example. </seg> |
| <seg id="86"> But our dream is still to implant them in humans. </seg> |
| <seg id="87"> And I really hope that I'll be able to show you soon that the human brain is giving us the tools to repair itself. </seg> |
| <seg id="88"> Thank you. </seg> |
| <seg id="89"> Bruno Giussani: Jocelyne, this is amazing, and I'm sure that right now, there are several dozen people in the audience, possibly even a majority, who are thinking, "I know somebody who can use this." </seg> |
| <seg id="90"> I do, in any case. </seg> |
| <seg id="91"> And of course the question is, what are the biggest obstacles before you can go into human clinical trials? </seg> |
| <seg id="92"> Jocelyne Bloch: The biggest obstacles are regulations. So, from these exciting results, you need to fill out about two kilograms of papers and forms to be able to go through these kind of trials. </seg> |
| <seg id="93"> BG: Which is understandable, the brain is delicate, etc. </seg> |
| <seg id="94"> JB: Yes, it is, but it takes a long time and a lot of patience and almost a professional team to do it, you know? </seg> |
| <seg id="95"> BG: If you project yourself -- having done the research and having tried to get permission to start the trials, if you project yourself out in time, how many years before somebody gets into a hospital and this therapy is available? </seg> |
| <seg id="96"> JB: So, it's very difficult to say. </seg> |
| <seg id="97"> It depends, first, on the approval of the trial. </seg> |
| <seg id="98"> Will the regulation allow us to do it soon? </seg> |
| <seg id="99"> And then, you have to perform this kind of study in a small group of patients. </seg> |
| <seg id="100"> So it takes, already, a long time to select the patients, do the treatment and evaluate if it's useful to do this kind of treatment. </seg> |
| <seg id="101"> And then you have to deploy this to a multicentric trial. </seg> |
| <seg id="102"> You have to really prove first that it's useful before offering this treatment up for everybody. </seg> |
| <seg id="103"> BG: And safe, of course. JB: Of course. </seg> |
| <seg id="104"> BG: Jocelyne, thank you for coming to TED and sharing this. </seg> |
| <seg id="105"> BG: Thank you. </seg> |
| </doc> |
| <doc docid="2413" genre="lectures"> |
| <url>http://www.ted.com/talks/yanis_varoufakis_capitalism_will_eat_democracy_unless_we_speak_up</url> |
| <description>TED Talk Subtitles and Transcript: Have you wondered why politicians aren't what they used to be, why governments seem unable to solve real problems? Economist Yanis Varoufakis, the former Minister of Finance for Greece, says that it's because you can be in politics today but not be in power -- because real power now belongs to those who control the economy. He believes that the mega-rich and corporations are cannibalizing the political sphere, causing financial crisis. In this talk, hear his dream for a world in which capital and labor no longer struggle against each other, "one that is simultaneously libertarian, Marxist and Keynesian."</description> |
| <keywords>talks, Europe, United States, activism, big problems, business, capitalism, democracy, economics, finance, global issues, government, investment, leadership, money, politics, society</keywords> |
| <talkid>2413</talkid> |
| <title>Yanis Varoufakis: Capitalism will eat democracy -- unless we speak up</title> |
| <reviewer></reviewer> |
| <translator></translator> |
| <seg id="1"> Democracy. </seg> |
| <seg id="2"> In the West, we make a colossal mistake taking it for granted. </seg> |
| <seg id="3"> We see democracy not as the most fragile of flowers that it really is, but we see it as part of our society's furniture. </seg> |
| <seg id="4"> We tend to think of it as an intransigent given. </seg> |
| <seg id="5"> We mistakenly believe that capitalism begets inevitably democracy. </seg> |
| <seg id="6"> It doesn't. </seg> |
| <seg id="7"> Singapore's Lee Kuan Yew and his great imitators in Beijing that it is perfectly possible to have a flourishing capitalism, spectacular growth, while politics remains democracy-free. </seg> |
| <seg id="8"> Indeed, democracy is receding in our neck of the woods, here in Europe. </seg> |
| <seg id="9"> Earlier this year, while I was representing Greece -- the newly elected Greek government -- in the Eurogroup as its Finance Minister, I was told in no uncertain terms that our nation's democratic process -- our elections -- could not be allowed to interfere with economic policies that were being implemented in Greece. </seg> |
| <seg id="10"> At that moment, I felt that there could be no greater vindication of Lee Kuan Yew, or the Chinese Communist Party, indeed of some recalcitrant friends of mine who kept telling me that democracy would be banned if it ever threatened to change anything. </seg> |
| <seg id="11"> Tonight, here, I want to present to you an economic case for an authentic democracy. </seg> |
| <seg id="12"> I want to ask you to join me in believing again that Lee Kuan Yew, the Chinese Communist Party and indeed the Eurogroup are wrong in believing that we can dispense with democracy -- that we need an authentic, boisterous democracy. </seg> |
| <seg id="13"> And without democracy, our societies will be nastier, our future bleak and our great, new technologies wasted. </seg> |
| <seg id="14"> Speaking of waste, allow me to point out an interesting paradox that is threatening our economies as we speak. </seg> |
| <seg id="15"> I call it the twin peaks paradox. </seg> |
| <seg id="16"> One peak you understand -- you know it, you recognize it -- is the mountain of debts that has been casting a long shadow over the United States, Europe, the whole world. </seg> |
| <seg id="17"> We all recognize the mountain of debts. </seg> |
| <seg id="18"> But few people discern its twin. </seg> |
| <seg id="19"> A mountain of idle cash belonging to rich savers and to corporations, too terrified to invest it into the productive activities that can generate the incomes from which you can extinguish the mountain of debts and which can produce all those things that humanity desperately needs, like green energy. </seg> |
| <seg id="20"> Now let me give you two numbers. </seg> |
| <seg id="21"> Over the last three months, in the United States, in Britain and in the Eurozone, we have invested, collectively, 3.4 trillion dollars on all the wealth-producing goods -- things like industrial plants, machinery, office blocks, schools, roads, railways, machinery, and so on and so forth. </seg> |
| <seg id="22"> $3.4 trillion sounds like a lot of money until you compare it to the $5.1 trillion that has been slushing around in the same countries, in our financial institutions, doing absolutely nothing during the same period except inflating stock exchanges and bidding up house prices. </seg> |
| <seg id="23"> So a mountain of debt and a mountain of idle cash form twin peaks, failing to cancel each other out through the normal operation of the markets. </seg> |
| <seg id="24"> The result is stagnant wages, more than a quarter of 25- to 54-year-olds in America, in Japan and in Europe And consequently, low aggregate demand, which in a never-ending cycle, reinforces the pessimism of the investors, who, fearing low demand, reproduce it by not investing -- exactly like Oedipus' father, who, terrified by the prophecy of the oracle that his son would grow up to kill him, </seg> |
| <seg id="25"> unwittingly engineered the conditions that ensured that Oedipus, his son, would kill him. </seg> |
| <seg id="26"> This is my quarrel with capitalism. </seg> |
| <seg id="27"> Its gross wastefulness, all this idle cash, should be energized to improve lives, to develop human talents, and indeed to finance all these technologies, green technologies, which are absolutely essential for saving planet Earth. </seg> |
| <seg id="28"> Am I right in believing that democracy might be the answer? </seg> |
| <seg id="29"> I believe so, but before we move on, what do we mean by democracy? </seg> |
| <seg id="30"> Aristotle defined democracy as the constitution in which the free and the poor, being in the majority, control government. </seg> |
| <seg id="31"> Now, of course Athenian democracy excluded too many. </seg> |
| <seg id="32"> Women, migrants and, of course, the slaves. </seg> |
| <seg id="33"> But it would be a mistake to dismiss the significance of ancient Athenian democracy on the basis of whom it excluded. </seg> |
| <seg id="34"> What was more pertinent, and continues to be so about ancient Athenian democracy, was the inclusion of the working poor, who not only acquired the right to free speech, but more importantly, crucially, they acquired the rights to political judgments that were afforded equal weight in the decision-making concerning matters of state. </seg> |
| <seg id="35"> Now, of course, Athenian democracy didn't last long. </seg> |
| <seg id="36"> Like a candle that burns brightly, it burned out quickly. </seg> |
| <seg id="37"> And indeed, our liberal democracies today do not have their roots in ancient Athens. </seg> |
| <seg id="38"> They have their roots in the Magna Carta, in the 1688 Glorious Revolution, indeed in the American constitution. </seg> |
| <seg id="39"> Whereas Athenian democracy was focusing on the masterless citizen and empowering the working poor, our liberal democracies are founded on the Magna Carta tradition, which was, after all, a charter for masters. </seg> |
| <seg id="40"> And indeed, liberal democracy only surfaced when it was possible to separate fully the political sphere from the economic sphere, so as to confine the democratic process fully in the political sphere, leaving the economic sphere -- the corporate world, if you want -- as a democracy-free zone. </seg> |
| <seg id="41"> Now, in our democracies today, this separation of the economic from the political sphere, it gave rise to an inexorable, epic struggle between the two, with the economic sphere colonizing the political sphere, eating into its power. </seg> |
| <seg id="42"> Have you wondered why politicians are not what they used to be? </seg> |
| <seg id="43"> It's not because their DNA has degenerated. </seg> |
| <seg id="44"> It is rather because one can be in government today and not in power, because power has migrated from the political to the economic sphere, which is separate. </seg> |
| <seg id="45"> I spoke about my quarrel with capitalism. </seg> |
| <seg id="46"> If you think about it, it is a little bit like a population of predators, that are so successful in decimating the prey that they must feed on, that in the end they starve. </seg> |
| <seg id="47"> Similarly, the economic sphere has been colonizing and cannibalizing the political sphere to such an extent that it is undermining itself, Corporate power is increasing, political goods are devaluing, inequality is rising, aggregate demand is falling and CEOs of corporations are too scared to invest the cash of their corporations. </seg> |
| <seg id="48"> So the more capitalism succeeds in taking the demos out of democracy, the taller the twin peaks and the greater the waste of human resources and humanity's wealth. </seg> |
| <seg id="49"> Clearly, if this is right, we must reunite the political and economic spheres and better do it with a demos being in control, like in ancient Athens except without the slaves or the exclusion of women and migrants. </seg> |
| <seg id="50"> Now, this is not an original idea. </seg> |
| <seg id="51"> The Marxist left had that idea 100 years ago and it didn't go very well, did it? </seg> |
| <seg id="52"> The lesson that we learned from the Soviet debacle is that only by a miracle will the working poor be reempowered, as they were in ancient Athens, without creating new forms of brutality and waste. </seg> |
| <seg id="53"> But there is a solution: eliminate the working poor. </seg> |
| <seg id="54"> Capitalism's doing it by replacing low-wage workers with automata, androids, robots. </seg> |
| <seg id="55"> The problem is that as long as the economic and the political spheres are separate, automation makes the twin peaks taller, the waste loftier and the social conflicts deeper, including -- soon, I believe -- in places like China. </seg> |
| <seg id="56"> So we need to reconfigure, we need to reunite the economic and the political spheres, but we'd better do it by democratizing the reunified sphere, lest we end up with a surveillance-mad hyperautocracy that makes The Matrix, the movie, look like a documentary. </seg> |
| <seg id="57"> So the question is not whether capitalism will survive the technological innovations it is spawning. </seg> |
| <seg id="58"> The more interesting question is whether capitalism will be succeeded by something resembling a Matrix dystopia or something much closer to a Star Trek-like society, where machines serve the humans and the humans expend their energies exploring the universe and indulging in long debates about the meaning of life in some ancient, Athenian-like, high tech agora. </seg> |
| <seg id="59"> I think we can afford to be optimistic. </seg> |
| <seg id="60"> But what would it take, what would it look like to have this Star Trek-like utopia, instead of the Matrix-like dystopia? </seg> |
| <seg id="61"> In practical terms, allow me to share just briefly, a couple of examples. </seg> |
| <seg id="62"> At the level of the enterprise, imagine a capital market, where you earn capital as you work, and where your capital follows you from one job to another, from one company to another, and the company -- whichever one you happen to work at at that time -- is solely owned by those who happen to work in it at that moment. </seg> |
| <seg id="63"> Then all income stems from capital, from profits, and the very concept of wage labor becomes obsolete. </seg> |
| <seg id="64"> No more separation between those who own but do not work in the company and those who work but do not own the company; no more tug-of-war between capital and labor; no great gap between investment and saving; indeed, no towering twin peaks. </seg> |
| <seg id="65"> At the level of the global political economy, imagine for a moment that our national currencies have a free-floating exchange rate, with a universal, global, digital currency, one that is issued by the International Monetary Fund, on behalf of all humanity. </seg> |
| <seg id="66"> And imagine further that all international trade is denominated in this currency -- let's call it "the cosmos," in units of cosmos -- with every government agreeing to be paying into a common fund a sum of cosmos units proportional to the country's trade deficit, or indeed to a country's trade surplus. </seg> |
| <seg id="67"> And imagine that that fund is utilized to invest in green technologies, especially in parts of the world where investment funding is scarce. </seg> |
| <seg id="68"> This is not a new idea. </seg> |
| <seg id="69"> It's what, effectively, John Maynard Keynes proposed in 1944 at the Bretton Woods Conference. </seg> |
| <seg id="70"> The problem is that back then, they didn't have the technology to implement it. </seg> |
| <seg id="71"> Now we do, especially in the context of a reunified political-economic sphere. </seg> |
| <seg id="72"> The world that I am describing to you is simultaneously libertarian, in that it prioritizes empowered individuals, Marxist, since it will have confined to the dustbin of history the division between capital and labor, and Keynesian, global Keynesian. </seg> |
| <seg id="73"> But above all else, it is a world in which we will be able to imagine an authentic democracy. </seg> |
| <seg id="74"> Will such a world dawn? </seg> |
| <seg id="75"> Or shall we descend into a Matrix-like dystopia? </seg> |
| <seg id="76"> The answer lies in the political choice that we shall be making collectively. </seg> |
| <seg id="77"> It is our choice, and we'd better make it democratically. </seg> |
| <seg id="78"> Thank you. </seg> |
| <seg id="79"> Bruno Giussani: Yanis ... </seg> |
| <seg id="80"> It was you who described yourself in your bios as a libertarian Marxist. </seg> |
| <seg id="81"> What is the relevance of Marx's analysis today? </seg> |
| <seg id="82"> Yanis Varoufakis: Well, if there was any relevance in what I just said, then Marx is relevant. </seg> |
| <seg id="83"> Because the whole point of reunifying the political and economic is -- if we don't do it, then technological innovation is going to create such a massive fall in aggregate demand, what Larry Summers refers to as secular stagnation. </seg> |
| <seg id="84"> With this crisis migrating from one part of the world, as it is now, it will destabilize not only our democracies, but even the emerging world that is not that keen on liberal democracy. </seg> |
| <seg id="85"> So if this analysis holds water, then Marx is absolutely relevant. </seg> |
| <seg id="86"> But so is Hayek, that's why I'm a libertarian Marxist, and so is Keynes, so that's why I'm totally confused. </seg> |
| <seg id="87"> BG: Indeed, and possibly we are too, now. </seg> |
| <seg id="88"> YV: If you are not confused, you are not thinking, OK? </seg> |
| <seg id="89"> BG: That's a very, very Greek philosopher kind of thing to say -- YV: That was Einstein, actually -- BG: During your talk you mentioned Singapore and China, and last night at the speaker dinner, you expressed a pretty strong opinion about how the West looks at China. </seg> |
| <seg id="90"> Would you like to share that? </seg> |
| <seg id="91"> YV: Well, there's a great degree of hypocrisy. </seg> |
| <seg id="92"> In our liberal democracies, we have a semblance of democracy. </seg> |
| <seg id="93"> It's because we have confined, as I was saying in my talk, democracy to the political sphere, while leaving the one sphere where all the action is -- the economic sphere -- a completely democracy-free zone. </seg> |
| <seg id="94"> In a sense, if I am allowed to be provocative, China today is closer to Britain in the 19th century. </seg> |
| <seg id="95"> Because remember, we tend to associate liberalism with democracy -- that's a mistake, historically. </seg> |
| <seg id="96"> Liberalism, liberal, it's like John Stuart Mill. </seg> |
| <seg id="97"> John Stuart Mill was particularly skeptical about the democratic process. </seg> |
| <seg id="98"> So what you are seeing now in China is a very similar process to the one that we had in Britain during the Industrial Revolution, especially the transition from the first to the second. </seg> |
| <seg id="99"> And to be castigating China for doing that which the West did in the 19th century, smacks of hypocrisy. </seg> |
| <seg id="100"> BG: I am sure that many people here are wondering about your experience as the Finance Minister of Greece earlier this year. </seg> |
| <seg id="101"> YV: I knew this was coming. </seg> |
| <seg id="102"> BG: Yes. </seg> |
| <seg id="103"> BG: Six months after, how do you look back at the first half of the year? </seg> |
| <seg id="104"> YV: Extremely exciting, from a personal point of view, and very disappointing, because we had an opportunity to reboot the Eurozone. </seg> |
| <seg id="105"> Not just Greece, the Eurozone. </seg> |
| <seg id="106"> To move away from the complacency and the constant denial that there was a massive -- and there is a massive architectural fault line going through the Eurozone, which is threatening, massively, the whole of the European Union process. </seg> |
| <seg id="107"> We had an opportunity on the basis of the Greek program -- was the first program to manifest that denial -- to put it right. </seg> |
| <seg id="108"> And, unfortunately, the powers in the Eurozone, in the Eurogroup, chose to maintain denial. </seg> |
| <seg id="109"> But you know what happens. </seg> |
| <seg id="110"> This is the experience of the Soviet Union. </seg> |
| <seg id="111"> When you try to keep alive an economic system that architecturally cannot survive, through political will and through authoritarianism, you may succeed in prolonging it, but when change happens it happens very abruptly and catastrophically. </seg> |
| <seg id="112"> BG: What kind of change are you foreseeing? </seg> |
| <seg id="113"> YV: Well, there's no doubt that if we don't change the architecture of the Eurozone, the Eurozone has no future. </seg> |
| <seg id="114"> BG: Did you make any mistakes when you were Finance Minister? </seg> |
| <seg id="115"> YV: Every day. </seg> |
| <seg id="116"> BG: For example? YV: Anybody who looks back -- No, but seriously. </seg> |
| <seg id="117"> If there's any Minister of Finance, or of anything else for that matter, who tells you after six months in a job, especially in such a stressful situation, that they have made no mistake, they're dangerous people. </seg> |
| <seg id="118"> Of course I made mistakes. </seg> |
| <seg id="119"> The greatest mistake was to sign the application for the extension of a loan agreement in the end of February. </seg> |
| <seg id="120"> I was imagining that there was a genuine interest on the side of the creditors to find common ground. </seg> |
| <seg id="121"> And there wasn't. </seg> |
| <seg id="122"> They were simply interested in crushing our government, just because they did not want to have to deal with the architectural fault lines that were running through the Eurozone. </seg> |
| <seg id="123"> And because they didn't want to admit that for five years they were implementing a catastrophic program in Greece. </seg> |
| <seg id="124"> We lost one-third of our nominal GDP. </seg> |
| <seg id="125"> This is worse than the Great Depression. </seg> |
| <seg id="126"> And no one has come clean from the troika of lenders that have been imposing this policy to say, "This was a colossal mistake." </seg> |
| <seg id="127"> BG: Despite all this, and despite the aggressiveness of the discussion, you seem to be remaining quite pro-European. </seg> |
| <seg id="128"> YV: Absolutely. </seg> |
| <seg id="129"> Look, my criticism of the European Union and the Eurozone comes from a person who lives and breathes Europe. </seg> |
| <seg id="130"> My greatest fear is that the Eurozone will not survive. </seg> |
| <seg id="131"> Because if it doesn't, the centrifugal forces that will be unleashed and they will destroy the European Union. </seg> |
| <seg id="132"> And that will be catastrophic not just for Europe but for the whole global economy. </seg> |
| <seg id="133"> We are probably the largest economy in the world. </seg> |
| <seg id="134"> And if we allow ourselves to fall into a route of the postmodern 1930's, which seems to me to be what we are doing, then that will be detrimental to the future of Europeans and non-Europeans alike. </seg> |
| <seg id="135"> BG: We definitely hope you are wrong on that point. </seg> |
| <seg id="136"> Yanis, thank you for coming to TED. </seg> |
| <seg id="137"> YV: Thank you. </seg> |
| </doc> |
| <doc docid="2403" genre="lectures"> |
| <url>http://www.ted.com/talks/sebastian_wernicke_how_to_use_data_to_make_a_hit_tv_show</url> |
| <description>TED Talk Subtitles and Transcript: Does collecting more data lead to better decision-making? Competitive, data-savvy companies like Amazon, Google and Netflix have learned that data analysis alone doesn't always produce optimum results. In this talk, data scientist Sebastian Wernicke breaks down what goes wrong when we make decisions based purely on data -- and suggests a brainier way to use it.</description> |
| <keywords>talks, TEDx, algorithm, brain, data, decision-making, intelligence, media, technology</keywords> |
| <talkid>2403</talkid> |
| <title>Sebastian Wernicke: How to use data to make a hit TV show</title> |
| <reviewer></reviewer> |
| <translator></translator> |
| <seg id="1"> Roy Price is a man that most of you have probably never heard about, even though he may have been responsible for 22 somewhat mediocre minutes of your life on April 19, 2013. </seg> |
| <seg id="2"> He may have also been responsible for 22 very entertaining minutes, but not very many of you. </seg> |
| <seg id="3"> And all of that goes back to a decision that Roy had to make about three years ago. </seg> |
| <seg id="4"> So you see, Roy Price is a senior executive with Amazon Studios. </seg> |
| <seg id="5"> That's the TV production company of Amazon. </seg> |
| <seg id="6"> He's 47 years old, slim, spiky hair, describes himself on Twitter as "movies, TV, technology, tacos." </seg> |
| <seg id="7"> And Roy Price has a very responsible job, because it's his responsibility to pick the shows, the original content that Amazon is going to make. </seg> |
| <seg id="8"> And of course that's a highly competitive space. </seg> |
| <seg id="9"> I mean, there are so many TV shows already out there, that Roy can't just choose any show. </seg> |
| <seg id="10"> He has to find shows that are really, really great. </seg> |
| <seg id="11"> So in other words, he has to find shows that are on the very right end of this curve here. </seg> |
| <seg id="12"> So this curve here is the rating distribution of about 2,500 TV shows on the website IMDB, and the rating goes from one to 10, and the height here shows you how many shows get that rating. </seg> |
| <seg id="13"> So if your show gets a rating of nine points or higher, that's a winner. </seg> |
| <seg id="14"> Then you have a top two percent show. </seg> |
| <seg id="15"> That's shows like "Breaking Bad," "Game of Thrones," "The Wire," so all of these shows that are addictive, whereafter you've watched a season, your brain is basically like, "Where can I get more of these episodes?" </seg> |
| <seg id="16"> That kind of show. </seg> |
| <seg id="17"> On the left side, just for clarity, here on that end, you have a show called "Toddlers and Tiaras" -- -- which should tell you enough about what's going on on that end of the curve. </seg> |
| <seg id="18"> Now, Roy Price is not worried about getting on the left end of the curve, because I think you would have to have some serious brainpower to undercut "Toddlers and Tiaras." </seg> |
| <seg id="19"> So what he's worried about is this middle bulge here, the bulge of average TV, you know, those shows that aren't really good or really bad, they don't really get you excited. </seg> |
| <seg id="20"> So he needs to make sure that he's really on the right end of this. </seg> |
| <seg id="21"> So the pressure is on, and of course it's also the first time that Amazon is even doing something like this, so Roy Price does not want to take any chances. </seg> |
| <seg id="22"> He wants to engineer success. </seg> |
| <seg id="23"> He needs a guaranteed success, and so what he does is, he holds a competition. </seg> |
| <seg id="24"> So he takes a bunch of ideas for TV shows, and from those ideas, through an evaluation, they select eight candidates for TV shows, and then he just makes the first episode of each one of these shows and puts them online for free for everyone to watch. </seg> |
| <seg id="25"> And so when Amazon is giving out free stuff, you're going to take it, right? </seg> |
| <seg id="26"> So millions of viewers are watching those episodes. </seg> |
| <seg id="27"> What they don't realize is that, while they're watching their shows, actually, they are being watched. </seg> |
| <seg id="28"> They are being watched by Roy Price and his team, who record everything. </seg> |
| <seg id="29"> They record when somebody presses play, when somebody presses pause, what parts they skip, what parts they watch again. </seg> |
| <seg id="30"> So they collect millions of data points, because they want to have those data points to then decide which show they should make. </seg> |
| <seg id="31"> And sure enough, so they collect all the data, they do all the data crunching, and an answer emerges, and the answer is, "Amazon should do a sitcom about four Republican US Senators." </seg> |
| <seg id="32"> They did that show. </seg> |
| <seg id="33"> So does anyone know the name of the show? </seg> |
| <seg id="34"> Yes, "Alpha House," but it seems like not too many of you here remember that show, actually, because it didn't turn out that great. </seg> |
| <seg id="35"> It's actually just an average show, actually -- literally, in fact, because the average of this curve here is at 7.4, and "Alpha House" lands at 7.5, so a slightly above average show, but certainly not what Roy Price and his team were aiming for. </seg> |
| <seg id="36"> Meanwhile, however, at about the same time, at another company, another executive did manage to land a top show using data analysis, and his name is Ted, Ted Sarandos, who is the Chief Content Officer of Netflix, and just like Roy, he's on a constant mission to find that great TV show, and he uses data as well to do that, except he does it a little bit differently. </seg> |
| <seg id="37"> So instead of holding a competition, what he did -- and his team of course -- was they looked at all the data they already had about Netflix viewers, you know, the ratings they give their shows, the viewing histories, what shows people like, and so on. </seg> |
| <seg id="38"> And then they use that data to discover all of these little bits and pieces about the audience: what kinds of shows they like, what kind of producers, what kind of actors. </seg> |
| <seg id="39"> And once they had all of these pieces together, they took a leap of faith, and they decided to license not a sitcom about four Senators but a drama series about a single Senator. </seg> |
| <seg id="40"> You guys know the show? </seg> |
| <seg id="41"> Yes, "House of Cards," and Netflix of course, nailed it with that show, at least for the first two seasons. </seg> |
| <seg id="42"> "House of Cards" gets a 9.1 rating on this curve, so it's exactly where they wanted it to be. </seg> |
| <seg id="43"> Now, the question of course is, what happened here? </seg> |
| <seg id="44"> So you have two very competitive, data-savvy companies. </seg> |
| <seg id="45"> They connect all of these millions of data points, and then it works beautifully for one of them, and it doesn't work for the other one. </seg> |
| <seg id="46"> So why? </seg> |
| <seg id="47"> Because logic kind of tells you that this should be working all the time. </seg> |
| <seg id="48"> I mean, if you're collecting millions of data points on a decision you're going to make, then you should be able to make a pretty good decision. </seg> |
| <seg id="49"> You have 200 years of statistics to rely on. </seg> |
| <seg id="50"> You're amplifying it with very powerful computers. </seg> |
| <seg id="51"> The least you could expect is good TV, right? </seg> |
| <seg id="52"> And if data analysis does not work that way, then it actually gets a little scary, because we live in a time where we're turning to data more and more to make very serious decisions that go far beyond TV. </seg> |
| <seg id="53"> Does anyone here know the company Multi-Health Systems? </seg> |
| <seg id="54"> No one. OK, that's good actually. </seg> |
| <seg id="55"> OK, so Multi-Health Systems is a software company, and I hope that nobody here in this room ever comes into contact with that software, because if you do, it means you're in prison. </seg> |
| <seg id="56"> If someone here in the US is in prison, and they apply for parole, then it's very likely that data analysis software from that company will be used in determining whether to grant that parole. </seg> |
| <seg id="57"> So it's the same principle as Amazon and Netflix, but now instead of deciding whether a TV show is going to be good or bad, you're deciding whether a person is going to be good or bad. </seg> |
| <seg id="58"> And mediocre TV, 22 minutes, that can be pretty bad, but more years in prison, I guess, even worse. </seg> |
| <seg id="59"> And unfortunately, there is actually some evidence that this data analysis, despite having lots of data, does not always produce optimum results. </seg> |
| <seg id="60"> And that's not because a company like Multi-Health Systems doesn't know what to do with data. </seg> |
| <seg id="61"> Even the most data-savvy companies get it wrong. </seg> |
| <seg id="62"> Yes, even Google gets it wrong sometimes. </seg> |
| <seg id="63"> In 2009, Google announced that they were able, with data analysis, to predict outbreaks of influenza, the nasty kind of flu, by doing data analysis on their Google searches. </seg> |
| <seg id="64"> And it worked beautifully, and it made a big splash in the news, including the pinnacle of scientific success: a publication in the journal "Nature." </seg> |
| <seg id="65"> It worked beautifully for year after year after year, until one year it failed. </seg> |
| <seg id="66"> And nobody could even tell exactly why. </seg> |
| <seg id="67"> It just didn't work that year, and of course that again made big news, of a publication from the journal "Nature." </seg> |
| <seg id="68"> So even the most data-savvy companies, Amazon and Google, they sometimes get it wrong. </seg> |
| <seg id="69"> And despite all those failures, data is moving rapidly into real-life decision-making -- into the workplace, law enforcement, medicine. </seg> |
| <seg id="70"> So we should better make sure that data is helping. </seg> |
| <seg id="71"> Now, personally I've seen a lot of this struggle with data myself, because I work in computational genetics, which is also a field where lots of very smart people are using unimaginable amounts of data to make pretty serious decisions like deciding on a cancer therapy or developing a drug. </seg> |
| <seg id="72"> And over the years, I've noticed a sort of pattern or kind of rule, if you will, about the difference between successful decision-making with data and unsuccessful decision-making, and I find this a pattern worth sharing, and it goes something like this. </seg> |
| <seg id="73"> So whenever you're solving a complex problem, you're doing essentially two things. </seg> |
| <seg id="74"> The first one is, you take that problem apart into its bits and pieces so that you can deeply analyze those bits and pieces, You put all of these bits and pieces back together again to come to your conclusion. </seg> |
| <seg id="75"> And sometimes you have to do it over again, but it's always those two things: taking apart and putting back together again. </seg> |
| <seg id="76"> And now the crucial thing is that data and data analysis is only good for the first part. </seg> |
| <seg id="77"> Data and data analysis, no matter how powerful, can only help you taking a problem apart and understanding its pieces. </seg> |
| <seg id="78"> It's not suited to put those pieces back together again and then to come to a conclusion. </seg> |
| <seg id="79"> There's another tool that can do that, and we all have it, and that tool is the brain. </seg> |
| <seg id="80"> If there's one thing a brain is good at, it's taking bits and pieces back together again, even when you have incomplete information, and coming to a good conclusion, especially if it's the brain of an expert. </seg> |
| <seg id="81"> And that's why I believe that Netflix was so successful, because they used data and brains where they belong in the process. </seg> |
| <seg id="82"> They use data to first understand lots of pieces about their audience that they otherwise wouldn't have been able to understand at that depth, but then the decision to take all these bits and pieces and put them back together again and make a show like "House of Cards," that was nowhere in the data. </seg> |
| <seg id="83"> Ted Sarandos and his team made that decision to license that show, which also meant, by the way, that they were taking a pretty big personal risk with that decision. </seg> |
| <seg id="84"> And Amazon, on the other hand, they did it the wrong way around. </seg> |
| <seg id="85"> They used data all the way to drive their decision-making, first when they held their competition of TV ideas, then when they selected "Alpha House" to make as a show. </seg> |
| <seg id="86"> Which of course was a very safe decision for them, because they could always point at the data, saying, "This is what the data tells us." </seg> |
| <seg id="87"> But it didn't lead to the exceptional results that they were hoping for. </seg> |
| <seg id="88"> So data is of course a massively useful tool to make better decisions, but I believe that things go wrong when data is starting to drive those decisions. </seg> |
| <seg id="89"> No matter how powerful, data is just a tool, and to keep that in mind, I find this device here quite useful. </seg> |
| <seg id="90"> Many of you will ... </seg> |
| <seg id="91"> Before there was data, this was the decision-making device to use. </seg> |
| <seg id="92"> Many of you will know this. </seg> |
| <seg id="93"> This toy here is called the Magic 8 Ball, and it's really amazing, because if you have a decision to make, a yes or no question, all you have to do is you shake the ball, and then you get an answer -- "Most Likely" -- right here in this window in real time. </seg> |
| <seg id="94"> I'll have it out later for tech demos. </seg> |
| <seg id="95"> Now, the thing is, of course -- so I've made some decisions in my life where, in hindsight, I should have just listened to the ball. </seg> |
| <seg id="96"> But, you know, of course, if you have the data available, you want to replace this with something much more sophisticated, like data analysis to come to a better decision. </seg> |
| <seg id="97"> But that does not change the basic setup. </seg> |
| <seg id="98"> So the ball may get smarter and smarter and smarter, but I believe it's still on us to make the decisions if we want to achieve something extraordinary, on the right end of the curve. </seg> |
| <seg id="99"> And I find that a very encouraging message, in fact, that even in the face of huge amounts of data, it still pays off to make decisions, to be an expert in what you're doing and take risks. </seg> |
| <seg id="100"> Because in the end, it's not data, it's risks that will land you on the right end of the curve. </seg> |
| <seg id="101"> Thank you. </seg> |
| </doc> |
| </srcset> |
| </mteval> |
|
|