diff --git a/transcript/allocentric_-PmPWc_n_0s.txt b/transcript/allocentric_-PmPWc_n_0s.txt
new file mode 100644
index 0000000000000000000000000000000000000000..da2e699170054b56bb18d6c2b8b6b356c7fc8d9e
--- /dev/null
+++ b/transcript/allocentric_-PmPWc_n_0s.txt
@@ -0,0 +1,714 @@
+[0.000 --> 8.040] I'd like to introduce Monika Harvey, who is visiting us from the University of Glasgow. Many of you probably
+[8.040 --> 16.800] already know Monika. She did her PhD here at St Andrews in 1980. A long time ago.
+[16.800 --> 27.080] She did her PhD with David Milner, who at that time was launching his very influential
+[27.080 --> 34.520] theory about the dual streams of visual processing. People who study visual perception know
+[34.520 --> 43.880] that visual stimuli are processed along two separate streams in the brain, and it has been discussed
+[43.880 --> 51.320] for a long time how one could best characterise these streams. It was David Milner and Melvyn
+[51.320 --> 57.480] Goodale's contribution to propose that one stream is vision for action and the other stream is vision for
+[57.480 --> 66.840] perception. We've had this theory for more than two decades now, and it has now again come into
+[66.840 --> 74.040] debate, because it has been found that some patients who are believed to have just one type of
+[74.040 --> 84.600] deficit, perception or action, actually also show signs that the other function could be impaired as
+[84.600 --> 92.360] well. So it's my pleasure to invite Monika to present the newest research on this topic, and let's see
+[92.360 --> 98.600] what has happened with David Milner's dual streams of visual processing. Okay, thank you very
+[98.600 --> 101.880] much, Daniela. I think this is when you find out that research doesn't actually
+[101.880 --> 106.040] progress, and that I'm basically giving the same talk that I gave 20 years ago. I hope not,
+[106.040 --> 111.160] but we'll see. So it's actually really nice that Daniela has given this kind of
+[111.160 --> 116.200] introduction, talking about different pathways for perception and action, because
+[116.200 --> 120.200] there really are different ways of thinking about the brain. And one of them is
+[120.200 --> 124.440] quite different from the way that I think about it, because my thinking, as you can
+[124.440 --> 130.200] imagine, comes very much from David Milner, having been his PhD student. People also think about
+[130.200 --> 134.680] action and perception in terms of common coding, and Prinz and the people around him have been
+[134.680 --> 139.720] very influential in this way of looking at things. And the argument here is really
+[139.720 --> 145.240] that perceptual representations are stored together with the actions that they elicit. What
+[145.240 --> 149.400] this actually means is that the recognition of an object will automatically activate an associated
+[149.400 --> 154.280] action. That's very much the viewpoint of the common coding theory. What it
+[154.280 --> 159.240] also means is that if we initiate an action sequence, we actually work backwards from the
+[159.240 --> 164.360] desired perceptual effect, and this then triggers the sequence of actions that we need to execute
+[164.360 --> 170.440] to achieve the effect. So, put very simply, seeing an object automatically activates
+[170.440 --> 175.640] the action. Seeing an action works much the same way: from the perceptual system, we then act.
+[175.640 --> 180.200] And this is very much the common coding theory background, linking perception to action.
+[181.720 --> 185.000] In relation to that, and really quite different, there's obviously the dual-route model
+[185.000 --> 189.400] of visual processing. And the Milner and Goodale model is not the only one; rather confusingly,
+[189.400 --> 193.880] in the second part of my talk I'll talk about another dual-route model. But there are different
+[193.880 --> 197.720] ways of thinking here. If you believe more in the dual-route models, what you would actually
+[197.720 --> 203.320] argue is that despite the folk knowledge that we all have of a unified visual experience,
+[203.880 --> 207.560] there are actually two different pathways in the brain, and they're functionally different
+[207.560 --> 212.600] and they're anatomically different. What that actually means is that vision for perception
+[212.600 --> 216.200] on the one hand is independent of vision for action. We basically have different systems:
+[216.200 --> 220.200] we have a vision-for-perception system and we have a vision-for-action system.
+[221.400 --> 225.000] And I would actually argue that depending on which viewpoint you're coming from,
+[225.000 --> 229.560] your interpretation, first of all, of the neuroscientific data, and your interpretation of
+[229.560 --> 233.800] neuropsychological deficits, is actually quite different. So it is important to
+[233.800 --> 239.320] make these kinds of distinctions. And I think if you think about particular neuropsychological
+[239.320 --> 243.480] disorders in terms of dual-route models, and in terms of differences between perception and
+[243.480 --> 248.200] action, that can actually be quite informative. And one of the disorders this way of
+[248.200 --> 254.360] thinking is very informative for is hemispatial neglect. I'm assuming that most
+[254.360 --> 259.240] of you know what hemispatial neglect is. If you don't, it's generally described as a failure to
+[259.240 --> 265.960] report, respond or orient to stimuli presented on the side opposite a brain lesion. It usually occurs after
+[265.960 --> 270.600] a lesion to the right half of the brain, the right hemisphere. And if you then ask people to
+[270.600 --> 275.480] perform certain visual tasks, they tend to ignore objects on the left-hand side. This
+[275.480 --> 280.360] is an example of a classic neglect assessment from the Behavioural Inattention Test,
+[280.920 --> 285.320] where you ask patients to cancel out all the small lines that they can see. The patients are
+[285.320 --> 290.040] quite capable of understanding the task; they're just ignoring the items on the left here.
+[290.840 --> 295.160] And this is just an example of a patient's performance.
+[298.760 --> 303.400] Yes, the sound isn't actually working just now. So effectively this is just me demonstrating
+[303.400 --> 307.640] to the patient what I want him to do. And as you can see, there's no problem with actually
+[307.640 --> 312.600] understanding the task, which is just to cross out all the small lines, or in fact all the lines on
+[312.600 --> 317.800] this one. But what you notice is that the patient's head is very much deviated to the right-hand
+[317.800 --> 323.000] side, so all the attention is focused on the right, and he actually starts
+[323.000 --> 326.280] crossing out the lines from the right-hand side of the page, rather than from the left,
+[326.280 --> 339.160] which is more commonly what you and I would do.
+[339.160 --> 344.040] And another thing you also notice here is that lines are crossed out repeatedly. Rather than moving over to the lines on
+[344.040 --> 348.600] the left, the patient actually repeatedly cancels out the lines on the right.
+[348.600 --> 352.440] And you can actually see this online. This is from a stroke training and
+[352.440 --> 358.120] awareness module which was really designed for people who work with stroke and
+[358.120 --> 361.960] want to find out more about visual disorders: people who work with stroke
+[361.960 --> 366.360] on the wards and want to gain a little bit more insight into the kind of symptomatology
+[366.360 --> 370.600] that you get after occipital and parietal strokes. There are modules describing
+[370.600 --> 374.920] hemispatial neglect and also describing hemianopia. It's a freely available online
+[374.920 --> 378.760] module which might be quite useful for teaching, and if you go to this website you can actually
+[378.760 --> 385.160] download those movies. And really, just to put neglect generally into the
+[385.160 --> 390.280] framework of the NHS: it is actually quite frequent after right hemisphere lesions. It
+[390.280 --> 396.280] affects up to 80% of people with right hemisphere lesions initially, straight after the stroke. It's the
+[396.280 --> 400.920] strongest single predictor of poor functional recovery after right hemisphere stroke. People
+[400.920 --> 404.920] who suffer from neglect spend a much longer time in hospital, and they're much more likely to end up
+[404.920 --> 410.200] in a nursing home compared to being released back to their homes. Therefore the cost
+[410.200 --> 414.360] to the NHS is really quite great, and there is a great need to try
+[414.360 --> 420.520] and rehabilitate the actual symptoms. In line with this, and quite impressively, people
+[420.520 --> 424.920] have actually tried to come up with effective treatments for hemispatial neglect. So far, if
+[424.920 --> 429.640] you look at clinical trials, the SIGN guidelines and the Bowen Cochrane reviews,
+[429.640 --> 434.120] which look at the studies that are around in the literature and evaluate them for their effectiveness,
+[434.680 --> 438.680] at present no recognised treatment actually exists that can be recommended for application
+[438.680 --> 442.840] in a clinical setting. So there doesn't really seem to be any effective way at the moment of actually
+[442.840 --> 448.120] rehabilitating neglect. I'll come back to this a little bit later. What's also quite relevant, I
+[448.120 --> 453.960] think, for the actual disorder, is to look at the classical lesion location. The lesion location
+[453.960 --> 458.760] that has most typically been demonstrated to cause hemispatial neglect is lesions in the
+[458.760 --> 464.200] inferior parietal lobe, Brodmann areas 39 and 40, and also lesions in the superior temporal sulcus,
+[464.200 --> 468.200] and Otto Karnath was actually quite instrumental in implicating
+[468.200 --> 472.360] more temporal areas as a common denominator in causing hemispatial
+[473.080 --> 478.760] neglect. Let me just briefly say a little bit about the assessment of neglect. You've seen one example
+[478.760 --> 483.640] already, where the patients are asked to cancel out all the small stars and you see
+[483.640 --> 488.120] this bias. Again, this is the example I showed you in the video, where the patients are asked to
+[488.120 --> 491.560] cross out all the lines; you see this repeated crossing out of lines on the right
+[492.120 --> 497.080] and not of lines on the left. Similar idea here: the patients are required to cross out all
+[497.080 --> 502.040] the Es and Rs. They tend not to forget the letters that they have to cross out, so memory
+[502.040 --> 506.120] impairments are not particularly dominant in hemispatial neglect, although other people
+[506.120 --> 510.440] make more of a case for a memory disorder also being a prominent feature.
+[510.600 --> 517.480] If you ask people to mark the centre of lines, you get this typical rightward
+[517.480 --> 521.960] deviation; I'll say a little bit more about bisection and landmark behaviour in the
+[521.960 --> 527.720] second part of my talk. This is copying behaviour, and copying from memory. The important thing, I
+[527.720 --> 532.200] think, is that these are all subtests of the Behavioural Inattention Test, which is
+[532.200 --> 537.320] effectively a standardized clinical tool, and what's quite nice about it is that you get a
+[537.320 --> 541.880] cut-off score indicating whether neglect is present for each of these individual subtests,
+[541.880 --> 547.560] but also for all subtests put together, so you have some idea of whether neglect is present
+[547.560 --> 552.040] or not, and of the severity of the disorder. I think one of the problems with this test is
+[552.040 --> 555.960] that it's weighted towards perception, so you're purely looking at the perceptual
+[555.960 --> 561.640] problems that these patients have. And I think this becomes interesting when we come back
+[561.640 --> 568.200] again to the dual-route model of perception and action. If we look
+[568.200 --> 571.560] at the dual-route model, what Milner and Goodale have actually said, and
+[571.560 --> 575.640] they really started thinking about neglect in this way when I started my
+[575.640 --> 581.960] PhD here, is: we already know that our perception of the world
+[581.960 --> 586.760] is very much mediated by the visual ventral stream, and our action on the world
+[586.760 --> 590.120] is very much mediated by the visual dorsal stream. This is what the dual-route
+[590.120 --> 594.920] model is all about, and other people, based more in the common coding framework,
+[594.920 --> 599.320] very much disagree with it, but this is the argument. And they're actually saying: if
+[599.320 --> 604.280] we look at these models in relation to neglect, what we already know, if you look at the critical
+[604.280 --> 608.600] lesion site, is that the critical lesion site is either in the inferior parietal
+[608.600 --> 613.000] lobe or in the superior temporal lobe, these areas here. So the dorsal stream as such is
+[613.000 --> 619.000] actually spared in hemispatial neglect. What we might be able to infer from that is that all
+[619.000 --> 622.520] these perceptual difficulties which I've just described to you, which are quite dramatic in these
+[622.520 --> 628.280] patients, might not necessarily be reflected in their actions. They might actually be able to
+[628.280 --> 632.840] act on objects and interact with objects, because that's what the visual dorsal stream is responsible
+[632.840 --> 638.200] for, whereas on the other hand they completely fail to perceive objects; they have perceptual problems.
+[638.200 --> 642.920] So there could be a dissociation, and maybe this is important and can be exploited.
+[644.040 --> 648.760] Just to investigate this a little bit more: one of the confusions which often
+[648.760 --> 653.640] arises in relation to the perception and action model is what people actually understand
+[653.640 --> 658.200] by the actions that the visual dorsal stream is implicated in, and Milner and Goodale
+[658.200 --> 662.040] are actually quite specific about this, because they're basically saying the dorsal stream is
+[662.040 --> 667.960] implicated when stimuli are presented in the here and now. On the other hand, as soon as time is
+[667.960 --> 672.680] allowed to pass, or an explicit perceptual mapping has to be made, then the ventral stream is required
+[673.160 --> 677.880] for successful performance. Therefore, if we're now looking at the syndrome of hemispatial
+[677.880 --> 683.640] neglect, if we do find any kind of action impairments, they should mainly affect offline action control;
+[683.640 --> 687.720] when the patients are allowed to directly interact with objects, they should actually be okay.
+[688.920 --> 693.800] So really, from this model you can make very specific predictions. We would actually expect
+[693.800 --> 698.120] neglect patients to show spared immediate pointing: when they can directly interact with
+[698.120 --> 702.440] the objects, they should be okay, even in left space, the space where they show
+[702.440 --> 709.240] all these dramatic perceptual problems. On the other hand, if we look at action tasks
+[709.240 --> 712.360] which don't involve directly interacting with the objects, for example delayed
+[712.360 --> 716.840] pointing or anti-pointing, that's where problems should occur. We've done a whole series of
+[716.840 --> 721.560] experiments looking at these specific predictions, and I'll just show you one
+[721.560 --> 726.520] set of experiments which really makes this point. We ran an experiment where we
+[726.520 --> 733.480] compared, in neglect patients, pro-pointing, where patients were simply asked to reach for targets
+[733.480 --> 737.800] presented in different spatial locations, and they just had to reach directly to the target,
+[738.600 --> 743.720] with an anti-pointing task, where the patients saw a target
+[743.720 --> 748.200] presented here but then had to perform a mirror-image reach to the
+[748.200 --> 752.280] exact location on the other side. So for left targets they had to reach to the equivalent position
+[752.360 --> 755.800] on the right, and for right targets to the equivalent position on the left.
+[757.880 --> 761.960] We then did a typical lesion analysis: we mapped all the lesions
+[761.960 --> 767.800] onto a T1-weighted image using MRIcro software, which is now pretty much the standard way
+[767.800 --> 772.760] of mapping lesions, and we then also performed lesion-symptom mapping, where we tried
+[772.760 --> 779.080] to associate more specifically the lesion location with the specific behavioural symptoms
+[779.720 --> 784.840] which we expected the patients to show on some of the tasks. But first of all, just to show you a
+[784.840 --> 789.000] picture of the kinds of patients that we tested. First of all we had a control group of
+[789.000 --> 793.160] right-hemisphere-lesioned patients without neglect; this is the lesion overlay of those
+[793.160 --> 797.720] patients in the different slices. They never, and this is important, they never
+[797.720 --> 802.200] showed neglect at any time. We tested them quite soon after the stroke, and neglect was
+[802.200 --> 808.520] never present. This was the group of right-hemisphere-lesioned patients who showed neglect. Traditionally,
+[808.600 --> 812.920] patients with neglect tend to have larger lesions as well, and I can just say now that the size
+[812.920 --> 817.160] of the lesion really didn't have any implications for the behavioural impairments that they showed, but
+[817.160 --> 822.200] these patients did show larger lesions, very much in line with other studies. The patients
+[822.200 --> 826.360] were all impaired on at least one of the neglect tests; most of the patients were
+[826.360 --> 830.760] actually impaired on all of the neglect tests: the BIT, which is the one I showed you before,
+[830.760 --> 834.440] line bisection, where you just look at the bias, and the Balloons Test, a similar visual search task.
+[834.840 --> 839.480] And again, just to show that we're looking at a pretty classic group
+[839.480 --> 845.720] of neglect patients: if overall we just subtracted the lesions of the patients with neglect
+[845.720 --> 850.200] from those of the patients who had right hemisphere lesions without neglect, the critical lesion
+[850.200 --> 854.200] sites were again the inferior parietal lobe and the superior temporal gyrus, very much
+[854.200 --> 857.880] the lesions which are generally implicated in neglect. But that doesn't tell you
+[857.880 --> 865.480] much about task behaviour. So now back to the actual task. First of all, what do the patients
+[865.480 --> 869.960] show, what kind of behaviour do the patients show in the pro-pointing task, when they can point
+[869.960 --> 875.400] directly to targets on the left and the right? We compared, first of all, the neglect
+[875.400 --> 879.640] group, here, with the right hemisphere lesion control group, and we also had healthy controls,
+[879.640 --> 884.440] people who were perfectly healthy, matched in age. And hopefully, as you can all
+[884.440 --> 888.920] see, in the pro-pointing condition the neglect patients were absolutely fine, just like everyone
+[888.920 --> 892.760] else. There was absolutely no difference in how well they could reach
+[892.760 --> 896.760] to targets compared to the various control groups, even on the left; there was no difference between
+[896.760 --> 900.120] left and right. And remember, left space is really the space where they show all these
+[900.120 --> 904.120] perceptual problems, but if they're reaching for an object, they're actually very good at that.
+[905.640 --> 909.960] In the anti-pointing task, a dramatically different result. First of all, if you look again at
+[909.960 --> 914.840] the two control groups, people are slightly worse; it's a harder task, you have to identify
+[914.840 --> 921.240] the location and then remap it onto the equivalent position. But the neglect patients
+[921.240 --> 925.560] actually found this very much harder than both of the control groups:
+[925.560 --> 931.320] in all the spatial locations they were dramatically and significantly impaired. We also found
+[931.320 --> 936.520] a positive correlation with neglect severity: the stronger the neglect, the greater the errors that
+[936.520 --> 940.280] they made, so the more they deviated from the position that they really
+[940.280 --> 947.000] should have been pointing to. And if we then performed the voxel-based lesion mapping,
+[947.000 --> 952.360] to ask, for this fairly dramatic anti-pointing accuracy impairment, what are
+[952.360 --> 957.480] the voxels which are critically implicated in driving this impairment, what we
+[957.480 --> 961.960] found is that apart from the inferior parietal lobe and the superior temporal gyrus, we also found the
+[961.960 --> 967.640] middle temporal gyrus implicated in mediating, being responsible for, that kind of behaviour.
+[967.640 --> 973.000] So these were the lesion sites associated with the anti-pointing inaccuracy.
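+
+(A minimal sketch, in Python, of what the subtraction overlay and the voxel-based lesion-symptom mapping described above amount to computationally. The array names, the binary lesion maps and the plain voxelwise t-test are illustrative assumptions, not the study's actual MRIcro-based pipeline; a real analysis would also correct for multiple comparisons across voxels.)
+
+    import numpy as np
+    from scipy import stats
+
+    # lesions:     (n_patients, n_voxels) binary maps, 1 = voxel lesioned
+    # scores:      (n_patients,) behavioural measure, e.g. anti-pointing error
+    # has_neglect: (n_patients,) boolean group label
+
+    def subtraction_overlay(lesions, has_neglect):
+        """Proportion lesioned per voxel: neglect minus non-neglect patients."""
+        return lesions[has_neglect].mean(axis=0) - lesions[~has_neglect].mean(axis=0)
+
+    def lesion_symptom_map(lesions, scores, min_n=3):
+        """Voxelwise t-test: scores of patients with vs without a lesion there."""
+        tmap = np.zeros(lesions.shape[1])
+        for v in range(lesions.shape[1]):
+            hit = lesions[:, v] == 1
+            if hit.sum() >= min_n and (~hit).sum() >= min_n:  # skip rarely lesioned voxels
+                tmap[v] = stats.ttest_ind(scores[hit], scores[~hit]).statistic
+        return tmap  # large |t|: lesion status at this voxel predicts the deficit
+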
+[973.000 --> 979.080] So what can you conclude from this behavioural
+[979.800 --> 984.360] experiment? First of all, it seems that neglect patients are unimpaired in
+[984.360 --> 989.720] pro-pointing, and this is actually in line with a range of other studies, which have
+[989.720 --> 994.520] demonstrated similar behaviour when neglect patients are allowed to interact directly with objects.
+[994.520 --> 1001.080] So it seems that online action control is relatively unaffected. We can hopefully argue
+[1001.080 --> 1006.360] from this that, since we already know the visual dorsal stream is unimpaired in terms of anatomy,
+[1006.360 --> 1009.880] it now seems we can argue it's also unimpaired in terms of function; function
+[1009.880 --> 1016.280] really seems to be okay. But what we did find is that the neglect patients showed greater errors in
+[1016.280 --> 1021.560] the endpoint accuracy of the anti-movements, and they therefore seem to be suffering from a
+[1021.560 --> 1026.440] deficit in detecting and transforming the spatial representation required for
+[1026.440 --> 1030.040] remapping, because of course what you have to do in the anti-pointing task is identify
+[1030.040 --> 1034.600] the location of the target, then remap it onto the opposite side, and then reach towards that
+[1034.600 --> 1040.600] side, and this is clearly where the deficit occurred. And I think what we can
+[1040.600 --> 1045.880] take from this is that immediate actions really do differ from other
+[1045.880 --> 1050.920] actions like pantomimed actions, delayed actions or anti-pointing actions. And we have
+[1050.920 --> 1055.000] done other experiments where we found similar impairments in neglect patients, for example in delayed
+[1055.000 --> 1059.480] actions. So depending on the kind of action that you're investigating, different
+[1059.480 --> 1067.480] areas are implicated in those action control movements. And regarding these problems
+[1067.480 --> 1071.800] that the patients have with the anti-pointing, there is actually an
+[1071.800 --> 1079.480] fMRI study, performed by Króliczak in 2007, which implicated a similar area.
+[1079.480 --> 1086.040] What they actually did in the fMRI study is they compared grasping and reaching, so
+[1086.040 --> 1090.600] grasping an object and reaching for an object, with pantomiming a reach-to-grasp of an
+[1090.600 --> 1095.000] object, and when they then subtracted those two conditions from each other, they found that for
+[1095.000 --> 1100.040] the pantomimed reaching and grasping the right middle temporal gyrus also had to be
+[1100.040 --> 1104.440] active to generate the pantomimed reaching and grasping movements. And
+[1104.440 --> 1108.520] that's very much in line with our data, because this was one of the areas critically
+[1108.520 --> 1114.280] implicated in our anti-pointing task, in generating the
+[1114.280 --> 1121.320] anti-pointing errors. So for those kinds of tasks we do seem to have to rely on
+[1121.320 --> 1126.120] areas outside the visual dorsal stream to mediate our actions, and this is
+[1126.120 --> 1132.440] nice supporting evidence for our study. So really, what are the implications of this?
+[1132.440 --> 1138.440] First of all, for perception and neuroscience: well, hopefully I've shown you that
+[1138.440 --> 1143.080] action and perception control can actually dissociate, so I don't really buy into the common
+[1143.080 --> 1148.280] coding model which I presented to you on the first slide. But it's also important to
+[1148.280 --> 1153.880] note that really not all actions depend solely on the dorsal visual stream. Once we come
+[1153.880 --> 1159.400] to things like pantomimed actions, fake actions, more complex anti-pointing actions, delayed actions,
+[1159.400 --> 1163.800] maybe these kinds of actions actually require additional neural networks, and from
+[1163.800 --> 1168.600] our data it seems that they require more temporal and occipital areas.
+[1170.040 --> 1173.880] And what I would like to argue from this is that, contrary to a
+[1173.880 --> 1179.160] range of neuroscientific studies, it is actually important to realize that when we're dealing
+[1179.160 --> 1185.640] with offline actions, they are maybe not mediated by the same areas in the brain as real actions.
+[1185.640 --> 1190.280] So when we're talking about, for example, faking movements, and a lot of
+[1190.280 --> 1194.280] studies do this, because it's much easier, when people are tied up in a cramped
+[1194.280 --> 1198.120] scanner, to just pretend to do a movement rather than really do a movement, we can't really assume
+[1198.120 --> 1202.360] that the data and the results you get from that can be generalized to real
+[1202.360 --> 1206.280] movements, because I think we are actually looking at different areas. For example, when I move around
+[1206.280 --> 1210.120] here and when I play on a Wii, I think those are fundamentally different things, and I
+[1210.120 --> 1213.320] can totally believe that, because I can't do anything on a Wii. So I think it's just much more
+[1213.320 --> 1217.720] complex, and you need other brain structures than you need for picking up an apple, for
+[1217.720 --> 1222.920] example. So I think these are the implications for perception and neuroscience.
+[1223.560 --> 1227.880] The other implications, I think, are for the rehabilitation of neglect, and I'd just like to spend a
+[1227.880 --> 1233.160] little bit of time making this argument. The argument is: if
+[1233.160 --> 1237.080] what we found is correct, and neglect patients are actually quite good at interacting with
+[1237.080 --> 1242.440] objects even on the neglected side, then why not develop a rehabilitation approach
+[1242.440 --> 1247.800] where you get them to interact a lot with objects, really activate the dorsal stream, and then see
+[1247.800 --> 1251.800] if there's some filtering through to the perceptual impairments that they have? Because,
+[1251.800 --> 1255.320] a bit like Daniela mentioned in the beginning, we all know that even if you believe in this
+[1255.320 --> 1259.320] idea of dorsal and ventral, separate visual streams, there are lots of interactions.
+[1259.320 --> 1263.640] The streams clearly interact, and maybe we can use that to improve the neglect
+[1263.640 --> 1269.240] symptoms. And these ideas are actually not new. Using actions to try to improve
+[1269.240 --> 1274.760] hemispatial neglect: such studies were done more than 10 years ago by myself,
+[1274.760 --> 1280.040] in particular with Ian Robertson, where we asked patients to reach out and interact with objects
+[1280.040 --> 1285.560] repeatedly, over repeated sessions, and we then looked at whether we would actually
+[1285.560 --> 1290.680] find an improvement in the BIT score, so really in their neglect symptoms. A
+[1290.680 --> 1294.840] little more precisely, we had an intervention group where we asked
+[1294.840 --> 1300.040] neglect patients to grasp a rod in the centre. If they didn't do this correctly,
+[1300.040 --> 1304.280] the rod would tilt, they would get proprioceptive feedback from that, and they would then be encouraged
+[1304.280 --> 1310.840] to re-grasp until the rod was actually grasped centrally and held straight.
+[1310.840 --> 1314.920] And we compared that to a control condition where the patients were simply
+[1314.920 --> 1319.640] asked to pick up the rod on the right-hand side and put it down. So they did a very
+[1319.640 --> 1323.800] basic motor action, but they didn't really use visuomotor feedback to guide their
+[1323.800 --> 1329.240] perception of the rod. And we had some okay results in this study. We
+[1329.240 --> 1333.400] showed the patients how to do the actual task over three sessions, we then got them to
+[1333.400 --> 1338.440] do it for ten sessions in their home, and we then looked at whether there was an improvement
+[1338.440 --> 1344.440] in the BIT score. And we found an improvement one month after the intervention, at the
+[1344.440 --> 1348.520] one-month follow-up: the intervention group slightly improved.
+[1349.320 --> 1353.320] I wasn't really terribly excited about the result at the time, but Ian Robertson was,
+[1353.320 --> 1356.600] because we tested chronic patients: these were patients who'd had neglect
+[1356.600 --> 1361.400] for something like four years, and we still found some improvement in the intervention group. But we've since
+[1361.400 --> 1365.400] just about finished another study, where we tried to make the intervention
+[1365.400 --> 1372.280] a little bit more feasible to apply in a clinical setting. What we've actually done now,
+[1372.280 --> 1377.000] and we've just finished the analysis of these data, is we reduced the training
+[1377.000 --> 1380.920] from three to two days, and reduced the number of sessions that the patients trained by themselves
+[1380.920 --> 1385.880] from two to one session a day, and the session was much shorter, only 15 minutes. We were then slightly more
+[1385.880 --> 1391.080] ambitious in our assessment of the outcome measures: we looked at whether there
+[1391.080 --> 1396.440] was an effect not just at one month post intervention but at four months post intervention,
+[1396.440 --> 1401.320] and rather than just looking at an improvement on neglect scores, we also asked: do these patients
+[1401.320 --> 1406.280] actually improve overall? Do they have an increased quality of life? Are they more likely to socially
+[1406.280 --> 1411.800] participate, to move outside and go shopping? Are there any changes in mood, in emotion
+[1411.800 --> 1416.920] and communication, and so on? And to assess that we used the Stroke Impact Scale, which is a
+[1416.920 --> 1421.960] scale that has become commonly used in a clinical setting to assess people's
+[1421.960 --> 1427.320] stroke outcome. So this was the design. We had two sessions where we instructed
+[1427.320 --> 1432.920] the patients on what to do; we had a quick assessment after that; we then had them run 10 sessions,
+[1432.920 --> 1437.960] once a day, in their home over a period of two weeks; we then did a quick assessment; and then
+[1437.960 --> 1444.440] we left them completely alone and followed them up again at four months. These are the
+[1444.440 --> 1449.800] characteristics of the patients: 10 patients in the intervention group, 10 patients in the
+[1449.800 --> 1454.840] control group, quite well matched for age and time since stroke. These patients weren't quite as
+[1454.840 --> 1459.560] chronic; although they were obviously not acute, by medical terms they would still be judged
+[1459.560 --> 1462.600] as chronic, but they were literally on average three months post-stroke, not
+[1462.600 --> 1467.080] quite as long-term as in the previous study. And they were quite well matched for
+[1467.080 --> 1473.320] neglect score, the initial BIT score. So first of all, and again, remember, in the intervention group
+[1473.320 --> 1477.080] they were actually encouraged to grasp rods in the centre repeatedly for
+[1477.080 --> 1481.960] 15 minutes, with the rods placed in different spatial positions. In the control group they were simply
+[1481.960 --> 1486.280] asked to reach with their right hand to the right-hand side of the rod. And I think this is important:
+[1486.280 --> 1491.400] all these patients effectively had right hemisphere lesions, so they had some sort of motor impairment
+[1491.400 --> 1494.760] with their left hand, so we only asked them to use the unimpaired hand. They were only ever
+[1494.760 --> 1498.760] using their right hand: either using their right hand to grasp the centre of the rod,
+[1498.760 --> 1502.680] or to just grasp the side and pick it up and put it down again. But they were using the
+[1502.680 --> 1505.640] hand that they could use, because there are also intervention studies where you ask the patients
+[1505.640 --> 1510.120] to use the hand which is actually impaired, and there are big problems there with consent and
+[1510.120 --> 1515.880] retention of patients. First of all we tested them on line bisection: how well
+[1515.880 --> 1521.480] do they actually perceive lines? As you can see, the groups were initially well matched for
+[1521.480 --> 1526.440] bias; if anything, the intervention group showed a larger error than the control group.
+[1526.440 --> 1531.400] Already after two sessions there's a big improvement
+[1531.400 --> 1535.560] in the intervention group; there's some improvement in the control group, but the improvement
+[1535.560 --> 1541.400] that we see in the intervention group remains the same after 12
+[1541.400 --> 1545.640] sessions altogether, and at the follow-up. And this graph just gives you the percentage
+[1545.640 --> 1549.800] improvement: as you can see here, the control group improves a little bit as well, but there's
+[1549.800 --> 1553.880] a much bigger improvement in the intervention group, and that actually stays the same
+[1554.440 --> 1559.400] after four months. So there seems to be a bit of a long-term effect. And you can think, okay,
+[1559.400 --> 1563.560] line bisection is actually quite similar to grasping a rod at the centre,
+[1563.560 --> 1568.120] so what actually happens to the neglect score? The neglect score, again, was
+[1568.120 --> 1573.160] quite well matched at baseline. After two sessions you already see a big improvement, which gets
+[1573.160 --> 1578.040] slightly higher, though not significantly different, after the 12th session, and it then
+[1578.040 --> 1581.080] actually stays high at the four-month follow-up, and I think that's really the important thing,
+[1581.080 --> 1585.640] because you really want to show that whatever you're improving is actually long term. And again, in
+[1585.640 --> 1590.360] this graph you can see the percentage improvement: again, a little improvement in the control
+[1590.360 --> 1594.760] group, not statistically significant, and a really big improvement in the intervention group
+[1594.760 --> 1601.640] already after two days, which then remains the same. And then really the big one is
+[1602.200 --> 1607.400] what happened on the Stroke Impact Scale, where we're measuring different
+[1607.400 --> 1613.160] dimensions of these patients' engagement in everyday life. And what we
+[1613.160 --> 1616.680] actually found, and this is obviously the big test for any kind of intervention study,
+[1616.680 --> 1621.160] because finding a generalization to daily-life activities is extremely difficult and extremely rare, and not
+[1621.160 --> 1627.000] many studies find it, and
+[1627.000 --> 1630.280] because it's a big questionnaire to apply, we only did this at baseline and then again at the four-month
+[1630.280 --> 1635.320] follow-up; what we found is that the patients in the intervention group showed an
+[1635.320 --> 1640.600] increase in the activities of daily living, whereas the control group stayed the same.
+[1640.600 --> 1644.600] And, as you can see, these two groups aren't really well matched
+[1644.600 --> 1649.240] at baseline, but they have now been matched, so they're perfectly matched at the end of the trial, and
+[1649.240 --> 1654.360] this finding still holds. And if you look at clinical trials, and specifically
+[1654.360 --> 1659.000] clinical trials in relation to neglect, a lot of them actually claim big effects, and
+[1659.640 --> 1664.760] some of them claim generalization to other tasks, but most of the trials are really not controlled trials.
+[1664.760 --> 1670.120] It's surprising how few controlled trials you see which show
+[1670.120 --> 1675.080] both a sustained effect over time and also some generalization to other tasks.
+[1675.080 --> 1678.440] And I think that's one of the reasons why at the moment no neglect therapy is actually
+[1678.440 --> 1682.440] recommended: because ideally you want to show effects in a controlled trial, and you want to show
+[1682.440 --> 1687.320] long-term effects that translate onto other behaviours. So I think this is quite encouraging.
+[1688.280 --> 1694.840] So hopefully what can be concluded from this visual feedback training is that a theory-driven intervention
+[1694.840 --> 1699.560] can actually lead to successful rehabilitation; that there is some transfer to activities
+[1699.560 --> 1705.160] of daily living; and that this intervention, hopefully as you can see, is a fairly basic intervention:
+[1705.160 --> 1710.440] it's cost effective, it's easy to apply, and it's easy to train staff and carers to actually do
+[1710.440 --> 1714.360] it. And one of the things which I think is also really crucial is that the patient doesn't actually
+[1714.360 --> 1719.560] require insight into the disorder in order to actually perform the rehabilitation procedure.
+[1719.560 --> 1724.600] Because at the moment, what health professionals tell neglect patients to do as some kind of
+[1724.600 --> 1728.840] intervention, because nothing is actually formally recognized, is scanning training: patients
+[1728.840 --> 1733.560] are encouraged to scan the left-hand side, encouraged to scan the left side of space,
+[1733.560 --> 1737.400] and of course they don't really know that they have this problem, so the minute you stop telling them
+[1737.400 --> 1741.240] to do it, they stop doing it, whereas with this task you don't really need this kind of insight
+[1741.240 --> 1745.560] into the disorder. But of course, there is obviously now a need for a
+[1745.560 --> 1750.680] larger clinical trial to assess the efficacy of this particular treatment, and obviously I have a
+[1750.680 --> 1754.840] medical clinical collaborator, with whom I've collected all these data, and he basically says:
+[1754.840 --> 1758.680] you've shown this in ten patients, don't talk about it at all, it means nothing, we need a bigger trial.
+[1758.680 --> 1762.040] But of course I am talking about it, because there's no way that I'm doing a larger clinical trial;
+[1762.040 --> 1767.960] that's not my job. So this is as good as it gets from my point of view. Okay, so that's really
+[1767.960 --> 1772.840] the first part of my talk, which is really the longer part. But the overall conclusions:
+[1773.560 --> 1778.280] well, hopefully I've shown you that neglect patients are not impaired in online action control,
+[1778.280 --> 1783.240] but that they fail in indirect, offline actions, and that therefore we can really
+[1785.240 --> 1790.040] exploit these unimpaired online reaching abilities for successful rehabilitation.
+[1791.000 --> 1794.840] And this actually implies that there must clearly be shared influences of vision for
+[1794.840 --> 1799.320] action on vision for perception. And I think this is actually quite nice, because there is
+[1799.320 --> 1803.640] a lot of evidence in the literature that perception can influence action;
+[1803.640 --> 1807.240] there's much less evidence showing that action can actually influence perception, and hopefully
+[1807.240 --> 1812.360] this is what I've shown with these experiments here. And therefore, again, I've already said
+[1812.360 --> 1815.560] this before: maybe when we're looking at actions, and we're talking about
+[1815.560 --> 1819.960] actions, we need to be more precise about how we actually define actions, because not all actions are
+[1820.040 --> 1826.360] the same, and not all actions are mediated by similar structures. Okay, so that is really
+[1826.360 --> 1832.520] the first part of my talk, which is the more clinical side. And what we've done now,
+[1833.240 --> 1838.040] and I've already said this before, is that I don't really want to move on to a large-scale
+[1838.040 --> 1842.680] clinical trial, but one of the things I am actually interested in, and this is very
+[1842.680 --> 1847.720] much driven by the motor literature, is to actually compare this visual feedback training, which we've
+[1847.720 --> 1853.640] been doing, with tDCS, which is transcranial direct current stimulation, because previous studies,
+[1853.640 --> 1859.560] for example one by Roland Sparing, have actually applied
+[1860.440 --> 1865.640] tDCS to the left parietal cortex. They applied inhibitory stimulation to the
+[1865.640 --> 1870.360] left parietal cortex in neglect patients, and by doing that they found that the neglect
+[1870.360 --> 1875.160] symptoms actually improved. The idea is that if you have a right hemisphere lesion which
+[1875.160 --> 1879.880] leads to neglect, you get a sort of overactive left hemisphere; the left hemisphere is
+[1879.880 --> 1885.880] too active, and if you damp down that activity, you find an improvement in neglect function.
+[1886.680 --> 1892.200] And what we're now trying to do is combine the two:
+[1892.200 --> 1897.480] applying tDCS to the undamaged left hemisphere in combination with the behavioural
+[1897.480 --> 1902.920] training which we've been doing. We're hoping that if we combine tDCS with VFT training,
+[1902.920 --> 1907.960] we get the biggest behavioural rehabilitation effect, that we find
+[1907.960 --> 1913.080] the biggest improvement in neglect symptoms. And the reason we're hoping this is true
+[1913.080 --> 1918.360] is very much taken from the motor literature, because tDCS has been quite successfully used
+[1918.360 --> 1923.320] in trying to improve motor function, and it's particularly successful
+[1923.960 --> 1927.640] when the patient is actually performing motor actions at the same time. You find the
+[1927.640 --> 1932.760] biggest improvement in paralysis by applying tDCS together with some
+[1932.760 --> 1937.320] sort of behavioural training. And this is really something that we're investigating just now;
+[1937.320 --> 1940.600] I've talked to Daniela about getting the ethics and how painful that is, and I think we're
+[1940.600 --> 1944.600] pretty much at a similar stage in doing this, so I don't know why we're doing it, actually; it's just too
+[1945.240 --> 1950.360] bad. Okay, so this is now really moving on to work which is not
+[1950.360 --> 1957.160] clinical, because the clinical work takes a very, very long time to do. At the same time,
+[1957.160 --> 1962.200] I've always had an interest in what happens with spatial biases in healthy
+[1962.200 --> 1966.760] subjects. This is really work which I spent a long time doing in Britain, which I
+[1966.760 --> 1973.000] moved away from, and which I've now started to investigate a little bit more. I think most of you
+[1973.000 --> 1978.200] will know that all of us, and it's not just people, also animals,
+[1978.200 --> 1983.480] show a sort of subtle bias favouring left space when it comes to visual attention. We all
+[1983.480 --> 1989.800] have a bias of orienting towards left space. For example, in tasks like this, where we are asked to
+[1989.800 --> 1994.600] judge where the centre of the line is, we show a subtle bias to the left-hand
+[1994.600 --> 2000.040] side, like this: this mark here is objectively actually further to the left, but we tend
+[2000.040 --> 2004.120] to judge it as being centrally presented. And the idea is that because the
+[2004.120 --> 2009.240] right hemisphere favours attention, we tend to get an exaggeration of left
+[2009.240 --> 2015.400] space in healthy subjects. And people have really known about this for
+[2015.400 --> 2019.400] quite a long time: we favour left space, people do it, animals do it; there seems to be
+[2019.400 --> 2024.600] an orienting bias towards left space. And there are certain properties which influence this
+[2024.600 --> 2030.520] bias, so it can get modulated by certain tasks and certain situations. And one of the
+[2031.480 --> 2036.440] things which can actually modulate the bias is fatigue. There are studies
+[2036.440 --> 2041.320] by Tom Manly and colleagues who basically showed that the leftward bias that we all show gets attenuated
+[2041.320 --> 2045.960] and shifts towards the right with decreasing alertness and fatigue. So the more fatigued we
+[2045.960 --> 2051.960] become, the less of a left bias we actually show. And they very much argued that this
+[2051.960 --> 2056.680] is in line with another dual-route model, which talks about dorsal and ventral streams,
+[2056.680 --> 2060.760] but positioned slightly differently from the Milner and Goodale dorsal
+[2060.760 --> 2066.360] and ventral streams. In Corbetta and Shulman's attentional model, they say that healthy people
+[2066.360 --> 2071.080] like you and me have a right-hemisphere-lateralized ventral attention network which underpins alertness.
+[2071.560 --> 2075.800] The ventral attention network isn't really quite where I'm pointing here, it's actually
+[2075.800 --> 2080.600] somewhat more superior, but it doesn't really matter: we have a ventral attention network
+[2080.600 --> 2084.920] which underpins alertness, and that's the same in all of us. And obviously, if you perform a
+[2084.920 --> 2090.200] task over a long period of time, you then get fatigued, so you have decreased activation in this
+[2090.200 --> 2096.200] network, which then gives the left dorsal orienting network, which is pretty much this network,
+[2096.200 --> 2101.080] a competitive advantage, therefore driving behaviour rightward. So this is very much
+[2101.080 --> 2107.960] the idea: we all have a right-lateralized alertness network which tires out over time.
+[2109.720 --> 2113.480] And the question that we were then really asking, which in the remainder of my talk I'll
+[2113.480 --> 2118.360] try to address, is: is this really true? Do all of us really have a right-hemisphere-lateralized
+[2118.360 --> 2122.840] attention network? Is this a uniform feature of the healthy population, or may there be
+[2122.840 --> 2129.800] differences between different people on this?
+[2130.200 --> 2135.720] This idea pretty much came almost entirely from my PhD student, because when we were
+[2135.720 --> 2139.560] doing this kind of work, he went through the literature and he basically said: do you
+[2139.560 --> 2143.640] realize that in all studies of spatial attention people have a leftward bias, but
+[2143.640 --> 2148.680] there's always a subsection of people, ranging between 5 and 30%, who show a rightward bias?
+[2148.680 --> 2152.120] And I said: yeah, you get some variation; you look at spatial bias and some people are left,
+[2152.120 --> 2156.520] some people are right, it's totally boring. And he said: hmm, I don't know, really, because maybe
+[2156.520 --> 2160.040] this is meaningful, maybe there are genuine differences between people,
+[2160.040 --> 2164.760] maybe some people show a left bias and some people show a right bias. And already McCourt, in
+[2164.760 --> 2169.000] 2001, had noticed this and said: well, this might be meaningful, there might be
+[2169.000 --> 2175.400] genuine observer differences. But nobody ever really followed this up until a paper was
+[2175.400 --> 2180.040] published by Thiebaut de Schotten in 2011, and this is when I paid a little bit more
+[2180.040 --> 2185.720] attention to this idea. What they showed in their paper was that the relative
+[2185.720 --> 2191.080] lateralization of a right white matter pathway predicted the degree of spatial bias. What they
+[2191.080 --> 2194.280] were showing in particular, and it doesn't really matter which connections we're talking about here,
+[2194.280 --> 2198.280] is that you basically have a very big white matter pathway which
+[2198.280 --> 2203.480] connects parietal and frontal areas. And what they showed in their paper is that
+[2203.480 --> 2209.240] participants in whom this pathway was larger in the right compared to the left
+[2209.240 --> 2214.120] deviated more to the left in the line bisection task, whereas participants who had the opposite
+[2214.120 --> 2219.000] asymmetry showed either a right bias or no bias. So there seems to be some relationship
+[2219.000 --> 2222.760] between the size of your right white matter tract and the kind of bias that you show on these kinds of
+[2222.760 --> 2227.800] tasks. And this is what I thought was actually quite interesting, because it really seems that
+[2227.800 --> 2232.520] maybe there are anatomical differences between people which drive whether somebody
+[2232.520 --> 2238.200] has a leftward bias or a rightward bias. So we then started asking
+[2238.200 --> 2242.440] this question more specifically: is it possible that some people actually have a rightward
+[2242.440 --> 2246.120] bias, and that this can actually be a trait, rather than just random variation
+[2246.840 --> 2252.040] in the data, like you would expect? And if they do, if we can identify people who show a right
+[2252.040 --> 2257.320] bias, do these people then show different behavioural patterns? For example, if you look at time on task, if you
+[2257.320 --> 2262.520] look at performance over time, do they shift in the same direction as people who show a leftward bias?
+[2263.720 --> 2266.920] So we decided, and this was actually Chris, so Chris decided, to
+[2266.920 --> 2272.520] investigate this a little bit more. We used the task which I showed you
+[2272.520 --> 2276.920] before, the landmark task, where rather than asking people to bisect lines, you present them
+[2276.920 --> 2280.840] with lines which are already pre-bisected, and you ask them to say which of the two
+[2280.840 --> 2286.360] ends they think is actually shorter, or longer. We did this in two sets of
+[2286.360 --> 2292.280] experiments. We first had 20 participants whom we tested in three different sessions, because what
+[2292.280 --> 2297.640] we really wanted to know is: is people's usual bias consistent over time?
+[2297.640 --> 2302.840] Do they show the same bias repeatedly on different occasions? And if this is true, if we
+[2302.840 --> 2307.160] establish that different people really do this, we then wanted to see what happens to
+[2307.160 --> 2311.400] the time-on-task effect: what happens when you then ask people to do a task over a prolonged
+[2311.400 --> 2316.760] period of time? Because if you follow the Corbetta and Shulman model, what you would actually say
+[2317.560 --> 2322.760] is that people have a right-lateralized attention network that tires out over time,
+[2322.760 --> 2326.120] so everybody should shift rightwards. Whether you have an initial leftward or
+[2326.120 --> 2331.960] rightward bias, your behaviour should shift rightwards. So those are the two questions
+[2331.960 --> 2337.160] that we really addressed in two experiments. This is the paradigm, very simple: we
+[2337.160 --> 2342.120] had an initial fixation cross; the line was presented for 150 milliseconds, so quite briefly, quite a
+[2342.120 --> 2347.160] difficult task; and the participants had to decide whether this line was actually longer or
+[2347.160 --> 2353.720] this line was longer. From that, again, we then calculated the point of subjective equality, and usually,
+[2353.720 --> 2359.160] if you do this over a large number of subjects, you find an overall leftward bias. What we
+[2359.160 --> 2366.040] actually did, based on this performance, is we then split participants into three different subgroups:
+[2366.040 --> 2370.280] we had a left-bias group, a right-bias group and a no-bias group, and we determined
+[2370.280 --> 2375.560] these bias groups by using the 50% confidence interval of the individually fitted
+[2375.560 --> 2379.400] psychometric functions. So we basically had a cut-off where we decided which people
+[2379.400 --> 2385.080] were showing a left bias, a right bias, or no bias. And first of all we just asked: do people who
+[2385.080 --> 2390.920] we identify as showing a left bias on one day also show a left bias on a second
+[2390.920 --> 2395.000] day and a third day? So we basically ran the experiment over three different days,
+[2395.400 --> 2401.240] separated by a minimum of 24 hours.
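+
+(For illustration, the point of subjective equality and the 50% confidence-interval classification described above could be computed along these lines. This is a sketch under stated assumptions, namely a cumulative Gaussian psychometric function, a least-squares fit and a trial-resampling bootstrap; the names are made up for the example, and the study's actual fitting procedure may differ.)
+
+    import numpy as np
+    from scipy.optimize import curve_fit
+    from scipy.stats import norm
+
+    def cum_gauss(x, mu, sigma):
+        # psychometric function: P("right end judged shorter") vs transector offset
+        return norm.cdf(x, loc=mu, scale=sigma)
+
+    def pse_and_bias(offsets, resp, n_boot=1000):
+        """Fit the PSE (mu) for one observer and classify bias via a 50% CI.
+        offsets: transector position per trial (negative = left of true centre)
+        resp:    1 if the right end was judged shorter on that trial, else 0"""
+        (mu, sigma), _ = curve_fit(cum_gauss, offsets, resp, p0=[0.0, 5.0])
+        boot = []
+        for _ in range(n_boot):  # resample trials to bootstrap the PSE
+            i = np.random.randint(0, len(offsets), len(offsets))
+            try:
+                boot.append(curve_fit(cum_gauss, offsets[i], resp[i],
+                                      p0=[mu, sigma])[0][0])
+            except RuntimeError:  # occasional non-convergence on a resample
+                continue
+        lo, hi = np.percentile(boot, [25, 75])  # central 50% interval
+        if hi < 0:
+            label = "left bias"   # whole interval left of veridical centre
+        elif lo > 0:
+            label = "right bias"
+        else:
+            label = "no bias"
+        return mu, (lo, hi), label
+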
+[2427.720 --> 2431.720] And we then looked at what happened to the time-on-task effect. And remember, on the Corbetta and Shulman model you would
+[2431.720 --> 2436.840] expect, over time (this is the effect over time), the bias to shift to the right, independent
+[2436.840 --> 2441.480] of initial bias. And that's really not what we found, because what we found is, very much as
+[2441.480 --> 2447.160] expected, that the participants who had a left bias shifted rightwards, but on the
+[2447.160 --> 2451.480] other hand the participants who had a right bias actually shifted leftwards, so they really didn't
+[2451.480 --> 2456.920] show the expected rightward shift. The participants who had no bias pretty much stayed the same.
+[2457.560 --> 2460.200] And if you look at this graph and think, okay, maybe this is just
+[2460.200 --> 2464.840] regression to the mean and learning, we also looked at the curve widths,
+[2464.840 --> 2469.560] and the curve width gives you an indicator of variability: how variable is people's performance
+[2469.560 --> 2474.200] over time. And very much as you would expect, the curve widths actually became greater
+[2474.200 --> 2478.040] throughout the course of the experiment, because obviously people were tiring
+[2478.040 --> 2482.600] and were finding the task more difficult. But the important thing is that the curve widths,
+[2482.600 --> 2486.680] so the variability that people show over time, and the shift in baseline were actually
+[2486.680 --> 2491.320] uncorrelated. So there was no correlation between the shifts that we demonstrated here
+[2491.320 --> 2495.880] and the increasing curve widths, or generally the change in the curve width of the psychometric function.
+[2496.920 --> 2501.160] And really just to look at this again, what we also did is we then looked at the
+[2501.160 --> 2506.920] relationship between the initial bias and the shift over time. Because you
+[2506.920 --> 2510.360] could say, okay, why are you making this binary distinction, why are you grouping people into left
+[2510.360 --> 2514.760] and right bias, why don't you just look at them all together? Which is what we did
+[2514.760 --> 2519.960] here. So what we actually found here is that the stronger the initial bias, the stronger the shift
+[2519.960 --> 2524.840] in bias over time in the opposite direction. So what that basically means is that people with a
+[2524.840 --> 2529.640] big left bias shifted more to the right compared to people with a smaller bias,
+[2529.640 --> 2533.480] and the same was actually true for the right bias: people who had a bigger initial right bias
+[2533.480 --> 2539.000] shifted more in the opposite direction. So there was actually a negative correlation.
+[2540.600 --> 2546.680] Okay, so what can we conclude from these data? I would actually like to argue that maybe it's
+[2546.680 --> 2551.800] possible that we have genuine behavioral differences, genuine subtypes, in
+[2551.800 --> 2556.680] the population in relation to spatial attention. We all know a lot about individual differences;
+[2556.680 --> 2559.960] people just haven't really looked at them very much in relation to spatial bias or spatial
+[2559.960 --> 2565.560] attention. So maybe it's actually a stable trait, because what we've found is that
+[2565.560 --> 2572.120] the bias remains consistent over three different days. So maybe there are actually different subtypes,
+[2572.120 --> 2576.760] and maybe they're driven by varying anatomical asymmetries, like Thiebaut de Schotten and colleagues have found,
+[2576.760 --> 2581.560] and maybe they're also driven by functional asymmetries.
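+The correlation checks described here are ordinary per-observer Pearson correlations; a minimal Kotlin sketch (variable names illustrative):
+
+    import kotlin.math.sqrt
+
+    // Pearson r between two per-observer measures, e.g. each observer's change
+    // in curve width versus their shift in baseline bias across the session.
+    fun pearsonR(a: DoubleArray, b: DoubleArray): Double {
+        require(a.size == b.size && a.size > 1)
+        val ma = a.average()
+        val mb = b.average()
+        var cov = 0.0; var va = 0.0; var vb = 0.0
+        for (i in a.indices) {
+            cov += (a[i] - ma) * (b[i] - mb)
+            va += (a[i] - ma) * (a[i] - ma)
+            vb += (b[i] - mb) * (b[i] - mb)
+        }
+        return cov / sqrt(va * vb)
+    }
+
+An r near zero between curve-width change and baseline shift is the uncorrelated result reported above, while the negative r between initial bias and shift is exactly the opposite-direction pattern described.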
+[2582.360 --> 2587.800] Because there was an even more recent paper, by Cai and colleagues, who found that participants who
+[2587.800 --> 2593.240] displayed atypical right-hemisphere language production (this was an fMRI study in which
+[2593.240 --> 2598.520] they looked at exactly this) also displayed atypical
+[2598.520 --> 2601.880] left-hemisphere spatial attention dominance; and they actually used the landmark task to
+[2601.880 --> 2606.040] assess this. So clearly people aren't all the same, and this is quite interesting: if you have
+[2606.040 --> 2611.640] right-hemisphere language production dominance, you also tend to have left-hemisphere spatial attention dominance.
+[2612.600 --> 2618.760] So maybe it is actually true that the trait determines, first of all, your behavior, and then
+[2618.760 --> 2623.560] it also determines some other function, like for example time on task, which is
+[2623.560 --> 2630.200] what we looked at here. So, coming back again to dual-route models
+[2630.200 --> 2634.760] of attention, the question really then was: is it really true that there's a
+[2634.760 --> 2640.200] right-hemisphere-lateralized attention network whose activity decreases over time
+[2640.200 --> 2646.120] and therefore induces a uniform rightward bias in all participants? And we would really
+[2646.120 --> 2650.760] argue from these data that maybe this interpretation doesn't hold, because we found that
+[2650.760 --> 2655.000] participants who had an initial rightward bias actually shifted leftwards, rather than further
+[2655.000 --> 2660.680] rightwards as that model would predict. So what we propose instead
+[2660.680 --> 2664.600] (and we're at the moment doing EEG studies to look into this a little bit more)
+[2665.320 --> 2669.640] is that there might be some sort of neural fatigue which accounts
+[2669.640 --> 2674.520] a little bit better for the time-on-task effect. So maybe in participants with an initial leftward
+[2674.520 --> 2679.240] bias, fatigue is actually greater in the right hemisphere, causing a rightward shift, but in
+[2679.240 --> 2683.000] participants with an initial rightward bias, fatigue may be greater in the left hemisphere, thus
+[2683.000 --> 2687.480] causing a leftward shift. We're at the moment trying to look into that with EEG, and what
+[2687.480 --> 2692.840] we've already found is that basically the size of the bias that you show is very much driven by
+[2692.840 --> 2697.000] the involvement of the right hemisphere: the greater the involvement of the right
+[2697.000 --> 2701.400] hemisphere, the greater the pseudoneglect bias that you show. But these specific
+[2702.200 --> 2706.840] issues we haven't really quite addressed yet. Okay, so what can we conclude from this?
+[2707.720 --> 2712.040] There seem to be differences in attentional biases, and these differences could be
+[2712.040 --> 2716.920] genuine observer subtypes; they may be driven both by anatomical and by functional
+[2716.920 --> 2721.720] asymmetries, because there have been other studies pointing in this direction. And maybe, if
+[2721.720 --> 2725.960] you have these kinds of observer differences, this actually leads to different behavioral patterns:
+[2725.960 --> 2730.120] we've looked at time on task; there might be other behavioral patterns which are interesting.
+[2730.120 --> 2733.480] So it does actually challenge current models of attention and alertness, which seem to assume
+[2733.480 --> 2740.760] that we all have a uniform, lateralized attentional network. Okay, and this really just leaves me
+[2740.760 --> 2745.480] to thank my collaborators, in particular Keith Muir and Stephanie Rossit on the
+[2745.480 --> 2749.480] clinical side; they were involved in the clinical side, and this is Keith, who's basically
+[2749.480 --> 2754.680] stopping me from talking about the data we have, so don't mention it at all. Then, really, for the
+[2754.680 --> 2759.480] more behavioral studies, Gregor Thut, Gemma Learmonth and in particular Chris Benwell, who really
+[2759.480 --> 2764.600] very much drove this last data set. And, as you can imagine, especially
+[2764.600 --> 2769.080] with a rehabilitation study, there are a lot of clinical people and other people involved
+[2769.080 --> 2772.840] in helping you get the patients together and keeping them on track,
+[2772.840 --> 2778.840] performing the tasks. And then, last but not least, the different funding bodies. And that's it, thank you very
+[2778.840 --> 2784.680] much.
+[2785.160 --> 2789.320] [Host]: Before we meet again for drinks, let's thank Monika and take questions. And I should
+[2789.320 --> 2793.560] premise this: the questions have to be benevolent questions, no critical questions,
+[2793.560 --> 2800.120] probably because, as our external examiner, Monika is on record as a most benevolent person:
+[2800.120 --> 2807.160] she took the students' work further than an examiner needs to, and she also elevated student scores where
+[2808.120 --> 2813.960] examiners would be a bit harsh; most of you wouldn't know that. And
+[2813.960 --> 2818.600] Monika is immensely modest: she opened by saying there's been no progress in 20 years,
+[2818.600 --> 2825.080] but actually, at the heart of it, she has shown us effective ways of improving neglect, which
+[2825.080 --> 2830.280] improves quality of life, and that shouldn't be underestimated. So, okay, you've had time to think
+[2830.280 --> 2835.320] about questions, and we can go anywhere you want us to go; no critical questions allowed.
+[2836.120 --> 2840.600] [Q]: A really boring question, probably, but I wondered if there's any relationship of
+[2840.600 --> 2845.400] handedness with your observer subtypes.
[A]: Yes, in effect. I mean, I think this is really an important
+[2845.400 --> 2848.760] point, because the effects we've demonstrated so far are very much for right-handed people;
+[2848.760 --> 2853.560] we very much initially selected people to be right-handed, because there is a
+[2853.560 --> 2857.160] whole shift in bias with left-handed people. So I think this is another whole interesting
+[2857.160 --> 2861.400] question, basically what happens in left-handed people, and I would think that, initially (and even,
+[2861.400 --> 2864.920] you know, Corbetta probably wouldn't claim this right attention network for them), I think it's
+[2864.920 --> 2868.520] very much linked to right-handedness, because with left-handed people we already know that
+[2868.520 --> 2872.920] they have more bilateral representation, or even representation possibly in the
+[2872.920 --> 2876.040] left hemisphere. But I think it's a really interesting question, because it's really
+[2876.040 --> 2880.280] related to that: if we're really saying there are different subtypes, how exactly
+[2880.280 --> 2883.640] does handedness relate to that? Because sometimes it's not as straightforward as saying left-handedness
+[2883.640 --> 2888.120] is then mediated by a left attention network; it's more bilateral. So this is another,
+[2888.120 --> 2892.920] I think, interesting question. It's just, whenever I have student populations I ask everybody who's
+[2892.920 --> 2896.600] left-handed, and then you have the occasional year where you get a lot of left-handers and you
+[2896.600 --> 2901.240] can do studies, and then this year we just had absolutely no left-handers. So it's definitely on
+[2901.240 --> 2904.280] the agenda to look into it.
+[2904.760 --> 2916.040] [partly inaudible exchange]
+[2919.160 --> 2923.560] [Q]: I mean, the idea of common coding theory is that actions are stored together with their
+[2923.560 --> 2929.480] perceptual effects, the actions' effects on the environment, not the actions themselves.
+[2929.480 --> 2933.880] That could account for something like anti-pointing
+[2933.880 --> 2938.600] behavior: if the patients cannot engage with the effect of the action,
+[2938.600 --> 2941.400] then of course they would be impaired in the action itself.
+[2941.400 --> 2944.280] [A]: No, I would completely agree with that, and I think there's another way of looking at it,
+[2944.280 --> 2947.560] because I've really squeezed it into David and Mel's framework,
+[2947.560 --> 2951.400] but I think the difference really there is that in direct action you have, you know,
+[2951.400 --> 2955.400] egocentric coding and allocentric coding, and I would look at it more that way. As soon as you
+[2955.400 --> 2958.520] need allocentric coding, which is basically saying where this object is in relation to another one,
+[2958.520 --> 2962.680] in relation to the environment, that, I would say, is really where they
+[2962.680 --> 2966.600] have the problem. And you're right, you can then say that in this case it's clearly the perception
+[2966.600 --> 2969.960] driving the action; I would completely agree with that. And I would also really push it (I've
+[2969.960 --> 2974.120] told Daniela a little bit about this): I would actually say that in quite a lot of actions you
+[2974.120 --> 2977.960] can't ignore the perceptual side, because especially when actions are more complex, for example
+[2977.960 --> 2980.600] in driving, you have a lot of heavy perceptual input that you need to act on.
+[2980.600 --> 2983.000] I think this is one of the tasks where that comes in.
+[2983.560 --> 2989.320] It's just that, I think, if you purely look at that model then you would miss some
+[2989.320 --> 2993.160] spared abilities, and I think this is really the point that David was trying to make at the time as well,
+[2993.160 --> 2997.000] because if you really think perception and action are the same, then the spared abilities that some
+[2997.000 --> 3001.160] patients have, like agnosic patients and neglect patients, I think you'd miss;
+[3001.160 --> 3005.560] because the abilities that they have are limited, but they are there, and I think just
+[3005.560 --> 3009.640] thinking in terms of a dual-route model allows you to identify them. So I'm thinking it's just
+[3009.640 --> 3013.800] more helpful in terms of the approach. [Q, partly inaudible]:
+[3013.800 --> 3019.720] [A]: Yeah, I mean, to me this is an interesting question, because I do
+[3019.720 --> 3023.160] think that, you know, in the temporal lobe everything merges together; it's just
+[3023.160 --> 3036.600] really clever, so yeah, you know, true. [Q]: Very nice talk, thank you. A question about your line bisection
+[3036.600 --> 3042.360] and your idea of the fatigue: are you sure that the people with a right bias move left and the people with a
+[3042.360 --> 3048.360] left bias move right, and that they're not just getting better and getting rid of their bias?
[A]: No. And in fact
+[3048.360 --> 3051.560] this was very much one of the criticisms, you know: all you're showing, basically,
+[3051.560 --> 3055.080] is regression to the mean, that people are getting better. And the reason that we think
+[3055.080 --> 3059.640] people aren't getting better is that the curve width increases. I don't really quite see how
+[3059.640 --> 3062.760] people can improve on the task and at the same time become more variable, because if you then
+[3062.760 --> 3066.440] look at the curve widths (basically the difference between the beginning and the end point of the
+[3066.440 --> 3070.760] asymptote), the curve widths become wider, so clearly over time there's a stronger struggle with the task
+[3070.760 --> 3075.000] rather than learning. [Q]: But they're becoming more variable in their response while
+[3075.000 --> 3081.640] calibrating to what they have to be doing; those are two different aspects of the curve itself. [A]: Yeah, but
+[3081.640 --> 3085.400] they're also uncorrelated. So I think if that were true I would expect them
+[3085.400 --> 3089.480] to be correlated; I would then expect there to be a relationship between the increase
+[3089.480 --> 3094.760] in curve widths and the reduction in bias, and it was uncorrelated. [Q]: But mightn't they simply
+[3094.760 --> 3102.120] reflect different aspects of the problem? Does an increase
+[3102.120 --> 3109.720] in variability need to be causing the reduction in bias? [A]: I mean, I had exactly this
+[3109.720 --> 3113.320] question in Aberdeen, and I think they should be related, but this was exactly the
+[3113.320 --> 3116.920] argument that came back. So I think what we should do, and what we haven't done: you should
+[3116.920 --> 3121.400] have a vertical control condition. If you had a vertical control condition you would then expect,
+[3121.400 --> 3125.880] again, the regression to the mean, you would expect the increase in variation, but not the shift in bias;
+[3125.880 --> 3129.320] and that's what we should have done, and we didn't, and we got it published, so I think we were lucky.
+[3132.920 --> 3138.760] [Q]: I was curious about the mechanism of the changes after the rod
+[3138.760 --> 3143.400] rehabilitation. Do you think it's some sort of compensation, or something that we use all the time?
+[3143.400 --> 3147.960] So if you took a normal participant and gave them a rod that was rigged, that had lead shot ([A]: yes),
+[3148.920 --> 3154.200] would you cause a long-term change in them?
[A]: I think this is a little bit like what people have tried to
+[3154.200 --> 3157.400] look at with prism adaptation, because people will adapt to that, and then there's a
+[3157.400 --> 3161.640] certain carryover over time, and then you lose it. And I think it is an interesting question,
+[3161.640 --> 3165.800] because with neglect and prism adaptation, they adapt to it, and I would expect they would adapt
+[3165.800 --> 3170.120] to the different weighting as well, and they do then carry that over for weeks and months. So in prism
+[3170.120 --> 3174.360] adaptation, neglect patients really seem to be using the adaptation long term, whereas people like
+[3174.360 --> 3178.440] you and me don't, and nobody really knows why that's the case. But I think it's true that the
+[3178.440 --> 3182.600] mechanisms are actually very similar: you're adapting to the feedback that you
+[3182.600 --> 3186.520] get, and you then transfer that into your behavior long term, and nobody knows why, but
+[3186.520 --> 3190.040] that's what neglect patients seem to be doing. So I think Stephanie is going to play
+[3190.040 --> 3192.600] with the rods a little bit more, looking at different weights and
+[3192.600 --> 3196.600] densities, and whether people adapt to that; and I would suspect they do, because I think
+[3197.320 --> 3201.080] people don't really know how prism adaptation works and what exactly mediates it, but
+[3201.080 --> 3205.080] I think particular structures are implicated in it. So I think you can actually do this
+[3205.080 --> 3210.040] via structures which are maybe unimpaired, and then use it more long term. But
+[3210.040 --> 3213.240] my thinking there isn't very clear; that's just what I feel.
+[3218.280 --> 3224.680] [Q]: I'm curious about how you said that delayed pointing would be impaired in patients like that.
+[3225.320 --> 3228.120] Do you have an idea how long the delay has to be?
+[3228.120 --> 3233.400] [A]: Yes. I mean, we've done one experiment on that, and we basically had a delay, and it was quite long:
+[3233.400 --> 3237.160] we had a condition exactly like the pro-pointing task, and then,
+[3237.160 --> 3241.640] when the light had gone off, they had to wait for five seconds and then had to reach, and they
+[3241.640 --> 3245.800] were then very impaired in the reaches to left space. So then they really couldn't do it. And
+[3245.800 --> 3249.960] again, I think my argument there is that it has to produce a more long-term spatial mapping:
+[3249.960 --> 3254.120] they clearly have to retain that mapping more long term to then perform the reach. So
+[3254.120 --> 3259.000] what we looked at was five seconds. I think Mel's argument would be that the delay kicks in after
+[3259.000 --> 3262.680] a few milliseconds; I'm not quite so sure about that, actually, because I think you need a bit
+[3262.680 --> 3267.080] of a bigger delay before the task becomes difficult for the neglect patients. So, was that
+[3267.080 --> 3272.360] your question?
[Q]: Yes, because we know that the immediate response is the same in the two groups,
+[3272.360 --> 3276.520] in the two streams, and that remains on the scale of a couple of seconds, but I think what
+[3276.520 --> 3280.920] you're talking about is much longer. [A]: So yes, exactly; because there are different interpretations in
+[3280.920 --> 3284.120] relation to how quick the timing is in the dorsal stream, and we just really wanted to
+[3284.120 --> 3287.880] move away from that debate by saying we just want to make sure it's really very,
+[3287.880 --> 3291.480] very long, and it really is five seconds. But I think others have shown anyway, basically,
+[3291.480 --> 3297.720] that there is no sudden shift from immediate to more long-term delay, and I
+[3297.720 --> 3300.520] would actually agree with that. I think the deterioration is probably going to be gradual, a
+[3300.520 --> 3305.720] bit like what has been found for optic ataxia. So, no, I don't
+[3305.720 --> 3308.760] believe in the idea of it suddenly just disintegrating; I don't think so.
+[3310.280 --> 3315.800] [Q]: So it works like a memory? [A]: Yes, exactly.
+[3318.520 --> 3323.880] [Q]: You made a point about, you know, online action being okay, and you showed clearly that in
+[3323.880 --> 3330.520] pro-pointing the patient is accurate with the object, and it contrasts entirely with the
+[3330.520 --> 3334.840] beginning, where you showed a straight line and the patient marks one end of it when asked to
+[3334.840 --> 3339.960] mark the middle, and there's something that doesn't quite gel for me. It may be just
+[3340.760 --> 3345.400] that, immediately, if you've got one line and you can pick it up in the middle, then somehow you appreciate
+[3345.400 --> 3351.560] the two ends. And my real question comes in on how you account for extinction, because, you know,
+[3351.560 --> 3358.280] if you've got two objects, then I guess the patient will ignore the contralesional one;
+[3359.400 --> 3364.200] but had they been asked to point to the middle of the two objects, then that's under
+[3364.200 --> 3370.520] direct control, you're doing it here and now. Or maybe they don't show the effect if
+[3370.520 --> 3374.600] they're reaching for the middle of two objects?
[A]: No, I think that's quite a different task, because if
+[3374.600 --> 3377.800] you're reaching... I think these experiments have been done: if you basically have two objects and you
+[3377.800 --> 3381.080] ask people to reach for the middle, they really struggle with that, because they can't
+[3381.080 --> 3385.400] maintain two objects at the same time, whereas a single object has got two ends, you know.
+[3386.680 --> 3389.960] But I believe they can't do it because it's a bit like line bisection, where you have
+[3389.960 --> 3393.960] a huge perceptual input. So if you have a word initially, like a long word, and you ask them to pick
+[3393.960 --> 3397.480] it up, they don't pick it up correctly. The reason that they can then come to do it is that they're
+[3397.480 --> 3401.880] getting the proprioceptive feedback that it's tilted. I mean, Ian Robertson already showed
+[3401.880 --> 3405.000] initially that there's a big difference between pointing to a word and grasping it, so people
+[3405.000 --> 3408.600] are already a little bit better at grasping, but they're not perfect. And I think one of the reasons
+[3408.600 --> 3412.440] that this training works, apart from doing the action, is that in parietal cortex you have
+[3413.160 --> 3416.760] proprioception as well, and people are actually using the proprioceptive feedback to improve
+[3416.760 --> 3421.400] on the task, and then in the long term that filters through to the perception. Because you're
+[3421.400 --> 3425.080] right: with a long word, perceptually they can't do it, and without that
+[3425.080 --> 3428.600] feedback, if you just ask them to point to the ends, they don't benefit. So
+[3428.600 --> 3433.880] if the object gets smaller and smaller, then when they go to grasp it they do go for the midpoint,
+[3433.880 --> 3438.280] and they make proportionately fewer errors; it's only when it's a very long one that they have to look at it.
+[3438.280 --> 3441.160] And I think this is also an important point about the pointing task: they're
+[3441.160 --> 3445.720] basically pointing to a single target, so I think if you had a target with lots of distractors,
+[3445.720 --> 3451.800] people would just be desperate. [Q, partly inaudible]: ...and I think it's over a
+[3451.800 --> 3456.280] second. [A]: Yeah, no, but I think the problem with neglect... no, no, they were actually
+[3456.280 --> 3458.840] sitting in complete darkness. In fact, I should have said that: they were
+[3458.840 --> 3462.200] pretty much sitting in darkness, total darkness, and then the lights would come on.
+[3462.200 --> 3465.160] So there wasn't a cue, because I think this is an important point: as soon as
+[3465.160 --> 3468.440] it turns into a search task, they're awful; they couldn't do it.
+[3470.520 --> 3477.080] [Host, partly inaudible]: ...I think we should thank Monika again.
+[3477.080 --> 3481.080] Thank you, thank you very much.
+[3484.280 --> 3495.560] [inaudible closing remarks]
diff --git a/transcript/allocentric_1-JE7NSi6Fk.txt b/transcript/allocentric_1-JE7NSi6Fk.txt
new file mode 100644
index 0000000000000000000000000000000000000000..4c65ab703ab42b300cd4a7ae71bacba005dee372
--- /dev/null
+++ b/transcript/allocentric_1-JE7NSi6Fk.txt
@@ -0,0 +1,112 @@
+[0.000 --> 6.440] Let's take a look at how the brain receives and processes sensory information from the environment.
+[6.440 --> 12.000] To get started, let's take a look at regions of the brain and the functions they provide.
+[12.000 --> 19.000] Let's first look at the three major subdivisions of the brain, called the forebrain,
+[19.000 --> 22.000] the midbrain, and the hindbrain.
+[22.000 --> 27.000] The hindbrain consists of these two purplish regions.
+[27.000 --> 32.000] The lowest, most bluish region is the lower portion of the brain stem.
+[32.000 --> 35.000] And just above that is the cerebellum.
+[35.000 --> 39.000] The hindbrain is the oldest, most primitive region of the brain.
+[39.000 --> 47.000] It connects the brain with the rest of the body and maintains basic physiological functions necessary for survival,
+[47.000 --> 55.000] such as respiration, heart rate, sleep, wakefulness, and coordination of movement.
+[55.000 --> 61.000] The midbrain, which we can see in pink, is the topmost part of the brain stem.
+[61.000 --> 70.000] It provides a passageway for messages traveling between the forebrain and the rest of the body via the spinal cord.
+[70.000 --> 75.000] It also is responsible for orienting responses to sensory stimuli.
+[75.000 --> 83.000] For example, if you have a ball flying at your head, you can thank the midbrain for coordinating your unconscious, automatic response
+[83.000 --> 92.000] to duck or block the ball from hitting you before the conscious part of the brain has had a chance to process what is going on.
+[92.000 --> 96.000] Finally, we have the forebrain in the gold and yellow regions.
+[96.000 --> 101.000] And this is evolutionarily the most recent development of the brain.
+[101.000 --> 107.000] The forebrain consists of the smaller diencephalon, which is situated just above the midbrain,
+[107.000 --> 111.000] and the much larger telencephalon, or cerebrum.
+[111.000 --> 119.000] The forebrain handles our most complex and integrated thinking, particularly the outer cerebrum,
+[119.000 --> 123.000] and is the region of the brain most involved in perceptual processing.
+[123.000 --> 127.000] Let's take a deeper look at this outer cerebrum.
+[127.000 --> 133.000] The first thing to note about the cerebrum is that it, along with the cerebellum of the hindbrain,
+[133.000 --> 139.000] is divided into two hemispheres, a right hemisphere and a left hemisphere.
+[140.000 --> 148.000] Another noteworthy characteristic of the cerebrum is that it has an outer layer to it called the cerebral cortex.
+[148.000 --> 153.000] Within the cerebrum, this is where our most complex thinking takes place.
+[153.000 --> 158.000] The cortex gets its name because cortex means bark in Latin.
+[158.000 --> 165.000] Like the bark on a tree, the cortex surrounds the brain as an outer layer.
+[165.000 --> 171.000] Do note that the coloring of this image is distorted a bit because it was taken with MRI.
+[171.000 --> 177.000] Typically, the outer cerebral cortex layer is a gray color, and the inner portion is white,
+[177.000 --> 180.000] such as can be seen in this figure here.
+[180.000 --> 190.000] The darker gray areas are actually called gray matter, and the whitish areas are called, as you might have guessed, white matter.
+[190.000 --> 196.000] If you look closely, you will see that the gray matter is made up of the cell bodies of neurons,
+[196.000 --> 203.000] and the white matter is made up of the long, myelin-covered axons of these neurons.
+[203.000 --> 207.000] Let's take a closer look at one of these neurons.
+[207.000 --> 211.000] Neurons are information-carrying cells in the brain.
+[211.000 --> 219.000] Neurons can take on a handful of different shapes, but most neurons have a cell body that receives signals from other cells,
+[219.000 --> 225.000] and an axon that carries and transmits its own signal to other cells.
+[225.000 --> 231.000] Some cells, like the one here, have myelin sheaths attached to their axons.
+[231.000 --> 240.000] These myelin sheaths help the electrical signals that travel along the axon to speed up so that messages can be sent faster.
+[240.000 --> 249.000] Anyway, axons aren't naturally white, but when they are covered in myelin, they take on a whitish color.
+[249.000 --> 252.000] So that is how we get gray matter and white matter.
+[252.000 --> 264.000] Gray matter consists of the decision-making cell bodies of the neurons, and white matter consists of the wiring, or the connections, within the brain.
+[264.000 --> 272.000] And here is an image of a human brain where you can see portions of gray matter and white matter.
+[272.000 --> 276.000] Anyway, back to our miscolored MRI image.
+[276.000 --> 288.000] In addition to the cerebral cortex, the cerebrum also includes some subcortical structures, so named because they lie deep within the cerebrum, beneath the cortex.
+[288.000 --> 297.000] Let's take a look at the brain's two subcortical systems, the limbic system and the basal ganglia.
+[297.000 --> 303.000] The limbic system is one of the earliest regions of the forebrain to develop in the course of evolution.
+[303.000 --> 308.000] It helps process our motivation for behaviors, emotion, and memory.
+[308.000 --> 315.000] You'll start to notice that most structures within the brain come in pairs, one for each side.
+[315.000 --> 321.000] Two important limbic structures I want to point out are the amygdala and the hippocampus.
+[321.000 --> 329.000] The amygdala is the red, almond-shaped structure in this figure, and amygdala actually means almond.
+[329.000 --> 336.000] The amygdala plays a big role in some of our most basic emotions, such as fear and anger.
+[336.000 --> 342.000] Another important limbic structure is the hippocampus, the purplish-blue structure.
+[342.000 --> 349.000] Hippocampus means seahorse, and it was so named because the structure actually resembles a seahorse.
+[349.000 --> 354.000] The hippocampus plays a major role in the formation of new memories.
+[354.000 --> 362.000] It also helps us with our sense of allocentric space, which has to do with where we are located within our environment.
+[362.000 --> 367.000] Navigating with a map, for example, makes use of allocentric space.
+[367.000 --> 373.000] Next, let's take a look at the other subcortical system, the basal ganglia.
+[373.000 --> 384.000] The basal ganglia help us form associations between our actions or other events around us and certain environmental stimuli.
+[384.000 --> 391.000] You may have learned about classical and operant conditioning, such as with Pavlov and his drooling dog,
+[391.000 --> 395.000] where he conditioned the dog to drool to the sound of a bell.
+[395.000 --> 398.000] That would be mediated by the basal ganglia.
+[398.000 --> 406.000] They also help us control our voluntary motor responses, including skeletal muscle movement and eye movements.
+[406.000 --> 412.000] You'll notice that the amygdala is part of both the limbic system and the basal ganglia.
+[413.000 --> 421.000] And another important structure shown here is the thalamus, a key brain structure for sensation and perception
+[421.000 --> 428.000] that acts as a relay station for most sensory information.
+[428.000 --> 433.000] The cerebrum can also be divided into four main lobes.
+[433.000 --> 439.000] The frontal lobe up front, the parietal lobe on the upper sides towards the back of the head,
+[439.000 --> 446.000] the temporal lobe on the lower sides of the head, and the occipital lobe in the very back.
+[446.000 --> 458.000] The frontal lobe is in charge of our executive functions, including planning, organizing, decision making, problem solving, and reasoning.
+[458.000 --> 464.000] It also plays a role in our more complex emotions and emotional assessment of situations.
+[465.000 --> 474.000] It helps us with our fine, very controlled motor movements and motor programs, such as typing and texting, piano playing, and even speech.
+[474.000 --> 477.000] It is very important in the production of language.
+[477.000 --> 488.000] We have a specific area, usually only within the left frontal lobe (for some people it's on the right), called Broca's area, that specializes in speech production.
+[489.000 --> 495.000] The frontal lobe also plays a role in our sense of taste and smell.
+[495.000 --> 501.000] The parietal lobe's primary role is our sense of egocentric space.
+[501.000 --> 509.000] Different from allocentric space, processed by the hippocampus, egocentric space tells us how to interact with our environment.
+[509.000 --> 515.000] It tells us where our bodies are in space, whether we are right side up or upside down.
+[516.000 --> 523.000] The parietal lobe also manages our sense of touch and helps us pay attention to the world around us.
+[523.000 --> 534.000] The temporal lobe helps us with the recognition of objects and plays a big role in memory formation, working closely with the hippocampus.
+[534.000 --> 541.000] Because it interacts with the hippocampus, it is also responsible for our sense of allocentric space.
+[541.000 --> 550.000] It plays a role in language comprehension, and just as the frontal lobe has a specific area on one side for language production,
+[550.000 --> 556.000] the temporal lobe has a specific area called Wernicke's area for language comprehension.
+[556.000 --> 563.000] It, too, is typically found on the left side of the brain, but in some individuals it can be on the right.
+[563.000 --> 568.000] Finally, the temporal lobe helps us process sound, or audition.
+[569.000 --> 580.000] And last but not least, the occipital lobe's primary function is, simply but very importantly, the processing of vision.
+[580.000 --> 587.000] I also wanted to elaborate a bit more on the cerebellum here, even though it is considered part of the hindbrain.
+[587.000 --> 597.000] The cerebellum is very important for sensory-motor integration, referring to how our sensory and motor systems work together to guide our action.
+[597.000 --> 606.000] For example, some researchers have conducted experiments using prism goggles, which, when worn, display the world upside down.
+[606.000 --> 619.000] As you can imagine, when people wearing prism goggles try to walk anywhere or reach for anything, or pretty much perform any sort of action that requires visual input, they really struggle.
+[620.000 --> 629.000] However, with time and a lot of practice, people eventually learn to interact with their world in the same way as if their vision were right side up.
+[629.000 --> 638.000] And interestingly, after adapting to the prism goggles, it takes some time to readjust to the real world once taking them off.
+[638.000 --> 644.000] Anyway, it is the cerebellum that is responsible for integrating our vision and motor actions.
+[644.000 --> 649.000] The cerebellum helps us adapt to new mappings, such as with prism goggles.
+[649.000 --> 660.000] It also helps with coordination, especially when we need to make quick adjustments, such as keeping ourselves from falling when we trip over a rock, and it helps with posture.
+[660.000 --> 671.000] And amazingly, it constitutes only 10% of the brain's mass but contains over half of its neurons.
+[672.000 --> 680.000] And as I mentioned before, the cortex of the brain, the very outer layer, is responsible for our most complex processing.
+[680.000 --> 687.000] We can divide the cortex into the various regions that are responsible for early and more complex processing.
+[687.000 --> 698.000] Any regions labeled as primary cortex are the earlier, more elementary processing areas that handle the more basic dimensions of sensory information.
+[698.000 --> 706.000] So, for example, the primary visual cortex is the first portion of the cortex to receive and process visual information.
+[706.000 --> 718.000] And it handles the earliest stages of visual processing, such as the recognition of edges and lines of various orientations.
+[719.000 --> 730.000] Association cortex, labeled in purple, on the other hand, is an area that is more complex and integrates with our memory and past experience.
+[730.000 --> 737.000] Visual association cortex, for example, helps us recognize whole objects and people.
+[738.000 --> 750.000] The last thing I'd like to mention is that each of our senses has a primary pathway in which stimuli from the environment travel from sensory receptors to that sense's primary cortex.
+[750.000 --> 754.000] Here we see the primary pathway for vision.
+[754.000 --> 762.000] Sensory receptors in the eyes respond to light and transduce that light into a neural signal the brain can understand.
+[762.000 --> 771.000] That neural signal exits the eyes along the optic nerves and reaches the thalamus, that sensory relay we looked at earlier.
+[771.000 --> 780.000] From the thalamus, the signals travel on to the primary visual cortex in the occipital lobes, located in the back of the brain.
+[780.000 --> 791.000] Each sense has its own primary pathway like this that goes from the sensory receptors all the way to the primary cortex for that sense.
+[792.000 --> 797.000] Now that this video is over, consider briefly writing down from memory what you have learned.
+[797.000 --> 804.000] This sort of practice retrieving from memory is one of the best things you can do to remember what you just learned.
diff --git a/transcript/allocentric_1by5J7c5Vz4.txt b/transcript/allocentric_1by5J7c5Vz4.txt
new file mode 100644
index 0000000000000000000000000000000000000000..378c28050909d64bb356eaadb2c9cd940f01fa86
--- /dev/null
+++ b/transcript/allocentric_1by5J7c5Vz4.txt
@@ -0,0 +1,46 @@
+[0.000 --> 7.040] There are approximately 285 million people with visual impairments around the world.
+[7.040 --> 16.880] Making your app accessible not just opens it up to these users; it has the potential to improve design for everyone.
+[16.880 --> 21.160] Most people are familiar with an accessibility service called TalkBack,
+[21.160 --> 25.080] which is a screen reader utility for people who are blind or visually impaired.
+[25.240 --> 33.320] With TalkBack, the user performs input via gestures, such as swiping or dragging, or via an external keyboard.
+[33.320 --> 36.920] The output is usually spoken feedback.
+[36.920 --> 39.560] There are two gesture input modes.
+[39.560 --> 46.360] The first one is touch exploration, where you drag your finger across the screen,
+[46.360 --> 50.040] and the second one is linear navigation,
+[50.040 --> 58.440] where you swipe left and right with your finger until you find the item of interest.
+[58.440 --> 63.720] Once you arrive at the item you're interested in, you double-tap on it to activate it.
+[63.720 --> 71.800] The primary way in which you can attach an alternative text description for your UI elements, to be spoken by TalkBack,
+[71.800 --> 75.960] is by using an Android attribute called Content Description.
+[75.960 --> 80.200] If you don't provide a Content Description for an image button, for example,
+[80.200 --> 83.400] the experience for a TalkBack user can be jarring.
+[90.600 --> 94.440] For decorative elements such as spacers and dividers,
+[94.440 --> 101.320] setting the Content Description to null will tell TalkBack to ignore and not speak these elements.
+[101.320 --> 105.800] Make sure not to include control type or control state in your Content Description,
+[106.440 --> 113.080] words like button, selected, checked, etc., as Android natively does that for you.
+[113.720 --> 119.960] Android Lint will automatically show you which UI controls lack Content Descriptions.
+[119.960 --> 126.040] To keep TalkBack's spoken output tidy, you can arrange related content into groups by using
+[126.040 --> 131.720] focusable containers. When TalkBack encounters such a container, it will present the content
+[131.720 --> 138.440] as a single announcement. For more complex structures such as tables, you can assign focus to a container
+[138.440 --> 145.080] holding one piece of the structure, such as a single row. Grouping content both reduces the
+[145.080 --> 151.880] amount of swiping the user has to do and streamlines speech output. Here is an example of how
+[151.880 --> 164.680] ungrouped table content works. And here's the same content with grouping applied:
+[165.640 --> 173.800] "Content grouping activity. Song details: name, Hey Jude; artist, The Beatles; cost, $1.45."
+[173.800 --> 181.000] You should manually test your app with TalkBack and eyes closed to understand how a blind user
+[181.000 --> 186.120] may experience it. We also provide Accessibility Scanner as an app on Google Play.
+[187.080 --> 192.760] It suggests accessibility improvements automatically by looking at content labels,
+[192.760 --> 199.080] clickable items, contrast, and more. Visual impairment doesn't just refer to blindness:
+[199.960 --> 207.800] 65% of our population is far-sighted, for example.
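+In code, the labelling and grouping just described come down to a few view properties; a minimal Kotlin sketch (the layout, ids and strings are illustrative, not from the video):
+
+    import android.os.Bundle
+    import android.view.View
+    import android.widget.ImageButton
+    import androidx.appcompat.app.AppCompatActivity
+
+    class PlayerActivity : AppCompatActivity() {
+        override fun onCreate(savedInstanceState: Bundle?) {
+            super.onCreate(savedInstanceState)
+            setContentView(R.layout.activity_player)
+
+            // Spoken label for an icon-only control. No words like "button" or
+            // "selected": TalkBack appends control type and state itself.
+            findViewById<ImageButton>(R.id.play_button).contentDescription =
+                getString(R.string.play_song)
+
+            // Decorative divider: a null description tells TalkBack to skip it.
+            findViewById<View>(R.id.divider).contentDescription = null
+
+            // Focusable container: TalkBack reads the row's children
+            // ("Hey Jude, The Beatles, $1.45") as a single announcement.
+            findViewById<View>(R.id.song_row).isFocusable = true
+        }
+    }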
With careful design, you can make sure that many
+[207.800 --> 213.720] of your visually impaired users can have a positive experience without having to rely on TalkBack.
+[214.360 --> 221.320] Begin by making sure that the UI of your app works with other accessibility settings, including
+[221.320 --> 230.840] increased font size and magnification. Keep your touch targets large, at least 48 by 48 dp.
+[231.400 --> 237.160] This makes them easier to distinguish and touch. Provide adequate color contrast.
+[237.880 --> 243.480] The World Wide Web Consortium created color contrast accessibility guidelines to help.
+[244.040 --> 251.400] And to assist users with color deficiencies, use cues other than color to distinguish UI elements,
+[252.200 --> 259.320] for example, more descriptive instructional text. If you're using custom views or drawing your app
+[259.320 --> 268.120] window using OpenGL, you need to manually define accessibility metadata so that accessibility
+[268.120 --> 274.440] services can interpret your app properly. The easiest way to achieve this goal is to rely on
+[274.440 --> 281.240] the ExploreByTouchHelper class. With just a few methods, you can build a hierarchy of virtual views
+[281.240 --> 287.960] that are accessible to TalkBack. Making your app accessible doesn't just open it to new users.
+[287.960 --> 294.040] It helps to make the world a better place, one app at a time. To read more about developing and
+[294.040 --> 300.920] testing your apps for users with visual impairments, check out the links below. Also check out the video
+[301.640 --> 310.760] on developing for users with motor impairments.
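+For the custom-drawn case, a minimal Kotlin sketch of such a virtual-view hierarchy (the two hotspot regions and their labels are illustrative):
+
+    import android.content.Context
+    import android.graphics.Rect
+    import android.os.Bundle
+    import android.view.MotionEvent
+    import android.view.View
+    import androidx.core.view.ViewCompat
+    import androidx.core.view.accessibility.AccessibilityNodeInfoCompat
+    import androidx.customview.widget.ExploreByTouchHelper
+
+    // A custom-drawn view exposing two tappable regions to TalkBack as virtual views.
+    class HotspotView(context: Context) : View(context) {
+        private val bounds = listOf(Rect(0, 0, 200, 200), Rect(200, 0, 400, 200))
+        private val labels = listOf("Play", "Pause")
+
+        private val helper = object : ExploreByTouchHelper(this) {
+            // Which virtual view is under the finger during touch exploration?
+            override fun getVirtualViewAt(x: Float, y: Float): Int {
+                val i = bounds.indexOfFirst { it.contains(x.toInt(), y.toInt()) }
+                return if (i >= 0) i else ExploreByTouchHelper.INVALID_ID
+            }
+            override fun getVisibleVirtualViews(ids: MutableList<Int>) {
+                bounds.indices.forEach { ids.add(it) }
+            }
+            // Metadata TalkBack speaks and uses for focus highlighting.
+            override fun onPopulateNodeForVirtualView(id: Int, node: AccessibilityNodeInfoCompat) {
+                node.contentDescription = labels[id]
+                node.setBoundsInParent(bounds[id])
+                node.addAction(AccessibilityNodeInfoCompat.ACTION_CLICK)
+            }
+            override fun onPerformActionForVirtualView(id: Int, action: Int, args: Bundle?): Boolean =
+                action == AccessibilityNodeInfoCompat.ACTION_CLICK // handle the activation here
+        }
+
+        init { ViewCompat.setAccessibilityDelegate(this, helper) }
+
+        // Route hover events (touch exploration) to the helper.
+        override fun dispatchHoverEvent(event: MotionEvent): Boolean =
+            helper.dispatchHoverEvent(event) || super.dispatchHoverEvent(event)
+    }
+
+TalkBack can then focus, speak and activate each hotspot as if it were a real view.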
diff --git a/transcript/allocentric_2lfVFusH-lA.txt b/transcript/allocentric_2lfVFusH-lA.txt
new file mode 100644
index 0000000000000000000000000000000000000000..10320c770ae03aeb1524a7cce528f6cc333f7ce9
--- /dev/null
+++ b/transcript/allocentric_2lfVFusH-lA.txt
@@ -0,0 +1,892 @@
+[0.000 --> 7.840] You're watching ViewPoint.
+[7.840 --> 10.600] We will have a lecture,
+[10.600 --> 14.080] "Creativity across modalities in viewpoint
+[14.080 --> 17.480] constructions", with Professor Eve Sweetser
+[17.480 --> 20.600] of the University of California, Berkeley.
+[20.600 --> 23.600] Professor Eve Sweetser.
+[23.600 --> 196.780] [unintelligible]
+[196.780 --> 202.380] ...our own viewpoint in order to give an account, but we now have evidence from neuroscience
+[202.380 --> 209.540] that we have to take into account the construals and affordances of other
+[209.540 --> 212.140] people, and perhaps of the animals present.
+[212.140 --> 216.300] So, basically, when you are somewhere with other people,
+[216.300 --> 221.420] some part of you is always aware not only of what you can and cannot
+[221.420 --> 228.420] see, but of what they can see and what they cannot see.
+[228.420 --> 234.740] And, as we will see when we go through the literature, we can also use
+[234.740 --> 243.380] these abilities to construct the perceptions and affordances of people who are not there,
+[243.380 --> 249.380] or of imagined people.
+[249.380 --> 255.780] So, understandable communication involves simulation;
+[255.780 --> 263.460] this is a term from cognitive science which says, essentially, that what I am doing
+[263.460 --> 271.940] when I am trying to communicate with you is trying to get you to simulate, to go to, the
+[271.940 --> 278.940] situation I am describing.
+[278.940 --> 286.940] So simulation is what I am after: we do not memorize a set of facts, but
+[286.940 --> 292.020] simulate the situation in the world. And that necessarily means we have to
+[292.020 --> 298.820] run the simulation somewhere: since I have nowhere else to simulate the situation, I
+[298.820 --> 307.100] have to imagine it in some space. Okay.
+[307.100 --> 314.860] So, language bears on this. I have all these
+[314.860 --> 320.300] different spatial systems that I represent in language, and that includes things
+[320.800 --> 344.960] [unintelligible]
And I should just say, for a moment, in case you don't know this:
+[348.600 --> 352.800] there are linguistic systems in the world
+[352.800 --> 355.440] where I could not say,
+[355.440 --> 361.600] "oh, look, you have something right there"; I would have to express it
+[361.600 --> 367.700] in terms of compass directions,
+[367.700 --> 370.080] so I would have to know the compass directions,
+[370.080 --> 373.480] and I would have to be able to keep track of them.
+[373.540 --> 377.200] Okay, so that anticipates where I am going.
+[378.340 --> 440.420] [unintelligible]
+[440.420 --> 444.520] The movement of my arm could represent the movement, and so on.
+[444.520 --> 546.720] [unintelligible]
+[546.720 --> 817.000] [unintelligible]
+[817.000 --> 834.760] Okay. So, in interaction, I do not just have a space of my own; I also have a space
+[834.820 --> 839.220] that I share with the other people in the interaction.
+[846.740 --> 851.380] [unintelligible]
+[851.380 --> 858.380] Let's say, of course, that your gesture spaces do not overlap;
+[858.380 --> 862.380] there is a little space between your gesture spaces.
+[862.380 --> 866.380] And now, that means that there is a conversational space,
+[866.380 --> 868.380] which also includes your own space.
+[868.380 --> 872.380] And, crucially, there is a shared space.
+[872.380 --> 875.380] So a gesture directed into the shared space
+[875.380 --> 879.380] will be directed so that the other person can see it.
+[882.380 --> 885.380] Other spaces can be over here.
+[885.380 --> 888.380] But if I want to say something now,
+[888.380 --> 891.380] or if I want to make a point,
+[893.380 --> 897.380] that is going to be very difficult if I do it like this,
+[897.380 --> 899.380] or like this,
+[899.380 --> 902.380] if the addressee is over here.
+[902.380 --> 905.380] I have to do it where he can see it.
+[905.380 --> 908.380] And the rest is peripheral space.
+[908.380 --> 916.380] Now, that means, as I said, that whoever wants to make a point in a conversation
+[916.380 --> 922.380] cannot do it just anywhere; they have to do it where the others can see it.
+[922.380 --> 931.380] And, at the same time, what is central to a conversation is that you attend not only to your own space
+[931.380 --> 934.380] but to the other person's space.
+[934.380 --> 939.380] That is really what you are doing when you carry on a conversation.
+[939.380 --> 951.380] So there is an interactional use of space, and there is also an extra-interactional use of space.
+[952.380 --> 956.380] Okay. And we just need to keep that in mind.
+[956.380 --> 969.380] So, as I said, for interactional adjustments you gesture in the shared space or, for example, in the other space, which is extra-interactional.
+[969.380 --> 971.380] That does not have to make an interactional point.
+[971.380 --> 980.380] So, in the McNeill Lab, there was a study in which students gestured in the shared space and in the other space.
+[980.380 --> 986.380] The task they were doing was folding a piece of paper.
+[986.380 --> 993.380] So they were first shown with a model, and then they had to do it without one.
+[993.380 --> 995.380] Okay.
+[995.380 --> 1008.380] And the instructor was really gesturing in the shared space, helping to fold the piece of paper.
+[1008.380 --> 1025.380] So, for example, when you gesture in the shared space or in your own space, it can be extra-interactional, if it depicts where something is, like the piece of paper, or I can really be trying to make a point and manage the conversation.
+[1025.380 --> 1027.380] Okay.
+[1035.380 --> 1039.380] Yes, that is important. Thirteen.
+[1043.380 --> 1045.380] Okay.
+[1045.380 --> 1053.380] So, when we are interacting, we said that we take viewpoints, and we just said that it is not necessarily
+[1053.380 --> 1060.380] my own viewpoint that I am adopting; I may be imagining viewpoints.
+[1060.380 --> 1066.380] So, if I say, "and he said... and I said... oh no...",
+[1066.380 --> 1074.380] you know that those gestures are not directed at you; they are directed at a quoted speaker within the narrated conversation.
+[1074.380 --> 1078.380] But you also notice that I have already switched the roles.
+[1078.380 --> 1086.380] I said "he said" like this, and "I said" like this, so I did not even have time to move over there.
+[1086.380 --> 1088.380] I switched the roles,
+[1088.380 --> 1096.380] and this is what Terry Janzen has documented as rotation, in American Sign Language examples. (?)
+[1096.380 --> 1098.380] OK.
+[1100.380 --> 1108.380] He also documents in sign language narratives that these role shifts come in more than one kind.
+[1108.380 --> 1116.380] So, for example, I could say, "then he said..." and "then I said...",
+[1116.380 --> 1130.380] so there are two ways of rotating roles in a narrative like that, and which one a signer uses depends on what they want to do. (?)
+[1130.380 --> 1147.380] Even more, I can represent, as we already said, more than one person at one point in time at the same time.
+[1147.380 --> 1153.380] I cannot literally be unaware of how you can see me. (?)
+[1153.380 --> 1173.380] So the question is: if I have the two things that I am talking about, how do I represent the two of them in one space? And the best way to represent two participants is to split the space. (?)
+[1173.380 --> 1187.380] So I have one space here and one space there; for example, if there are two people being represented, I can turn toward one space or the other, as if I were standing between the two of them. But I can also do something else,
+[1187.380 --> 1197.380] and here Paul Dudis has a great example from a story in American Sign Language,
+[1197.380 --> 1202.380] and the story is about two things: the cards and the card player. (?)
+[1202.380 --> 1209.380] So he has someone with their cards; you see the cards in my hand,
+[1209.380 --> 1215.380] and then the other one leans in, so, wait a minute:
+[1215.380 --> 1225.380] my body is now representing the card player who is sitting there, and my hand is representing the cards, which belong to a different space. (?)
+[1225.380 --> 1240.380] But my hand keeps representing the cards, and I can just direct a look at my hand for it to be understood that way. (?)
+[1240.380 --> 1248.380] So I can split the body across persons, which is what is happening here.
+[1249.380 --> 1252.380] OK, let's move on to the next one.
+[1259.380 --> 1262.380] OK, let's do 15 for the moment. All right.
+[1269.380 --> 1278.380] So, in gesturing, I am constantly making use of this structure of spaces and of processes. (?)
+[1280.380 --> 1289.380] So, if I literally represent a path, not a metaphorical one, I can say something like,
+[1290.380 --> 1293.380] "he came toward me," and I will move my hand toward myself,
+[1294.380 --> 1298.380] and it comes toward the space where I am.
+[1298.380 --> 1304.380] But if I am describing to you, for example, how you go about getting a degree,
+[1305.380 --> 1311.380] then I can say: first you go to the university, you have to do admissions,
+[1311.380 --> 1317.380] then you basically take the classes, so you do this, and my hand moves across the space,
+[1317.380 --> 1325.380] and step by step, I am not doing it toward myself, and finally you get the degree.
+[1325.380 --> 1336.380] So that is a use of space which is not about real space; it is about an abstract sequence, a process laid out as stages. (?)
+[1336.380 --> 1362.040] [unintelligible]
+[1362.380 --> 1363.380] OK.
+[1366.380 --> 1369.380] We just did 16 and 17, OK.
+[1372.380 --> 1377.380] OK, and we will see 18 in a second. OK.
+[1378.380 --> 1387.380] So, a crucial fact about these spaces: they have all kinds of internal deictic structure. (?)
+[1388.380 --> 1479.520] [unintelligible; the interpretation here is too garbled to recover]
+[1479.520 --> 1482.520] OK.
+[1482.520 --> 1486.360] So, 19.
+[1486.360 --> 1491.020] It is just an abstract diagram here.
+[1491.840 --> 1638.020] [unintelligible; only fragments survive]
+[1638.140 --> 1683.040] [largely unintelligible; the point, restated by the interpreter below, is that the "now" and the "Mommy" belong to the character's perspective, not the narrator's]
+[1683.040 --> 1696.040] In this example she brings, we have the introduction of a character, the child.
+[1696.040 --> 1698.040] That would be her.
+[1698.040 --> 1706.040] But in what follows, we have "Mommy," a point of view,
+[1706.040 --> 1710.040] because it is not simply the narrator's point of view.
+[1710.040 --> 1719.040] This "now," and this zooming in, reveal that it is the character's perspective that is being put there.
+[1719.040 --> 1730.040] And "Mommy," this way of referring, is also from the child's perspective, and not properly the narrator's.
+[1730.040 --> 1732.040] OK.
+[1732.040 --> 1736.040] OK, so let's go to 23.
+[1736.040 --> 1740.040] OK.
+[1740.040 --> 1750.040] So, Lieven Vandelanotte suggested that there is this added kind of indirect speech.
+[1750.040 --> 1763.040] So, what we were just looking at, the past-plus-"now," is an example of what some people call free indirect speech and style.
+[1763.040 --> 1765.040] OK.
+[1765.040 --> 1771.040] OK. So, she is bringing an example of constructions with "now," but in the past tense.
+[1771.040 --> 1775.040] In English, "now" with the past.
+[1775.040 --> 1784.040] And what has been found is that these constructions in fact reveal free indirect discourse,
+[1784.040 --> 1791.040] in which you have the blending of one point of view, the narrator's, with the point of view of a character.
+[1791.040 --> 1794.040] OK. OK.
+[1794.040 --> 1806.040] So, here, Lieven Vandelanotte notes a type of mixed viewpoint in language which he calls distanced indirect speech and thought.
+[1806.040 --> 1811.040] And here is an example: "so you were doing what you were..." (?)
+[1811.040 --> 1822.040] So, the pronouns are exactly correct for your current situation of discourse, but there is a distancing: it is a report of something. (?)
+[1822.040 --> 1823.040] OK.
+[1823.040 --> 1831.040] But the experience is not, right? It is something that is not the current situation being talked about.
+[1831.040 --> 1842.040] OK. So, among the constructions that show this distancing, one of the constructions identified by Vandelanotte
+[1842.040 --> 1851.040] is this distanced indirect discourse construction, which is what he calls it.
+[1851.040 --> 1858.040] When you have a construction like the one in the example, in which the "you" is not there to refer to an immediate
+[1858.040 --> 1863.040] situation of interaction between the speaker and their interlocutor,
+[1863.040 --> 1870.040] but to refer to something past, which in that example is marked by the "was."
+[1870.040 --> 1874.040] OK. OK.
+[1874.040 --> 1884.040] So, let's say that these combinations of viewpoint markers, the marks that are appropriate to the viewpoint of the narrative
+[1884.040 --> 1898.040] and those appropriate to the viewpoint of the character, when mixed with one another, produce an impression of two co-present viewpoints: the narrator's and the character's. (?)
+[1898.040 --> 1899.040] OK.
+[1899.040 --> 1902.040] And they will also have a form for us. (?)
+[1902.040 --> 1903.040] OK.
+[1903.040 --> 1912.040] So, what she is showing is that, in language, through the constructions and the linguistic marks that it offers,
+[1912.040 --> 1919.040] what happens is that some of them specialize for the narrator's point of view, others for the character's,
+[1919.040 --> 1927.040] and what we as readers perceive is that there is a mixture of these two points of view in the discourse.
+[1927.040 --> 1929.040] OK. OK.
+[1929.040 --> 1934.040] So, does the same happen in spoken language, which comes with gesture?
+[1934.040 --> 1938.040] And I will have to skip ahead a bit here, because we are running short on time.
+[1938.040 --> 1939.040] But, OK.
+[1939.040 --> 1941.040] Let's try 25.
+[1941.040 --> 1944.040] So, what she says is that this has also been observed,
+[1944.040 --> 1949.040] not only in literary language, in verbal language, but it has also been studied
+[1949.040 --> 1952.040] in American Sign Language.
+[1953.040 --> 1955.040] OK.
+[1955.040 --> 1965.040] So, it is recognized in gesture that I can have two kinds of viewpoint, which are character viewpoint and observer viewpoint.
+[1965.040 --> 1974.040] So, let's say I do this to represent an activity, a depiction of what someone might do if they are rowing.
+[1974.040 --> 1980.040] Or I do this to show, OK, watch what a person might look like when they are running.
+[1980.040 --> 1986.040] So, if they are running, they do this. That is character viewpoint. (?)
+[1986.040 --> 1992.040] Or I trace a trajectory, which is more of an observer viewpoint.
+[1992.040 --> 2001.040] So those are two viewpoints; yes, we have both, and I think I would do this for... (?)
+[2001.040 --> 2004.040] OK. So, let me see.
+[2004.040 --> 2008.040] So, she says that in gesture,
+[2008.040 --> 2011.040] as it accompanies ongoing speech,
+[2011.040 --> 2016.040] what happens is that both the speaker's point of view
+[2016.040 --> 2018.040] and an observer's point of view can be mixed in.
+[2018.040 --> 2023.040] So, if I make this gesture, I am probably talking,
+[2023.040 --> 2029.040] talking about a rowing action, or, if I do this, a running action.
+[2029.040 --> 2032.040] But if I do this for running,
+[2032.040 --> 2040.040] for example, I am already incorporating the point of view of an observer, which is not necessarily that of the person speaking.
+[2040.040 --> 2047.040] And what happens is that these two points of view, the speaker's and the observer's, get mixed all the time
+[2047.040 --> 2052.040] while we are producing speech together with gesture.
+[2052.040 --> 2058.040] OK, so, now I am going to show a story of how this works.
+[2058.040 --> 2064.040] And I think it will be just 29 for the moment. 29 to 31. (?)
+[2064.040 --> 2069.040] So, this is a story she is going to show, which was documented in a study.
+[2069.040 --> 2076.040] OK, so, in this example here, this is an American storyteller telling the story to a friend,
+[2076.040 --> 2084.040] and she is enacting a suspicious official holding a form. (?)
+[2084.040 --> 2086.040] So, what is happening here?
+[2086.040 --> 2091.040] We have two people, and the young woman is a storyteller,
+[2091.040 --> 2096.040] and she is telling her interlocutor a particular story
+[2096.040 --> 2102.040] in which the character is a suspicious official who is holding a document.
+[2102.040 --> 2112.040] OK, so, you can see that she is taking on the official's posture, with her gaze directed down, (?)
+[2112.040 --> 2116.040] and you can see that her forehead is wrinkling.
+[2116.040 --> 2121.040] So, you can notice that her gaze is directed at this imaginary document,
+[2121.040 --> 2123.040] her head is lowered,
+[2123.040 --> 2126.040] as if she were interacting with the paper,
+[2126.040 --> 2132.040] and her forehead is wrinkled, in a tone of skepticism.
+[2132.040 --> 2145.040] OK, now, in 30, she is now being her past self; the past self is responding to the official. (?)
+[2145.040 --> 2175.060] [unintelligible; the interpreter restates the point below]
+[2175.060 --> 2178.540] Now she changes point of view; she embodies the other one.
+[2178.540 --> 2181.640] Because now she is her past self,
+[2181.640 --> 2184.940] so she is acting all innocent, with a smile,
+[2184.940 --> 2190.640] but her hands are still holding that imaginary form.
+[2190.640 --> 2193.340] And she did not need to change her body posture;
+[2193.340 --> 2196.860] the only thing she did was raise her head
+[2196.860 --> 2199.120] and direct her gaze at the interlocutor.
+[2199.120 --> 2202.960] So there, there is the incorporation of another point of view,
+[2202.960 --> 2204.720] that of her past self.
+[2205.720 --> 2214.720] OK, so in all of these pictures, I think you can see, she is not looking at the person she is talking to.
+[2214.720 --> 2216.720] She is looking into space.
+[2216.720 --> 2220.720] At first she is looking at the space of the form, as she plays the official. (?)
+[2220.720 --> 2229.720] And then, speaking as her past self, she is looking into space, looking toward him, toward the official's space.
+[2229.720 --> 2232.720] So notice that in this...
+[2233.720 --> 2238.720] in these two figures, in these two images, what happens is that her gaze is directed
+[2238.720 --> 2242.720] first at the form and then at a distant space,
+[2242.720 --> 2246.720] not necessarily at the interlocutor she is interacting with.
+[2248.720 --> 2254.720] So, the thing is, when you see a person telling a story like this, you never stop to wonder where that person is looking.
+[2254.720 --> 2267.720] You know what they are doing and where they are looking.
+[2268.720 --> 2274.720] So, now the real-world interlocutor is going to ask a question. And now, 31.
+[2274.720 --> 2282.720] And in this slide now, the real-world interlocutor has asked the storyteller a question.
+[2283.720 --> 2289.720] So now he becomes the ground for her, right, instead of her being inside the story world, (?)
+[2289.720 --> 2294.720] and notice that one hand is still holding that document. (?)
+[2295.720 --> 2298.720] And now she turns to him, she addresses him,
+[2298.720 --> 2305.720] except, notice that one of her hands is still holding the imaginary form from the story.
+[2307.720 --> 2312.720] So, my point here is that this kind of mixture, where one part of the body
+[2312.720 --> 2315.720] is the narrator,
+[2316.720 --> 2319.720] and the other part is the character, (?)
+[2319.720 --> 2326.720] that is the kind of mixture we are seeing in free indirect speech and thought.
+[2327.720 --> 2332.720] So, what she wants to show here is exactly the following:
+[2332.720 --> 2337.720] what we saw in literature, with the literary examples of free indirect discourse,
+[2337.720 --> 2341.720] the distanced indirect discourse, as they call it,
+[2341.720 --> 2346.720] is that in ordinary speech, with gesture, we also manage to incorporate
+[2346.720 --> 2350.720] these multiple points of view, in which, on the one hand,
+[2350.720 --> 2354.720] she is embodying the narrator, herself as narrator,
+[2354.720 --> 2357.720] and on the other hand, she embodies the point of view of a character,
+[2357.720 --> 2361.720] because she still keeps holding that imaginary form.
+[2366.720 --> 2373.720] OK, so, when we see these mixtures, with one part of the body enacting the narrator and one part of the body enacting the character,
+[2374.720 --> 2382.720] or when we see, now this is the part that is the character, and that is the part of the body enacting the narrator,
+[2382.720 --> 2390.720] these are our signs that language, or the body, can dedicate one part to one viewpoint and another part to the other, (?)
+[2390.720 --> 2393.720] because they are incompatible with each other.
+[2394.720 --> 2398.720] So, what happens in these constructions in which we have, for example,
+[2398.720 --> 2403.720] the body manifesting two points of view, or, through language as well,
+[2403.720 --> 2406.720] in the literary examples, where we have the mixing of the "now"
+[2406.720 --> 2408.720] with the past,
+[2408.720 --> 2413.720] is that all of this serves as evidence to show that we are able
+[2413.720 --> 2419.720] to blend, or merge, these multiple points of view.
+[2422.720 --> 2428.720] OK, and let's do 32; we are getting to the end, OK?
+[2428.720 --> 2432.720] Hang on, she is already getting to the end.
+[2488.720 --> 2491.720] So, she is showing how this can also be seen in art.
+[2491.720 --> 2497.720] She refers to a painting of the scene of the Annunciation of the birth of the child Jesus,
+[2497.720 --> 2500.720] in which an angel speaks with the Virgin.
+[2500.720 --> 2522.720] And what happens is that in this painting the light is configured in such a way that the observer
+[2522.720 --> 2525.720] can take part in the scene.
+[2525.720 --> 2530.720] But in the painting, in the chapel where this painting is,
+[2530.720 --> 2534.720] what happens is that there is an external light
+[2534.720 --> 2537.720] that comes in through a window onto the Virgin,
+[2537.720 --> 2539.720] projected in that way.
+[2539.720 --> 2543.720] So, what happens is that we have here another point of view being introduced,
+[2543.720 --> 2545.720] which is the eye of God.
+[2546.720 --> 2549.720] OK, 33; we are getting there, we are almost done.
+[2552.720 --> 2557.720] So, in art, there is this ability to use secondary gaze,
+[2557.720 --> 2562.720] so I can have a depicted character who is looking at something, and that makes me,
+[2562.720 --> 2566.720] as a viewer of the painting, look at something too,
+[2566.720 --> 2570.720] or makes the viewer think: what is he looking at?
+[2570.720 --> 2572.720] What is over there?
+[2572.720 --> 2575.720] And this is also in comics and film,
+[2575.720 --> 2583.720] the same kind of thing, where one can draw a character, and the view can cut away, (?)
+[2583.720 --> 2587.720] between what one character is looking at, the first one,
+[2587.720 --> 2589.720] and then the other one who is looking.
+[2589.720 --> 2595.720] Or I actually see a character,
+[2595.720 --> 2601.720] and I see them over the shoulder of another character who I know
+[2601.720 --> 2603.720] is the one who is speaking.
+[2603.720 --> 2606.720] Right, then. There are two things there.
+[2606.720 --> 2611.720] So, as another type of manifestation of multiple points of view in art,
+[2611.720 --> 2616.720] she speaks about paintings, comics, and films:
+[2616.720 --> 2620.720] when you have, for example, in a painting,
+[2620.720 --> 2626.720] one of the people being depicted with their gaze directed at something,
+[2626.720 --> 2630.720] and you, as the observer, are led to direct your own gaze as well,
+[2630.720 --> 2637.720] or at least to imagine what it is that the person represented in that picture is looking at.
+[2637.720 --> 2642.720] In the case of comics or films, for example, where you have alternation between the characters,
+[2642.720 --> 2647.720] what you actually have is the point of view of one, then the point of view of the other,
+[2647.720 --> 2654.720] or sometimes you see another character from behind the back of a speaking character.
+[2654.720 --> 2659.720] So, this negotiation of point of view is also achieved
+[2659.720 --> 2663.720] in these other media of expression.
+[2665.720 --> 2669.720] OK, two more and we are done. 35. (?)
+[2669.720 --> 2671.720] Or rather, 34.
+[2673.720 --> 2680.720] OK, so we are saying that viewpoint is not only everywhere; it is multiple.
+[2680.720 --> 2688.720] That is what we are always experiencing: we are never alone in it; we are always experiencing more than one viewpoint. (?)
+[2688.720 --> 2692.720] We are experiencing multiple viewpoints, and the same is true in literature and so on.
+[2692.720 --> 2696.720] So, the main point of all of this that she wants to make
+[2696.720 --> 2702.720] is that point of view exists and permeates everything we do,
+[2702.720 --> 2704.720] and it never comes alone,
+[2704.720 --> 2709.720] because we always have an awareness of the other, of the presence of the other,
+[2709.720 --> 2714.720] and in the arts, in literature, and even in banal everyday speech,
+[2714.720 --> 2717.720] these points of view get mixed.
+[2719.720 --> 2728.720] And the ordinary way we experience this in life is that, for example, we distribute actual points of view in the world:
+[2728.720 --> 2730.720] so I have my own point of view,
+[2730.720 --> 2734.720] we have the others' points of view,
+[2734.720 --> 2740.720] and then my representations are a combination of these different points of view. (?)
+[2740.720 --> 2744.720] My viewpoint and your viewpoint are both there,
+[2744.720 --> 2748.720] and if I am representing more than one, I cannot literally see it all from here;
+[2748.720 --> 2750.720] my body is just doing this. (?)
+[2750.720 --> 2757.720] And the basic way we experience this is, first, that, placed in the world,
+[2757.720 --> 2760.720] we have the experience of our own point of view,
+[2760.720 --> 2762.720] of our interlocutor's point of view,
+[2762.720 --> 2767.720] and of everyone else's who populates the space we share.
+[2767.720 --> 2773.720] And this perception ends up being stored in the brain in some way
+[2773.720 --> 2777.720] that allows these things to settle into our conceptual structures.
+[2786.720 --> 2803.720] [largely unintelligible; a passage on how these viewpoint structures get expressed]
+[2803.720 --> 2813.720] ...through representation in some medium, such as signed language, written language, gesture, painting, film, and so on. (?)
+[2813.720 --> 2819.720] So, the fact that we have all these structures in our brains does not actually mean much by itself.
+[2819.720 --> 2827.720] We need to get outside that brain, that invisible mind, to see how all of this manifests itself
+[2827.720 --> 2830.720] in the multiple forms of language.
+[2831.720 --> 2838.720] And different media have different ways of marking viewpoint. (?)
+[2838.720 --> 2846.720] So, in spoken and written language, I have the possibility of using "now"
+[2846.720 --> 2850.720] with past tenses, the possibility of changing pronouns,
+[2850.720 --> 2854.720] tenses, forms, labels such as "Mommy," and so on.
+[2855.720 --> 2860.720] In spoken language, I have the possibility of replicating one character's intonation
+[2860.720 --> 2863.720] and then another character's intonation.
+[2870.720 --> 2876.720] Then, what about gesture? Gesture does not have tense markers and pronouns, etc.,
+[2876.720 --> 2881.720] but I do have the mannerisms of the characters to enact. (?)
+[2882.720 --> 2889.720] So, what happens is that the different means, or the different media,
+[2889.720 --> 2895.720] will have their own resources for these manifestations of multiple points of view.
+[2895.720 --> 2898.720] So, in written language and in spoken language,
+[2898.720 --> 2901.720] for example, one of the resources we have is, say,
+[2901.720 --> 2905.720] this construction with "now" plus the past, with which you manage to produce
+[2906.720 --> 2910.720] this distancing of point of view; or you manage, by switching
+[2910.720 --> 2915.720] verb tenses, to change the point of view as well; or by the way you refer,
+[2915.720 --> 2919.720] as when you use "Mommy" in the middle of free indirect discourse
+[2919.720 --> 2924.720] to show, to reveal, the point of view of a particular character;
+[2924.720 --> 2930.720] or, in spoken language as well, you manage to replicate the intonation
+[2931.720 --> 2935.720] of what a person said to you; or in gesture, where you manage,
+[2935.720 --> 2940.720] with your body, to perform different characters at the same time.
+[2940.720 --> 2944.720] So, each of the different media will have its own means
+[2944.720 --> 2950.720] by which these multiple points of view can be expressed and negotiated.
+[2953.720 --> 2985.320] [largely unintelligible; the take-home message, continued below, is that the resources differ across media, but all of them support the construction of different points of view]
+[2985.320 --> 2989.320] We manage to build a complex network of points of view,
+[2989.320 --> 2993.320] for narrative, in which, if in conversation
+[2993.320 --> 2997.320] I have my own point of view, I can still blend it with the other speaker's;
+[2997.320 --> 3019.320] [a garbled stretch] and likewise, I have the narrator's point of view
+[3019.320 --> 3021.320] alongside the point of view of a character,
+[3021.320 --> 3023.320] and what happens in the end is that
+[3023.320 --> 3027.320] we have a complex network of points of view,
+[3027.320 --> 3031.320] just as it is revealed in language.
+[3031.320 --> 3034.320] And that is everything, for now.
+[3034.320 --> 3035.320] Thank you very much.
+[3035.320 --> 3037.320] And that is it.
+[3037.320 --> 3039.320] Thank you.
+[3039.320 --> 3049.320] Do we have questions?
+[3049.320 --> 3062.440] Just one clarification: at the beginning we were having quite a bit of trouble with
+[3062.440 --> 3066.440] the delay, and we thought it would get in the way of the translation a little.
+[3066.440 --> 3075.840] So we took this final part precisely to give a recap of everything
+[3075.840 --> 3077.840] that she said.
+[3078.840 --> 3081.840] Any questions?
+[3081.840 --> 3084.840] We have questions.
+[3091.840 --> 3094.840] Hello, I do not know whether you know me.
+[3094.840 --> 3095.840] Is this on?
+[3095.840 --> 3101.840] I am Renata Mancini, and first of all, thank you for the lovely lecture. (?)
+[3102.840 --> 3107.840] Is it... is it better? Is it better?
+[3107.840 --> 3109.840] Now, here it is.
+[3109.840 --> 3120.720] My question is not so much a question as a curiosity, because I was wondering whether
+[3180.720 --> 3256.740] [unintelligible; the rest of the question and the beginning of the answer are too garbled to recover]
+[3256.740 --> 3262.260] Whether the theory accommodates this: Professor Eve's answer was yes. (?)
+[3262.260 --> 3268.100] And she gave as an example a piece of research she conducted with a professor at the university
+[3268.100 --> 3269.820] in San Diego, Rafael Núñez.
+[3269.820 --> 3275.060] They worked with a language spoken in Chile, a language called Aymara.
+[3275.060 --> 3282.100] And in that language, the past is in front and the future is behind.
+[3282.100 --> 3287.100] The reverse of what we have in our minds, right?
+[3287.100 --> 3294.900] The ego is still the center, but the future is in front and the past is behind.
+[3294.900 --> 3296.900] Sorry?
+[3296.900 --> 3302.740] The future is behind and the past is in front.
+[3302.740 --> 3304.740] Sorry, that is right.
+[3304.740 --> 3309.180] My Western bias is getting in my way here.
+[3309.180 --> 3316.060] But then she gave other examples: for instance, in some cultures, with pointing it does not matter whether it is like this
+[3316.060 --> 3319.580] or like this; pointing can work either way.
+[3319.580 --> 3327.300] You can... you are still pointing, and you can point with your mouth, doing this, or with your head,
+[3327.300 --> 3333.820] if your hands are full of materials, full of books, and someone asks you where such-and-such a thing is.
+[3333.820 --> 3336.180] And you go: over there.
+[3336.180 --> 3345.180] So, regardless of the way you do it, you end up performing the gesture of pointing, for example.
+[3345.180 --> 3352.180] For us it is different; we can do it like this, like this, like this.
+[3353.180 --> 3356.180] More questions?
+[3362.180 --> 3364.180] Right.
+[3367.180 --> 3369.180] Bad connection.
+[3374.180 --> 3376.180] OK.
+[3376.180 --> 3380.180] OK, so, let's take a second. (?)
+[3383.180 --> 3385.180] OK.
+[3385.180 --> 3389.180] So, thank you for your lecture.
+[3389.180 --> 3392.180] It was great.
+[3392.180 --> 3394.180] Thank you.
+[3394.180 --> 3397.180] Yes, thank you.
diff --git a/transcript/allocentric_2vwQyeV-LQ4.txt b/transcript/allocentric_2vwQyeV-LQ4.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ce1d45128b920452c9cff2017a1a199b0ecdc51f
--- /dev/null
+++ b/transcript/allocentric_2vwQyeV-LQ4.txt
@@ -0,0 +1,774 @@
+[0.000 --> 10.320] Welcome.
+[10.320 --> 12.160] Thank you for joining me this afternoon.
+[12.160 --> 20.560] I'm Linda Silverman and I'm happy to share the visual spatial learner concept with you.
+[20.560 --> 22.880] How many of you have heard this before?
+[22.880 --> 24.880] Visual spatial?
+[24.880 --> 27.640] Oh, that's why you're here.
+[27.640 --> 29.640] How many of you are visual spatial?
+[30.640 --> 32.640] You're not sure.
+[32.640 --> 35.640] Well, we'll look at a slide.
+[35.640 --> 41.640] How many of you are like this lady over here with the file cabinets where everything's neat?
+[41.640 --> 45.640] How many of you are more like that fellow?
+[45.640 --> 49.640] Definitely a visual spatial crowd here.
+[50.640 --> 59.520] I don't think of visual spatial learners as being disorganized.
+[59.520 --> 64.040] I think of them as being differently organized.
+[64.040 --> 74.960] So if you are a neatnik, if you're like the very orderly person on the left and you want
+[74.960 --> 83.160] to straighten out the materials of someone like that in your life, someone you live with
+[83.160 --> 89.640] or someone you teach with, they will never find what they're looking for again because
+[89.640 --> 94.160] there are filers and there are pilers.
+[94.160 --> 100.040] And the people who make piles know what day of the week they put it down there and they
+[100.040 --> 104.200] know how far down the pile to look.
+[104.200 --> 110.040] And if you try to organize them the way you are organized, they can't ever find anything
+[110.040 --> 112.080] again.
+[112.080 --> 121.280] So the point of the cartoon is really to show that there are different organizational
+[121.280 --> 122.280] systems.
+[122.280 --> 131.960] It doesn't mean that the neat woman is the smart one and the disorganized male is not
+[131.960 --> 133.120] smart.
+[133.120 --> 138.120] There are just different ways of being in the world.
+[138.120 --> 145.920] This gentleman has more potential for creativity.
+[145.920 --> 149.280] So that's not to be looked down upon.
+[149.280 --> 153.800] It's just because the organization system is different.
+[153.800 --> 156.520] But I am not a real visual spatial learner.
+[156.520 --> 164.400] You people are the real experts, the ones who identify as being visual spatial, because
+[164.400 --> 170.280] you live it and you have it from the inside out.
+[170.280 --> 171.400] I don't.
+[171.400 --> 174.240] I am spatially impaired.
+[174.240 --> 183.240] So my job is just to help people who are like me understand people who are like you.
+[183.240 --> 186.560] And I'll give you a perfect example.
+[186.560 --> 196.160] I went to the gas station and I was late for work and I went, I mean you can probably
+[196.160 --> 202.200] tell in the picture that I went up to the gas tank and got out of the car and realized
+[202.200 --> 209.000] that the gas tank was on one side, the gas pumps were on the other side.
+[209.000 --> 217.320] So I got back in the car and I drove around and when I got out of the car the gas pump
+[217.320 --> 222.920] was on one side, the gas tank was still on the opposite side.
+[222.920 --> 231.200] So I got back in the car, I drove around the pump again and I got out of the car and
+[231.200 --> 234.000] I still had them in the wrong position.
+[234.000 --> 240.840] By this time the guys inside the shop were laughing so hard, I couldn't even get gas,
+[240.840 --> 242.960] I was too embarrassed.
+[242.960 --> 249.080] And I ended up going 35 miles on the freeway with an empty gas tank.
+[249.080 --> 251.080] It's true.
+[251.080 --> 256.320] So I don't come to this from internal knowing.
+[256.320 --> 263.280] But I did notice that a lot of the people who wrote about the visual spatial experience
+[263.280 --> 273.480] were males who had been damaged by the school system, damaged and marginalized and made
+[273.480 --> 276.000] to feel bad about themselves.
+[276.000 --> 280.120] And they were not fond of teachers.
+[280.120 --> 292.040] And I was a classroom teacher and I am more sequential, and so I thought, okay, I can be
+[292.040 --> 293.520] the translator.
+[293.520 --> 301.600] I can be the medium for the people who can't explain how they think, how they get to their
+[301.600 --> 303.000] answers.
+[303.000 --> 305.120] They can't show their work.
+[305.120 --> 310.400] They just know, and they don't know how they know, but they just know.
+[310.400 --> 319.640] So when you hear the word visual spatial, just what comes to mind for you? Just shout out
+[319.640 --> 322.400] something.
+[322.400 --> 324.400] 3D.
+[324.400 --> 327.200] Creative.
+[327.200 --> 334.120] I'm repeating because we're filming this and I want everyone to be able to hear it.
+[334.120 --> 336.400] No concept of time?
+[336.400 --> 338.400] Yeah, that's true.
+[338.400 --> 342.200] There's a reason for that.
+[342.200 --> 347.440] Time is processed in the left hemisphere.
+[347.440 --> 350.560] The right hemisphere lives in the eternal now.
+[350.560 --> 353.000] There is no time.
+[353.000 --> 359.240] So if you have no sense of time you're hanging out in your right hemisphere, which is what
+[359.240 --> 361.200] these people do.
+[361.200 --> 367.280] And if you're very time conscious you're hanging out in your left hemisphere and time runs
+[367.280 --> 368.800] your life.
+[368.800 --> 373.600] Time dictates how you should spend your life.
+[373.600 --> 380.040] But there are people who really do not have any time consciousness.
+[380.040 --> 382.440] What else do you think of?
+[382.440 --> 383.440] Yes.
+[383.440 --> 390.040] I'm sorry, I couldn't hear that.
+[390.040 --> 393.040] Ah, the helicopter view.
+[393.040 --> 394.520] Oh, the helicopter view.
+[394.520 --> 396.480] Yes, yes, yes, yes.
+[396.480 --> 397.680] That's something I don't have.
+[397.680 --> 398.880] I can't do that.
+[398.880 --> 403.680] I can't imagine what a building looks like from the top down.
+[403.680 --> 405.160] Don't have that capacity.
+[405.160 --> 411.080] How many of you can imagine what's up here or what's over there, and you know you have
+[411.080 --> 415.160] an internal map, internal sense of direction?
+[415.160 --> 417.280] I have none whatsoever.
+[417.280 --> 419.280] I have no idea where I am.
+[419.280 --> 424.640] I'd have to have someone take me by the arm to the toilet and get me back; otherwise I'd never
+[424.640 --> 425.960] be found again.
+[425.960 --> 431.360] Yes, it wasn't just that I didn't understand the words.
+[431.360 --> 438.040] I don't even understand the concept.
+[438.040 --> 439.040] What else?
+[439.040 --> 440.040] Imagination.
+[440.040 --> 444.360] Oh yes, imagination.
+[444.360 --> 446.360] What else?
+[446.360 --> 455.880] Well, I'll share with you what some of the teachers that I have worked with
+[455.880 --> 464.520] have said when I've asked them this question: artistic, mathematical, blessed the computer,
+[464.520 --> 473.840] great imagination, laughter, needs more time, wonderful synthesizer.
+[473.840 --> 478.200] I need to see you when you're talking.
+[478.200 --> 484.040] Puts things together without the directions.
+[484.040 --> 497.320] Chess club, can't spell, scattered, doesn't show the work, and illegible handwriting.
+[497.320 --> 501.360] Does that fit? Yes?
+[501.360 --> 511.000] Is it possible that someone is a strong visual spatial learner and is very good at spelling?
+[511.000 --> 515.920] Yes, I knew you were going to ask that question, because I was just thinking about the
+[515.920 --> 518.920] man who wrote to me.
+[518.920 --> 523.760] I put out a question on my website when I was writing the book Upside-Down Brilliance
+[523.760 --> 526.320] about, do you relate to these?
+[526.320 --> 529.160] Can you tell, share your stories?
+[529.160 --> 538.120] And this man said, I relate to everything except I'm such a good speller, and he misspelled
+[538.120 --> 544.760] three words in those little two sentences.
+[544.760 --> 548.640] But I can't explain people who really can spell.
+[548.640 --> 552.360] It's called a photographic mind.
+[552.360 --> 560.840] So if you can see it and you have that photographic image, you can remember how to spell it.
+[560.840 --> 567.280] And that is the secret to teaching spelling to visual spatial learners.
+[567.280 --> 570.360] You have to get them to visualize.
+[570.360 --> 575.520] Can't visualize, can't spell, because they can't sound it out.
+[575.520 --> 578.040] They have to see it.
+[578.040 --> 579.440] Do you see the words?
+[579.440 --> 580.440] Yes.
+[580.440 --> 581.880] That's the secret.
+[581.880 --> 585.800] But I did immediately think about this man who said, yeah, but I spell.
+[585.800 --> 588.880] I really can spell, and he couldn't.
+[588.880 --> 590.880] Yes. Yes.
+[590.880 --> 604.280] Oh, well, I don't know what it's like in the Netherlands, but in the United States, all
+[604.280 --> 611.560] of our teachers are taught and all of our achievement tests say you have to show the
+[611.560 --> 614.920] steps that you took to get to your answer.
+[614.920 --> 618.840] If you can't show your work, you don't know anything.
+[618.840 --> 622.840] You have to demonstrate how you got to the answer.
+[622.840 --> 630.700] Well, if you didn't take a series of steps to get to an answer, how can you show your work?
+[630.700 --> 634.040] Your work is, ah, I see it.
+[634.040 --> 636.200] That's your work.
+[636.200 --> 637.720] You see it all at once.
+[637.720 --> 640.360] You just see it in your head.
+[640.360 --> 647.780] And the people who write the American textbooks, the people who teach the teachers how to teach,
+[647.780 --> 654.100] and the people who write the achievement tests all believe that everyone takes a series
+[654.100 --> 656.800] of steps to get to an answer.
+[656.800 --> 660.040] So you should be able to show your work.
+[660.040 --> 662.080] That's what that means.
+[662.080 --> 666.080] So, ah, let's see if you're a visual spatial learner.
+[666.320 --> 675.280] I want you to write down, if you have paper, you know, how many of these fit you.
+[675.280 --> 681.280] This is a short list, but just make a tally mark.
+[681.280 --> 685.600] Are you a big picture thinker?
+[685.600 --> 689.640] Do you solve problems in unusual ways?
+[689.640 --> 692.240] Do you learn concepts all at once?
+[692.240 --> 696.040] You get this, ah-ha, I got it.
+[696.600 --> 701.360] Do you need to see relationships in order to learn?
+[701.360 --> 705.000] Do you have a vivid imagination?
+[705.000 --> 709.080] Can you feel what others are feeling?
+[709.080 --> 712.440] Are you good at reading maps?
+[712.440 --> 716.840] Do you often lose track of time?
+[716.840 --> 719.440] Do you struggle with spelling?
+[719.440 --> 723.960] Are you organizationally impaired?
+[724.000 --> 729.480] Now you don't have to fit all of them, but if you fit the majority,
+[729.480 --> 732.280] you probably are more visual spatial.
+[732.280 --> 735.200] How many of you fit half of them?
+[735.200 --> 736.880] Half fit you.
+[736.880 --> 740.240] How many more than half fit you?
+[740.240 --> 745.200] Yeah, you're much more of a visual spatial audience.
+[745.200 --> 750.040] So, this is going to be group therapy, I guess.
+[750.040 --> 759.120] So maybe when the videotape is done and it gets posted on YouTube,
+[759.120 --> 762.760] more people will understand what you already know.
+[762.760 --> 766.840] So, how many of you are teachers?
+[766.840 --> 771.200] Okay, so how can you tell if your students are visual spatial?
+[771.200 --> 774.240] I mean, this is one of the cartoons.
+[774.240 --> 779.480] I'm going to show you a series of cartoons from Upside-Down Brilliance.
+[779.480 --> 785.840] Do they know things without being able to explain how or why?
+[785.840 --> 787.040] How did you get this answer?
+[787.040 --> 789.760] I don't know, I just know.
+[789.760 --> 792.400] Do they lose track of time?
+[792.400 --> 797.680] Do they have difficulty with timed tests?
+[797.680 --> 806.960] Do they remember what they see but forget what they hear?
+[806.960 --> 814.000] And then, do they have the most creative reason for not having their homework done that
+[814.000 --> 817.720] you have ever encountered in all your years of teaching?
+[817.720 --> 822.480] Don't you think they get extra credit for real creative excuses?
+[822.480 --> 826.600] I think they should get extra credit.
+[826.600 --> 830.920] So who are these visual spatial learners?
+[830.920 --> 834.040] They are the children we call twice exceptional.
+[834.040 --> 836.840] They're gifted with learning disabilities.
+[836.880 --> 842.520] They're the underachievers who aren't exactly doing what we would like them to do.
+[842.520 --> 851.000] They're your creative learners, your artists, musicians, mathematicians and builders, and
+[851.000 --> 853.760] your future surgeons.
+[853.760 --> 861.200] You really want your surgeon to know where everything is in relation to everything else and
+[861.200 --> 865.040] put things back exactly where they were.
+[865.040 --> 867.160] It's a visual art.
+[867.160 --> 874.040] And in order to get into the field of surgery, you have to take a spatial test to show that
+[874.040 --> 875.800] this is something you can do.
+[875.800 --> 877.880] I would not make a good surgeon.
+[877.880 --> 880.680] You don't want me operating on you.
+[880.680 --> 885.000] So how many of you believe in learning styles?
+[885.000 --> 891.720] And how do you differentiate for students in your classroom with different learning styles?
+[891.720 --> 898.720] What models or methods do you use, or how do you help kids who learn differently?
+[898.720 --> 901.720] Yes.
+[901.720 --> 913.720] Oh, thank you. Thank you.
+[913.720 --> 935.720] Do you have a good role in the children's level of teaching?
+[935.720 --> 937.720] Will you say that again, please?
+[937.720 --> 945.960] We teach with goals, and the pupils choose how they will learn for that goal in their
+[945.960 --> 949.720] own manner and learning style.
+[949.720 --> 951.720] That's wonderful.
+[951.720 --> 955.200] Who else?
+[955.200 --> 958.200] Thank you. Ma'am?
+[958.200 --> 962.400] Yes.
+[962.400 --> 967.000] I differentiate in the instruction methods.
+[967.000 --> 973.120] Sometimes I do it verbally, but with all the children I use pens so that they see how the
+[973.120 --> 977.000] figures are visualized.
+[977.000 --> 979.560] I had trouble hearing that.
+[979.560 --> 982.760] The instruction I give in different modes.
+[982.760 --> 989.360] Sometimes it's just verbal instruction and sometimes I use pens to visualize how it's
+[989.360 --> 990.360] built up.
+[990.360 --> 995.720] You're aware that some learn better verbally, some learn better visually.
+[995.720 --> 1002.440] So I have heard several times since I got here that a lot of people are using the multiple
+[1002.440 --> 1005.800] intelligences model by Howard Gardner.
+[1005.800 --> 1009.760] How many of you are using Gardner's model?
+[1009.760 --> 1013.720] How many of you have been taught Gardner's model?
+[1013.720 --> 1020.720] So how many intelligences are there?
+[1020.720 --> 1030.480] Eight, nine, ten?
+[1030.480 --> 1032.200] It's a little confusing, isn't it?
+[1032.200 --> 1036.840] It depends on what day it is.
+[1036.840 --> 1042.920] The intelligences have evolved and changed over the years.
+[1042.920 --> 1051.760] There are the original seven intelligences in Frames of Mind, which came out in 1983. +[1051.760 --> 1062.680] And that includes linguistic, which is your verbal, musical, logical mathematical, spatial, +[1062.680 --> 1067.960] bodily kinesthetic, interpersonal and intrapersonal. +[1067.960 --> 1072.400] And then afterwards some new intelligences came about. +[1072.400 --> 1077.000] Do you know what the new ones were? +[1077.000 --> 1079.200] Spiritual, somebody asked. +[1079.200 --> 1080.520] Oh, natural. +[1080.520 --> 1087.800] Yeah, it sounds like it should be, or at least naturalistic, but it's the naturalist. +[1087.800 --> 1091.160] It doesn't have the same grammar. +[1091.160 --> 1099.440] But existential is one that has almost made it, but we're not quite sure. +[1099.440 --> 1105.800] I'm not sure how he decides when something is in or out, but last time I heard that it +[1105.800 --> 1107.720] was very close to being in. +[1107.720 --> 1111.240] How many of you thought spiritual was one? +[1111.240 --> 1120.800] It was, but it got canned because Gardner said that spirituality is not universal. +[1120.800 --> 1125.520] Okay. +[1125.520 --> 1134.800] So a lot of people think that because I'm the visual spatial learner person, that it comes +[1134.800 --> 1140.240] out of this model, but actually it doesn't. +[1140.240 --> 1148.960] The way in which I'm looking at visual spatial is through hemisphericity, not through multiple +[1148.960 --> 1151.160] intelligences. +[1151.160 --> 1156.640] And there is some overlap between Gardner's spatial intelligence and the visual spatial +[1156.640 --> 1158.600] learner, obviously. +[1158.600 --> 1167.320] But you notice that there is one major word missing in Gardner's, and that's the word visual. +[1167.320 --> 1176.720] So that visual piece is not a part of that model. +[1176.720 --> 1183.480] There was another multiple intelligences model before Gardner, and I'm seeing a couple +[1183.480 --> 1185.240] of nodding heads. +[1185.240 --> 1194.280] How many of you were exposed to Guilford, J.P. Guilford, and his structure of intellect? +[1194.280 --> 1201.680] He had at one time 120 intelligences. +[1201.680 --> 1210.360] And then before he died, he split figural into auditory figural and visual figural, +[1210.360 --> 1214.240] and ended up with 150 intelligences. +[1214.240 --> 1217.240] I wonder how many Gardner will have. +[1217.240 --> 1223.280] But Guilford's model was all the rage in the United States when I was teaching at the University +[1223.280 --> 1224.560] of Denver. +[1224.560 --> 1232.240] And we had to have all of our students learn the little names, acronyms, that went with +[1232.240 --> 1233.840] each cell. +[1233.840 --> 1239.720] So evaluation of figural units was EFU. +[1239.720 --> 1245.320] And they had to learn this when they were in graduate school in gifted education. +[1245.320 --> 1247.200] They didn't like that much. +[1247.200 --> 1256.800] But it has an interesting shape because Guilford was visual spatial. +[1256.800 --> 1264.160] How many of you know about Bloom's taxonomy? It's totally linear sequential. +[1264.160 --> 1266.760] Compare that to this. +[1266.760 --> 1267.760] You've got a cube. +[1267.760 --> 1269.760] It's got dimensionality. +[1269.760 --> 1274.520] This is a visual spatial thinker. +[1274.520 --> 1279.360] Gardner's model is sequential.
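(A quick note on the arithmetic behind Guilford's totals: the cell count is the product of his three dimensions, operations times contents times products. The dimension sizes in the sketch below are the commonly cited Structure of Intellect figures and are an assumption on my part; the talk itself only gives the totals 120 and 150.)

```python
# Hedged sketch: reproduce the 120 and 150 totals quoted for Guilford's
# Structure of Intellect. The dimension sizes here are assumed (commonly
# cited figures), not stated in the talk.
operations = 5         # cognition, memory, divergent production, convergent production, evaluation
products = 6           # units, classes, relations, systems, transformations, implications
contents_original = 4  # figural, symbolic, semantic, behavioral
contents_split = 5     # figural split into auditory figural and visual figural

print(operations * contents_original * products)  # 120, the earlier total
print(operations * contents_split * products)     # 150, the total after the split
```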
+[1279.360 --> 1286.040] So now we're going to change all together and talk about a whole other realm, which is +[1286.040 --> 1287.760] personality type. +[1287.760 --> 1292.760] How many of you know your personality type on the Myers-Briggs? +[1292.760 --> 1296.240] Sometimes it's called the MBTI. +[1296.240 --> 1298.240] What are you? +[1298.240 --> 1300.240] I-N-F-P, that's the gifted type. +[1300.240 --> 1302.240] What are you? +[1302.240 --> 1303.240] Yes. +[1303.240 --> 1304.240] Pardon? +[1305.240 --> 1306.240] I-N-T-P. +[1306.240 --> 1307.240] OK. +[1307.240 --> 1316.800] I-N-T-Ps make great college professors, but they don't ever write tests that anyone understands. +[1316.800 --> 1322.760] They're so intellectual, the concreteness that the students are looking for usually isn't +[1322.760 --> 1323.760] there. +[1323.760 --> 1327.680] But I-N-T-P and I-N-F-P are two gifted profiles. +[1327.680 --> 1328.680] Who else? +[1328.680 --> 1329.680] Be brave. +[1330.680 --> 1334.200] You had your hand up, for the raise. +[1335.200 --> 1336.200] What are you? +[1336.200 --> 1337.200] I don't know. +[1337.200 --> 1338.200] You don't know. +[1338.200 --> 1339.200] I'm a mix. +[1339.200 --> 1340.200] I'm a mix. +[1340.200 --> 1344.200] And sometimes I'm just like that. +[1344.200 --> 1347.200] Sometimes you're one, sometimes you're the other. +[1347.200 --> 1348.200] OK. +[1348.200 --> 1351.440] So you're in the middle. +[1351.440 --> 1362.080] The introverted, intuitive, feeling, perceiving is the most typical gifted child and gifted +[1362.080 --> 1364.480] adult profile. +[1364.480 --> 1370.960] And the extroverted, sensing, thinking, judging in the United States is the most typical +[1370.960 --> 1372.960] teacher profile. +[1372.960 --> 1379.160] So there's a real mismatch between the typical student and the typical teacher. +[1379.160 --> 1384.280] The reason this is up here when we're talking about learning styles is that there are a +[1384.280 --> 1393.120] lot of books about using the personality types as a basis for teaching styles and learning +[1393.120 --> 1395.320] styles in the classroom. +[1395.320 --> 1400.760] Have any of you ever seen learning styles based on the Myers-Briggs? +[1400.760 --> 1402.920] There are really good books on this. +[1402.920 --> 1412.000] So if we go by the Myers-Briggs, there are 16 different learning styles based on the +[1412.000 --> 1415.920] 16 different personality types. +[1415.920 --> 1423.000] If we go by Gardner's model, we've got eight and three quarters, maybe nine different +[1423.000 --> 1427.360] learning styles based on the multiple intelligences. +[1427.360 --> 1433.000] If we go by Guilford, we've got 150 different intelligences. +[1433.000 --> 1438.920] And if we have 150 learning styles to go with them, that would be challenging. +[1438.920 --> 1446.480] But this is supposed to be the very best and most comprehensive learning styles inventory +[1446.480 --> 1448.640] that's ever been developed. +[1448.640 --> 1452.120] Are any of you familiar with Dunn and Dunn? +[1452.120 --> 1455.720] Dunn and Dunn's elements of learning style? +[1455.720 --> 1457.120] You are. +[1457.120 --> 1458.120] Have you ever tried it? +[1458.120 --> 1460.560] Have you ever done it in the classroom? +[1460.560 --> 1462.560] You haven't done Dunn and Dunn. +[1462.560 --> 1463.560] Okay. +[1463.560 --> 1467.760] So this is the most comprehensive.
+[1467.760 --> 1473.960] There's environmental, emotional, sociological, physical, psychological. +[1473.960 --> 1477.800] And then there are environmental elements. +[1477.800 --> 1479.600] Silence versus sound. +[1479.600 --> 1482.840] Are you more comfortable in a silent environment? +[1482.840 --> 1489.000] Bright versus low light, warm versus cool temperatures, formal versus informal design +[1489.000 --> 1490.960] of space. +[1490.960 --> 1496.640] Then there are emotional elements: motivation, persistence, responsibility, structure versus +[1496.640 --> 1498.080] options. +[1498.080 --> 1503.960] Then there are sociological elements: thinking and working with peers, alone, in pairs, in teams, +[1503.960 --> 1506.880] with adults, and in several ways. +[1506.880 --> 1513.760] And then there are physical elements: perceptual strengths, auditory, visual, tactile, kinesthetic, +[1513.760 --> 1516.800] with or without intake of food or drink. +[1516.800 --> 1521.840] And time of day or night, I had to decide to just put day or night. +[1521.840 --> 1525.720] Otherwise, if I had to put time in there, I couldn't have done this. +[1525.720 --> 1528.240] Mobility versus passivity. +[1528.240 --> 1533.920] And then there are the psychological elements: global versus analytic, hemispheric preference +[1533.920 --> 1537.800] and impulsivity versus reflectivity. +[1537.800 --> 1545.400] So if we tried to come up with the number of different learning styles that this would +[1545.400 --> 1553.520] generate, we would have eight environmental, eight emotional, six sociological, three +[1553.520 --> 1559.320] perceptual, six other physical and six psychological elements. +[1559.320 --> 1566.480] How many possible learning styles do you think there might be, according to Dunn and Dunn? +[1566.480 --> 1570.280] That's a very good guess. +[1570.280 --> 1574.240] 41,472. +[1574.240 --> 1581.040] I was a classroom teacher and there are a limited number of hours in the day. +[1581.040 --> 1587.880] And while I respect what all of my colleagues have accomplished in terms of raising awareness +[1587.880 --> 1593.880] about learning style and I appreciate their work, I have to believe that there is an easier +[1593.880 --> 1597.440] way to prepare for students with different learning styles. +[1597.440 --> 1605.520] So the model I'm sharing with you only has two parts. +[1605.520 --> 1611.720] One that talks to the left hemisphere, one that talks to the right hemisphere. +[1611.720 --> 1616.160] And I'm not planning on adding another hemisphere. +[1616.160 --> 1621.160] So it's not going to grow, it's not going to change, it's going to stay the way it is, +[1621.160 --> 1623.400] and it gets even better. +[1623.400 --> 1631.520] You don't have to worry about one of them because you already know how to reach +[1631.520 --> 1634.400] auditory sequential learners. +[1634.400 --> 1642.120] Those are the happy campers who come to school, bring you flowers, love your lessons, +[1642.120 --> 1645.560] love the homework, and are doing a great job. +[1645.560 --> 1649.560] They're enjoying school and it all works for them. +[1649.560 --> 1654.920] So I don't need to give you any advice at all about working with children who are good +[1654.920 --> 1661.040] step-by-step learners, who are good listeners, who attend to details. +[1661.040 --> 1668.640] They learn by trial and error, they teach in words, they learn in words and ideas.
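(An aside on the 41,472 figure quoted above: it is simply the product of the element counts the talk lists. The minimal sketch below just multiplies those counts; the category names follow the talk's own tallies.)

```python
# Verify the Dunn and Dunn learning-style count from the element
# counts given in the talk: 8 * 8 * 6 * 3 * 6 * 6.
element_counts = {
    "environmental": 8,
    "emotional": 8,
    "sociological": 6,
    "perceptual": 3,
    "other physical": 6,
    "psychological": 6,
}

total = 1
for count in element_counts.values():
    total *= count  # multiply the options across categories

print(total)  # 41472, matching the figure in the talk
```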
+[1668.640 --> 1673.160] If you ask what the right answer is, they know that there's a right answer that they +[1673.160 --> 1674.760] can get. +[1674.760 --> 1680.400] And they're time conscious, they get their homework in on time, and they're analytical. +[1680.400 --> 1688.960] So instead of talking about how to create an environment where both types of students +[1688.960 --> 1690.200] are happy, +[1690.200 --> 1695.800] I think we have to acknowledge the fact that one group of these students is already +[1695.800 --> 1701.200] happy, and one group of these students is not so happy. +[1701.200 --> 1704.360] They're not as happy coming to school. +[1704.360 --> 1706.680] They're not as engaged. +[1706.680 --> 1709.320] They're sometimes marginalized. +[1709.320 --> 1712.440] They sometimes feel stupid. +[1712.440 --> 1717.120] They are often not picked for the gifted programs. +[1717.120 --> 1725.840] They're the ones who are going to be just below the cut-off score to qualify for provisions. +[1725.840 --> 1729.720] And they're the kids that we're missing. +[1729.720 --> 1731.960] They are the cameramen. +[1731.960 --> 1735.600] They are the photographers. +[1735.600 --> 1738.720] They are the architects. +[1738.720 --> 1741.120] They are the engineers. +[1741.120 --> 1743.360] They are the builders. +[1743.360 --> 1749.600] They are the people who invent paradigm shifts, and they're important. +[1749.600 --> 1757.840] And we have to recognize that they exist and start to make school at least visual-spatial +[1757.840 --> 1759.920] friendly. +[1759.920 --> 1766.880] The good news about just thinking about this one group of children is that it's been +[1766.880 --> 1774.720] demonstrated that if you make learning more accessible for visual-spatial learners, +[1774.720 --> 1778.800] everybody in the classroom learns better. +[1778.800 --> 1787.520] So the things that you do for this one group also turn on the brain for all of the students. +[1787.520 --> 1790.560] So everybody benefits. +[1790.560 --> 1795.960] The visual-spatial learner learns more all at once, whole part learning. +[1795.960 --> 1802.240] They have to see the big picture, and then they can understand how the parts relate to +[1802.240 --> 1803.840] the whole. +[1803.840 --> 1806.520] They're very keen observers. +[1806.520 --> 1812.560] If you are wearing a colored contact lens, they're the ones that will say, weren't your +[1812.560 --> 1815.080] eyes brown yesterday? +[1815.080 --> 1820.520] If you change a bulletin board, they're the first ones to notice. +[1820.520 --> 1822.240] Big picture thinkers. +[1822.240 --> 1825.560] They get this aha moment. +[1825.560 --> 1832.600] They have strong images, and those who are not good visualizers have strong feelings +[1832.600 --> 1833.880] of knowing. +[1833.880 --> 1839.200] So some of them don't visualize; they just know intuitively or in their gut. +[1839.200 --> 1842.600] They come up with unusual solutions to problems. +[1842.600 --> 1847.000] They lose track of time, and they're intuitive. +[1847.000 --> 1854.920] And these are the kids that I'm hoping that we can pay more attention to. +[1854.920 --> 1862.680] And the person who influenced my thinking the most on this population is a brain researcher +[1862.680 --> 1867.120] in the United States named Jerre Levy. +[1867.120 --> 1870.160] There's a book called Left Brain, Right Brain. +[1870.160 --> 1872.080] I don't know whether any of you have come across it.
+[1872.080 --> 1874.800] Well, I see one nod. Springer and Deutsch. +[1874.800 --> 1880.920] They credit Jerre Levy with having discovered the functions of the left hemisphere and the +[1880.920 --> 1883.920] functions of the right hemisphere in her research. +[1883.920 --> 1886.800] She was still a graduate student. +[1886.800 --> 1894.480] And she said that unless the right hemisphere is activated and engaged, and this is not just +[1894.480 --> 1896.640] in visual spatial children, +[1896.640 --> 1905.120] this is in every human being, in every learner: unless the right hemisphere is activated and engaged, +[1905.120 --> 1910.320] attention is low, and learning is poor. +[1910.320 --> 1914.160] Because we all have both hemispheres. +[1914.160 --> 1921.000] Even if we bring our left hemisphere to school, our right hemisphere comes with it. +[1921.000 --> 1927.000] And if we want a student to be alert and engaged, we have to get that right hemisphere into +[1927.000 --> 1931.520] the act for all of our students. +[1931.520 --> 1938.320] So these are how the two hemispheres work differently. +[1938.320 --> 1946.000] The left hemisphere is sequential, analytic, and temporal, meaning time bound. +[1946.000 --> 1950.120] Time exists because of the left hemisphere. +[1950.120 --> 1957.720] And the right hemisphere is much more aware of space, spatial relations; it's holistic. +[1957.720 --> 1965.520] And instead of being analytic and breaking things down, it's synthetic and brings things +[1965.520 --> 1967.000] together. +[1967.000 --> 1971.320] And sees how the parts can relate to the whole. +[1971.320 --> 1976.800] How many of you have heard that the left hemisphere is also verbal? +[1976.800 --> 1979.760] We're taught that a lot. +[1979.760 --> 1982.920] I don't think that that's accurate though. +[1982.920 --> 1986.400] And I'm going to give you an example of this. +[1986.400 --> 1989.320] I want you to pretend that I'm your mother. +[1989.320 --> 1993.040] I am old enough to be most of your mothers anyway. +[1993.040 --> 1995.960] And I want you to pretend that you're nine years old. +[1995.960 --> 1997.360] Can you do that? +[1997.360 --> 2001.600] Okay, you're downstairs, I'm upstairs. +[2001.600 --> 2004.720] And this is what you see and hear. +[2004.720 --> 2007.560] Do you hear me? +[2007.560 --> 2010.800] Now what am I conveying to you? +[2010.800 --> 2019.280] What did you get out of my communication? +[2019.280 --> 2022.160] I'm angry. +[2022.160 --> 2025.520] How do you know I'm angry? +[2025.520 --> 2027.080] Tone of voice. +[2027.080 --> 2028.080] What else? +[2028.080 --> 2029.080] Volume. +[2030.080 --> 2031.080] What else? +[2031.080 --> 2032.080] Volume. +[2036.080 --> 2037.080] Yeah. +[2037.080 --> 2040.320] And my facial expression? +[2040.320 --> 2044.040] My hands on my hips, my body language. +[2044.040 --> 2049.320] Your left hemisphere doesn't process any of that. +[2049.320 --> 2053.040] Only your right hemisphere is aware of all these elements. +[2053.040 --> 2056.800] There's something else that your right hemisphere is aware of. +[2056.800 --> 2063.040] Your right hemisphere remembers what happened to you the last time I looked like that.
+[2063.040 --> 2068.400] Your right hemisphere is already figuring out what the consequences are going to be, +[2068.400 --> 2075.200] because it sees the big picture of what happened last time, what you're doing now, and +[2075.200 --> 2077.320] the trouble you're going to get into. +[2077.320 --> 2082.400] And what I'm going to do if you don't stop what you're doing that's getting me that +[2082.400 --> 2083.640] angry. +[2083.640 --> 2089.360] So the right hemisphere has the context. +[2089.360 --> 2098.720] In understanding verbal information, you have to have more than just an ability to decode +[2098.720 --> 2100.160] the words. +[2100.160 --> 2107.680] If your left hemisphere was all you had to work with and your right hemisphere wasn't operating, +[2107.680 --> 2112.280] the answer to my question would have been yes. +[2112.280 --> 2116.440] Do you hear me? +[2116.440 --> 2121.760] That left hemisphere is going to say yes, I hear you. +[2121.760 --> 2126.880] Because that's all that the left hemisphere got out of what I said. +[2126.880 --> 2133.960] It understood the words and it can produce words, and words are sequential. +[2133.960 --> 2138.120] If I said those same words out of order, I would have a thought disorder and you wouldn't +[2138.120 --> 2140.240] understand what I'm saying. +[2140.240 --> 2147.520] If you didn't understand the order of the words that I was saying, you couldn't follow +[2147.520 --> 2149.600] my discussion. +[2149.600 --> 2155.400] So for us to communicate, speech is sequential. +[2155.400 --> 2158.600] Listening is sequential. +[2158.600 --> 2161.600] It's auditory, sequential. +[2161.600 --> 2164.880] But it doesn't get at the full meaning. +[2164.880 --> 2169.680] You've got to have more than just an understanding of the words. +[2169.680 --> 2170.920] Do you hear me? +[2170.920 --> 2173.400] Yes, I hear you. +[2173.400 --> 2180.440] And that doesn't get at the meaning of what I just said. +[2180.440 --> 2187.800] The meaning came from your right hemisphere, from picking up all the rest of the information +[2187.800 --> 2190.560] and putting it together into a whole. +[2190.560 --> 2198.000] So the left hemisphere is dealing with the text, but the right hemisphere has the context. +[2198.000 --> 2206.280] The whole situation, background or environment relevant to something happening. +[2206.280 --> 2213.440] So the right hemisphere plays a very powerful role in understanding verbal communication. +[2213.440 --> 2217.160] Nonverbal is a part of verbal communication. +[2217.160 --> 2220.560] It gives you context. +[2220.560 --> 2226.240] The left hemisphere enables you to take things apart and analyze them and compare them. +[2226.240 --> 2229.360] And name them, name the parts. +[2229.360 --> 2234.720] But it's the right hemisphere that puts them all together and enables you to enjoy smelling +[2234.720 --> 2237.160] the flower. +[2237.160 --> 2243.040] So there are many, many gifts of our right hemisphere that we do not honor in school. +[2243.040 --> 2246.000] We're not teaching to these gifts. +[2246.000 --> 2249.400] We're not grading children on these gifts. +[2249.400 --> 2251.520] We're not giving them marks. +[2251.520 --> 2255.800] And they're not getting awards and excellence for these gifts. +[2255.800 --> 2261.040] But they're important life gifts. +[2261.040 --> 2266.360] You can't see the beginning of scientific; it became tific for some reason.
+[2266.360 --> 2275.520] But that said, scientific and technological proficiency, holistic and whole part thinking, +[2275.520 --> 2283.080] artistic expression, imagination, invention, discovery. +[2283.080 --> 2290.680] The bottom one, for some reason you can't see the top word, but that's emotional responsiveness. +[2290.680 --> 2298.320] And the D is missing, or I guess it's black: whole, holographic understanding, intuitive +[2298.320 --> 2301.240] knowledge and spirituality. +[2301.240 --> 2303.680] These are the gifts of the right hemisphere. +[2303.680 --> 2305.640] And they're pretty important gifts. +[2305.640 --> 2309.760] I want to talk about just one of them. +[2309.760 --> 2313.760] How important is intuition? +[2313.760 --> 2319.200] How important is intuition to you? +[2319.200 --> 2326.080] Has intuition ever saved your life or saved the life of someone you know? +[2326.080 --> 2328.760] So you'd say it's pretty important. +[2328.760 --> 2333.520] Do you give marks in intuition in school? +[2333.520 --> 2338.320] Do you develop children's intuition? +[2338.320 --> 2339.320] You do. +[2339.320 --> 2344.160] I think it happens automatically, but that's more that the children just, exactly what you +[2344.160 --> 2347.920] just gave with the facial expression and the arms and the thing. +[2347.920 --> 2351.400] I think children learn that really quickly in school. +[2351.400 --> 2352.400] They do. +[2352.400 --> 2353.920] It's true. +[2353.920 --> 2356.720] But we have to acknowledge it. +[2356.720 --> 2359.960] We have to say it's important. +[2359.960 --> 2367.360] We have to say that your intuition is valuable and good that you've got it and keep working +[2367.360 --> 2375.040] with it and keep counting on it because there is another way of knowing besides your logic. +[2375.040 --> 2379.360] Your intuition has the big picture. +[2379.360 --> 2384.720] It steps outside of time. +[2384.720 --> 2386.360] Think about that. +[2386.360 --> 2393.120] That's how it saves lives, because it knows what's going to happen. +[2393.120 --> 2395.240] Your logic doesn't. +[2395.240 --> 2400.160] Your logic lives in time and it can't know the future. +[2400.160 --> 2402.560] But your intuition can. +[2402.560 --> 2403.960] You have to listen to it. +[2403.960 --> 2409.840] How many of you have had experiences where your intuition told you something and you didn't +[2409.840 --> 2415.480] listen and you regret it? +[2415.480 --> 2421.120] Because your logical mind says, well, how do you know that? +[2421.120 --> 2423.760] And you can't answer the question. +[2423.760 --> 2424.920] How do you know that? +[2424.920 --> 2426.240] You just know. +[2426.240 --> 2432.320] You don't know how you know but you're getting a message and the message knows something +[2432.320 --> 2437.480] but you can't explain how it knows what it knows. +[2437.480 --> 2447.720] That is a very powerful part of what you are born with that needs to be honored and developed +[2447.720 --> 2455.480] for your safety, your future and the future of everyone in your life. +[2455.480 --> 2459.080] So now I'm going to talk about two students. +[2459.080 --> 2466.600] We're going to assume that student A has a certain set of skills that student B doesn't +[2466.600 --> 2472.800] have and we're going to assume that student B has a certain set of skills that student +[2472.800 --> 2474.920] A doesn't have. +[2474.920 --> 2478.800] So student A has neat handwriting. +[2478.800 --> 2482.360] Student B types 60 words a minute.
+[2482.360 --> 2485.040] Student A is good at spelling. +[2485.040 --> 2487.520] Student B is a good visualizer. +[2487.520 --> 2491.240] Student A has instant recall of facts. +[2491.240 --> 2494.720] Student B loves geometry and physics. +[2494.720 --> 2496.080] Student A is well-rounded. +[2496.080 --> 2499.840] Student B is brilliant in one area. +[2499.840 --> 2502.200] Student A is a convergent thinker. +[2502.200 --> 2504.480] Knows how to get to the right answer. +[2504.480 --> 2506.800] Student B is creative. +[2506.800 --> 2513.720] Student A is skilled at rote memorization and student B understands complex concepts. +[2513.720 --> 2515.800] Student A shows steps easily. +[2515.800 --> 2519.040] Student B sees the big picture. +[2519.040 --> 2521.000] Student A is a good analyzer. +[2521.000 --> 2523.960] B is a good synthesizer. +[2523.960 --> 2525.640] A is punctual. +[2525.640 --> 2529.400] B has a more fluid sense of time. +[2529.400 --> 2531.520] A follows directions well. +[2531.520 --> 2536.280] B is an excellent problem solver. +[2536.280 --> 2540.040] Which of these students has a higher grade point average? +[2540.040 --> 2541.560] Higher marks. +[2541.560 --> 2551.600] A. And which of these students do you think is more employable in the 21st century? +[2551.600 --> 2559.160] But we continue our traditions and we continue to teach what we're commanded to teach in +[2559.160 --> 2567.200] the way we're commanded to teach it because that's what we're expected to do as teachers. +[2567.200 --> 2578.160] And if we want to keep our jobs, we continue to make all of the A group the important ones. +[2578.160 --> 2582.960] And we don't spend as much time on the B group. +[2582.960 --> 2588.840] Now I'm making assumptions here and please correct me if this doesn't apply to the Netherlands +[2588.840 --> 2589.840] at all. +[2589.840 --> 2592.720] It may only be an American phenomenon. +[2592.720 --> 2600.400] But in American schools, you are rewarded for following directions, turning in assigned +[2600.400 --> 2607.440] work on time, memorization of facts, fast recall, showing the steps of your work, neat +[2607.440 --> 2614.360] legible handwriting, accurate spelling, punctuality, good organization and tidiness. +[2614.360 --> 2617.160] Are those values in a Dutch school? +[2617.640 --> 2618.640] Still. +[2618.640 --> 2619.640] Okay. +[2619.640 --> 2628.040] So, what jobs in adult life require this set of skills? +[2628.040 --> 2630.320] What are we training our kids to be? +[2630.320 --> 2631.320] Yes. +[2631.320 --> 2632.320] Teachers. +[2632.320 --> 2634.720] Teachers, you got it. +[2634.720 --> 2639.040] We're training all of these kids to be teachers. +[2639.040 --> 2643.720] There are other jobs that this will equip them to do. +[2643.720 --> 2652.560] Middle management, good executive secretary, accountant, auditor. +[2652.560 --> 2656.200] I mean, there are some things, some good things. +[2656.200 --> 2658.240] I'm not saying these are bad things. +[2658.240 --> 2660.440] I'm saying they're not enough. +[2660.440 --> 2663.400] How many of you teach gifted children? +[2663.400 --> 2671.240] Are they all going to become teachers or middle managers or accountants or bookkeepers? +[2671.240 --> 2672.560] Probably not. +[2672.560 --> 2681.240] So I've actually inquired at higher level technical institutes +[2681.240 --> 2685.080] what they're looking for in new hires. +[2685.080 --> 2687.240] What are the skills they want
+[2687.240 --> 2693.240] their new employees to have when they come into their positions? +[2693.240 --> 2696.200] And this is what I've been told. +[2696.200 --> 2705.560] If you want a job that's going to pay a considerable amount of money in a leadership position, these +[2705.560 --> 2710.240] are what you're going to have to come into that interview with. +[2710.240 --> 2715.640] The ability to predict trends. +[2715.640 --> 2720.080] The ability to grasp the big picture. +[2720.080 --> 2724.480] The ability to think outside the box. +[2724.480 --> 2728.160] Being a risk taker. +[2728.160 --> 2732.280] Problem finding as well as problem solving. +[2732.280 --> 2736.080] So that you find the problems to solve. +[2736.080 --> 2741.640] Combining your strengths with others' strengths to build a strong team. +[2741.640 --> 2744.160] Computer literacy. +[2744.160 --> 2746.600] Coping with complexity. +[2746.600 --> 2750.840] And the ability to read people well. +[2750.840 --> 2755.280] That's helpful if you're in some area where you have to sell your ideas. +[2755.280 --> 2759.880] You have to be able to read your audience. +[2759.880 --> 2763.200] Read the buyer. +[2763.200 --> 2770.680] Are we preparing our students for these higher level positions? +[2770.680 --> 2774.120] Are we giving them this set of skills? +[2774.120 --> 2775.720] We could. +[2775.720 --> 2780.920] If we weren't so worried about the other set of skills. +[2780.920 --> 2784.440] Because traditionally, that's what school was about. +[2784.440 --> 2785.440] Yes? +[2785.440 --> 2794.280] The mismatch. +[2794.280 --> 2810.040] I'd like you to say that again so they can pick it up on the video. +[2810.040 --> 2811.040] It's important. +[2811.040 --> 2812.920] What you just said is important. +[2812.920 --> 2819.920] So it's even worse because most students and pupils already know this is going on. +[2819.920 --> 2831.560] They know this list is becoming more important, that it becomes more and more important to have these qualities. +[2831.560 --> 2837.600] The gap between pupils and teachers becomes more and more obvious every day. +[2837.600 --> 2840.720] And then what happens to the student? +[2840.720 --> 2842.040] They lose interest. +[2842.040 --> 2843.040] They lack interest. +[2843.040 --> 2845.040] They become disengaged. +[2845.040 --> 2852.040] Just a sec. +[2852.040 --> 2859.240] Thank you. +[2859.240 --> 2865.640] I don't totally agree with the former speaker because I think it's the difference, the gap, +[2865.640 --> 2869.040] between the system and the wishes of teachers. +[2869.040 --> 2870.840] I believe that that's true. +[2870.840 --> 2882.840] I have heard enough stories in the few days I've been here to know that you're caught between the expectations of you as a teacher within the system. +[2882.840 --> 2890.040] And the knowledge that your students have that in order for them to get a job, they need something different. +[2890.040 --> 2894.040] I understand that this is not your fault. +[2894.040 --> 2900.040] I'm not blaming because I was a classroom teacher and I know what that's like. +[2900.040 --> 2903.040] And I was fired enough times that I know what it's like. +[2903.040 --> 2906.840] So yeah, it's not easy. +[2906.840 --> 2915.840] It's not easy being a teacher today caught between these different agendas and expectations. +[2915.840 --> 2917.440] That's hard. +[2917.440 --> 2925.040] So how do you add this to what you're doing so that you can keep your job but still prepare your students?
+[2925.040 --> 2927.040] Yes. +[2927.040 --> 2935.040] I think it's something we have to do because you also see this trend in business. +[2935.040 --> 2945.040] There is still, I was talking to her and I said, if you put this on your CV, then you won't get a job. +[2945.040 --> 2955.040] But on the other side, there are businesses growing at this moment who just want to have this on your CV and not the other one. +[2955.040 --> 2961.040] Because we have a lot of them in Holland at this moment and they're growing. +[2961.040 --> 2968.040] So we have to change it because the students won't fit into the new jobs. +[2968.040 --> 2978.040] Thank you. So much of what we've been doing has been to prepare students for jobs for a different century. +[2978.040 --> 2981.040] Not the century they're in. +[2981.040 --> 2987.040] And yes, you are stuck in a teaching position. +[2987.040 --> 2998.040] But if you can begin the dialogue with whoever makes the decisions about what gets taught in school, +[2998.040 --> 3001.040] maybe you can begin to change things. +[3001.040 --> 3004.040] Somebody has to start somewhere. +[3004.040 --> 3006.040] We all have to. +[3006.040 --> 3009.040] Right. +[3009.040 --> 3015.040] How many of you are familiar with Daniel Pink, A Whole New Mind? +[3015.040 --> 3019.040] These are some quotes from his book. +[3019.040 --> 3021.040] I never pronounce this word right. +[3021.040 --> 3025.040] Is it seismic or seismic? +[3025.040 --> 3033.040] There is a seismic, though as yet undetected, shift now underway in much of the advanced world. +[3033.040 --> 3043.040] We are moving from an economy and a society built on the logical, linear, computer-like capabilities of the information age, +[3043.040 --> 3055.040] to an economy and a society built on the inventive, empathic, big picture capabilities of what's rising in its place, the conceptual age. +[3055.040 --> 3065.040] Now one of the reasons why I think Daniel Pink can be helpful is that he's talking about an economic reality, +[3065.040 --> 3081.040] that the jobs that we're preparing students to hold in the 21st century are all going to be outsourced to other countries where they can get the labor cheaper. +[3081.040 --> 3091.040] And if we want the students to have jobs, if we want the Netherlands to be strong economically, +[3091.040 --> 3101.040] we're going to have to teach them to do and to think in ways beyond what can be outsourced. +[3101.040 --> 3113.040] And that I think, because the school system is an economic endeavor within the general economy of the country, +[3113.040 --> 3116.040] this can begin to reach people. +[3116.040 --> 3120.040] I think his words are very powerful. +[3120.040 --> 3125.040] He says the keys to the kingdom are changing hands. +[3125.040 --> 3137.040] The future belongs to a very different kind of person with a very different kind of mind, creators and empathizers, pattern recognizers and meaning makers. +[3137.040 --> 3153.040] These people, artists, inventors, designers, storytellers, caregivers, consolers, big picture thinkers will reap society's richest rewards and share its greatest joys. +[3153.040 --> 3159.040] That richest rewards is the piece that I think they'll understand. +[3159.040 --> 3177.040] What I notice in the United States is that all of the corporations with whom I deal, except the very biggest companies like Bank of America, are becoming more service oriented.
+[3177.040 --> 3190.040] And you go into a hotel and the answer to any question is yes, or you go into a restaurant and the answer is you got it or perfect. +[3190.040 --> 3205.040] People are being trained to be more aware of service, being more responsive to what the public needs, fearful of the ratings that they're going to get on the internet if they do a bad job. +[3205.040 --> 3210.040] Don't report us. Don't make us look bad. +[3210.040 --> 3227.040] So there is an economic benefit to the entire country and to the school system within the country to begin to be aware of the shifts in emphasis that are going on internationally. +[3227.040 --> 3237.040] It isn't enough to be a fast calculator. No one is going to wake you at four o'clock in the morning and say what's four times seven. +[3237.040 --> 3255.040] I mean, they're just not going to do that. There's a calculator now. And if a calculator can do it, we don't need to spend four years teaching somebody what a calculator can do. +[3255.040 --> 3271.040] Oh, my goodness. We have some missing pieces here. So how many of your students do you think are visual spatial? What would you guess based on what we've talked about? What percentage in your classroom? +[3271.040 --> 3291.040] Just a guess? What do you think? Over 50. Wow. I never would have guessed that. But I was wrong. But what would you think? Yeah. Pardon? 80%. Wow. +[3291.040 --> 3308.040] So maybe you do. I believe from what I've seen so far that you might be right. I think the Netherlands is more visual than the United States. I do. From what I've seen, I think you might be right. +[3308.040 --> 3327.040] I have data from the United States from our studies, but I never dreamed that there were that many students. So we invented a visual spatial identifier. And it has a self report and an observer report. +[3327.040 --> 3343.040] And I'm just giving you a few of the sample items. It's not a lot. It was developed for teachers. So we've only got, I think, 14 items altogether. And then we've got a longer one that we're using in a clinical setting. +[3343.040 --> 3369.040] It's got 36 items and that's for clinicians. But the teacher version and the student version have things like: I hate speaking in front of a group. I think mainly in pictures instead of words. I know more than others think I know. I have a hard time explaining how I came up with my answers. This one, I am good at spelling, is a not. +[3369.040 --> 3396.040] I have a wild imagination. It was easy for me to learn my math facts, not. And what we found with that last one was interesting. We picked up visual spatial girls who never memorized their math facts. It was a more gender fair question. I never would have guessed that that would turn out like that. But we got more girls in our sample with that question. +[3396.040 --> 3406.040] So a few of them are reversed, not many. And this is what it looks like. +[3407.040 --> 3410.040] These are the results of the study. +[3410.040 --> 3425.040] We worked with 4th, 5th and 6th graders in city schools and rural schools that were a mix of Caucasian and Hispanic. +[3425.040 --> 3450.040] A very large range of socioeconomic diversity. A lot of lower and lower middle class children in the sample. And about one third of them came out strongly visual spatial. Only a quarter of them came out strongly auditory sequential. And about 45% of them were mixed.
+[3450.040 --> 3478.040] So we took a look at the group that was mixed, that had a little of each. And we tried to see where their preferences were. And in that group twice as many of them leaned toward visual spatial. They weren't strong, but that was their preference. They leaned in that direction. Only 15% of them leaned toward auditory sequential. +[3478.040 --> 3491.040] So our research with 750 4th, 5th and 6th graders, white, Hispanic, urban, rural, all socioeconomic ranges, all IQ ranges. +[3491.040 --> 3505.040] We saw that more than 60% in an American school were visual spatial. I'm guessing that it would be higher here, just from the people that I've met. +[3506.040 --> 3520.040] And we found much higher percentages in gifted classrooms, among Navajo students, and in twice exceptional students. There's a school for gifted children with learning disabilities, a high school +[3521.040 --> 3544.040] in California. I think we found 87% of them were visual spatial. So if you had to give a guess about just all of the children in Holland, what percentage of all the children do you think might be visual spatial? +[3544.040 --> 3557.040] All of the students. I mean there's no way to be wrong here because we don't know what's right. So what's just your best guess? What do you think? +[3558.040 --> 3564.040] I want to see how applicable you think this concept might be here. Yes. +[3574.040 --> 3591.040] Why do we have methods in the Netherlands which are based on learning based on language instead of spatial learning, while we have so many students who prefer that? +[3591.040 --> 3611.040] Has it changed over the years? Yes. It certainly has in the United States. I don't know if it's changed here, but the percentage of visual spatial learners is increasing in the United States. Is it increasing here too, you think? +[3611.040 --> 3631.040] I think one of the reasons is that we are in an image oriented world. And that iconic world is increasing. The children are exposed to more visuals. They weren't maybe a generation ago. +[3631.040 --> 3649.040] School was much more verbal, not much nonverbal. So yeah, I think the whole society, look at how many children are playing visual games and playing with cell phones and playing with iPads. +[3649.040 --> 3661.040] We have a very visually oriented society, but our teaching methods haven't become more visual. The children have. +[3661.040 --> 3681.040] So we have prized these left hemispheric skills for thousands of years. We're using a traditional model that was handed down to us generation after generation after generation. +[3681.040 --> 3699.040] But the right hemispheric skills of imagery, computer literacy, using your mind as a camera, this is becoming more important in the 21st century. +[3699.040 --> 3715.040] And for us to help our students become employable, I really think we have to prepare them for the visually oriented creative careers that await them, particularly our gifted kids. +[3715.040 --> 3729.040] And I believe that the visual spatial learners are going to become our next generation of leaders. The ones who were marginalized in school and felt stupid are going to end up being in leadership positions. +[3729.040 --> 3746.040] So this really finishes the first half of this session, not this session, but my presentation. And I'm going to continue in the next session talking about specific strategies. +[3746.040 --> 3758.040] But I separated it out so that part one was about the theory and the construct.
And part two was about how to teach the children. +[3758.040 --> 3770.040] What questions do you have about all of this information that I shared today? Well, that's handy. +[3770.040 --> 3785.040] I wonder, is it possible that all gifted children or people are, from origin, visual spatial thinkers? I work with gifted adults. +[3785.040 --> 3799.040] And I sometimes get people into my room and they seem ultimately on the rational side. +[3799.040 --> 3819.040] And I often can help them by discovering the visual spatial abilities. +[3819.040 --> 3833.040] Is something known about that? I agree. I have to say yes, no, and yes. Many questions. +[3833.040 --> 3846.040] Were all of these children originally visual spatial? Yes. At some point in all of our development, we all were visual spatial. +[3846.040 --> 3859.040] And it is called eidetic memory, EIDETIC memory. And I probably misspelled that, didn't I? +[3859.040 --> 3874.040] Anyway, the eidetic memory is the early knowledge base that young children have until the age of around eight. +[3874.040 --> 3887.040] They learn visually, they take in information visually, they store it visually, they have almost a photographic memory. +[3887.040 --> 3903.040] But at about nine years old, something happens. At about nine years old, that left hemisphere really starts to kick in and take over. +[3903.040 --> 3923.040] And instead of the eidetic memory, you've got verbal mediation and categorical reasoning that supplants it. Eidetic memory goes only so far, developmentally, and then all of a sudden there's a switch. +[3923.040 --> 3952.040] And you start thinking with your left hemisphere, except the visual spatial learners. They don't stop. They don't make the switch. When everybody else becomes more auditory sequential, they don't give up that eidetic memory and start to use categorical, verbal, analytical reasoning in its place. +[3952.040 --> 3977.040] They keep that as their main way of knowing. But when you're gifted, something else happens. When you're gifted, you've got that left hemispheric, analytical, verbal connecting going on, the great ability to categorize. +[3978.040 --> 4002.040] And you also have the eidetic memory and the right hemisphere, and they work more complementarily. And the higher your intelligence, the higher your measured intelligence, the more likely you are to be visual spatial. +[4002.040 --> 4021.040] So when you do studies of the highly gifted, they lead with the visual spatial. And then they have no trouble going back and forth and back and forth because the brain is a very integrated organ, and it uses everything it has. +[4021.040 --> 4045.040] And so the fastest way to get to a solution is to take a picture of it in your mind, to see it, to see it all at once. And then if you have to explain it to somebody else, then you have to go back to that left hemisphere, and you have to do the translation and the integration. +[4045.040 --> 4058.040] So that the higher the intelligence, the more likely the person is to be both, but to have a visual spatial preference. +[4058.040 --> 4077.040] So I have two theories about your clients. You have both. You have both that left hemispheric facility, right, and you have the right hemispheric facility. And you have learned to integrate them. +[4077.040 --> 4090.040] My guess is that you attract people like yourself who are highly gifted, have both. And they're more likely to come to you, the highly gifted. +[4090.040 --> 4101.040] That's my guess.
There's another hypothesis, and that is also something I've been playing with in the last few days. +[4101.040 --> 4121.040] I think it's something about being Dutch. Serious. No, I'm serious, because I have noticed that the people I've had conversations with in the past few days think differently from Americans. +[4121.040 --> 4139.040] They think differently from people I've encountered in other countries. I've found a lot of people who think like you think in Denmark, but not a whole lot of people that I've talked with in other places, especially in the United States. +[4139.040 --> 4161.040] I have a feeling it has to do with being multilingual. There's something about being multilingual which I think somehow integrates, I don't know, but I think it integrates the hemispheres in some way that us monolinguals don't get. We don't have that. +[4161.040 --> 4179.040] You are always interacting with people of different linguistic backgrounds. We're not. Those synapses aren't firing. Now, we don't have that experience, but they do in Denmark. +[4179.040 --> 4207.040] I think being surrounded by different linguistic bases somehow is causing some integration of the right and left hemisphere that's unusual. It's just a hypothesis. I don't know what I'm talking about. I'm just trying to make sense of it: either all of you are highly gifted or there's something about being Dutch. +[4209.040 --> 4223.040] I don't know. How are we doing time wise? We have time? +[4223.040 --> 4252.040] I'm sorry, I couldn't hear her. One question? Yes. You told us that at nine years old, something happens with the left hemisphere. Is that because of the way we teach children, or is it also with children who don't have any schooling? +[4252.040 --> 4262.040] Oh, that we switch to the left hemisphere. That's a natural part of child development. Your right hemisphere develops first. +[4262.040 --> 4280.040] Thank you. So the right hemisphere is interacting with the world for the very first eight years of life. And then developmentally, the left hemisphere really starts to kick in around nine. +[4280.040 --> 4296.040] Have you noticed changes in children around nine? Something's different about nine, around nine. Yeah. +[4296.040 --> 4308.040] I wonder what would happen if we would have an education more directed towards the visual spatial learner? I wonder the same thing. Would we all become very gifted? +[4308.040 --> 4325.040] Maybe. If we integrate them. We're always hearing about how we only use a small percent of our intelligence. Maybe it's that right hemisphere that has all the gold in it that needs to be discovered and revealed and nurtured. +[4325.040 --> 4337.040] Maybe that's where all the rest of that brain power can come from. I'm guessing yes. I think I made a statement like that in Upside-Down Brilliance. +[4337.040 --> 4348.040] What would it be like if our whole school system, our whole structure of education worldwide became more visual spatial? +[4348.040 --> 4368.040] So that we have that left hemisphere analytical facility, but we also have the ability to visualize, the ability to synthesize, the ability to access our intuition, our intuitive knowing, our spirituality. +[4368.040 --> 4388.040] What if we had it all? What would life look like under those circumstances? It's a really good question. Isn't it? +[4388.040 --> 4400.040] Thank you. I would like to add something. In the way that you are an example of that, I missed one word, and it is joy, and humor. +[4400.040 --> 4409.040] Good. Very good.
Very important part of being in the right hemisphere. You're absolutely right. +[4409.040 --> 4427.040] I see it in every word you're saying. So I would thank you for that. And at the same time I would like to ask every teacher to start tomorrow with joy and humor in your classes. +[4427.040 --> 4435.040] You're completely right on. There's no wisdom without humor. +[4435.040 --> 4447.040] The right hemisphere actually is the part of our brain that understands humor. The left hemisphere can understand puns. +[4447.040 --> 4464.040] But the right hemisphere is what gets most of the jokes. And the joy, to feel joy. I don't know. I mean, I'm hearing different conflicting information about brain research that I don't understand. +[4464.040 --> 4479.040] But the book that suggests that you're right is the book by... now I'm blanking. It's My Stroke of Insight, by... who was it, who wrote that? +[4479.040 --> 4493.040] My Stroke of Insight. Jill Bolte Taylor. She says the same thing. She says if you want to know joy, you better step into your right hemisphere. +[4493.040 --> 4502.040] Because that's where it is. Yeah. And that book had a profound impact on me. It's a beautiful book. +[4502.040 --> 4517.040] If you haven't read it, what I'd recommend that you do is write down her name and look her up on her TED talk. It'll be the best 18 minutes you've spent in a long time. +[4517.040 --> 4539.040] It's Jill J. I. L. L. Bolte B-O-L-T-E Taylor T-A-Y-L-O-R. Jill Bolte Taylor. And you put that into YouTube or her TED talk will come up. +[4539.040 --> 4553.040] I must have watched it 40 times. And I get something different out of it every single time. She was, she's a brain researcher who experienced a massive left hemisphere stroke. +[4553.040 --> 4572.040] And then healed over a long period of time. And then was able to tell what happened to her. The spiritual awareness that came out of that loss of the left hemisphere completely. +[4572.040 --> 4593.040] It's so inspiring. And she does talk about peace and joy and humor. And then the other person who completely supports what you're saying is Robert Ornstein. And he wrote the book The Right Mind. +[4593.040 --> 4611.040] And he has pictures throughout the book; he talks about sharing these pictures with individuals with left hemispheric strokes and individuals with right hemispheric strokes. +[4611.040 --> 4624.040] And the people who had right hemispheric strokes did not understand what was going on in the pictures. And they couldn't, they couldn't understand cartoons. +[4624.040 --> 4637.040] They couldn't understand a lot of visual humor. They missed it completely. Because that right hemisphere was so important to humor. Appreciation of humor. +[4637.040 --> 4650.040] Yeah. So we're going to be talking a little bit more about that in the next session. What time are we supposed to stop? Now. Thank you. You've been very kind. +[4650.040 --> 4655.040] Thank you. diff --git a/transcript/allocentric_4F3xCBcsLFg.txt b/transcript/allocentric_4F3xCBcsLFg.txt new file mode 100644 index 0000000000000000000000000000000000000000..0c394c5a5fc666d1395d660f74c082c098da7d67 --- /dev/null +++ b/transcript/allocentric_4F3xCBcsLFg.txt @@ -0,0 +1,144 @@ +[0.000 --> 11.840] Hello, my name is Ashley Sellers. I'm a speech language pathologist and the owner and +[11.840 --> 18.360] operator of Speech Language and Beyond.
I'm coming to you today to introduce a video +[18.360 --> 26.600] of a session that I completed with a non-verbal child that is around three years old. I wanted +[26.600 --> 32.680] to do this video because I feel sometimes, as therapists, when we work with children that are non-verbal +[32.680 --> 38.800] or even with parents who have children that they're attempting to communicate with on a daily basis +[38.800 --> 44.480] that are non-verbal, we get so caught up in them using words that we're not paying attention to the +[44.480 --> 49.640] things that they're showing us that they do know or the ways that they are able to communicate. +[49.640 --> 56.320] Now mind you, I know the goal of the therapy is to lead them to the use of words, but we have +[56.320 --> 63.280] to be realistic in knowing that that may not come overnight, it may not even happen at all, or it may +[63.280 --> 69.160] not even happen when we expect it to. So I never promise parents that I can get their child to the +[69.160 --> 74.880] point that they are talking, but I can break down the ways that they are attempting to communicate +[74.880 --> 80.520] or the ways that they are building on their ability to be able to communicate. And I feel like a lot +[80.520 --> 86.280] of times we miss out on the things that they're showing us that they know and how they are attempting +[86.280 --> 91.960] to communicate with us, and when we miss out on those opportunities, we miss out on the things +[91.960 --> 97.720] that we can do to expand what they already know to get them closer to the point of being able to +[97.720 --> 104.960] use words as a way to communicate. So through this video, you're going to see the live recording; it was a +[104.960 --> 111.120] 20-minute session, really it was a 30-minute session, and I was only able to record 20 minutes of it, but out of +[111.120 --> 117.120] that 20 minutes, I've really just broken it down to where it's pretty much like maybe six minutes of +[117.120 --> 122.520] the therapy where I can highlight to you when the child made eye contact, when they followed +[122.520 --> 127.960] through on a command, when they attempted to communicate. I just really want you to look at the +[127.960 --> 134.600] video, pay attention to the ways the child is showing us, look, I hear you, I understand you, and I just +[134.600 --> 140.080] need more time to get to the point that I can use words, but I am attempting to communicate with you in +[140.080 --> 145.440] other ways. I hope this video helps. I hope that it provides some strategies of some things that you +[145.440 --> 150.600] can do at home or within your therapy session, and also to encourage you to let you know that you're +[150.600 --> 157.800] doing more than what you think you are doing to help your child. The key is we cannot push them past +[157.800 --> 163.480] the point that they are ready to communicate. When they're ready to communicate with us, they will give +[163.480 --> 169.960] us what they have. It is our job, whether they're using words or not at this particular point in time, +[169.960 --> 175.840] to stimulate their language, to build on their receptive vocabulary, to store the right information +[175.840 --> 181.920] within their long-term and short-term memory so that when they are to the point that they're ready to +[181.920 --> 189.240] give us that language, we've already demonstrated it to them within the appropriate context in order for them to +[189.240 --> 197.040] give it back.
All we have to do is be patient, be prayerful, and always put forth a lot of effort in our daily +[197.040 --> 203.920] routines to make sure that we're giving them numerous language opportunities. So I hope that this video helps. +[203.920 --> 210.680] If you have any questions, please feel free to contact me. My contact information will be listed below in the +[210.680 --> 213.120] description box. So thank you, and I hope that you're +[273.120 --> 275.120] doing well. +[275.120 --> 277.120] Video cam, let me show you another one. +[277.120 --> 280.120] I guess this one. What's this? +[280.120 --> 284.120] Look, camera, take a picture. +[284.120 --> 288.120] Cheese! You try. Camera. +[288.120 --> 292.120] Can you take a picture? Camera. +[292.120 --> 298.120] You use it to take a picture. You hold it up. +[298.120 --> 300.120] Cheese! +[300.120 --> 304.120] Now you can take the picture. Camera. +[304.120 --> 307.120] Camera. So look, here's the other one. +[307.120 --> 311.120] Video. Look at the video cam. +[311.120 --> 314.120] Look, so you have the video cam. +[314.120 --> 317.120] You use it to record so you can see. +[317.120 --> 320.120] You have the camera. Cheese! +[320.120 --> 325.120] Take pictures. See? +[325.120 --> 328.120] You put it to your eye. Take my picture. +[328.120 --> 331.120] Here. Can you take my picture? +[331.120 --> 335.120] Can you take my picture? +[335.120 --> 339.120] Hold it up. See? Let me take a picture. Say cheese! +[339.120 --> 343.120] Cheese! Camera. +[343.120 --> 347.120] So look, camera. +[347.120 --> 349.120] Video recorder. +[349.120 --> 352.120] Look, what else do I have now? I have a tool. +[352.120 --> 355.120] Look at this. What is that? +[355.120 --> 358.120] You see the screwdriver? +[358.120 --> 361.120] screwdriver. +[361.120 --> 366.120] And then here is a screw. +[366.120 --> 369.120] screwdriver and screw. +[369.120 --> 372.120] We're fixing something. Can you try? +[372.120 --> 375.120] screwdriver. +[375.120 --> 381.120] Good job. Put it ahead. +[381.120 --> 384.120] Alright, here we go. +[384.120 --> 387.120] What's this? +[387.120 --> 391.120] Look, eyes. +[391.120 --> 393.120] Eyes. +[393.120 --> 396.120] And then eyes. Those are your eyes. +[396.120 --> 402.120] Look, eyes. Put the eyes on for me. Where they go? +[402.120 --> 405.120] Put eyes here. +[405.120 --> 408.120] Eyes. +[408.120 --> 411.120] Look, can you put them right there? Eyes. +[411.120 --> 414.120] Hold it. +[414.120 --> 419.120] Look, eyes. Put them right here, Danyang. +[419.120 --> 422.120] Very good. Eyes. +[422.120 --> 426.120] You see with your eyes. Danyang, where are your eyes? +[426.120 --> 431.120] Eyes. Good job. Eyes. +[431.120 --> 434.120] Danyang, where's your nose? +[434.120 --> 436.120] Where's nose? +[436.120 --> 438.120] Look. +[438.120 --> 440.120] Nose. +[440.120 --> 442.120] Nose. +[442.120 --> 444.120] Nose. +[444.120 --> 446.120] Nose. +[446.120 --> 448.120] Where are you going to put his nose? +[448.120 --> 452.120] Where are you going to put his nose? +[452.120 --> 456.120] Where's nose? +[456.120 --> 464.120] There it is. Good job. Nose. +[464.120 --> 466.120] Nose. That's right. +[466.120 --> 467.120] Look. +[467.120 --> 468.120] Jar. +[468.120 --> 470.120] Jar. +[470.120 --> 473.120] Look what we're going to put in this jar. +[473.120 --> 476.120] Danyang, sit up. What is this? +[476.120 --> 478.120] What is this? +[478.120 --> 480.120] What is it? +[480.120 --> 482.120] Cookie? +[482.120 --> 485.120] Cookie? 
+[485.120 --> 487.120] Cookie? +[487.120 --> 489.120] What are you doing to cookie? +[489.120 --> 491.120] Look, eat. +[491.120 --> 493.120] Cookie. +[493.120 --> 496.120] Eat. Cookie. +[496.120 --> 500.120] Now can we put it in the jar? Cookie? +[500.120 --> 502.120] Where is it? +[502.120 --> 504.120] Look. Phone. +[504.120 --> 505.120] Hello. +[505.120 --> 510.120] May I speak to Danyang? Can you talk on the phone? +[510.120 --> 513.120] Look. Look at me push the number. +[513.120 --> 518.120] Two, two, nine, three, four, seven, five, eight, seven, five. +[518.120 --> 521.120] Ring, ring, ring, ring, ring, ring. +[521.120 --> 523.120] Hello. +[523.120 --> 527.120] Good job. Hello. +[527.120 --> 529.120] May I speak to Danyang? +[529.120 --> 532.120] Look. Bye bye. +[532.120 --> 535.120] Bye bye. Phone. +[535.120 --> 539.120] Yep. Push the number. That's how you dial the number. +[539.120 --> 545.120] Can you call? Hello. Hello. +[545.120 --> 549.120] Can you talk on the phone? Hello. +[549.120 --> 552.120] Hello, Miss Ashley. +[552.120 --> 555.120] Look. Bye bye. Hang it up. +[555.120 --> 558.120] Bye bye. +[558.120 --> 561.120] Very good. So we got phone. +[561.120 --> 564.120] Phone. +[564.120 --> 567.120] Car. Drive the car. +[567.120 --> 569.120] Wing, wing, wing. +[569.120 --> 570.120] Let me stay here. +[570.120 --> 572.120] And truck. +[572.120 --> 573.120] Ro, ro, ro. +[573.120 --> 574.120] Truck. +[574.120 --> 576.120] All right. Listen. +[576.120 --> 578.120] Truck. +[578.120 --> 581.120] Phone. +[581.120 --> 583.120] Car. +[583.120 --> 584.120] Danyang. +[584.120 --> 587.120] Give me car. +[587.120 --> 588.120] Put the car in my hand. +[588.120 --> 589.120] Look. +[589.120 --> 591.120] Give me car. That's your mouth. Good job. +[591.120 --> 592.120] Give me car. +[592.120 --> 594.120] Mouth. +[594.120 --> 596.120] Mouth. Where's nose? diff --git a/transcript/allocentric_4_5dayHDdBk.txt b/transcript/allocentric_4_5dayHDdBk.txt new file mode 100644 index 0000000000000000000000000000000000000000..c3b50149bf8f303b274fb91bbb5a92af59cf9b64 --- /dev/null +++ b/transcript/allocentric_4_5dayHDdBk.txt @@ -0,0 +1,49 @@ +[0.000 --> 10.100] Communication is an essential part of our daily lives. +[10.100 --> 15.700] It is how we express ourselves, share our thoughts and ideas, and connect with others. +[15.700 --> 20.740] In this video, you will learn about the two main types of communication. +[20.740 --> 25.980] Verbal and non-verbal communication. +[25.980 --> 31.100] Verbal communication is the use of speech or spoken words to exchange information, +[31.100 --> 34.060] emotions, and thoughts. +[34.060 --> 39.900] Non-verbal communication, on the other hand, is the use of body language, gestures, facial +[39.900 --> 44.260] expressions, and tone of voice to convey a message. +[44.260 --> 49.340] It is a powerful tool that can be used to communicate feelings, emotions, and attitudes +[49.340 --> 55.100] without the use of words. +[55.100 --> 59.620] Verbal and non-verbal communication are important, and they often work together +[59.620 --> 62.980] to create a complete message. +[62.980 --> 68.380] Non-verbal cues can help us understand the tone and intention behind someone's words. +[68.380 --> 73.620] At the same time, verbal communication provides context and clarity to the message being +[73.620 --> 76.620] conveyed. +[76.620 --> 82.620] Verbal communication is essential in negotiations, where clear and explicit language is critical.
+[82.620 --> 87.380] While non-verbal communication is essential in interpersonal communication where emotional +[87.380 --> 90.860] cues play an important role. +[90.860 --> 98.460] Here are some examples of verbal communication: face-to-face conversation, giving a speech, +[98.460 --> 105.020] telephonic conversation, sending a voice note, taking interviews, group discussion in the +[105.020 --> 107.860] workplace. +[107.860 --> 111.380] Here are some examples of non-verbal communication. +[111.380 --> 113.460] Nodding head in approval. +[113.460 --> 117.740] Showing a thumbs-up sign to express positive feelings. +[117.740 --> 119.060] Smiling at someone. +[119.060 --> 122.660] A confident handshake is a welcoming gesture. +[122.660 --> 124.860] Giving a hug to show affection. +[124.860 --> 130.980] Talking in a raised voice while in anger. +[130.980 --> 136.740] Non-verbal communication can be more effective than verbal communication in some situations. +[136.740 --> 142.100] For example, when someone says something but their body language suggests something different, +[142.100 --> 147.180] we are more likely to believe their non-verbal cues over their words. +[147.180 --> 152.020] Non-verbal communication is also essential in situations where words are not enough to convey +[152.020 --> 153.460] a message. +[153.460 --> 158.540] Such as when comforting a loved one, expressing empathy or showing respect. +[158.540 --> 163.500] On the other hand, verbal communication is essential in negotiations, where clear and +[163.500 --> 166.580] explicit language is necessary. +[166.580 --> 171.700] But it is more easily influenced by external factors such as language barriers, background +[171.700 --> 178.180] noise, and distractions. +[178.180 --> 183.620] In today's world, we are increasingly relying on technology for communication. +[183.620 --> 187.860] And this has made it more challenging to convey non-verbal cues. +[187.860 --> 193.260] When communicating through text, for example, we lose the tone of voice and facial expressions +[193.260 --> 195.900] that help us understand the message. +[195.900 --> 200.580] It is therefore essential to be aware of the limitations of each type of communication +[200.580 --> 204.140] and use them appropriately. +[204.140 --> 208.820] Understanding the nuances of each type of communication can help us become better communicators +[208.820 --> 213.060] and build stronger relationships with others. +[213.060 --> 215.460] Thanks for watching this video. +[215.460 --> 219.820] If you find this video informative, please like the video and don't forget to subscribe +[219.820 --> 221.500] to EducationLeves Extra. diff --git a/transcript/allocentric_4nAcRL-6ujk.txt b/transcript/allocentric_4nAcRL-6ujk.txt new file mode 100644 index 0000000000000000000000000000000000000000..e1284c6761321e200d2233fcab95a4f462ee5f3c --- /dev/null +++ b/transcript/allocentric_4nAcRL-6ujk.txt @@ -0,0 +1,389 @@ +[0.000 --> 5.000] So now we're getting into where we started this whole journey last year. +[5.000 --> 7.000] It's about emotions in the face. +[7.000 --> 12.000] When we start looking at profiling people, +[12.000 --> 16.000] like I said before, I do it in a very systematic way. +[16.000 --> 19.000] The way I was taught, we learned all about Jing first, +[19.000 --> 22.000] then we learned all about Qi, which is personality and temperament, +[22.000 --> 25.000] then we learned about the Shen, which is the sexuality, +[25.000 --> 29.000] the romance, psychopathy, things like that.
+[29.000 --> 34.000] There's a lot of things that are all important, +[34.000 --> 37.000] but depending on where you're going to be applying your information, +[37.000 --> 40.000] some things are more important than others. +[40.000 --> 45.000] So the first thing we want to look at, that we began with, was separating into Yin and Yang. +[45.000 --> 49.000] So we have this divide across, and we have Yin and Yang to the right, +[49.000 --> 52.000] Yin and Yang to the left. +[52.000 --> 57.000] So when we start to look at people, the first thing we look at is, +[57.000 --> 64.000] we try to look at the overall features that jump out at us. +[64.000 --> 70.000] We want to look at their inner nature versus their outer nature. +[70.000 --> 73.000] So if we look at people through our right eye, +[73.000 --> 78.000] we look at their right side, we will see the face they want us to see. +[78.000 --> 81.000] We look at them through our left eye, at their left side, +[81.000 --> 85.000] we'll see the face they keep behind closed doors. +[85.000 --> 88.000] But what do we look for? +[88.000 --> 91.000] We look for emotions. +[91.000 --> 94.000] We look for differences in the symmetry. +[94.000 --> 100.000] So as we look, if we have lines above our eye in this area, +[100.000 --> 105.000] this indicates somebody who has a healthy degree of skepticism. +[105.000 --> 108.000] These are people who don't take anything at face value. +[108.000 --> 112.000] They have to see it, they have to experience it, and an incident area. +[112.000 --> 115.000] So if you see someone like that, you say, you know, you don't really, +[115.000 --> 117.000] you might say something, when you talk about traits, by the way, +[117.000 --> 119.000] you don't say you're a skeptic. +[119.000 --> 122.000] That's not... +[122.000 --> 125.000] No, I'm not. +[125.000 --> 128.000] You might say something like, you know, you have these lines right here, +[128.000 --> 130.000] they indicate a healthy dose of skepticism. +[130.000 --> 133.000] So you don't always take things at face value, do you? +[133.000 --> 135.000] No, that's... +[135.000 --> 138.000] You want to describe the trait, not the label. +[138.000 --> 141.000] Does that make sense? +[141.000 --> 146.000] Because there's not a lot of people who like to admit they're stubborn. +[146.000 --> 150.000] Most stubborn people I know think they're pretty easygoing. +[150.000 --> 154.000] And if you try to convince them otherwise... +[154.000 --> 157.000] Right? +[157.000 --> 160.000] So skepticism lines, very, very... +[160.000 --> 162.000] By the way, when would it be good to have somebody... +[162.000 --> 166.000] What kind of job would we want to put somebody in a position of skepticism? +[166.000 --> 167.000] An auditor? +[167.000 --> 169.000] An auditor might be good, right? +[169.000 --> 170.000] Quality control. +[170.000 --> 171.000] Quality control? +[171.000 --> 173.000] What's that? +[173.000 --> 174.000] Police. +[174.000 --> 175.000] Absolutely. +[175.000 --> 176.000] Right? +[176.000 --> 179.000] So, you know, if you work with a cop and all of a sudden you see this stuff, +[179.000 --> 182.000] you know he's probably pretty good at his job. +[182.000 --> 183.000] Right? +[183.000 --> 184.000] Skepticism. +[184.000 --> 186.000] Next line, yes. +[186.000 --> 191.000] Can you say healthy skepticism? +[191.000 --> 195.000] Is there a point where you can tell me it's too much? +[195.000 --> 197.000] If it's very deep. +[197.000 --> 198.000] Right?
+[198.000 --> 205.000] Remember, the intensity of the trait is measured by the depth and the breadth of the marking. +[205.000 --> 213.000] Just like in handwriting analysis, the intensity of emotional expression is related to the handwriting pressure and the slant. +[213.000 --> 224.000] We can look at the intensity or the depth of a feeling or emotion or an issue or a trait by how deeply it's marked that area. +[224.000 --> 230.000] Maybe I'm missing something here, but for the skepticism on there it seems to be on the left side. +[230.000 --> 232.000] You keep pointing to your right side. +[232.000 --> 233.000] Is this a mirror? +[233.000 --> 234.000] It's symmetrical. +[234.000 --> 235.000] Okay. +[235.000 --> 236.000] Yeah. +[236.000 --> 237.000] Right? +[237.000 --> 244.000] What we're going to do is we're going to look at, first we look at the big picture, which is where are they marked? +[244.000 --> 245.000] Right? +[245.000 --> 250.000] And then we look at Yin and Yang, right versus left, or inner nature versus outer nature. +[250.000 --> 251.000] Right? +[251.000 --> 255.000] So I can look at you and I can see how you're marked in general. +[255.000 --> 262.000] If I want to go deeper, now I split exterior persona, internal persona. +[262.000 --> 270.000] And I can, I can, because I may notice that on your external persona, your lips turn up. +[270.000 --> 274.000] On your internal persona, they turn down. +[274.000 --> 282.000] This could show somebody who tends to give a positive face to the outer world, but inside they're not very happy. +[282.000 --> 283.000] They're very disappointed. +[283.000 --> 284.000] Right? +[284.000 --> 285.000] Yes. +[285.000 --> 289.000] Would that show up as a smirk where you had an angle with it? +[289.000 --> 291.000] You know, where you got one side up? +[291.000 --> 294.000] Let me make sure I'm understanding your question. +[294.000 --> 297.000] Hold on a second, guys. +[297.000 --> 299.000] Let me have a, let's finish Daniel's question. +[299.000 --> 301.000] Restate your question. +[302.000 --> 307.000] Would that show up in a smirk where the person has a skew? +[307.000 --> 313.000] If there's a, if there's a defined asymmetry there, and there's no obvious reason for it, then yeah, it could be. +[313.000 --> 315.000] Well, I'm talking about it. +[315.000 --> 316.000] In expression, not necessarily. +[316.000 --> 318.000] Well, again, if we, we're looking, we're not looking at expressions. +[318.000 --> 324.000] We're looking at an expressionless face, more or less, and seeing what the wrinkles show us. +[324.000 --> 328.000] If we take, when you take your face app, you're going to do this in a minute. +[328.000 --> 331.000] First, you're going to do just a generic reading of people. +[331.000 --> 334.000] Then you're going to take your face app, and you're going to take a picture of yourself. +[334.000 --> 335.000] And you're going to look. +[335.000 --> 339.000] You're going to look at how your face combines with itself. +[339.000 --> 342.000] And these things, you'll, you'll see a different person. +[342.000 --> 348.000] You'll see when all of the right side is together, you'll see what your public persona looks like. +[348.000 --> 353.000] When all of the left sides are together, you'll see what your inner nature looks like. +[353.000 --> 361.000] And you can do this reading based on what you have already, and get a better, a better snapshot.
+[361.000 --> 366.000] When we're doing this in person, if I'm looking at Kim, if I look at her, I'm right-eyed dominant, +[366.000 --> 369.000] then I'm going to see her public persona. +[369.000 --> 373.000] Because I'm right-eyed dominant, so I focus on that information. +[373.000 --> 378.000] If I want to see her inner persona, then I look at her through my left eye, or I can cover my left eye, +[378.000 --> 381.000] and I'll see a different side of her. +[381.000 --> 384.000] Follow me? +[384.000 --> 390.000] Again, these can be very subtle, and sometimes they can be very, very obvious. +[390.000 --> 392.000] They can be very, very obvious. +[392.000 --> 396.000] Robert, you had a question? +[396.000 --> 401.000] I was just looking to build on what Daniel said, because one of Ekman's, in micro expressions, +[401.000 --> 407.000] is if you turn down one corner of your mouth, that's skepticism, and I think that's kind of what you were getting at. +[407.000 --> 413.000] And so that could be, if you're skeptical enough, on your private side, then it could eventually get etched in. +[413.000 --> 418.000] Again, that particular characteristic doesn't have that definition in this system. +[418.000 --> 420.000] And it's a micro expression. +[420.000 --> 424.000] So we're not looking, like I said before, we're not looking at micro expressions. +[424.000 --> 428.000] We're looking at the consequences of a lifetime of expressions. +[428.000 --> 434.000] How the constant use of that trait, or that expression, or feeling of that emotion, +[434.000 --> 438.000] marks the face and the musculature of the face. +[438.000 --> 441.000] Sort of like the canvases today. +[441.000 --> 445.000] Yeah. Anybody else? +[445.000 --> 448.000] Okay. +[448.000 --> 452.000] Let me make this picture bigger. +[452.000 --> 457.000] These are ones that we want to spend, by the way, when we look at people, +[457.000 --> 461.000] we spend most of our time looking here, +[461.000 --> 463.000] just so you know. +[463.000 --> 472.000] If you want to be systematic about it, I would divide this into three sections, but we'll cover that in a minute. +[472.000 --> 478.000] So when we look at the eyes now, we're going to do this in a counterclockwise rotation. +[478.000 --> 483.000] Looking at the sides of the eyes here, this is joy. +[483.000 --> 490.000] When the lines go up, not past the eyebrow. +[490.000 --> 494.000] When the eyes, when they have little crow's feet, we're looking at somebody who has experienced a lot, +[494.000 --> 496.000] who has experienced a lot of joy. +[496.000 --> 506.000] Most of you have some degree of joy markings. +[506.000 --> 510.000] If you... +[510.000 --> 512.000] If the line... See, the trait... +[512.000 --> 516.000] I'm going to talk about it right here, but I'm going to diagram it over here. +[516.000 --> 524.000] The line travels up past the eyebrow. +[524.000 --> 527.000] You now have mania. +[527.000 --> 530.000] Excessive joy becomes mania. +[530.000 --> 534.000] This is your bipolar, your manic depressives. +[534.000 --> 539.000] These are people who are just up and up in the morning tweeting. +[539.000 --> 543.000] Right? +[543.000 --> 549.000] So extreme joy, mania. +[549.000 --> 556.000] When the lines come down this way, you're seeing sadness lines. +[556.000 --> 561.000] We've all had a healthy degree of sadness in our life. +[561.000 --> 572.000] When they start to travel down the cheeks through the lung area, now you're dealing with sorrow.
+[572.000 --> 575.000] These people may start to develop lung problems. +[575.000 --> 582.000] In fact, what you'll find out in cases like emphysema, COPD, asthma, allergies, +[582.000 --> 587.000] as you unpack them, usually a lot of times grief and anger come up. +[587.000 --> 591.000] Grief goes to the lungs, which is the next trait. +[591.000 --> 600.000] When those lines extend beyond here, now you're looking at grief. +[600.000 --> 601.000] So those are the three degrees. +[601.000 --> 611.000] You have sadness, sorrow, grief. +[611.000 --> 613.000] Humor lines, they're not... +[613.000 --> 615.000] they don't show real well here. +[615.000 --> 622.000] Humor lines, if you were to look, let me do this. +[622.000 --> 624.000] Can you guys see that back there? +[624.000 --> 627.000] Okay, I made that a little bigger. +[627.000 --> 638.000] Humor lines are usually seen in the lips themselves. +[638.000 --> 641.000] They're usually seen with a little line down the center. +[641.000 --> 645.000] Sometimes you can have lines like this. +[645.000 --> 650.000] So if you see lines in the lips, they're usually some, especially a big one in the middle. +[650.000 --> 654.000] That's usually the indication that they have a pretty good sense of humor. +[654.000 --> 661.000] Some of you know people like this, right? +[661.000 --> 664.000] Am I not being let in on the joke? +[664.000 --> 672.000] Who's the one that needs to have stick? +[672.000 --> 679.000] So humor here. +[679.000 --> 690.000] Okay, going from the center down, people were asking about this. +[690.000 --> 696.000] Two lines indicate impatience. +[696.000 --> 699.000] They're at the stoplight, the stoplight's only 30 seconds away from changing. +[699.000 --> 705.000] They're already gunning the engine. +[705.000 --> 712.000] When you see three lines, this is usually a bit of a gift, somebody who has managed, +[712.000 --> 718.000] has learned how to manage their temper, learned how to manage their anger. +[718.000 --> 724.000] So you might say something like, you know what, there was a time in your life when you really had a bad temper, +[724.000 --> 726.000] when you really got impatient with people. +[726.000 --> 729.000] And over time, you seem to have learned to really manage it well. +[729.000 --> 732.000] You manage it much better than you used to. +[732.000 --> 735.000] Yeah. +[735.000 --> 738.000] Yes. +[738.000 --> 746.000] I really do need a mic runner for this. +[746.000 --> 757.000] Some of the ones you're going to see a lot in therapy are lost love lines. +[757.000 --> 760.000] Oh, actually, disempowerment and lost love. +[760.000 --> 773.000] Lost love lines start at the inner canthus and they descend down, sometimes merging with or parallel to the sorrow, +[773.000 --> 777.000] the grief lines or the purpose lines. +[777.000 --> 787.000] Now, if you notice, lost love and the sadness, if you extend those lines out, they all end up at the same spot. +[787.000 --> 792.000] And don't they seem related? And see, there's an orderliness to it. +[792.000 --> 797.000] There's an organization to this that kind of floats to the surface. +[797.000 --> 800.000] Lost love does not necessarily mean romantic love. +[800.000 --> 807.000] Lost love means there was some part of your life that was extremely important to you. +[807.000 --> 814.000] That was a very big piece of who you were or are as a person that you enjoyed.
+[814.000 --> 823.000] And at some point in your childhood or your teens or whatever, something happened and that part is no longer there. +[823.000 --> 831.000] What I mean is, it's not that it's no longer there, it's that your ability to do that is gone. +[831.000 --> 843.000] Sometimes athletes who are very, very strong, very, very talented, they have an injury and they can no longer play. +[843.000 --> 845.000] You don't get one of these. +[845.000 --> 854.000] Sometimes you'll meet somebody, you have a lifestyle that you love and things you enjoy doing, you meet somebody that you fall in love with. +[854.000 --> 861.000] That person doesn't like or approve of those things, you stop doing them. +[861.000 --> 864.000] It could also be a person. +[864.000 --> 867.000] It's something that was a big part of who you were as a human being. +[867.000 --> 871.000] That was in many cases part of your path. +[871.000 --> 873.000] You've lost it in some way. +[873.000 --> 875.000] Your face will mark. +[875.000 --> 878.000] Okay? +[878.000 --> 883.000] Question? +[883.000 --> 899.000] Do you find that the lines that come down coincide with the blockages for not following their road or their golden path in life? +[899.000 --> 901.000] Can you restate the question on that side? +[901.000 --> 905.000] So if they have a lot of sadness, grief and sorrow that's creeping in. +[905.000 --> 910.000] Do you often find that there's a blockage where they're not following their path in life? +[910.000 --> 913.000] This could be caused by this. +[913.000 --> 916.000] No, in fact they're usually very different. +[916.000 --> 930.000] But they can be related in the sense that the person that caused them to not be able to do this is draining them, +[930.000 --> 933.000] forcing them to nurture and take care of them. +[933.000 --> 935.000] So they're separate but related. +[935.000 --> 937.000] Does that make sense? +[937.000 --> 941.000] Okay. Because this is a lot of what happens in bad relationships. +[941.000 --> 946.000] You get somebody who's a control freak who is very suspicious, very paranoid. +[946.000 --> 950.000] Somebody who's very demanding. +[950.000 --> 954.000] Many times what will happen is they'll start to slowly cut you off from your friends. +[954.000 --> 958.000] They won't let you do things with other people. +[958.000 --> 964.000] They'll start to demand all of your attention and all of your resources. +[964.000 --> 969.000] So now you'll develop lost love lines because you can no longer do the things you love to do. +[969.000 --> 976.000] And you'll start to develop bitterness and over-nurturing lines because now all of your energy is being sucked by this person. +[976.000 --> 978.000] Does that make sense? +[978.000 --> 980.000] Okay, someone had a question. +[980.000 --> 989.000] I just, while you're on the eyes, I notice a lot that people kind of have like almost like checkered lines under their eyes or the puffy bags. +[989.000 --> 990.000] Just one of those. +[990.000 --> 997.000] Well, the area under the eyes relates to the kidney and fluid management. +[997.000 --> 1005.000] So many times what you've got here is either tired kidneys, especially if they're dark or purplish. +[1005.000 --> 1012.000] Many times when you have these puffy bags under the eyes, these are tears we haven't finished shedding yet. +[1012.000 --> 1015.000] There's tears we haven't finished shedding.
+[1015.000 --> 1021.000] When you have that crisscross pattern in an area like that, remember what we talked about what a dry riverbed looks like? +[1021.000 --> 1025.000] Those are areas where you've got Jing depletion. +[1025.000 --> 1029.000] You remember when we talked about what a dry riverbed looks like, how you get those cracks? +[1029.000 --> 1031.000] He was asking about these crisscross lines. +[1031.000 --> 1036.000] This is usually an indication that there's a Jing, there's a deficiency or a weakness of the Jing in that area. +[1036.000 --> 1041.000] It hasn't progressed to a big line because it's not trauma-based, it's just overuse. +[1041.000 --> 1043.000] Does that make sense? +[1043.000 --> 1047.000] This is kidneys, this is lung. +[1047.000 --> 1054.000] Okay, so if they have a lot of those wrinkles there, ask if they have lung problems or allergies or stuff like that. +[1054.000 --> 1061.000] Questions? We're good so far? +[1061.000 --> 1066.000] You good with this? +[1074.000 --> 1077.000] These are the big two. +[1077.000 --> 1083.000] These are called disempowerment lines. +[1083.000 --> 1093.000] I don't call them disempowerment lines but that's what Lillian calls them because I'm much more interested in describing what this means. +[1093.000 --> 1096.000] Can you see that? +[1096.000 --> 1109.000] When you have lines that extend down almost in, I had one lady, looked like somebody took an X-Acto knife and just etched lines down the side of her nose from the inner canthus down. +[1109.000 --> 1119.000] In cases like this, in this behavior it's very, very similar to the suspended needle where somebody expressed anger. +[1119.000 --> 1126.000] The pushback was so sizable, the ramifications of that anger were so strong that they just held themselves in check. +[1126.000 --> 1136.000] It's not exactly the same though because with a disempowerment line, at some point in your life or at some point in the person's life, they expressed their feelings. +[1136.000 --> 1150.000] They expressed their opinion and the pushback, the negative pushback, the negative response was so overwhelming they felt the need to appease, to placate. +[1150.000 --> 1152.000] So I call them placating lines. +[1152.000 --> 1157.000] These are people who do whatever they do just to keep the peace. +[1157.000 --> 1160.000] They don't necessarily, they're not just simply choking back their anger. +[1160.000 --> 1165.000] They're trying to make amends for having a thought, for having an opinion. +[1165.000 --> 1168.000] So they spend their life appeasing people. +[1168.000 --> 1171.000] You'll see this a lot. +[1171.000 --> 1175.000] I see it a lot, especially where abuse is concerned. +[1175.000 --> 1178.000] Especially where abuse is concerned, molestations. +[1178.000 --> 1184.000] Molestation not quite so much, but I see spousal issues a lot. +[1184.000 --> 1189.000] People who always feel like they're apologizing for being alive. +[1190.000 --> 1192.000] You'll see this. +[1192.000 --> 1194.000] Right? +[1194.000 --> 1198.000] And if you got them, it doesn't mean you're a bad person, it doesn't mean you're a wuss. +[1198.000 --> 1202.000] It means you did the best you could with the information you had. +[1202.000 --> 1204.000] None of these traits are bad. +[1204.000 --> 1208.000] They're just like the check engine light on the dashboard. +[1208.000 --> 1210.000] They really are. +[1210.000 --> 1213.000] Right? When you're driving down the road, the check engine light goes off.
+[1213.000 --> 1216.000] Oh my god, I got to get the light fixed. +[1217.000 --> 1218.000] You don't do that. +[1218.000 --> 1221.000] Oh, oil needs changing. +[1221.000 --> 1223.000] Engine needs servicing. +[1223.000 --> 1225.000] Gotta put coolant in the radiator. +[1225.000 --> 1227.000] That's all these facial things mean. +[1227.000 --> 1230.000] They're the light on the, they're the check engine lights on the dashboard. +[1233.000 --> 1235.000] Yes sir. +[1235.000 --> 1237.000] It may be. +[1237.000 --> 1239.000] Is there light on? +[1239.000 --> 1242.000] Nope, it's a little bit longer. +[1243.000 --> 1250.000] It may be an assumption, but when you work with children or adolescents, I'm assuming you see these less. +[1250.000 --> 1258.000] Yes, in fact, Lillian taught me that you shouldn't read children because they're very impressionable. +[1258.000 --> 1261.000] They're very impressionable. +[1261.000 --> 1266.000] And so the things you say can become prophecies for them. +[1266.000 --> 1272.000] So I was taught: encourage children, read adults. +[1272.000 --> 1279.000] But you can look at children's growing facial structures and kind of see things evolving. +[1279.000 --> 1280.000] Right? +[1280.000 --> 1282.000] But again, remember, they're still changing. +[1282.000 --> 1283.000] They're not stuck. +[1283.000 --> 1284.000] They're going to constantly grow. +[1284.000 --> 1289.000] So as you work on your own stuff, especially if you're working with, you know, if you have children, +[1289.000 --> 1294.000] the fastest way to fix your kids is to fix you. +[1294.000 --> 1297.000] And that's what the Chinese say. +[1297.000 --> 1306.000] The Chinese tell us that the Jing markings, the things that you bring from lifetime to lifetime, are always present. +[1306.000 --> 1312.000] So much, like it goes like nine generations back, nine generations forward. +[1312.000 --> 1317.000] If you fix something in the present moment, it fixes it nine generations back. +[1317.000 --> 1321.000] It's like that entanglement theory. +[1322.000 --> 1325.000] It'll fix it seven generations forward as well. +[1325.000 --> 1332.000] So as you resolve your stuff, you may find your kids moving through similar issues faster and easier, +[1332.000 --> 1336.000] or not even coming up at all. +[1336.000 --> 1339.000] Yeah. +[1339.000 --> 1349.000] I just had a revelation when you were saying that, because I was trying to figure out if my daughter was just maturing or just changing rapidly. +[1350.000 --> 1355.000] As I've been rapidly changing, I've been noticing her communications become more open. +[1355.000 --> 1358.000] She abandoned coloring her hair. +[1358.000 --> 1362.000] I guess that was focused on the past. +[1362.000 --> 1363.000] Yeah. +[1363.000 --> 1365.000] And the science is there now too. +[1365.000 --> 1367.000] They did it with insects. +[1367.000 --> 1373.000] They found out that if they caused a traumatic accident, a traumatic event for one generation, I think it was fruit flies. +[1373.000 --> 1375.000] I could be wrong. +[1375.000 --> 1381.000] It changed their genetic makeup, and their offspring had it too. +[1381.000 --> 1384.000] They've seen the same thing in Holocaust survivors. +[1384.000 --> 1392.000] Where the grandchildren of Holocaust survivors carry the genetic markers from the time the grandparents spent in the camps. +[1392.000 --> 1395.000] They just didn't realize it can go the other way, which is what the Chinese are saying.
+[1395.000 --> 1397.000] It doesn't go just forward. +[1397.000 --> 1398.000] It goes backwards. +[1398.000 --> 1406.000] Now, I've had direct experience with genetic memory because I've actually worked with people who've taken on the memories of their transplanted organs. +[1406.000 --> 1413.000] I had to do parts therapy and regression on the organs. +[1413.000 --> 1414.000] I get the cool stuff. +[1414.000 --> 1416.000] I don't get smoking cessation or weight loss. +[1416.000 --> 1420.000] I get the interesting stuff. +[1420.000 --> 1423.000] Let's see where I'm at here. +[1423.000 --> 1434.000] If we go a little further down, and we talked about humor lines already, this is another one you're going to see a lot of. +[1434.000 --> 1440.000] This manifests as little dimpling on the chin. +[1440.000 --> 1443.000] They're not necessarily horizontal lines. +[1443.000 --> 1447.000] They're just like that little dimpling feeling. +[1447.000 --> 1448.000] You see that a lot. +[1448.000 --> 1452.000] You got someone who's got a lot of repressed fear. +[1452.000 --> 1456.000] They're usually very fearful people. +[1456.000 --> 1460.000] Now, that's modulated depending on how strong the chin is. +[1460.000 --> 1463.000] If it's a very big and jutting chin, +[1463.000 --> 1469.000] you're not going to see that much fear directly because they usually have a lot of stubbornness and willfulness. +[1469.000 --> 1476.000] But when you see a lot of lines and dimpling down in this area, and I see dimpling more than anything else. +[1476.000 --> 1481.000] Fear. +[1481.000 --> 1487.000] You guys are all standing here checking your face out. +[1487.000 --> 1490.000] Yes. +[1490.000 --> 1494.000] Could be. +[1494.000 --> 1503.000] My experience has been that this kind of fear is almost always early childhood. +[1503.000 --> 1506.000] I don't see a lot of PTSD marking this way. +[1506.000 --> 1512.000] Although when I'm thinking PTSD, I'm thinking more wartime trauma. +[1512.000 --> 1517.000] But you can have PTSD from many different forms of influence. +[1517.000 --> 1519.000] But I usually see this in clinic. +[1519.000 --> 1521.000] Your experience may be different. +[1521.000 --> 1525.000] Clinically, when I see this, it's usually childhood stuff. +[1525.000 --> 1527.000] Lifetime stuff. +[1527.000 --> 1528.000] Does that make sense? +[1528.000 --> 1529.000] I don't know if it makes sense. +[1529.000 --> 1533.000] That's just what I've observed. +[1534.000 --> 1540.000] This is where I love my touchscreen. +[1540.000 --> 1545.000] Have we covered enough traits for you to start playing a little bit? +[1545.000 --> 1550.000] Or do you want to go through the whole thing and then play? +[1550.000 --> 1553.000] You guys want to read each other? +[1553.000 --> 1556.000] Here's what I want you to do. +[1556.000 --> 1558.000] You want to break up into groups of three. +[1558.000 --> 1562.000] We're going to take 45 minutes for this. +[1562.000 --> 1568.000] That's 10 to 15 minutes for each person. +[1568.000 --> 1569.000] You're going to connect. +[1569.000 --> 1573.000] You're going to just kind of get in rapport with them a little bit. +[1573.000 --> 1574.000] Small talk. +[1574.000 --> 1577.000] You don't have to talk about anything in particular. +[1577.000 --> 1582.000] And what you want to do is systematically, you want to start at the top of the head +[1582.000 --> 1588.000] and work clockwise. +[1588.000 --> 1591.000] That's if you want to be linear and logical about it.
+[1591.000 --> 1595.000] If you want to do it the old school way, you just kind of connect with them +[1595.000 --> 1599.000] and notice whatever feature calls your attention first. +[1599.000 --> 1600.000] And talk about that feature. +[1600.000 --> 1602.000] And talk about those things. +[1602.000 --> 1611.000] This will make even more sense when we start putting in the head lines in there. +[1611.000 --> 1616.000] This is one of the oldest pictures of face reading. +[1616.000 --> 1619.000] This is like several thousand years old or so. +[1619.000 --> 1622.000] So this is not new. +[1622.000 --> 1626.000] So the first thing I want you to do, I think, you know, don't worry about reading too much, +[1626.000 --> 1632.000] so much as seeing the traits and noticing how people are marking. +[1632.000 --> 1636.000] If you want to, you can inquire about certain things. +[1636.000 --> 1640.000] Pay attention to what happens to their emotions when you do this. +[1640.000 --> 1644.000] But now it's just, it's just, work with as many different people as possible. +[1644.000 --> 1647.000] And just look. +[1647.000 --> 1652.000] Right? If you want to take out your face app and start looking at things in terms of, +[1652.000 --> 1656.000] well, what are they doing privately versus what are they doing publicly? +[1656.000 --> 1657.000] You can do that. +[1657.000 --> 1661.000] But again, I just want you to kind of enjoy reading what, you know, playing with what you see. +[1661.000 --> 1665.000] And seeing if you can isolate and remember what each of the different things are. +[1665.000 --> 1669.000] So that makes sense. It's just kind of a little get-to-know-faces kind of a thing. +[1669.000 --> 1671.000] So let's break up into groups of three. +[1671.000 --> 1674.000] We'll come back and finish the facial map. +[1674.000 --> 1677.000] And we'll start talking about ears. diff --git a/transcript/allocentric_7Dga-UqdBR8.txt b/transcript/allocentric_7Dga-UqdBR8.txt new file mode 100644 index 0000000000000000000000000000000000000000..5f5b12050de04aa457ad07de2a329a13fb4b1c3c --- /dev/null +++ b/transcript/allocentric_7Dga-UqdBR8.txt @@ -0,0 +1,180 @@ +[0.000 --> 7.120] Hello everybody, my name is Dan, I'm an animator, and this is New Frame Plus, a series about video game animation. +[7.120 --> 13.520] I have a question: how do you communicate character and personality from a first-person view? +[13.520 --> 18.680] Conveying character through performance is one of the animator's most important jobs. +[18.680 --> 27.320] Animating proper physicality, applying the 12 principles, reinforcing gameplay, those things are all important and can be quite difficult to achieve, +[27.320 --> 29.600] but they are ultimately fundamentals.
+[29.600 --> 35.640] On top of all of that, the character animator's job is to create appealing character performances, +[35.640 --> 39.480] to visually reinforce who these characters are through movement. +[39.480 --> 44.240] But how do you do that from a first-person view when you've got nothing but hands and a gun? +[44.240 --> 45.200] Back in the fight! +[45.200 --> 46.440] A lot of games don't. +[46.440 --> 53.240] Most shooters' first-person animation is strictly functional, intended to clearly convey what your character is doing, +[53.240 --> 56.400] but not exactly telling us anything about them. +[56.440 --> 57.800] Alright, I'm shooting. +[57.800 --> 59.040] Now I'm running. +[59.040 --> 61.200] Now I'm reloading, and so on. +[61.200 --> 65.600] Which is not to say that these animations don't tell us some things about the character. +[65.600 --> 72.200] In most any military shooter, the functional gun-handling animations reinforce the capability of our player character, +[72.200 --> 76.120] their familiarity with their weapon and their abilities as a soldier. +[76.120 --> 83.000] Mirror's Edge uses Faith's arms and legs to help the player understand what Faith is doing as she navigates the world, +[83.000 --> 89.400] which helps to show the physicality of the movement, and makes the player feel even more cool as they run and jump around. +[89.400 --> 95.800] But it doesn't necessarily tell us much about Faith herself, other than the fact that she's a very skilled free runner. +[95.800 --> 101.160] But there are games out there that manage to communicate a lot of character from a first-person view, +[101.160 --> 104.240] and one of those games is Blizzard's Overwatch. +[104.240 --> 109.480] In terms of game animation, Overwatch is a masterclass in character appeal. +[109.480 --> 112.360] This game is all about its characters. +[113.160 --> 122.160] Every single member of this cast is unique and just loaded with personality, more than in almost any game I have ever seen. +[122.160 --> 127.480] And almost all of that in-game personality is conveyed through animation. +[127.480 --> 136.400] There's very little dialogue or plot in the game itself, so animation and character design do the bulk of the heavy lifting in defining who these people are. +[136.400 --> 142.480] The way they carry themselves, their victory poses, their emotes, their Play of the Game glamour shots. +[142.480 --> 145.600] It all paints a picture of who these people are. +[145.600 --> 153.520] But this game is a first-person shooter, which means that you're going to spend the vast majority of the time seeing nothing but their hands and a weapon. +[153.520 --> 160.080] So how have Blizzard's animators managed to continue expressing personality using only these elements? +[160.080 --> 167.200] Ultimately, the answer is that they gave every single character their own completely unique set of first-person animations, +[167.200 --> 171.680] and built in lots of contrast between how each character goes about things. +[171.680 --> 174.000] But let's get into specifics. +[174.000 --> 178.560] First, and this is more of a character design point, but I think it's worth bringing up. +[178.560 --> 186.640] No matter who you're playing, the animators have made sure that the characters' hands and/or weapon are almost always on screen.
+[186.640 --> 191.440] This not only allows each weapon's unique design to show you who you're playing at a glance, +[191.440 --> 199.040] but also showcases each weapon's unique animation, which also just happens to reflect the personality of the weapon's owner. +[199.040 --> 203.760] Soldier 76's Assault Rifle is a finely tuned precision machine. +[203.760 --> 209.520] Everything on this weapon moves quickly and sharply, snapping perfectly into place, like a salute. +[209.520 --> 213.280] There is not a loose or flimsy part to be found on this weapon. +[213.280 --> 218.640] Like its owner, this weapon has a few signs of wear, but it is a well-maintained instrument. +[218.640 --> 224.480] Contrast that with Junkrat's launcher, which he clearly built himself from scrap and spare parts. +[224.480 --> 228.880] This weapon is rickety and crude, held together by duct tape and a wish, +[228.880 --> 234.160] which you can easily see by the way so many of the pieces vibrate and loosely shake. +[234.160 --> 238.640] Like none of these pieces were designed to fit together, but it gets the job done. +[238.640 --> 243.520] It perfectly reflects Junkrat's slightly unhinged, twitchy enthusiasm. +[243.520 --> 248.160] Lucio's Sonic Amplifier pulses rhythmically like a pounding subwoofer. +[248.160 --> 251.680] Bastion's machine gun constantly shudders just a little bit, +[251.680 --> 254.400] like an older, cruder generation of machine. +[254.400 --> 260.400] And a few of its moving parts, like the hinge on this sight, seem to have gotten just a little looser with age. +[260.400 --> 265.520] Just having each weapon visible on screen reinforces personality in so many +[265.520 --> 267.440] tiny, subtle little ways. +[267.440 --> 269.440] Same goes for each character's idle. +[269.440 --> 272.720] Even when the player is just standing there, not doing anything, +[272.720 --> 279.120] each character's idle animations and tiny little fidgets are unique and informed by their personality. +[279.120 --> 282.960] Junkrat is twitchy and antsy, eager to cause some mayhem. +[282.960 --> 288.560] Genji is very contained and controlled, prepared to strike when just the right moment comes. +[288.560 --> 292.080] Symmetra's hand movement is delicate and flowing like a dancer, +[292.080 --> 294.880] especially the fingers on her free hand to the left. +[294.880 --> 300.080] McCree's grip and pistol aim are steady, while Mei's aim isn't quite as trained. +[300.080 --> 303.280] Her weapon bobs and drifts on screen much more. +[303.280 --> 306.160] D.Va constantly adjusts her grip on her controls, +[306.160 --> 310.160] and you can see lots of sharp, tiny nudges of the sticks as she sits there. +[310.160 --> 313.040] Little twitches like her arms are tensed with focus. +[313.040 --> 315.360] She is prepared to react in an instant. +[315.360 --> 320.400] And Zenyatta just gently hovers, his clasped hands drifting up and down. +[320.400 --> 324.000] And the best part is, unlike almost all of the other characters, +[324.000 --> 328.560] he doesn't even have a fidget. He is meditative and completely serene. +[329.120 --> 334.560] Characters even breathe differently. Watch the soft rising and lowering of the weapons. +[334.560 --> 340.240] Junkrat's breaths are quick and excited, the end of his launcher rises and falls pretty rapidly. +[340.240 --> 344.000] Roadhog's breathing is totally relaxed because he doesn't care. +[344.000 --> 348.320] Hanzo's breathing is controlled, there is very little drift on that bow.
+[348.320 --> 352.000] And with D.Va you see almost no drift at all, which makes sense because her +[352.000 --> 355.520] mech controls are locked in place. Her breathing wouldn't affect them. +[355.520 --> 359.360] But outside the cockpit, the mech's guns do sway gently, +[359.360 --> 362.400] as if the machine itself has a little bit of life to it. +[362.400 --> 366.960] You can read a lot into this kind of subtlety that may or may not have been intended to convey +[366.960 --> 372.720] specific things, but the point remains, they are all different, and they all feel pretty appropriate. +[372.720 --> 377.280] But okay enough about stillness. Let's move around, because every Overwatch character has +[377.280 --> 381.600] their own distinct walk, with their own distinct rhythm and quality of movement. +[381.600 --> 386.160] Even though you can't see their feet, you can feel how they run by watching the movement of +[386.160 --> 390.800] their gun and hands, which is reinforced by some very subtle camera movement. +[390.800 --> 394.080] Reinhardt stomps around in his heavy armor like a Jaeger. +[394.080 --> 397.280] There's large vertical movement punctuating each stride, +[397.280 --> 401.520] and the wide horizontal sway on his hammer sells the shoulder rotation, +[401.520 --> 407.360] and the twist up his torso as he walks. Genji, on the other hand, runs with rapid quiet steps, +[407.360 --> 412.080] light on his feet like a ninja. There's very little vertical punctuation to his run, +[412.080 --> 417.520] he almost coasts along. Zenyatta literally coasts, so you feel no footsteps at all, +[417.520 --> 422.880] although you do see a slight increased bobbing in his hands, just a hint of increased effort for +[422.880 --> 428.240] motion, not to mention a dash of contrast just to make moving feel different from stillness. +[428.240 --> 432.960] Lucio skates around the field rather than running, and you can feel that difference when controlling +[433.360 --> 438.640] him. His free arm swings back and forth like a skater, and his gunhand very subtly pulls back +[438.640 --> 443.840] and forth in rhythm with it. The punctuated movement on him is much more horizontal than vertical, +[443.840 --> 448.960] because, you know, skates. D.Va's hands don't show a whole lot of vertical step movement either, +[448.960 --> 454.320] just a quick sharp bob, but we do see a much larger degree of movement on the guns outside, +[454.320 --> 459.040] suggesting that the cockpit keeps pretty steady even when the mech is stomping around. +[459.040 --> 464.080] Junkrat even runs with a slight gallop on his peg leg. See, watch his hands and his gun. +[467.920 --> 473.760] With just a few subtle variations in arm animation and camera bob, you can infer quite different +[473.760 --> 479.200] styles of walking and mobility on each character. Some characters even have completely unique +[479.200 --> 484.080] navigation options, completely custom animation work to accommodate those characters' +[484.080 --> 489.200] individual ways of getting around. Lucio can skate on walls, so they've built a system just +[489.200 --> 494.080] for him that has him holding out his free hand to brush against the surface he's riding on. +[494.080 --> 499.920] Winston, being a gorilla, uses his free front left hand to run, so that front arm plays into +[499.920 --> 506.880] his run cycle. Hanzo and Genji can climb walls. Soldier 76 has that classic Call of Duty sprint. +[506.880 --> 512.320] D.Va can rocket her mech forward.
Widowmaker can grapple hook places, and a lot of these are +[512.320 --> 517.280] strictly functional in terms of animation, but they serve to create further contrast between +[517.280 --> 521.840] these characters, to make them each feel all the more different to inhabit as a player. +[521.840 --> 526.320] And it kinda helps you as a player get into that semi-role-playing mindset, +[526.320 --> 531.280] where you're just in tune with who that character is, where you feel like them when stepping into +[531.280 --> 537.280] their shoes. Like, yeah, I'm Lucio, I'm riding on walls. Hang on, let me just ninja up this here, +[537.920 --> 542.080] no big deal. Can't catch me, can't catch me, can't catch me, whoops, I'm over here now. +[545.920 --> 551.600] And oh man, let's talk about reloads. Those are a great opportunity for a flair of personality. +[551.600 --> 559.520] Soldier 76's reload is quick, trained, and efficient. McCree does a combination of classic cowboy +[559.520 --> 564.320] revolver moves, a spin to empty the cylinder, and then a quick flick of the wrist to snap it back +[564.320 --> 569.840] into place. Reaper literally throws his guns away and pulls out new ones, because he saw it in +[569.840 --> 575.280] The Matrix and thinks it makes him look cool. Tracer does a quick stylish spin. Bastion's gun +[575.280 --> 580.640] actually opens up to reload internally, and look at how all of these parts feel sort of loose and +[580.640 --> 586.800] wobbly, like he's an old printer. Junkrat just slaps the old mag out of its slot, jams a new one in +[586.800 --> 592.400] and yanks the bolt. He's really not careful with that weapon. Mei, on the other hand, +[592.400 --> 599.040] daintily twists this little knob. All set. Roadhog just crams a bunch of loose bolts and +[599.040 --> 605.440] springs and crap into his gun, because again, he do not care. Zenyatta doesn't really reload so much +[605.440 --> 612.960] as recenter himself. And I can't even tell for sure what Torb is doing, but it looks neat. +[615.280 --> 620.480] Or what about their hello emotes? McCree does this casual salute slash finger gun. +[620.960 --> 628.400] Reaper gives him the old claw. Pharah formally salutes, just like her mom does. +[629.520 --> 635.520] Sombra does this, which is just so perfect. And Bastion does a little robotic hand wave, +[635.520 --> 639.280] or if he's in turret form, he waves with his little repair arm instead. +[641.360 --> 645.360] Saying hello is one of the only emotes done from the first person in the game, +[645.360 --> 649.600] and the animators do not miss this great opportunity for some easy personality. +[651.040 --> 655.280] Ah, man, I could go on talking about all the awesome little touches in this game's first +[655.280 --> 660.560] person animation forever. The beautiful snap to Zenyatta's attacks, which strike this perfect +[660.560 --> 664.560] balance between conveying mechanical power and organic looseness. +[667.120 --> 672.320] The way that every one of Symmetra's graceful hand movements is informed by a combination of +[672.320 --> 677.200] finger-tutting and traditional Indian dances. Ooh, or the overlap and the follow-through that +[677.200 --> 681.600] happens on their weapons when you swing the camera around. Have you noticed this? Look at how the +[681.600 --> 687.200] gun drags slightly behind as the camera turns, and then overshoots as the camera stops and then +[687.200 --> 691.760] settles back into position. It's a neat little touch, right?
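One common way to get the drag-and-overshoot feel described above is an under-damped spring that makes the weapon's orientation chase the camera's every frame. The sketch below is a minimal illustration under that assumption; the function name, constants, and tuning are ours for the example, not Overwatch's actual implementation.

```python
def step_weapon_yaw(weapon_yaw, weapon_vel, camera_yaw, dt,
                    stiffness=120.0, damping=14.0):
    """Advance the weapon's yaw one frame with a damped spring.

    stiffness pulls the weapon toward the camera's yaw; damping bleeds
    off velocity. Because damping sits below the critical value
    (2 * sqrt(stiffness), roughly 21.9 here), the weapon lags during a
    turn, overshoots when the turn stops, and then settles back in,
    which reads as overlap and follow-through.
    """
    accel = stiffness * (camera_yaw - weapon_yaw) - damping * weapon_vel
    weapon_vel += accel * dt       # semi-implicit Euler: velocity first...
    weapon_yaw += weapon_vel * dt  # ...then position, for stability
    return weapon_yaw, weapon_vel

# Tiny demo: the camera snaps 90 degrees; the weapon lags, overshoots, settles.
if __name__ == "__main__":
    yaw, vel, cam = 0.0, 0.0, 90.0
    for frame in range(40):
        yaw, vel = step_weapon_yaw(yaw, vel, cam, dt=1.0 / 60.0)
        if frame % 5 == 0:
            print(f"frame {frame:2d}: weapon yaw = {yaw:6.2f} deg")
```

Per-character variation would then fall out of the tuning: lower stiffness or damping reads as a looser, wobblier grip, while a small predictive offset on the target could approximate a marksman leading the turn with the barrel, which matches the per-character differences described next.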
Well, these are different for every +[691.760 --> 696.960] character, too. Sombra's holding her top-heavy submachine gun one-handed, so there's a bit more +[696.960 --> 703.040] wobble as she tries to keep it steady. McCree's revolver is lighter and he's a sharp marksman, +[703.040 --> 708.240] so he actually leads the turn a little bit with the barrel of his gun, so his aim will get to where +[708.240 --> 714.480] he's turning before his body does. Hanzo's bow drags behind, making it feel like he turns his body +[714.480 --> 720.240] first and then the bow follows after. And Winston's weapon rotates, but the rotation axis happens +[720.240 --> 725.440] closer to the top of the weapon because that's where he holds it. But perhaps the most important +[725.440 --> 730.960] thing to note here in all of this is that none of this emphasizing of character comes at the cost +[730.960 --> 736.880] of gameplay function. Each character may feel unique, but they all handle well, and they all feel +[736.880 --> 742.480] great to control. None of the personality touches are overly distracting or prioritized over the +[742.480 --> 748.640] immediacy of Overwatch's fast-paced gameplay. Like just as a really extreme example, take McCree's +[748.640 --> 755.440] combat roll. Now, this ability technically involves a quick forward roll, and the animators could +[755.520 --> 761.600] absolutely have had the camera do a full 360-degree rotation to mimic the motion of rolling forward +[761.600 --> 766.640] in first person. But, thank goodness, they ultimately decided to have the camera do this little dip +[766.640 --> 771.840] instead to suggest the feeling of rolling forward without completely disorienting the player. +[772.400 --> 776.640] And I don't want to make it sound like Blizzard is the only studio out there doing this. +[776.640 --> 782.000] Other first-person games have succeeded here too. Team Fortress 2's animations don't express +[782.000 --> 786.640] nearly as much character, but it did do a lot of the same things. Each character does have their +[786.640 --> 792.000] own first-person animation set, and some of them do get some fun little flourishes. Games like +[792.000 --> 797.600] Titanfall 2 have mostly functional gameplay animations, but they use the hands and the arms in first +[797.600 --> 801.920] person during story moments to give a better sense of physicality to your camera view, +[801.920 --> 804.960] and to give your player character some acting moments at key points. +[805.920 --> 814.000] Firewatch is a game practically built entirely around first-person interactions like these. +[821.520 --> 826.080] And then there are games like Doom, which not only use the gameplay animations to reinforce +[826.080 --> 831.360] the tone of the game and the impatient brutality of Doom Guy, but they also include some bonus +[831.360 --> 833.600] moments of first-person animated comedy. +[837.840 --> 842.640] I guess the point I'm ultimately trying to make here is that first-person animations can still +[842.640 --> 848.480] be a rich opportunity for performance. As animators, just like with every other animation we make for +[848.480 --> 853.760] a game character, we have to always be mindful of who that character is as we work. +[853.760 --> 858.640] Now most of us may not have Blizzard's budget or production flexibility, but this sort of +[858.640 --> 863.440] characterization is absolutely achievable without those luxuries.
I mean, you're going to be +[863.440 --> 866.960] animating all of these moves anyway. Why not give them an extra few minutes of thought? +[867.760 --> 872.560] So whenever you're tasked with animating a character, whether that animation is meant to be seen +[872.560 --> 878.720] close up, or far away, or even from a first-person perspective, look for every opportunity to let +[878.720 --> 883.760] character inform that performance. It only takes a little bit more time and thought to do, +[883.760 --> 889.200] and it can have a huge impact. Prioritizing gameplay doesn't have to come at the expense of +[889.200 --> 896.000] character. Thank you all for watching, and special thanks to Matt Bain, who gave a great talk about +[896.000 --> 900.960] Overwatch's first-person animation stuff at GDC. It's available for free in the GDC Vault, +[900.960 --> 905.280] or you can go check it out on their YouTube channel. And if you happen to be in the mood for more +[905.280 --> 910.560] Overwatch animation talk from me, I did make another episode earlier about Tracer and pose design, +[910.640 --> 915.040] which you can check out here. And consider subscribing if you haven't yet, because I have +[915.040 --> 919.680] got more New Frame Plus episodes in the works, but they're kind of big, so just hang in there, +[919.680 --> 923.920] I promise I'll try to not keep you waiting too long. Until then... diff --git a/transcript/allocentric_8O3FC86WjWU.txt b/transcript/allocentric_8O3FC86WjWU.txt new file mode 100644 index 0000000000000000000000000000000000000000..40c73b29c1d76359dcefdc7b525e63f128b2b7bb --- /dev/null +++ b/transcript/allocentric_8O3FC86WjWU.txt @@ -0,0 +1,432 @@ +[0.000 --> 2.000] Good morning. +[4.000 --> 6.000] Say hello. +[6.000 --> 8.000] What are you eating? +[8.000 --> 10.000] What are you eating? +[12.000 --> 14.000] Hey guys, welcome to another video. I think you're really going to like this one. +[14.000 --> 20.000] We're going to go over a bunch of different ways that Abigail, my non-verbal autistic daughter, +[20.000 --> 24.000] communicates and stay tuned to the end because we are going to be teaching her +[24.000 --> 28.000] a new word in sign language that she can use on the internet. +[28.000 --> 30.000] Abigail has a lot of different forms of communication. +[30.000 --> 34.000] She uses an iPad for communication which you'll see in a minute. +[34.000 --> 38.000] She does some modified sign language so that's different from ASL, American Sign Language. +[38.000 --> 42.000] And then she uses body language quite a bit. +[42.000 --> 46.000] A lot of her language that she uses like this body language +[46.000 --> 50.000] is just something that we learned from being around her all the time. +[50.000 --> 54.000] Of course, when she's happy, when she's sad, when she's upset about something, +[54.000 --> 58.000] she doesn't really need to communicate emotions. +[58.000 --> 62.000] And she doesn't really have the capacity to understand the need to communicate emotions, +[62.000 --> 64.000] how she's feeling. +[64.000 --> 70.000] I don't know that she necessarily understands emotions, as in maybe being able to give them a definition. +[80.000 --> 82.000] Like, this is how sad feels.
+[82.000 --> 86.000] This is how happy feelings are. Most of her communication +[86.000 --> 92.000] is done for wants and needs. +[92.000 --> 96.000] This one for example, she is signing for bathroom a lot +[96.000 --> 100.000] and she's not necessarily asking for bathroom. +[100.000 --> 102.000] She does scroll through her signs. +[102.000 --> 104.000] She had actually just gone to the bathroom. +[104.000 --> 108.000] That's more of like attention seeking. +[108.000 --> 112.000] So we really have to read what's going on around us at the time +[112.000 --> 118.000] to fully understand what she's communicating and what she's asking for. +[118.000 --> 122.000] I think it's really important to understand that nonverbal +[122.000 --> 126.000] is not necessarily a trait of autism. +[126.000 --> 132.000] Autism is an individual diagnosis, but there are comorbidities that go along +[132.000 --> 136.000] with autism, not all the time, sometimes. +[136.000 --> 138.000] Sometimes they go hand in hand, sometimes they're, you know, +[138.000 --> 140.000] some are more frequent than others. +[140.000 --> 144.000] Abigail also has a pica diagnosis, which means she will mouth and eat inedible objects. +[144.000 --> 148.000] She did a lot more of that when she was younger. +[148.000 --> 152.000] And you often see that with autism, but it does not, it's not part of autism, +[152.000 --> 156.000] if that makes sense. Same thing with her, with her communication, +[156.000 --> 160.000] or lack, you know, lack thereof of verbal communication. +[160.000 --> 166.000] She can't talk, and that could be a diagnosis of apraxia +[166.000 --> 170.000] or it could be a diagnosis of anything else, +[170.000 --> 172.000] but that's not necessarily autism. +[172.000 --> 177.000] She also has sensory processing disorder, which oftentimes goes hand in hand with autism, +[177.000 --> 182.000] but there are children and adults that have sensory processing disorder +[182.000 --> 184.000] and don't have a diagnosis of autism. +[184.000 --> 188.000] So her behaviors are also communication. +[188.000 --> 192.000] When she ran getting drinks, because she was excited she was doing a good job, +[192.000 --> 196.000] that's a behavior that is also a communication. +[196.000 --> 200.000] Abigail uses an iPad to communicate, +[200.000 --> 204.000] and we are pushing more and more use of that iPad. +[204.000 --> 208.000] She'll combine sign language with her iPad quite a bit, +[208.000 --> 211.000] but the cool thing about the iPad is that it's universal. +[211.000 --> 215.000] Anybody can understand it, because it gives her a voice, +[215.000 --> 219.000] just a natural voice that she can use in the everyday world, +[219.000 --> 224.000] so she doesn't just have to rely on her parents or caregivers to understand what she's saying +[224.000 --> 227.000] with her modified sign language or body language. +[227.000 --> 231.000] That stuff works at home and at therapy and at school, +[231.000 --> 235.000] but the iPad will give her much more access to the world. +[235.000 --> 239.000] So we really work on that in speech therapy +[239.000 --> 244.000] and just throughout the day, at home, getting her to use that more and more. +[244.000 --> 247.000] And here are some of Abigail's modified signs. +[247.000 --> 250.000] We'll just run through them real quick.
+[250.000 --> 253.000] But if you've been watching our videos for a while, +[253.000 --> 256.000] you know that we always have an app for the beep, +[256.000 --> 260.000] and one of her favorite signs that Abby does is that the app for the beep on this, +[260.000 --> 262.000] something that Summer taught her. +[262.000 --> 264.000] It's pretty cute. +[264.000 --> 266.000] Show me golf cart. +[266.000 --> 268.000] A pie. +[268.000 --> 269.000] A pie. +[269.000 --> 270.000] A pie. +[270.000 --> 272.000] Golf cart. +[272.000 --> 273.000] Like this. +[273.000 --> 274.000] Hey. +[274.000 --> 275.000] A. +[275.000 --> 282.000] You show me cereal. +[282.000 --> 283.000] Cereal. +[283.000 --> 284.000] Yeah. +[284.000 --> 285.000] Show me cracker. +[285.000 --> 287.000] That's chip. +[287.000 --> 288.000] Show me cracker. +[288.000 --> 289.000] Yeah. +[289.000 --> 291.000] Can you show me cookie? +[291.000 --> 292.000] Cookie. +[292.000 --> 295.000] What else do we know? +[295.000 --> 296.000] All done. +[296.000 --> 297.000] Show me all done. +[297.000 --> 298.000] Show me all done. +[298.000 --> 300.000] All done. +[300.000 --> 301.000] All done. +[301.000 --> 302.000] Bath. +[302.000 --> 303.000] Hey. +[303.000 --> 306.000] Can you show me bath? +[306.000 --> 307.000] Bath. +[307.000 --> 308.000] Yeah. +[308.000 --> 310.000] What do you say? +[310.000 --> 311.000] Show me help. +[311.000 --> 312.000] Do you need help? +[312.000 --> 313.000] That's music. +[313.000 --> 314.000] That's... +[314.000 --> 315.000] Okay, stop. +[315.000 --> 316.000] Hands up. +[316.000 --> 317.000] Show me help. +[317.000 --> 318.000] Show me open. +[318.000 --> 319.000] Open. +[319.000 --> 320.000] Show me break. +[320.000 --> 321.000] Break. +[321.000 --> 322.000] Break. +[322.000 --> 323.000] Wait. +[323.000 --> 324.000] Snack. +[324.000 --> 330.000] Do you want cookies or do you want a book? +[330.000 --> 331.000] Which one? +[331.000 --> 333.000] Show me on your iPad. +[333.000 --> 334.000] Nature-bound box. +[334.000 --> 335.000] Okay. +[335.000 --> 336.000] There you go. +[336.000 --> 358.000] Okay. +[379.920 --> 380.920] Right. +[380.920 --> 384.200] She'll watch toy unboxings, openings, whatever. +[384.200 --> 386.160] But she's got YouTube Kids on there, +[386.160 --> 387.320] she navigates to that pretty well. +[387.320 --> 389.760] She has Spotify with a playlist. +[389.760 --> 392.800] I'll have to post one of her playlists sometime. +[392.800 --> 395.160] Yeah, it's not just a communication device. +[395.160 --> 397.200] We want her to love her iPad. +[397.200 --> 399.600] We want her to be able to communicate with it +[399.600 --> 402.720] and also just enjoy having it. +[402.720 --> 404.520] So it's on her at all times.
+[404.520 --> 407.120] One problem we do have though is the battery runs out +[407.120 --> 410.080] super quick, because she's on it all day. +[410.400 --> 413.240] That's pretty typical for most kids. +[414.280 --> 417.360] The most important thing to me is that my daughter's happy. +[417.360 --> 420.080] And she's clearly very, very happy. +[421.280 --> 422.800] One of the keys to keeping her happy +[422.800 --> 424.760] is increasing her communication. +[424.760 --> 426.800] One of the biggest frustrations, +[426.800 --> 429.520] and where she has her angry moments and her meltdowns, +[429.520 --> 431.600] comes from an inability to communicate. +[431.600 --> 435.800] So it's our job to give her the tools that she needs +[435.800 --> 438.600] to communicate and have access to the world +[438.600 --> 440.640] and to stay happy. +[442.200 --> 444.240] Oh, +[444.240 --> 445.560] are you here? +[445.560 --> 447.240] Oh, +[447.240 --> 448.240] what's up? +[449.560 --> 450.680] You want to eat? +[450.680 --> 453.160] We'll wait for your brother in a moment to go eat, okay? +[455.160 --> 456.160] Oh, +[456.160 --> 458.360] yeah, +[458.360 --> 459.200] we are. +[459.200 --> 460.200] Me too. +[460.200 --> 464.320] Okay, so we have done this before in a video. +[464.320 --> 466.880] We taught you a sign. +[466.880 --> 469.520] Do you remember what that sign was? +[470.520 --> 472.000] Do you remember what that sign was? +[472.000 --> 473.080] See you. +[473.080 --> 474.520] I don't know. +[474.520 --> 477.200] We were at a fast food restaurant and we were traveling. +[478.120 --> 478.960] And we taught her a sign. +[478.960 --> 480.120] I know what. +[480.120 --> 481.400] I taught her this one. +[481.400 --> 482.560] Hey. +[482.560 --> 483.720] This sign's signed. +[483.720 --> 485.480] I don't remember this one. +[485.480 --> 486.800] I don't remember which one. I remember this one. +[486.800 --> 487.800] Yep. +[487.800 --> 488.640] Yep. +[488.640 --> 489.480] She did learn this. +[489.480 --> 490.320] Yep. +[490.320 --> 492.440] So we have a sign that's going to be really useful +[492.440 --> 496.000] to Abigail because she always signs for the wrong thing. +[496.520 --> 497.520] Huh? +[497.520 --> 499.360] What is this? +[499.360 --> 500.200] That is close. +[500.200 --> 502.040] It is not a cookie. +[502.040 --> 502.880] It's a donut. +[502.880 --> 506.200] And I'm going to show you how to say donut, okay? +[506.200 --> 508.680] Here, look, we're going to do, what's your preferred +[508.680 --> 509.520] signing hand? +[509.520 --> 510.440] What do you think? +[510.440 --> 511.440] I think it's her left. +[511.440 --> 512.280] Her left? +[512.280 --> 513.120] Okay. +[513.120 --> 513.800] Can you go like this? +[513.800 --> 514.640] Watch. +[514.640 --> 515.480] Watch. +[515.480 --> 516.320] Ready? +[516.320 --> 517.160] She'll live. +[517.160 --> 519.440] Look, we're going to go donut. +[520.280 --> 521.120] Donut. +[522.720 --> 523.560] Donut. +[524.560 --> 525.920] What is that? +[525.920 --> 528.280] That is a, look at me. +[528.280 --> 529.120] Donut. +[529.880 --> 531.000] Can you do it? +[533.800 --> 535.040] Donut. +[535.040 --> 536.520] Good job. +[536.520 --> 537.400] Would you like a bite? +[537.400 --> 538.240] Another donut? +[538.240 --> 539.080] Yes. +[539.080 --> 540.160] All right, there you go. +[540.160 --> 541.000] All right. +[544.120 --> 545.240] This is so good. +[545.240 --> 546.240] It's the best, right? +[546.240 --> 547.120] What's that called?
+[548.440 --> 549.560] It's not a cookie. +[549.560 --> 550.560] It's a donut. +[551.480 --> 552.320] Close. +[553.840 --> 554.840] Donut. +[554.840 --> 556.160] Ready? +[556.160 --> 557.360] Donut. +[557.360 --> 558.960] Hold your hand like that. +[558.960 --> 559.800] Donut. +[559.800 --> 562.440] So I'm just going to do less and less +[562.440 --> 563.360] hand over hand. +[563.360 --> 565.040] So I kind of just let go of her hand a little bit. +[565.040 --> 565.880] Donut. +[565.880 --> 566.720] Good job. +[566.720 --> 567.720] Show me again. +[569.000 --> 569.520] Donut. +[569.520 --> 570.880] Good job. +[570.880 --> 572.080] That was very good. +[572.080 --> 572.680] Ready? +[572.680 --> 574.280] What's that called? +[574.280 --> 575.120] What is that? +[577.840 --> 578.680] Donut. +[578.680 --> 579.760] Abigail, with her muscle control, +[579.760 --> 582.200] she has to, she has to really focus on +[582.200 --> 583.480] what her hands are doing. +[586.800 --> 588.720] You've got chocolate all over your face. +[588.720 --> 589.960] I'd say it's a really good donut. +[589.960 --> 590.640] OK, ready? +[590.640 --> 592.600] Look, we're going to make our hand like this. +[592.600 --> 594.000] Look at your hands, see it? +[594.000 --> 594.520] Like that? +[594.520 --> 595.680] OK. +[595.680 --> 600.600] We're going to go donut, so close. +[600.600 --> 602.920] Like this. +[602.920 --> 603.760] Donut. +[603.760 --> 606.760] Good job. +[606.760 --> 607.880] Bring your hand to your face. +[607.880 --> 609.480] Not your face to your hand. +[609.480 --> 611.800] Donut. +[611.800 --> 612.800] Donut. +[612.800 --> 614.600] So your hand, like this. Ready? +[614.600 --> 616.120] You do it. +[616.120 --> 618.120] Donut. +[618.120 --> 619.120] Good job. +[619.120 --> 620.120] That was really good. +[620.120 --> 621.120] That was great. +[621.120 --> 623.120] That was excellent. +[623.120 --> 627.600] I'm going to take out smaller pieces so you can do it more than once. +[627.600 --> 628.360] OK. +[628.360 --> 629.160] What is that? +[629.160 --> 629.680] Hold on a minute. +[629.680 --> 630.200] I heard you. +[630.200 --> 631.200] OK. +[633.520 --> 635.520] What do you want? +[635.520 --> 638.120] Yeah, what's that called? +[638.120 --> 640.920] Great, great, great approximation there. +[640.920 --> 641.920] Donut. +[641.920 --> 643.160] Yep, that's perfect. +[643.160 --> 646.400] Good job. +[646.400 --> 650.720] So some of you said the sign for donut is like this? +[650.720 --> 653.760] It's like a, the way they explain it on the website, +[653.760 --> 654.880] it's like a, can I see? +[654.880 --> 657.440] It's like a C. And then you're going up to your mouth like this. +[657.440 --> 658.760] Like donut. +[658.760 --> 662.160] Or there was, you made yours with both your hands, +[662.160 --> 665.120] and you did a circle, which is a big difference. +[665.120 --> 669.000] So Abby stims a lot of times with her fingers like this. +[669.000 --> 671.840] So we didn't think that would be a good way. +[671.840 --> 672.840] Right. +[672.840 --> 673.840] So that's why we chose this one. +[673.840 --> 675.840] Yes, that's why we do modified signs like that. +[675.840 --> 678.440] If you notice, like, Abby, give me a thumbs up. +[678.440 --> 680.080] It took a lot of, yep, there we go. +[680.080 --> 682.680] It took a lot of work to get her to move her hand like that. +[682.680 --> 687.920] We had to manipulate her hand for her to get her to feel what that's like.
+[687.920 --> 693.600] She does have some muscle development that's delayed in her hands. +[693.600 --> 695.720] So it's harder for her to do some of these. +[695.720 --> 696.720] Ready? +[696.720 --> 697.720] Show me. +[697.760 --> 699.760] Donut is also very hard. +[699.760 --> 701.840] She can't just look and do what we're doing. +[701.840 --> 702.840] Ready? +[702.840 --> 704.360] Show that. +[704.360 --> 705.840] Donut. +[705.840 --> 707.520] You do it. +[707.520 --> 709.040] Open up. +[709.040 --> 709.880] Donut. +[709.880 --> 711.160] Good job. +[711.160 --> 711.920] I like that. +[711.920 --> 716.280] I didn't even think about the sign for food being so much like it. +[716.280 --> 717.040] Yeah. +[717.040 --> 717.920] So she just did it. +[717.920 --> 718.920] I know. +[718.920 --> 721.520] Look, do this with your hand. +[721.520 --> 722.520] Open it. +[725.600 --> 727.280] Turn your head. +[727.280 --> 728.280] We're going to touch here. +[728.280 --> 729.280] Donut. +[729.280 --> 730.280] Ready? +[730.280 --> 732.280] That was good. +[732.280 --> 733.280] Donut. +[733.280 --> 734.280] Good job. +[734.280 --> 735.280] Good job. +[735.280 --> 736.280] Small bite. +[736.280 --> 737.280] Save my fingers. +[737.280 --> 738.280] Hold on. +[738.280 --> 739.280] You're trying so hard. +[739.280 --> 740.280] Donut. +[740.280 --> 741.280] Good job. +[741.280 --> 745.280] Now there's no motivation, right? +[745.280 --> 756.280] Ah, that's not a cookie. +[756.280 --> 757.280] What is that called? +[757.280 --> 758.280] No. +[758.280 --> 759.280] What is that called? +[759.280 --> 760.280] Donut. +[760.280 --> 761.280] Good job. +[761.280 --> 762.280] Listen, it's all gone. +[762.280 --> 763.280] All gone. +[763.280 --> 764.280] She's like, no, it's not. +[764.280 --> 765.280] I know there's another one in the bag. +[765.280 --> 766.280] You guys are lying. +[766.280 --> 767.280] I don't know. +[767.280 --> 768.280] At least that's all we're going to have tonight. +[768.280 --> 769.280] Okay, you ready? +[769.280 --> 770.280] What did we just eat? +[770.280 --> 771.280] It was so close. +[771.280 --> 772.280] I like how your head stayed up. +[772.280 --> 773.280] I like how you're doing your thumb. +[773.280 --> 774.280] Because that's different than eat. +[774.280 --> 786.280] Donut. Donut. Donut. +[786.280 --> 787.280] Show me again. +[787.280 --> 788.280] Show me again. +[788.280 --> 789.280] Donut. +[789.280 --> 790.280] Donut. +[790.280 --> 791.280] I like it. +[791.280 --> 792.280] Good work. +[792.280 --> 793.280] Okay, we'll work on that. +[793.280 --> 794.280] So we'll just continue to use that every time that we get a donut. +[794.280 --> 795.280] Yeah. +[795.280 --> 796.280] It's like every day. +[796.280 --> 797.280] You can do it every day. +[797.280 --> 798.280] You can do it every day. +[798.280 --> 799.280] Hey, good job. +[799.280 --> 800.280] I love you. +[800.280 --> 801.280] I'm so proud of you. +[801.280 --> 802.280] You do the greatest. +[802.280 --> 803.280] That's my biggest challenge. +[803.280 --> 804.280] Can I have a kiss? +[804.280 --> 805.280] You give me a kiss. +[805.280 --> 806.280] You give me a kiss. +[806.280 --> 807.280] Yeah. +[807.280 --> 808.280] I love you.
+[808.280 --> 809.280] I love you. +[809.280 --> 810.280] I'm so proud of you. +[810.280 --> 815.280] You give me a kiss. You give me a kiss. +[815.280 --> 816.280] Thank you. +[816.280 --> 817.280] We're all done. +[817.280 --> 818.280] You can say bye to everybody. +[818.280 --> 819.280] Say thanks for watching. +[819.280 --> 820.280] Bye guys. +[820.280 --> 844.280] Say... Say... +[845.280 --> 850.280] I love you. +[850.280 --> 853.280] You love yourself. +[853.280 --> 854.280] I know. +[854.280 --> 855.280] You did a good job. +[855.280 --> 856.280] Such a nice job. +[856.280 --> 858.280] Are you all done? diff --git a/transcript/allocentric_CISLJ2xL7UY.txt b/transcript/allocentric_CISLJ2xL7UY.txt new file mode 100644 index 0000000000000000000000000000000000000000..028b4f69420f069700a78706573981e565453a23 --- /dev/null +++ b/transcript/allocentric_CISLJ2xL7UY.txt @@ -0,0 +1,602 @@ +[0.000 --> 1.640] Thank you, Kate. +[1.640 --> 3.400] I'll start showing my screen. +[3.400 --> 4.200] Can everyone hear me? +[4.200 --> 6.600] OK, my audio sometimes is a bit... +[6.600 --> 7.600] OK. +[7.600 --> 9.800] OK, and I will show my screen. +[12.800 --> 15.800] Can everyone see my screen? +[15.800 --> 16.300] Yes. +[16.300 --> 17.100] Yes, that's great. +[17.100 --> 18.100] Thank you. +[18.100 --> 22.280] OK, so I'm going to start by saying, +[22.280 --> 26.000] I think that the pandemic changed the way that we live and the way +[26.000 --> 26.600] we work, +[26.600 --> 28.360] the way we interact with each other. +[28.360 --> 31.240] For many of us, it also changed the way that we experience space, +[31.240 --> 33.360] both locally and globally. +[33.360 --> 36.760] And core to this is the awareness of your body, the way that your body +[36.760 --> 41.560] situates itself in space and how we feel our body in many different ways +[41.560 --> 45.120] and the way that you inhabit and move in space. +[45.120 --> 50.160] So before we begin, I'd like to take a moment and allow you to situate +[50.160 --> 54.160] yourself in space, the space that you're inhabiting right now, +[54.160 --> 55.920] allow you to feel your body. +[55.920 --> 59.120] Most likely, everyone here is sitting in front of a screen +[59.120 --> 62.480] with your back kind of against the back of a chair. +[62.480 --> 69.360] So I'd like to ask you to close your eyes and feel your feet on the ground. +[69.360 --> 72.720] Feel your back on the back of the chair. +[72.720 --> 75.760] Breathe in and out. +[75.760 --> 80.600] First, feel the tip of your toes, sensation traveling through your feet +[80.600 --> 84.960] into your heels, up into your ankles, +[85.000 --> 90.600] further up through your calves, your knees, your thighs, +[90.600 --> 93.880] your hips, into your stomach.
+[93.880 --> 98.920] Feel your stomach expanding and contracting with every breath you draw, +[98.920 --> 103.560] your chest filling up with air and emptying itself +[103.560 --> 106.880] as you inhale and exhale. +[106.880 --> 113.840] Feel your fingers into your wrists, up your arms into your shoulders. +[113.840 --> 122.440] Draw your shoulders up as you inhale and then drop them down as you exhale. +[122.440 --> 127.840] Feel the nape of your neck, sensation radiating up into the back of your head. +[127.840 --> 131.160] Feel your eyelids on your eyes, the tip of your nose, +[131.160 --> 133.840] air flowing in and out of your nose. +[133.840 --> 136.280] Now visualize the space around you, +[136.280 --> 142.280] its air flowing into your body, your breath exhaling back into the space. +[142.320 --> 145.200] Picture the extents of the space around you, +[145.200 --> 148.400] the height of the ceiling above you, +[148.400 --> 152.240] the distance to the walls in front of and behind you, +[152.240 --> 155.840] to your left and to your right. +[155.840 --> 159.320] Perhaps you can conjure up images of the textures around you, +[159.320 --> 161.320] the tactile properties that they have, +[161.320 --> 166.200] and what these feel like if you were to run your fingers over them. +[166.200 --> 168.680] Now zoom out. +[168.720 --> 172.720] Think of the room you're in, in the house you are in. +[172.720 --> 176.400] Think of the house as it sits on your street, +[176.400 --> 179.320] that street in relation to the city. +[182.400 --> 185.000] Open your eyes. +[185.000 --> 188.360] While you were breathing, you were focused on yourself, +[188.360 --> 193.840] your view pointed inwards, aware of your body, your body in space. +[193.840 --> 196.680] It's likely that at some point you switched your mental view +[196.680 --> 199.720] from a first person to a third person view. +[199.720 --> 202.280] Or you might have held in mind simultaneously +[202.280 --> 205.240] a first person and a third person point of view, +[205.240 --> 207.880] creating a tension between the two. +[207.880 --> 212.720] This tension, the tension between your body as felt within and in space +[212.720 --> 216.920] and your body as calculated from the outside and located in space, +[216.920 --> 219.200] is a tension of spatial experience, +[219.200 --> 223.360] both being a body in space and having a body in space. +[223.360 --> 225.240] This tension or threshold +[225.280 --> 227.240] underlies a lot of my thinking +[227.240 --> 229.680] and the experimental performance, An Allocentric View, +[229.680 --> 232.400] that I'm going to talk about today. +[232.400 --> 235.280] Thresholds, both as literal spatial thresholds +[235.280 --> 238.240] and as abstract notions, are interesting to me +[238.240 --> 243.680] as they are connectors and separators, spaces in and of themselves. +[243.680 --> 247.920] Moving through thresholds means going from one space to another, +[247.920 --> 249.760] a change happening. +[249.760 --> 252.000] I will in a sense talk about thresholds +[252.040 --> 254.680] as integration and dissociation +[254.680 --> 258.160] and introduce views and thinking from three different angles: +[258.160 --> 260.920] cognition, architecture and dance. +[260.920 --> 264.040] At times, I will refer directly from one to the other, +[264.040 --> 266.720] at times parallels across the three are implied.
+[268.160 --> 271.440] To do all of this, I will use the experimental performance +[271.440 --> 274.560] I designed with my colleagues Stephen Gage and Alexander Whitley +[274.560 --> 277.560] at the Bartlett School of Architecture, right before the pandemic. +[278.520 --> 281.160] For this, we designed a labyrinth on a floor, +[281.200 --> 284.440] and a camera capturing a third person point of view, +[284.440 --> 287.480] and VR goggles showing an oblique view down +[287.480 --> 289.560] as one moves in the space. +[289.560 --> 291.800] We were interested in questions such as: +[291.800 --> 295.360] what experience might one have physically navigating space +[295.360 --> 297.120] while visually seeing oneself +[297.120 --> 299.400] through the eyes of a notional other? +[299.400 --> 301.520] What is it like to be watching oneself, +[301.520 --> 303.920] to become the observer and the observed, +[303.920 --> 305.560] agent and body? +[305.560 --> 308.440] What kind of liminal embodiment might arise? +[308.480 --> 311.520] What is it like to be the tension, the threshold, +[311.520 --> 314.120] being and having a body emphasized? +[314.120 --> 316.480] In essence, rather than run experiments, +[316.480 --> 318.640] we designed an enactment of ideas +[318.640 --> 320.520] that one can inhabit and experience, +[320.520 --> 323.000] dissociation and integration as threshold. +[325.040 --> 327.600] To form this, we needed to design a space. +[327.600 --> 329.760] So why did we use a labyrinth? +[329.760 --> 332.320] A labyrinth is a form of a complex journey +[332.320 --> 335.520] dating back to Minoan times that has lots of turns. +[335.520 --> 337.840] We thought that these might be difficult to navigate +[337.840 --> 340.640] with one's world and body view modified. +[340.640 --> 343.520] It might bring the mentioned tension to the fore. +[343.520 --> 345.520] There are also mystical associations. +[345.520 --> 348.440] The person walking a labyrinth is being observed +[348.440 --> 352.320] by a divine third party who becomes one with the pilgrim. +[352.320 --> 355.600] In a sense, a first person and third person experience +[355.600 --> 357.760] in one, a tension threshold. +[358.840 --> 361.200] Our aim was to use an immersive setup +[361.200 --> 364.440] as a way of reflecting on knowledge from different disciplines, +[364.440 --> 366.440] cognitive science, architecture, dance, +[366.440 --> 368.200] through embodied experience. +[368.200 --> 371.320] To here paraphrase the choreographer Steve Paxton, +[371.320 --> 374.440] we wanted to explore some of the physical possibilities, +[374.440 --> 378.080] to refocus the focusing mind, time, space, gravity, +[378.080 --> 379.760] opening up the creativity. +[381.080 --> 383.360] Furthermore, labyrinths are also historically +[383.360 --> 385.840] intimately tied to movement and dance, +[385.840 --> 388.640] an important aspect within our enactment. +[388.640 --> 391.120] As history has it, Ariadne's dance floor +[391.120 --> 393.720] was the prototype that got Daedalus the commission +[393.720 --> 395.560] to build the labyrinth at Knossos. +[396.720 --> 401.440] Bringing in a simple foundation from cognitive science, +[401.440 --> 404.720] we can ask what kind of representations of space and body +[404.720 --> 406.720] are needed to navigate the world, +[406.720 --> 409.200] and how might these representations be changed +[409.200 --> 410.680] in the enactment?
+[410.680 --> 413.200] Well, of course, individual differences +[413.200 --> 416.160] make experience personal for all of us. +[416.160 --> 418.680] As a basis for internal representation of space +[418.680 --> 420.280] and ourselves within it, +[420.280 --> 422.960] the brain's cognitive map provides a framework +[422.960 --> 425.480] for spatial experience that is filled with purpose +[425.520 --> 428.080] and is filled with personal experience. +[428.080 --> 430.600] The cognitive map is the brain's spatial model +[430.600 --> 433.640] of spatial relationships of the external world +[433.640 --> 435.640] in relation to itself. +[435.640 --> 437.680] The cognitive map underlies the ability +[437.680 --> 441.080] to successfully navigate and perform actions in space, +[441.080 --> 443.400] and it charts both what is in the world +[443.400 --> 445.680] as well as what happens there. +[445.680 --> 448.240] Put differently, and expressed in other terms, +[448.240 --> 450.720] geometry and phenomenology. +[450.720 --> 453.680] Cohesive spatial representations are established +[453.680 --> 458.120] by integrating both egocentric, first person, +[458.120 --> 461.120] and allocentric, third person, reference frames, +[461.120 --> 463.640] combining many egocentric positions +[463.640 --> 466.400] into an allocentric overview and so constructing +[466.400 --> 468.120] an internal model of the world. +[469.360 --> 472.200] An egocentric representation is where the location +[472.200 --> 475.640] and orientation of objects are relative to your body. +[475.640 --> 478.400] An allocentric representation is where location +[478.400 --> 481.840] and orientation are constructed with respect to other objects +[481.840 --> 485.320] and environmental features, independent of your body. +[485.320 --> 488.760] The term allocentric was adopted in our experimental performance. +[488.760 --> 491.240] However, it must be said that a truly allocentric view +[491.240 --> 493.120] is actually not really possible, +[493.120 --> 495.840] as no one view alone can be allocentric, +[495.840 --> 498.320] and allocentric is observer independent. +[500.640 --> 502.680] The cognitive map I mentioned is constructed +[502.680 --> 504.800] in the wider hippocampal network, +[504.800 --> 506.600] where activity in the posterior, +[506.600 --> 508.920] the backside of the brain, hippocampus +[508.920 --> 511.440] is sensitive to distances along the path +[511.440 --> 514.120] and therefore indicates a more egocentric role. +[514.120 --> 516.520] Activity in the adjacent entorhinal cortex +[516.520 --> 518.840] is correlated with Euclidean distance, +[518.840 --> 521.520] or a vector to a goal, and is therefore directed +[521.520 --> 523.640] at a more allocentric spatial parsing. +[525.240 --> 527.720] So what kind of input do our bodies rely on +[527.720 --> 529.760] to construct the cognitive map? +[529.760 --> 534.520] Primary modalities are, in sighted individuals, vision and movement, +[534.520 --> 536.440] both as translation through space +[536.440 --> 539.440] and as proprioceptive movement of the body. +[539.440 --> 541.920] Relevant to my interest, an important process +[541.920 --> 543.640] bringing together vision and movement +[543.640 --> 545.320] for cognitive mapping and navigation +[545.320 --> 547.360] is known as path integration.
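As a minimal illustration of the two reference frames just defined, here is a short Python sketch. The helper name and the 2D pose convention are assumptions made for the example, not anything taken from the talk's experiments.

    import math

    def egocentric_to_allocentric(agent_x, agent_y, heading, ahead, left):
        # Egocentric input: 'ahead' metres in front of the body, 'left' metres
        # to its left. Rotate that body-relative offset by the agent's heading
        # (radians, counterclockwise from the world x-axis), then translate by
        # the agent's world position to get allocentric coordinates.
        c, s = math.cos(heading), math.sin(heading)
        return (agent_x + ahead * c - left * s,
                agent_y + ahead * s + left * c)

    # An object 2 m straight ahead of an agent at (5, 3) facing "north"
    # (heading = pi/2) sits at roughly the allocentric point (5, 5).
    print(egocentric_to_allocentric(5.0, 3.0, math.pi / 2, 2.0, 0.0))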
+[547.360 --> 550.520] Path integration combines egocentric information +[550.520 --> 552.640] from visual feedback and idiothetic, +[552.640 --> 555.200] that is self-motion, cues from movement +[555.200 --> 557.880] into an allocentric representation. +[557.880 --> 559.840] Path integration helps to construct +[559.840 --> 562.320] an always current spatial representation, +[562.320 --> 565.240] as location is continually and dynamically updated +[565.240 --> 567.040] by vision and movement feedback +[567.040 --> 568.800] as we travel around the world. +[569.640 --> 572.760] The specifics of resources available at any given time, +[572.760 --> 574.320] as well as individual differences, +[574.320 --> 576.280] for example, in terms of background, +[576.280 --> 578.000] such as being an architect or dancer, +[579.000 --> 581.240] influence when and how egocentric +[581.240 --> 584.120] or allocentric reference frames dominate. +[584.120 --> 585.360] Sorry about that. +[587.640 --> 589.680] There are interesting implications +[589.680 --> 591.760] in terms of spatial ability. +[591.760 --> 593.640] Spatial ability is not innately fixed, +[593.640 --> 594.840] but it is trainable. +[594.840 --> 597.040] By using one's brain in specific ways, +[597.040 --> 600.600] connectivity and skill sets can be enhanced or altered. +[600.600 --> 603.280] Architects or dancers have trained themselves +[603.280 --> 606.600] to think about space very differently, in different ways, +[606.600 --> 608.680] and this might in turn influence +[608.680 --> 611.560] how they re-experience a space. +[611.560 --> 613.640] As an architect, I'm able to switch views +[613.640 --> 615.160] of the world frequently. +[615.160 --> 617.160] I'm able to mentally rotate the world +[617.160 --> 619.600] or take different perspectives easily. +[619.600 --> 621.320] It is possible that each of you +[621.320 --> 623.640] constructed different views in the exercise +[623.640 --> 625.280] we started with. +[625.280 --> 628.840] To me, space is both me and mine, +[628.840 --> 630.400] and recalling William James, +[630.400 --> 633.240] the line or threshold is difficult to draw +[633.240 --> 636.280] as I negotiate and often overlay both. +[636.280 --> 638.840] I hold the first person and third person view +[638.840 --> 640.960] of the world in my mind quite easily. +[643.040 --> 646.600] Space as me and mine requires spatial representation +[646.600 --> 649.040] as a foundation for action and thinking, +[649.040 --> 651.040] and operations like mental rotation +[651.040 --> 653.800] and perspective taking are here key. +[653.800 --> 656.440] Mental rotation abilities allow people +[656.440 --> 658.800] to hold objects in mind and rotate them +[658.800 --> 662.040] so that you're able to see them from many different angles. +[662.040 --> 664.600] Perspective taking allows dynamic shifts +[664.600 --> 668.040] in one's imagination to inhabit specific positions +[668.040 --> 670.040] within a scene at will. +[670.040 --> 674.400] Skilled navigators employ these, for example, also in map reading. +[674.400 --> 676.200] As an architect, I'm fairly good at both +[676.200 --> 678.320] and I don't even need to think about performing +[678.320 --> 679.560] these operations.
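Path integration itself can be caricatured as dead reckoning. Below is a minimal Python sketch, assuming idealized noise-free self-motion cues; real idiothetic cues drift, which is why the visual correction discussed above matters.

    import math

    def path_integrate(start_x, start_y, start_heading, motion_cues):
        # Fold a stream of idiothetic cues (turn angle in radians, distance
        # moved) into a continually updated allocentric position and heading.
        x, y, heading = start_x, start_y, start_heading
        for turn, distance in motion_cues:
            heading += turn                    # rotation cue updates heading
            x += distance * math.cos(heading)  # translation cue moves the body
            y += distance * math.sin(heading)
        return x, y, heading

    # Walk 2 m, turn 90 degrees left, walk 2 m: end up offset (2, 2) from start.
    print(path_integrate(0.0, 0.0, 0.0, [(0.0, 2.0), (math.pi / 2, 2.0)]))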
+[683.800 --> 688.800] Our allocentric view setup would take ideas +[691.480 --> 694.440] of mentally rotating or taking perspectives +[694.440 --> 697.440] into an embodied realm and allow a literal enactment +[697.440 --> 700.040] of reflecting on one's own functioning, +[700.040 --> 703.360] but entails shifting the perspective of the same agent +[703.360 --> 706.000] rather than reifying different internal agents +[706.000 --> 708.200] or selves regulating each other, +[708.200 --> 710.440] as was suggested by Arthur Widera, +[710.440 --> 712.400] responding to the William James quote +[712.400 --> 714.200] and his position that you saw earlier +[714.200 --> 717.000] on the duplex nature of understanding oneself +[717.000 --> 719.760] from within and from the outside. +[719.760 --> 722.000] In essence, the aim was to experience +[722.000 --> 725.240] how, rather than splitting oneself into object and agent, +[725.240 --> 728.920] one was simultaneously being agent and object, +[728.920 --> 731.600] the interest lying as much in being an agent +[731.600 --> 735.720] reflecting on experience as in executing actions. +[735.720 --> 738.120] An architect's or choreographer's process +[738.120 --> 740.960] is an inversion of sorts of the cognitive construction +[740.960 --> 744.360] process of going from egocentric to allocentric. +[744.360 --> 747.120] However, even when going from an allocentric overview +[747.120 --> 750.040] to egocentric experience in the planning process, +[750.040 --> 752.120] architects and also choreographers +[752.120 --> 754.680] often continuously loop between both. +[757.800 --> 760.040] At the heart of many architectural queries +[760.040 --> 762.040] is this gap between first person +[762.040 --> 764.400] and third person experience. +[764.400 --> 767.840] Architects notionally inhabit an external viewpoint, +[767.840 --> 769.640] looking down and feeling down +[769.640 --> 772.080] into notional buildings in the design process, +[772.080 --> 774.600] and operating from an allocentric understanding, +[774.600 --> 778.880] quasi-uncentric, to construct egocentric experiences. +[778.880 --> 782.400] In doing so, we inhabit both viewpoints, +[782.400 --> 785.280] are both agent and object, both me and mine, +[785.280 --> 787.840] and we shift the perspective of the same agent +[787.840 --> 791.200] rather than reify different internal agents or selves. +[793.400 --> 795.880] This ability to switch from third person +[795.880 --> 798.760] to first person is decisive in design thinking. +[798.760 --> 801.840] But as humans experience architecture +[801.840 --> 804.360] not only from a static view, but dynamically, +[804.360 --> 808.720] thinking about and designing for movement as a link is key. +[808.720 --> 810.240] The Swiss architect Le Corbusier +[810.240 --> 813.720] described experiencing space in the following way: +[813.720 --> 816.240] architecture is appreciated while on the move, +[816.240 --> 818.080] with one's feet, while walking, +[818.080 --> 820.440] moving from one place to another. +[820.440 --> 822.320] A true architectural promenade +[822.320 --> 824.200] offers constantly changing views, +[824.200 --> 826.520] unexpected, at times surprising. +[829.480 --> 832.480] Corbusier developed this idea inspired by himself +[832.480 --> 834.880] moving through the Propylaea in Athens, +[834.880 --> 837.360] and then he built his first architectural promenade +[837.360 --> 839.440] in the famous Villa Savoye.
+[839.440 --> 842.320] His description of an experience of somebody +[842.320 --> 843.600] moving through a building, +[843.600 --> 846.040] and the way he used this in his design, +[846.040 --> 848.680] is different from what we otherwise often find +[848.680 --> 850.800] in processes of designing a building. +[852.400 --> 854.640] In the process of designing a building, +[854.640 --> 857.520] we often find that concepts and tools are being used +[857.520 --> 858.880] that do not place the body +[858.880 --> 861.560] and the importance of movement at the center, +[861.560 --> 864.080] and design ideas are frequently developed in plan, +[864.080 --> 867.120] or using simple overall concepts of arrangements, +[867.120 --> 871.560] referred to by the Beaux-Arts term of an organizing parti pris, +[871.560 --> 872.920] or parti. +[872.920 --> 875.400] A parti describes a relationship of parts +[875.400 --> 878.120] that is notionally independent of the observer +[878.120 --> 879.680] experiencing on the ground, +[879.680 --> 883.440] and thus third person in terms of spatial reference frames. +[883.440 --> 887.480] As an initial idea, it is somewhat allocentric +[887.480 --> 890.400] and has yet to consider first person experience. +[890.400 --> 892.400] Put differently, it is conceptual, +[892.400 --> 895.080] but that does not yet address the perceptual, +[895.080 --> 897.440] which, as much as Corbusier and the modern movement +[897.440 --> 898.480] have been criticized, +[898.480 --> 900.560] their architecture often did achieve, +[900.560 --> 903.160] even where their design method did not. +[904.840 --> 907.080] Buildings that do not, in the development +[907.080 --> 909.200] of a simple idea or a parti, +[909.200 --> 912.680] consider dynamic interpretation can remain static. +[912.680 --> 915.480] Spaces can lack fluidity and movement capabilities, +[915.480 --> 917.800] and wayfinding in buildings is impeded. +[917.800 --> 920.480] Building experience can be diminished. +[920.480 --> 922.480] While buildings such as the Seattle library, +[922.480 --> 924.800] which you see here, designed by OMA's +[924.800 --> 929.080] diagrammatic method, include a range of interesting spaces +[929.080 --> 930.840] for dwelling in, they are difficult +[930.840 --> 933.440] and often not enjoyable to navigate. +[933.440 --> 936.080] The Seattle Public Library has indeed required a lot +[936.080 --> 939.640] of post-occupancy analysis and wayfinding improvement. +[942.880 --> 945.160] A split between conception and perception +[945.160 --> 947.760] has also brought about understandings of architecture +[947.760 --> 949.840] as networks of relationships +[949.840 --> 952.440] that allow architects and architectural theoreticians +[952.440 --> 955.120] to describe architecture as a system. +[955.120 --> 957.640] The philosopher Vilém Flusser here suggests +[957.640 --> 959.960] the architect does not design objects anymore, +[959.960 --> 961.240] but relations. +[961.240 --> 963.320] Instead of thinking in geometric terms, +[963.320 --> 966.880] the architect has to project networks of equations. +[966.880 --> 969.280] Effectively, what such avenues have in common +[969.280 --> 970.840] is a shift in viewpoint +[970.840 --> 973.880] and an explicit dissociation of +[973.880 --> 977.480] experiential composites, a divorce of conception +[977.480 --> 979.240] from perception.
+[980.480 --> 984.240] Overall, critiquing this conceptual approach to architecture, +[984.240 --> 986.760] the philosopher Bernard Bormer suggests +[986.760 --> 988.720] that buildings and spaces in reality +[988.720 --> 991.440] are not freely and effortlessly available. +[991.440 --> 993.280] They have to be walked through. +[993.280 --> 995.960] Bormer argues for an integration of the perceptual +[995.960 --> 998.480] with the conceptual, architecture designed +[998.480 --> 1000.680] to achieve atmosphere. +[1000.680 --> 1003.520] Architects like Peter Zumthor are at the forefront +[1003.520 --> 1006.280] of achieving atmospheric architecture. +[1006.280 --> 1008.440] Architects like him often do this +[1008.440 --> 1010.720] by inferring a first person experience +[1010.720 --> 1013.120] in a third person spatial representation, +[1013.120 --> 1014.440] such as a plan. +[1014.440 --> 1018.040] This way they can walk themselves around a hypothetical building +[1018.040 --> 1020.840] after constructing it as a three-dimensional entity +[1020.840 --> 1022.280] in their mind, +[1022.280 --> 1024.680] drawing on processes like perspective taking, +[1024.680 --> 1029.120] mental rotation, or perhaps an intuitive understanding +[1029.120 --> 1031.600] of processes such as path integration. +[1031.600 --> 1034.080] Architects such as Zumthor or Corbusier +[1034.080 --> 1036.840] switch effortlessly. +[1036.840 --> 1039.800] Architects like this still draw on conceptual tools, +[1039.800 --> 1042.320] such as simple sketches or partis, +[1042.320 --> 1044.800] in the stage of design ideation, +[1044.800 --> 1047.200] but have the ability to, even in this stage, +[1047.200 --> 1050.000] already integrate first person experience. +[1052.760 --> 1054.600] Of course, architecture is not alone +[1054.600 --> 1057.040] in the ability of switching and simultaneously +[1057.040 --> 1059.240] inhabiting space through movement. +[1059.240 --> 1061.440] Indeed, most architects will have fairly conceptual +[1061.440 --> 1063.600] internal understandings of this in mind +[1063.600 --> 1066.040] when designing space for moving bodies. +[1066.040 --> 1069.360] Dancers and choreographers, as a contrast, approach this +[1069.360 --> 1073.040] from a rather more perceptual perspective, +[1073.040 --> 1075.680] designing movement of bodies itself. +[1075.680 --> 1078.680] Working from or switching between different perspectives +[1078.680 --> 1081.080] is a strong feature of dance practice, +[1081.080 --> 1084.720] both in training and in the choreographic process. +[1084.720 --> 1087.400] A choreographer often makes perspectival shifts +[1087.400 --> 1090.680] by stepping in and out of the choreography +[1090.680 --> 1093.200] in order to understand both the shape and effect +[1093.200 --> 1095.360] from the outside and the feeling and functioning +[1095.360 --> 1096.360] from the inside. +[1098.920 --> 1101.840] Visual and self-motion feedback is an important part +[1101.840 --> 1104.800] of dance practice, with mirrors traditionally used +[1104.800 --> 1108.280] as a way of giving a dancer an outside eye on their movement, +[1108.280 --> 1110.600] as they're moving and switching views.
+[1110.600 --> 1112.720] This helps in achieving a desired aesthetic, +[1112.720 --> 1114.720] such as the clean lines and alignment +[1114.720 --> 1116.200] and postures of the body, +[1116.200 --> 1119.280] associating an image of their body in movement +[1119.280 --> 1121.000] with the feeling they're experiencing +[1121.000 --> 1122.400] as they're executing it. +[1124.480 --> 1126.440] The kinesphere, for example, +[1126.440 --> 1128.680] is a conceptual way of understanding space +[1128.680 --> 1130.800] around the body that helps them do this, +[1130.800 --> 1134.280] in order to visualize themselves in different ways +[1134.280 --> 1138.200] as they are practicing or choreographing dance movements. +[1138.200 --> 1141.440] It is composed of personal and peripersonal space +[1141.440 --> 1143.520] and integrates all the movement potential, +[1143.520 --> 1145.000] spatial planes and connections +[1145.000 --> 1147.840] that are available in this process. +[1147.840 --> 1151.440] Rudolf Laban, who is the inventor of the kinesphere, +[1151.440 --> 1153.960] explained it as the sphere around the body +[1153.960 --> 1157.360] whose periphery can be reached by easily extended limbs +[1157.360 --> 1160.000] without stepping away from that place +[1160.000 --> 1161.440] which is the point of support +[1161.440 --> 1163.080] when standing on one foot. +[1177.840 --> 1201.840] I'm sorry, I'm not sure why this isn't moving up. +[1201.840 --> 1203.240] Here we go. +[1203.240 --> 1205.120] Expanding on the idea of the kinesphere, +[1205.120 --> 1207.120] choreographers such as William Forsythe, +[1207.120 --> 1209.960] who you just saw, have worked with types of body image +[1209.960 --> 1212.040] that exist in the imagination +[1212.040 --> 1214.560] as a way of providing creative tools for dancers +[1214.560 --> 1217.080] to work with while improvising or +[1217.080 --> 1219.040] creating movement material. +[1219.040 --> 1222.240] For example, a dancer might imagine one of their +[1222.240 --> 1224.320] previous positions or movements, +[1224.320 --> 1226.600] freeze it in space and use this as a basis +[1226.600 --> 1228.120] for generating new movement. +[1228.120 --> 1230.640] By moving around the space, the imagined body +[1230.640 --> 1232.200] is occupied. +[1234.320 --> 1246.960] Equally, this can be done by holding in mind other volumes +[1246.960 --> 1248.640] in the space and moving in relation +[1248.640 --> 1252.400] to them. Mental rotation and perspective taking, again, +[1252.400 --> 1253.280] are key. +[1253.280 --> 1256.040] Like any skill, the ability to hold images in mind +[1256.040 --> 1257.680] while performing physical actions +[1257.680 --> 1260.160] requires a substantial amount of practice, +[1260.160 --> 1263.560] but becomes a powerful tool for a dancer or an architect +[1263.560 --> 1267.520] once acquired, enabling them to execute complex cognitive tasks +[1267.520 --> 1269.840] as they're dancing or designing. +[1269.840 --> 1271.840] In this, they hold multiple representations +[1271.840 --> 1274.000] of themselves in space and in mind. +[1280.480 --> 1282.720] Stepping outside of themselves, an understanding +[1282.720 --> 1285.720] of their bodies can also be explored using techniques +[1285.720 --> 1287.880] such as contact improvisation, where +[1287.880 --> 1291.200] understanding of oneself through the other is formed.
+[1291.200 --> 1294.320] In this, dancers are both object and agent at once, +[1294.320 --> 1296.960] blurring lines of me and mine to explore +[1296.960 --> 1299.760] some of the physical possibilities, +[1299.760 --> 1301.880] linking in different sensory modalities +[1301.880 --> 1303.880] in different ways, linking conception +[1303.880 --> 1306.600] of movement to sensory execution. +[1306.600 --> 1309.640] In contact improvisation, this means modifying action +[1309.640 --> 1312.600] in response to tactile kinesthetic information +[1312.600 --> 1317.400] through a contact point with another person's body. +[1317.400 --> 1320.000] So with all of this, what did we do? +[1320.040 --> 1322.160] Using ideas and knowledge from architecture, +[1322.160 --> 1325.200] dance and cognition, we wanted to create a setting +[1325.200 --> 1327.200] of both being and having a body +[1327.200 --> 1329.640] and see how a skilled dancer, in the first instance +[1329.640 --> 1331.400] Tier, would navigate our labyrinth +[1331.400 --> 1334.160] with third person vision +[1334.160 --> 1335.680] and first person movement. +[1340.600 --> 1343.720] The labyrinth in the setup was seen in two different projections +[1343.720 --> 1344.480] in the headset. +[1344.480 --> 1346.880] In one, the dancer observed herself +[1346.880 --> 1349.200] in an axonometric view, to reflect +[1349.200 --> 1351.480] on the geometry of space per se, +[1351.480 --> 1353.960] in the other view, a perspectival view, +[1353.960 --> 1357.600] to reflect on the way the human visual system sees space. +[1357.600 --> 1360.720] In the axonometric view, while the space was viewed +[1360.720 --> 1363.400] with true measurement, a vertical distortion of the body +[1363.400 --> 1366.920] was seen when she moved backwards from the picture plane. +[1366.920 --> 1370.120] This was purely because of technical reasons. +[1371.680 --> 1374.800] In the first instance, we asked Tier to travel the labyrinth +[1374.800 --> 1377.240] with a normal view, without the VR goggles, +[1377.240 --> 1379.600] and then after that using the VR goggles +[1379.600 --> 1381.800] with a third person point of view. +[1384.240 --> 1386.000] She then navigated seeing herself +[1386.000 --> 1387.840] from a third person point of view, +[1387.840 --> 1389.640] first with the normal perspectival view, +[1389.640 --> 1393.320] then with this kind of quasi-axonometric view of space. +[1393.320 --> 1395.040] In both conditions, the time spent +[1395.040 --> 1398.720] negotiating the labyrinth stabilized after a while, +[1398.720 --> 1401.080] with a shorter time taken to navigate in the perspectival view +[1401.080 --> 1403.600] than in the axonometric view. +[1403.600 --> 1406.560] In following conversations, Tier described the sense +[1406.560 --> 1408.640] of being in the perspectival view as normal, +[1408.640 --> 1411.760] her body, for her, seeming to remain the same size, +[1411.760 --> 1413.880] although visually it was diminishing +[1413.880 --> 1416.160] the further she moved away from the camera. +[1417.160 --> 1420.520] In both views, there were errors seen on the curving pathways +[1420.520 --> 1423.400] and deviations often from the line of the path. +[1423.400 --> 1425.920] This type of error became more +[1425.920 --> 1429.480] frequent when Tier moved quickly; sharp turns were difficult +[1429.480 --> 1433.880] and slow, and she used, as she says, her feet like arrows.
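The two headset projections contrast parallel and perspective viewing. Here is a minimal Python sketch of the difference, assuming a toy camera at the origin looking down the z-axis; it is illustrative only, not the rendering code used in the performance.

    def perspective_project(x, y, z, focal=1.0):
        # Pinhole perspective: image size shrinks with distance, which is why
        # Tier's body visually diminished as she moved away from the camera.
        return (focal * x / z, focal * y / z)

    def axonometric_project(x, y, z):
        # Parallel projection: true measurement is preserved at any depth
        # (a real axonometric view also applies a fixed rotation first).
        return (x, y)

    # The same 1 m offset, seen at 2 m and at 10 m away:
    print(perspective_project(1.0, 0.0, 2.0))    # (0.5, 0.0)
    print(perspective_project(1.0, 0.0, 10.0))   # (0.1, 0.0)
    print(axonometric_project(1.0, 0.0, 10.0))   # (1.0, 0.0), unchanged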
+[1433.920 --> 1437.760] Interestingly, when information from idiothetic self-motion +[1437.760 --> 1440.200] and the visual system diverged, +[1440.200 --> 1443.040] there's a tendency to rely on vision for movement, +[1443.040 --> 1444.600] and it seems even a dancer, +[1444.600 --> 1447.640] so highly attuned with her body's movement in space, +[1447.640 --> 1449.800] still relied heavily on vision +[1449.800 --> 1451.400] to correct what she was doing. +[1457.520 --> 1460.840] Following a series of runs through the labyrinth, +[1460.840 --> 1463.320] we wanted to extend the demand on movement +[1463.320 --> 1465.720] to include more diverse gestures. +[1465.720 --> 1468.440] We asked Tier to perform three scenarios: +[1468.440 --> 1470.400] first a previously learned dance phrase, +[1470.400 --> 1473.760] then second to learn a new movement sequence, +[1473.760 --> 1476.360] and third to play a chasing game with Alex, +[1476.360 --> 1480.040] who was one of the people coming up with the design itself +[1480.040 --> 1482.400] and is also Tier's choreographer. +[1482.400 --> 1484.280] Tier first performed a dance phrase +[1484.280 --> 1486.160] which she was already in command of. +[1486.160 --> 1488.840] This drew primarily on proprioceptive movement +[1488.840 --> 1490.680] without translating through space, +[1490.680 --> 1492.760] but she still had to maintain correct position +[1492.800 --> 1494.360] and heading direction. +[1494.360 --> 1495.880] She performed this competently +[1495.880 --> 1498.120] in terms of detail of bodily movement, +[1498.120 --> 1500.640] likely by drawing on her internal sense of movement +[1500.640 --> 1503.400] and not needing as much visual input. +[1503.400 --> 1505.720] Her ability to maintain her spacing, +[1505.720 --> 1508.400] the position relative to the performance area, +[1508.400 --> 1511.040] was compromised, her sense of direction affected +[1511.040 --> 1514.520] by the unusual view of her body in space. +[1514.520 --> 1517.640] Tier's ability to perform a standard ballet exercise +[1517.640 --> 1519.080] involving balance, +[1519.080 --> 1522.560] the adage sequence on one leg, was incredibly difficult, +[1522.560 --> 1525.680] with her vestibular system somewhat disrupted. +[1525.680 --> 1527.040] After all of this, Alex, +[1527.040 --> 1528.320] her choreographer, taught her +[1528.320 --> 1529.760] a new movement phrase, +[1529.760 --> 1531.640] which he demonstrated for her. +[1531.640 --> 1533.960] Her ability to keep hold of information, +[1533.960 --> 1536.360] both in movement detail and spacing, +[1536.360 --> 1539.560] was better in this situation than in all others, +[1539.560 --> 1541.600] illustrating just how adept she is +[1541.600 --> 1543.400] at translating visual information +[1543.400 --> 1545.120] by viewing a demonstrator +[1545.120 --> 1548.200] and copying the movement without focusing on herself. +[1553.080 --> 1557.560] The final scenario was then a chasing game. +[1557.560 --> 1560.480] Tier chased Alex in order to capture him. +[1560.480 --> 1563.480] They also attempted performing collaborative gestures +[1563.480 --> 1564.680] such as touching hands. +[1564.680 --> 1567.880] Both novel translation and proprioceptive actions +[1567.880 --> 1570.360] were required, and she could not use +[1570.360 --> 1573.040] already internalized sequences of movement.
+[1573.040 --> 1576.360] Tier was here also not able to remain +[1576.360 --> 1579.360] within a dance framework that she was highly skilled at, +[1582.080 --> 1585.160] and we witnessed the highest experience of dissociation, +[1585.160 --> 1586.960] with Tier moving somewhat clumsily +[1586.960 --> 1590.640] and mixing up left and right more than in the other tasks. +[1601.720 --> 1603.440] Our design constructed scenarios +[1603.440 --> 1606.400] where sensory input from the visual and self-motion systems +[1606.400 --> 1607.600] is dissociated. +[1607.600 --> 1608.800] It's interesting to see +[1608.800 --> 1612.080] how spatially directed motor activity was curtailed, +[1612.080 --> 1615.680] speed slowed down and movement lacked precision. +[1615.680 --> 1619.880] While Tier had continuous access to visual snapshots, +[1619.880 --> 1622.160] optic flow was modified. +[1622.160 --> 1623.960] The visual appearance of her environment +[1623.960 --> 1627.240] remained fixed and she only saw her own body move. +[1627.240 --> 1629.280] The integration of multisensory input +[1629.280 --> 1632.280] to enable sensible action was modified, +[1632.280 --> 1636.080] and perhaps visual-idiothetic dissociation +[1636.080 --> 1639.240] above all impacted path integration abilities +[1639.240 --> 1641.960] to preserve and estimate accurate movement angles +[1641.960 --> 1645.080] and distance ratios to reference points. +[1645.080 --> 1647.720] When running the labyrinth, for her internal sense of movement, +[1647.720 --> 1652.560] vision and movement through space felt disjointed, +[1652.560 --> 1655.000] especially the comparison of across +[1655.000 --> 1658.760] versus up and down, the X and Y axes of her field of view. +[1658.760 --> 1661.880] This was heightened in the quasi-axonometric view, +[1661.880 --> 1664.120] and Tier did not experience this high a +[1664.120 --> 1666.720] disjointment when seeing herself in the perspectival view. +[1674.360 --> 1677.400] Coming back to this work now as we re-enter buildings +[1677.400 --> 1680.120] and can carry out work like this more easily, +[1680.120 --> 1682.160] we hope to expand our exploration +[1682.160 --> 1685.040] and, in thinking through our bodies and enacting knowledge, +[1685.040 --> 1687.560] speculate on implications for architecture, +[1687.560 --> 1690.680] dance, cognitive science and other fields, +[1690.680 --> 1693.440] and to find ways to design navigation and movement +[1693.440 --> 1695.160] that bring together a first person +[1695.160 --> 1697.320] and a third person view of seeing the world +[1697.320 --> 1699.400] in more enjoyable and somatic ways +[1699.400 --> 1701.000] than current mapping technology +[1701.000 --> 1702.880] or building navigation allows. +[1704.440 --> 1706.400] What you see here on this final slide +[1706.400 --> 1709.880] is another moment when we set up this performance +[1709.880 --> 1712.320] and allowed people to navigate it in ways +[1712.320 --> 1713.680] that they felt comfortable with, +[1713.680 --> 1716.880] and there was a hand artist that wanted to try it out. +[1716.880 --> 1720.160] And so he was able to navigate this entire setup, +[1720.160 --> 1723.400] seeing himself the way that you see on the screen +[1723.400 --> 1727.800] mounted in the back, while navigating on his hands. +[1727.800 --> 1731.480] So that was a rather enjoyable thing to be watching.
+[1738.520 --> 1740.680] So I'd like to thank you all, and hopefully
+[1740.680 --> 1743.520] there's some feedback, some interesting thoughts
+[1743.520 --> 1746.800] that you might bring to it, or some questions you have for me
+[1746.800 --> 1749.680] on this work that is still very much in its inception
+[1749.680 --> 1752.800] and early on in thinking.
+[1754.300 --> 1755.840] Bye.
diff --git a/transcript/allocentric_HAnw168huqA.txt b/transcript/allocentric_HAnw168huqA.txt
new file mode 100644
index 0000000000000000000000000000000000000000..1470d8d1e0701d6eea341dc5fc0a71e457b7248e
--- /dev/null
+++ b/transcript/allocentric_HAnw168huqA.txt
@@ -0,0 +1,558 @@
+[0.000 --> 12.240] Welcome. I'm very excited today to talk about effective speaking in spontaneous situations.
+[12.240 --> 16.760] I thank you all for joining us, even though the title of my talk is grammatically incorrect.
+[16.760 --> 20.120] I thought that might scare a few of you away. But I learned teaching here at the business
+[20.120 --> 24.320] school that catching people's attention is hard. So something as simple as that, I thought,
+[24.320 --> 28.760] might draw a few of you here. So this is going to be a highly interactive and
+[28.760 --> 34.440] participative workshop today. If you don't feel comfortable participating, that's completely
+[34.440 --> 38.360] fine. But do know I'm going to ask you to talk to people next to you. There'll be opportunities
+[38.360 --> 43.640] to stand up and practice some things, because I believe the way we become effective communicators
+[43.640 --> 48.880] is by actually communicating. So let's get started right away. I'd like to ask you all to
+[48.880 --> 55.040] read this sentence. And as you read this sentence, what's most important to me is that you count
+[55.040 --> 60.360] the number of F's that you find in this sentence. Please count the number of F's. Keep
+[60.360 --> 76.000] it quiet to yourself. I'll give you just another couple seconds here. Three, two, one. Raise
+[76.000 --> 80.800] your hand please if you found three and only three F's. Excellent. Great. Did anybody
+[80.800 --> 90.760] find four? Okay. Anybody find only five F's? Anybody find six? There are six F's. What two-
+[90.760 --> 98.440] letter word ending in F did many of us miss? Of. We'll make sure to get this to you so you
+[98.440 --> 103.760] can torment your friends and family at a later date. When I was first exposed to this
+[103.760 --> 108.800] over 12 years ago, I only found three and I felt really stupid. So I like to start every
+[108.800 --> 114.360] workshop, every class I teach with this to pass that feeling on. No, no, that's not why I
+[114.360 --> 119.680] do this. I do this because this is a perfect analogy for what we're going to be talking about
+[119.680 --> 124.440] today. The vast majority of us in this room, very smart people in this room, were not as
+[124.440 --> 130.280] effective as we could have been in this activity. We didn't get it right. And the same is true
+[130.280 --> 136.600] when it comes to speaking in public, particularly when speaking spontaneously. It's little things
+[136.600 --> 141.720] that make a big difference in being effective. So today we're going to talk about little things
+[141.720 --> 147.360] in terms of your approach, your attitude, your practice that can change how you feel when
+[147.360 --> 152.560] you speak in public.
And we're going to be talking primarily about one type of public
+[152.560 --> 157.960] speaking. Not the type that you plan for in advance, the type that you actually spend
+[157.960 --> 163.280] time thinking about and might even create slides for. Those are the keynotes, the conference
+[163.280 --> 170.000] presentations, the formal toasts. That's not what we're talking about today. We're talking
+[170.000 --> 176.040] about spontaneous speaking: when you're in a situation where you're asked to speak off the
+[176.040 --> 181.800] cuff and in the moment. What we're going through today is actually the result of a workshop
+[181.800 --> 187.080] I created here for the business school. Several years ago, a survey was taken among the students,
+[187.080 --> 190.880] and they were asked, what are things we could do to help make you more successful
+[190.880 --> 196.880] here? And at the top of that list was this notion of responding to cold calls. Does everybody
+[196.880 --> 200.880] know what a cold call is? It's where the mean professor, like me, looks at some student and says,
+[200.880 --> 206.720] what do you think? And there was a lot of panic and a lot of silence. So as a result of
+[206.720 --> 210.920] that, this workshop was created, and a vast majority of first-year students here at the
+[210.920 --> 215.600] GSB go through this workshop. So I'm going to walk you through sort of a hybrid version
+[215.600 --> 222.680] of what they do. The reality is that spontaneous speaking is actually more prevalent than planned
+[222.680 --> 226.520] speaking. Perhaps it's giving introductions. You're at a dinner and somebody says, you know
+[226.520 --> 231.320] so-and-so, would you mind introducing them? Maybe it's giving feedback in the moment. Your
+[231.320 --> 236.760] boss turns to you and says, would you tell me what you think? It could be a surprise toast.
+[236.760 --> 241.840] Or finally, it could be during the Q&A session. And by the way, we will leave plenty of time
+[241.840 --> 246.640] at the end of our day today for Q&A. I'd love to hear the questions you have about this topic
+[246.640 --> 253.040] or other topics related to communicating. So our agenda is simple. In order to be an effective
+[253.040 --> 258.960] communicator, regardless of whether it's planned or spontaneous, you need to have your anxiety
+[258.960 --> 265.760] under control. So we'll start there. Second, what we're going to talk about is some ground
+[265.760 --> 269.840] rules for the interactivity we'll have today. And then finally, we're going to get into
+[269.840 --> 274.240] the heart of what we will be covering. And again, as I said, lots of activity, and I invite
+[274.240 --> 282.600] you to participate. So let's get started with anxiety management. 85% of people tell us
+[282.600 --> 287.920] that they're nervous when speaking in public. And I think the other 15% are lying. We could
+[287.920 --> 294.040] create a situation where we could make them nervous too. In fact, just this past week,
+[294.040 --> 299.720] a study from Chapman University asked Americans, what are the things you fear most? And among
+[299.720 --> 305.200] being caught in a surprise terrorist attack and having your identity stolen was
+[305.200 --> 310.960] public speaking. Among the top five was speaking in front of others. This is a ubiquitous
+[310.960 --> 316.600] fear, and one that I believe we can learn to manage.
And I use that word manage very carefully,
+[316.600 --> 322.400] because I don't think we ever want to overcome it. Anxiety actually helps us. It gives us
+[322.400 --> 326.720] energy, helps us focus, tells us what we're doing is important. But we want to learn to
+[326.720 --> 331.280] manage it. So I'd like to introduce you to a few techniques that can work, and all of
+[331.280 --> 336.920] these techniques are based on academic research. But before we get there, I'd love to ask
+[336.920 --> 342.080] you, what does it feel like when you're sitting in the audience watching a nervous speaker
+[342.080 --> 348.200] present? How do you feel? Just shout out a few things. How do you feel? Uncomfortable. I
+[348.200 --> 352.440] heard many of you going, yes, uncomfortable. It feels very awkward, doesn't it? So what
+[352.440 --> 357.360] do we do? Now a couple of you probably like watching somebody suffer, but most of us
+[357.360 --> 364.200] don't. So what do we do? We sit there and we nod and we smile, or we disengage. And
+[364.200 --> 367.800] to the nervous speaker looking out at his or her audience, seeing a bunch of people nodding
+[367.800 --> 373.240] or disengaged, that does not help. So we need to learn to manage our anxiety, because fundamentally
+[373.240 --> 378.600] your job as a communicator, regardless of whether it's planned or spontaneous, is to make
+[378.600 --> 383.200] your audience comfortable. Because if they're comfortable, they can receive your message.
+[383.200 --> 388.080] And when I say comfortable, I am not referring to the fact that your message has to be sugar-
+[388.080 --> 392.800] coated and nice for them to hear. It can be a harsh message, but they have to be in a
+[392.800 --> 399.400] place where they can receive it. So it's incumbent on you as a communicator to help your audience
+[399.400 --> 404.080] feel comfortable. And we do that by managing our anxiety. So let me introduce you to a few
+[404.080 --> 410.240] techniques that I think you can use right away to help you feel more comfortable.
+[410.240 --> 415.000] The first has to do with when you begin to feel those anxiety symptoms. For most people,
+[415.000 --> 420.760] this happens in the initial minutes prior to speaking. In this situation, what happens
+[420.760 --> 424.920] is many of us begin to feel whatever it is that happens to us. Maybe your stomach gets
+[424.920 --> 429.760] a little gurgly, maybe your legs begin to shake, maybe you begin to perspire. And then
+[429.760 --> 434.680] we start to say to ourselves, oh my goodness, I'm nervous. Uh oh, they're going to be able to tell
+[434.680 --> 439.600] I'm nervous. This is not going to go well. And we start spiraling out of control. So
+[439.600 --> 445.960] research on mindful attention tells us that when we begin to feel those anxiety symptoms,
+[445.960 --> 452.240] if we simply greet our anxiety and say, hey, this is me feeling nervous, I'm about to do something
+[452.240 --> 458.280] of consequence, then simply by greeting your anxiety and acknowledging that it's normal
+[458.280 --> 464.600] and natural (heck, 85% of people tell us they have it), you actually can stem the tide
+[464.600 --> 469.800] of that anxiety spiraling out of control. It's not necessarily going to reduce the anxiety,
+[469.800 --> 474.760] but it will stop it from spinning up. So the next time you begin to feel those anxiety
+[474.760 --> 481.680] signs, take a deep breath and say, this is me feeling anxious.
I notice a few of you
+[481.680 --> 486.200] taking some notes. There's a handout that will come at the end that has everything that
+[486.200 --> 492.640] I'm supposed to say. Okay. Can't guarantee I'm going to say it, but you'll have it there.
+[492.640 --> 496.600] In addition to this approach, a technique that works very well, and this is a technique
+[496.600 --> 500.840] that I helped do some research on way back when I was in graduate school, has to do with
+[500.840 --> 508.040] reframing how you see the speaking situation. Most of us, when we are up presenting, planned
+[508.040 --> 514.640] or spontaneous, feel that we have to do it right. And we feel like we are performing.
+[514.640 --> 519.000] How many of you have ever acted, done singing or dancing? I'm not going to ask for performances.
+[519.000 --> 523.520] No. Okay. Many of you have. We should note that next year we could maybe do a talent
+[523.520 --> 529.200] show of alums. It looks like we've got the talent there. That's great. So when you perform,
+[529.200 --> 533.560] you know that there's a right way and a wrong way to do it. If you don't hit the right
+[533.560 --> 538.720] note or the right line at the right time, in the right place, you've made a mistake.
+[538.720 --> 545.040] It messes up the audience. It messes up the people on stage. But when you present, there
+[545.040 --> 549.720] is no right way. There are certainly better and worse ways, but there is no one right way.
+[549.720 --> 554.600] So we need to look at presenting as something other than performance. And what I'd like
+[554.600 --> 559.720] to suggest is that we need to see this as a conversation. Right now, I'm having a
+[559.720 --> 566.000] conversation with 100-plus people, rather than saying I'm performing for you. But it's
+[566.000 --> 571.120] not enough just to say this is a conversation. I want to give you some concrete things you
+[571.120 --> 578.760] can do. First, start with questions. Questions by their very nature are dialogic. They're
+[578.760 --> 584.480] two-way. What was one of the very first things I did here for you? I had you count the number
+[584.480 --> 589.240] of F's and raise your hands. I asked you a question. That gets your audience involved.
+[589.240 --> 594.880] It makes it feel to me as the presenter as if we are in conversation. So use questions.
+[594.880 --> 598.760] They can be rhetorical. They can be polling. Perhaps I actually want to hear information from
+[598.760 --> 605.300] you. In fact, I use questions when I create an outline for my presentations. Rather than
+[605.300 --> 609.840] writing bullet points, I list questions that I'm going to answer. And that puts me in
+[609.840 --> 614.360] that conversational mode. If you were to look at my notes for today's talk, you'd see
+[614.360 --> 619.360] it's just a series of questions. Right now, I'm answering the question, how do we manage
+[619.360 --> 623.840] our anxiety? Beyond questions, another very useful
+[623.840 --> 630.920] technique for making us conversational is to use conversational language. Many nervous
+[630.920 --> 635.960] speakers distance themselves physically. If you've ever seen a nervous speaker present,
+[635.960 --> 641.760] he or she will say something like this: welcome, I am really excited to be here with you.
+[641.760 --> 646.560] They pull as far away from you as possible, because you threaten us speakers. You make
+[646.560 --> 651.000] us nervous, so we want to get away from you. We do the same thing linguistically.
We
+[651.000 --> 655.960] use language that distances us. It's not unusual to hear a nervous speaker say
+[655.960 --> 662.240] something like, one must consider the ramifications, or, today we're going to cover step one, step
+[662.240 --> 668.320] two, step three. That's very distancing language. To be more conversational, use conversational
+[668.320 --> 672.420] language. Instead of one must consider, say, this is important to you, or, we all need to be
+[672.420 --> 677.320] concerned with this. Do you hear that? Inclusive conversational language has to do with the
+[677.320 --> 683.120] pronouns. Instead of step one, step two, step three: first, what we need to do is this.
+[683.120 --> 689.520] The second thing you should consider is here. Use conversational language. Being conversational
+[689.520 --> 695.480] can also help you manage your anxiety. The third technique I'd like to share is research
+[695.480 --> 699.760] that I actually started when I was an undergraduate here. I was very fortunate to study with Phil
+[699.760 --> 706.520] Zimbardo of Stanford Prison Experiment fame. Many people don't know that Zim actually
+[706.520 --> 712.560] was instrumental in starting one of the very first shyness institutes in the world, certainly
+[712.560 --> 718.440] in the country. I did some research with him that looked at how your orientation to time
+[718.440 --> 724.840] influences how you react. What we learned is if you can bring yourself into the present
+[724.840 --> 730.040] moment rather than being worried about the future consequences, you can actually be less
+[730.040 --> 735.400] nervous. Most of us when we present are worried about the future consequences. My students
+[735.400 --> 738.400] are worried they're not going to get the right grade. Some of you are worried you might
+[738.400 --> 742.040] not get the funding, you might not get the support, you might not get the laughs that you
+[742.040 --> 748.560] want. All of those are future states. So if we can bring ourselves into the present moment,
+[748.560 --> 752.280] we're not going to be as concerned about those future states and therefore we'll be less
+[752.280 --> 758.520] nervous. There are lots of ways to become present-oriented. I know a professional speaker.
+[758.520 --> 764.840] He's paid $10,000 an hour to speak. It's a good gig. He gets very nervous. He's up
+[764.840 --> 770.000] in front of crowds of thousands, and behind the stage what he does is 100 push-ups right before
+[770.000 --> 775.120] he comes out. You can't be that physically active and not be in the present moment. Now
+[775.120 --> 779.120] I'm not recommending all of us go to that level of exertion, because he starts out out of breath
+[779.120 --> 785.400] and sweaty. But a walk around the building before you speak, that can do it. There are
+[785.400 --> 790.000] other ways. If you've ever watched athletes perform and get ready to do their event, they
+[790.000 --> 795.240] listen to music. They focus on a song or a playlist that helps get them in the moment.
+[795.240 --> 801.560] You can do things as simple as counting backwards from 100 by tough numbers like 17. I'm going
+[801.560 --> 805.200] to pause because I know people in the room are trying. Yeah. It gets hard after that
+[805.200 --> 809.880] third or fourth one. I know. My favorite way to get present-oriented is to say tongue
+[809.880 --> 814.840] twisters. Saying a tongue twister forces you to be in the moment, otherwise you'll say it
+[814.840 --> 819.960] wrong.
And it has the added benefit of warming up your voice. Most nervous speakers
+[819.960 --> 823.720] don't warm up their voice. They retreat inside themselves and start saying all these
+[823.720 --> 828.600] bad things to themselves. So saying a tongue twister can help you be both present-oriented
+[828.600 --> 833.640] and warm up your voice. Remember I said today we're going to have a lot of participation.
+[833.640 --> 837.920] I'm going to ask you to repeat after me my favorite tongue twister. And I like this tongue
+[837.920 --> 842.440] twister because if you say it wrong, you say a naughty word. And I'm going to be listening
+[842.440 --> 846.840] to see if I hear any naughty words this morning. Okay. Repeat after me. It's only three
+[846.840 --> 862.320] phrases. I slit a sheet. A sheet I slit. And on that slitted sheet I sit. Oh, very good.
+[862.320 --> 870.880] No shits. Excellent. Very good. Now in that moment, in that moment, you weren't worried
+[870.880 --> 875.440] about, I'm in front of all these people, this is weird, this guy is having me do that. You
+[875.440 --> 879.920] were so focused on saying it right and trying to figure out what the naughty word was that
+[879.920 --> 885.840] you were in the present moment. That's how easy it is. So it's very possible for us to
+[885.840 --> 891.040] manage our anxiety. We can do it initially by greeting the anxiety when we begin to
+[891.040 --> 898.360] feel those signs. We can do it when we reframe the situation as a conversation. And we do
+[898.360 --> 903.520] it when we become present-oriented. Those are three of many tools that exist to help
+[903.520 --> 909.200] you manage your anxiety. If you have questions about other ways, I'm happy to chat with you.
+[909.200 --> 912.920] And at the end, I'm going to point you to some resources that you can refer to for
+[912.920 --> 920.120] additional help. So let's get started on the core part of what we're
+[920.120 --> 925.640] doing today, which is how to feel more comfortable speaking in spontaneous situations. Some very
+[925.640 --> 932.160] simple ground rules for you. First, I'm going to identify four steps that I believe are
+[932.160 --> 937.520] critical to becoming effective at speaking in a spontaneous situation. With each of those
+[937.520 --> 941.920] steps, I'm going to ask you to participate in an activity. None of them is more painful than
+[941.920 --> 946.400] saying the tongue twister out loud. They may require you to stand up. They might require you to
+[946.400 --> 950.960] talk to the person next to you, but none of them is painful. And then finally, I'm going to
+[950.960 --> 957.920] conclude with a phrase or saying that comes from the wonderful world of improvisation. Through the
+[957.920 --> 962.880] Continuing Studies program here at Stanford, for the past five years, I have co-taught a class with
+[962.880 --> 969.760] Adam Tobin. He is a lecturer in the Creative Arts Department. He teaches film and new media.
+[969.760 --> 975.360] And he's an expert at improv. And we've partnered together to help people learn how to speak more
+[975.360 --> 981.200] spontaneously. We call it improvisationally speaking. And Adam has taught me wonderful phrases and
+[981.200 --> 985.840] ideas from improv that I want to impart to you. They really stick. That's why I'm sharing them
+[985.840 --> 989.600] with you, to help you remember these techniques.
And again, at the end of all this, you'll get a
+[989.600 --> 996.560] handout that has this listed. So let's get started. The very first thing that gets in people's
+[996.560 --> 1005.040] way when it comes to spontaneous speaking is themselves. We get in our own way. We want to be
+[1005.040 --> 1010.720] perfect. We want to give the right answer. We want our toast to be incredibly memorable.
+[1011.520 --> 1018.960] These things are burdened by our effort, by our trying. The best thing we can do, the first step
+[1018.960 --> 1026.960] in our process, is to get ourselves out of the way. Easier said than done. Most of us in this room
+[1026.960 --> 1032.640] are in this room because we are type A personalities. We work hard. We think fast. We make sure that we
+[1032.640 --> 1039.200] get things right. But that can actually do us a disservice when we try to speak in the moment.
+[1040.800 --> 1044.240] I'd like to demonstrate a little of this for you and I need your help to do that. So we're going
+[1044.240 --> 1050.080] to do our first activity. We are going to do an activity that's called Shout the Wrong Name.
+[1051.200 --> 1056.960] In a moment, if you are able and willing, I'm going to ask you to stand. And I am going to ask you,
+[1056.960 --> 1062.400] for about 30 seconds, to look all around you in this environment. And you are going to point at
+[1062.400 --> 1066.400] different things. And I know it's rude to point, but for this exercise, please point. I want you
+[1066.400 --> 1071.520] to point to things and you are going to call the things you are pointing to out loud anything
+[1071.520 --> 1079.440] but what they really are. So I might point to this and say refrigerator. I might point to this and say
+[1079.440 --> 1085.120] cat. You are pointing to anything in your environment around you. It can be the person sitting next to you,
+[1085.120 --> 1090.720] standing next to you. You will just shout, and shouting is important, the wrong name.
+[1091.360 --> 1098.480] So in a moment, I'm going to ask you to stand and do that. Please raise your hand if you already
+[1098.480 --> 1105.440] have the first five or six things you're going to call out. Yeah, that's what I'm talking about.
+[1106.320 --> 1113.520] We stockpile. You all are excellent game players. I told you the game: Shout the Wrong Name.
+[1113.520 --> 1118.800] And you have already begun figuring out how you're going to master the game. That's your brain
+[1118.800 --> 1125.680] trying to help you get it right. I'd like to suggest the only way you can get this activity wrong
+[1126.480 --> 1134.240] is by doing what you've just done. There is no way to get this wrong. Okay, even if I call this a
+[1134.240 --> 1141.600] chair, no penalty will be bestowed upon you. Okay, because I won't know what you were pointing at.
+[1141.600 --> 1145.600] You could have been pointing at the floor under the chair, and you called the floor the chair, and
+[1145.600 --> 1152.160] you were fine. The point is we are planning and working to get it right. And there is no way to
+[1152.160 --> 1157.760] get it right. Just doing it gets it right. Okay, so let's try this now. We're going to play this game
+[1157.760 --> 1162.240] twice, again for 30 seconds. If you are willing and able, will you please stand up? You can do
+[1162.240 --> 1167.040] this seated, by the way, but if you're willing and able, let's stand up.
Okay, in a moment, I am about
+[1167.040 --> 1173.280] to say go, and I would like for you to point at anything around here, including me. It's okay to
+[1173.280 --> 1177.520] point at me. I hope it's not a bad thing you say when you point at me, but point at different
+[1177.520 --> 1184.720] things and loudly and proudly call them different than what they are. Ready? Begin!
+[1184.720 --> 1200.800] Porsche, Pine, California, Salt Shaker, Car, Library, Tennis Racket, Purple, Orange,
+[1200.800 --> 1218.000] Putrid. Hello. Time. Time. You can stay standing, because in a mere moment we're going
+[1218.000 --> 1222.080] to do it again. So if you're comfortable standing, we're about to do it again. First, thank you. That
+[1222.080 --> 1226.880] was wonderful. I heard great words being called out. It was fun. And some of you in the back were
+[1226.880 --> 1231.280] doing it in sync, so it looked like you were doing some 70s disco dance. It was awesome. Okay,
+[1231.920 --> 1238.160] this was great. Now, let me ask you just a few questions. Did you notice anything about the
+[1238.160 --> 1245.280] words that you were saying? Did we find patterns perhaps? Maybe some of you were going through fruits
+[1245.280 --> 1252.000] and vegetables. A few of you were going through things that started with the letter A. Right?
+[1252.000 --> 1256.640] That's your brain saying, okay, you told me not to stockpile, so I'm going to try to be a little
+[1256.640 --> 1264.560] more devious and I'm going to give you patterns. Okay, same problem. When we teach that class
+[1264.560 --> 1269.040] I told you about, that improvisationally speaking class, we like to say your brain is there to help
+[1269.040 --> 1274.240] you. These things it's doing have helped you be successful. But like a windshield wiper, we just
+[1274.240 --> 1280.320] want to wipe those suggestions away and see what happens. Okay, so we're going to do this activity
+[1280.320 --> 1286.880] again. This time, try the best you can to thank your brain if it provides you with patterns or
+[1286.880 --> 1292.400] stockpiles; just say thank you, brain, and disregard them. Okay, so let's see what happens when we're
+[1292.400 --> 1298.240] not stockpiling and we're not playing off patterns. We'll do this for only 15 seconds. See how this
+[1298.240 --> 1314.480] feels, baby steps. Ready? Begin. Kodak. Bicycle chain. Skateboard. Bananas. Purple.
+[1314.480 --> 1332.160] Putrid. Time. Please have a seat. Thank you again. Did you notice a difference between the
+[1332.160 --> 1341.120] second time and the first time? Yes, was it a little easier that second time? No? That's okay.
+[1341.120 --> 1345.920] We're just starting. These skills are not like a light switch. It's not like you learn these
+[1345.920 --> 1351.680] skills and then all of a sudden you can execute on them. This is a wonderful game. This is a
+[1351.680 --> 1358.400] wonderful game to train your brain to get out of its own way. You can play this game anywhere,
+[1358.400 --> 1363.520] anytime. I like to play this game when I'm sitting in traffic. It makes me feel better when
+[1363.520 --> 1368.000] I shout things out. They're not the naughty things that I want to be shouting out, but I shout out
+[1368.000 --> 1372.480] things and it helps. You're training yourself to get out of your own way.
You're working against
+[1372.480 --> 1377.760] the muscle memory that you've developed over the course of your life, with a brain that acts very
+[1377.760 --> 1382.800] fast to help you solve problems. But in essence, in spontaneous speaking situations, you put too
+[1382.800 --> 1389.600] much pressure on yourself trying to figure out how to get it right. So a game like this teaches
+[1389.600 --> 1396.640] us to get out of our own way. It teaches us to see the things that we do that prevent us from acting
+[1396.640 --> 1405.360] spontaneously. In essence, we are reacting rather than responding. To react means to act again.
+[1406.320 --> 1410.720] You've thought it and now you're acting on it; that takes too long and it's too thoughtful. We want
+[1410.720 --> 1418.480] to respond in a way that's genuine and authentic. So the maxim I would like for you to take from
+[1418.480 --> 1425.760] this (and again, these maxims come from improvisation) is one of my favorites: dare to be dull. In a room like
+[1425.760 --> 1431.840] this, telling you to dare to be dull is offensive, and I apologize, but this will help. Rather than
+[1431.840 --> 1440.560] striving for greatness, dare to be dull. And if you dare to be dull and allow yourself that,
+[1440.560 --> 1446.560] you will reach that greatness. It's when you set greatness as your target that it gets in the way
+[1446.560 --> 1453.440] of you ever getting there, because you over-evaluate, you over-analyze, you freeze up. So the first step
+[1453.520 --> 1461.040] in our process today is to get out of our own way. Dare to be dull. Easier said than done, but
+[1461.040 --> 1467.600] practicing a game as simple as the one we just played is a great way to do it. But that's not
+[1467.600 --> 1474.480] enough. Getting out of our own way is important, but the second step of our process has us change how
+[1474.480 --> 1481.200] we see the situation we find ourselves in. We need to see the speaking opportunity that we are a part
+[1481.840 --> 1490.080] of as an opportunity rather than a challenge or a threat. When I coach executives on Q&A skills,
+[1490.800 --> 1498.640] when they go in front of the media or investors, they see it as an adversarial experience:
+[1499.280 --> 1505.040] me versus them. And one of the first things I work on is changing the way they approach it.
+[1505.840 --> 1510.720] A Q&A session, for example, is an opportunity for you. It's an opportunity to clarify. It's an
+[1510.720 --> 1516.480] opportunity to understand what people are thinking. So if we look at it as an opportunity it feels very
+[1516.480 --> 1522.960] different. We see it differently and therefore we have more freedom to respond. When I feel that you
+[1522.960 --> 1529.920] are challenging me, I am going to do the bare minimum to respond and protect myself. If I see this
+[1529.920 --> 1535.280] as an opportunity where I have a chance to explain and expand, I'm going to interact differently
+[1535.360 --> 1541.200] with you. So spontaneous speaking situations are ones that afford you opportunities.
+[1542.240 --> 1546.240] So when you're at a corporate dinner and your boss turns to you and says, oh, you know him better than
+[1546.240 --> 1551.280] the rest, would you mind introducing him? You say, great, thank you for the opportunity, rather than,
+[1552.480 --> 1562.080] right, I'd better get this right. So see things as an opportunity. I have a game to play to help us with
+[1563.040 --> 1568.080] this. This is a fun one. The holidays are approaching.
We all in this room are going to give and
+[1568.080 --> 1574.720] receive gifts. Here's how this game will work. It works best if you have a partner, so I'm hoping
+[1574.720 --> 1578.720] you can work with somebody sitting next to you. If there's nobody sitting next to you, turn around
+[1578.720 --> 1583.200] and introduce yourself; great way to connect. If not, you can play this game by yourself, it's just a
+[1583.200 --> 1588.080] little harder and you can't do the second part of the game. So after I explain the game, this gives
+[1588.080 --> 1593.360] you a chance to get to know somebody. Here's how it works. If you have a partner, you and your
+[1593.360 --> 1599.840] partner are going to exchange imaginary gifts. Pretend you have a gift. It can be a big gift.
+[1599.840 --> 1606.400] It can be a small gift, and you will give your gift to your partner. Your partner will take the gift
+[1606.400 --> 1611.680] and open it up and will tell you what you gave them, because you have not; you just gave them a gift.
+[1611.680 --> 1615.840] So you are going to open up the box and you're going to look inside and you are going to say the
+[1615.840 --> 1619.920] first thing that comes to your mind in the moment. Not the thing you have all just thought of.
+[1622.400 --> 1627.040] Or the thing after that. Remember what we talked about before? That's still in
+[1627.040 --> 1633.040] play. Okay, you're stockpiling. Look in there. My favorite that I ever said: somebody gave me
+[1633.040 --> 1638.160] a gift playing this game, I looked inside and I saw a frog leg. I don't know why I saw a
+[1638.160 --> 1645.360] frog leg, but that's what I said. That's the first part of the activity. Now the opportunity is
+[1645.360 --> 1650.160] twofold in this game. The opportunity is for you, the gift receiver, to name a gift. That's kind
+[1650.160 --> 1655.760] of fun. That's an opportunity. It's not a threat. But the real opportunity is for the gift giver,
+[1655.760 --> 1661.120] because the gift giver then has to respond. So you look and you say, thank you for giving me a frog's
+[1661.120 --> 1667.840] leg, and the person will look at you and say, I knew you wanted a frog's leg because... So whatever
+[1667.840 --> 1673.680] you find, the person who gave it is going to say, absolutely, I'm so glad you're happy, I
+[1673.680 --> 1680.800] got it for you because... So you have to respond to whatever they say. What a great opportunity.
+[1680.800 --> 1683.360] Now some of you are sitting there and you're like, oh, that's hard, I don't want to, I might make
+[1683.360 --> 1688.000] a fool of myself. Others of you, if you're following this advice, are saying, what a great opportunity.
+[1689.200 --> 1693.920] So the game again is played like this. You and your partner will each exchange a gift.
+[1693.920 --> 1698.240] One will start and the other will follow. The first person will give a gift to the second person;
+[1698.240 --> 1703.120] the second person opens the box, however big the box is. And if the box is big and you find a penny in it,
+[1703.120 --> 1707.600] perfect, doesn't matter. If the box is heavy and you find a feather in it, fine. There's
+[1707.600 --> 1711.760] no way to get it wrong. Okay. Whatever's in the box is in the box. You can return it and get what
+[1711.760 --> 1720.000] you wanted later. Okay. Then you will name it. You will say, thank you for the... whatever
+[1720.000 --> 1725.360] you saw in the box.
The person who gave it to you will say, I'm so glad you're excited, I got it
+[1725.360 --> 1731.280] for you because... and you will give a reason that you got them whatever they decided you gave them.
+[1731.920 --> 1736.720] Makes sense? All right. So very quickly, just in five seconds, find a partner if you're
+[1736.720 --> 1739.840] willing to do this with a partner. Everybody have a partner? Okay.
+[1745.200 --> 1752.400] All right. In your partnerships, in your partnerships, pick an A person and a B person.
+[1752.400 --> 1759.280] You may stand or sit. It's totally up to you. Pick an A and pick a B. Okay.
+[1760.400 --> 1770.720] B goes first. Ha ha ha. All right. B, give A a gift. B, give A a gift. A, thank them.
+[1771.760 --> 1774.800] And then B will name the reason they gave it to them.
+[1782.400 --> 1811.040] If you have not switched, switch please. If you have not switched, switch please.
+[1812.400 --> 1841.520] Let's wrap it up in 30 seconds please. Let's wrap it up.
+[1842.960 --> 1853.680] All right. If we can all have our seats.
+[1859.600 --> 1869.040] If we can all take our seats please. I know I'm telling a room of many
+[1869.520 --> 1873.440] MBA alums to stop talking, and that's hard.
+[1879.280 --> 1883.760] All right. Ladies and gentlemen, did you get what you wanted?
+[1885.200 --> 1890.480] Pretty neat, right? You always get what you want. Now for some of you, this was really hard
+[1890.480 --> 1895.520] because you were really taking the challenge and not seeing what was in the box until you looked
+[1895.520 --> 1901.520] in there. Was anybody surprised by what you found in the box? What did you find, sir?
+[1901.520 --> 1911.520] What was in the box? Wow. Nice. Nice. If you've got a Ferrari, you need a transmission.
+[1911.520 --> 1914.880] I like it. Who else found something that was surprising? What did you find?
+[1916.080 --> 1923.920] A live unicorn. That's a great gift. Right? How was it as the gift giver? Were you surprised
+[1924.000 --> 1928.800] at what your partner found in the box? Isn't it interesting that when we give an imaginary gift,
+[1928.800 --> 1931.920] knowing that the person's going to name it, we already have in mind what they're going to find?
+[1932.800 --> 1936.320] And when they say live unicorn, we go, well, that's interesting, right?
+[1938.720 --> 1944.720] The point of this game is, one, to remind ourselves we have to get out of our own way, like we talked
+[1944.720 --> 1951.360] about before, but also to see this as an opportunity and to have fun. I love watching people play this game.
+[1951.360 --> 1956.240] The number of smiles that I saw amongst you! And I have to admit, when I first started,
+[1956.240 --> 1961.440] some of you looked a little dour, a little doubting. But in that last game, you were all smiling and
+[1961.440 --> 1968.160] looked like you were having fun. So when you reframe the spontaneous speaking opportunity as an opportunity,
+[1968.160 --> 1975.920] as something that you can co-create and share, all of a sudden you are less nervous, less defensive,
+[1976.480 --> 1980.560] and you can accomplish something pretty darn good, in this case, a fun outcome.
+[1981.600 --> 1989.440] This reminds us of perhaps the most famous of all improvisation sayings: yes, and. A lot of us live
+[1989.440 --> 1996.960] our communication lives saying no, but. Yes, and opens up a tremendous amount of opportunities.
+[1996.960 --> 2000.960] And this doesn't mean you have to say yes, and, to a question somebody asks; this just means the
+[2000.960 --> 2006.960] approach you take to the situation. So you're going to ask me questions: that's an opportunity.
+[2006.960 --> 2014.240] Yes, and I will follow through, versus saying no and being defensive. So we've accomplished the first two
+[2014.240 --> 2020.400] steps of our process. First we get out of our own way, and second, we reframe the situation as an
+[2020.400 --> 2030.560] opportunity. The next phase is also hard but very rewarding. And that is to slow down and listen.
+[2031.440 --> 2037.360] You need to understand the demands of the situation you find yourself in in order to respond
+[2037.360 --> 2044.720] appropriately. But often we jump ahead. We listen just enough to think we've got it, and then we go
+[2044.720 --> 2050.080] ahead, starting to think about what we're going to respond, and then we respond. We really need to
+[2050.080 --> 2055.680] listen, because fundamentally as a communicator your job is to be in service of your audience. And if
+[2055.680 --> 2060.800] you don't understand what your audience is asking or needs, you can't fulfill that obligation. So we
+[2060.800 --> 2072.560] need to slow down and listen. I have a fun game to play. In this game you are going to S-P-E-L-L E-V-E-R-Y-
+[2072.560 --> 2087.520] T-H-I-N-G Y-O-U S-A-Y T-O Y-O-U-R P-A-R-T-N-E-R. I will translate. You're going to get with the same
+[2087.520 --> 2092.240] partner you just worked with. And you are going to have a very brief conversation about something
+[2092.240 --> 2096.800] fun that you plan to do today. I know this is the most fun you're going to have all day, but the
+[2096.800 --> 2100.560] next fun thing you're going to do today. You are going to tell your partner what you are going to
+[2100.560 --> 2111.200] do that will be fun today, but you are going to do so by S-P-E-L-L-I-N-G I-T. So you're going to spell it.
+[2111.200 --> 2121.440] It's okay if you are not a good speller. You'll see the benefit of doing this. So with the partner
+[2121.440 --> 2126.400] you just worked with, person A is going to go first this time. You are simply going to tell your
+[2126.400 --> 2132.800] partner, actually you're going to S-P-E-L-L to your partner, something fun that you're
+[2132.800 --> 2140.560] going to do today. Do what you are really going to do for fun, and not things like F-E-E-D T-H-E C-A-T,
+[2140.560 --> 2146.240] right? Just because you don't want to spell. Right? So you can use big words. All right. 30 seconds
+[2146.240 --> 2149.200] each. Spell to your partner something fun that you're going to do today.
+[2154.960 --> 2159.040] Would you like to play? Go ahead.
+[2159.040 --> 2161.840] G-O T-O T-H-E G-A-M-E.
+[2162.960 --> 2166.560] Oh my goodness. Say it again. Spell it again. Yep. Yep.
+[2168.240 --> 2175.520] E-X-C-E-L-L-E-N-T. I H-O-P-E T-H-A-T T-H-E-Y W-I-N.
+[2180.960 --> 2182.720] Thank you. That was very good. Thank you.
+[2189.040 --> 2209.840] If you have not switched, switch. Take 30 more seconds with the new partner spelling.
+[2219.040 --> 2244.080] G-R-E-A-T. E-X-C-E-L-L-E-N-T. C-A-N Y-O-U P-L-E-A-S-E T-A-K-E Y-O-U-R S-E-A-T.
+[2249.840 --> 2256.640] So what did we learn? What did we learn, besides that we're not so good at spelling?
+[2259.280 --> 2266.560] You have to pause between the words. How did this change your interaction with the person you
+[2266.560 --> 2275.120] were interacting with? What did you have to do?
Focus and listen, and you can't be thinking ahead.
+[2275.120 --> 2281.920] You have to be in the moment. When you listen and truly understand what the person is trying to say,
+[2281.920 --> 2287.760] then you can respond in a better way, a more targeted response. We often don't listen.
+[2289.360 --> 2297.040] So we start by getting out of our own way. We then reframe the situation as an opportunity.
+[2297.040 --> 2301.760] Those are things we do inside our head. But in the moment of interacting, we have to listen first
+[2301.760 --> 2309.920] before we can respond to the spontaneous request. Perhaps my most favorite maxim comes from this
+[2309.920 --> 2320.240] activity: don't just do something, stand there. Listen, listen, and then respond.
+[2322.160 --> 2328.480] Now how do we respond? That brings us to the fourth part of our process. And that is we have to
+[2328.480 --> 2334.960] tell a story. We respond in a way that has a structure. All stories have structure. We have to
+[2334.960 --> 2341.600] respond in a structured way. The key to successful spontaneous speaking, and by the way, planned speaking,
+[2341.600 --> 2348.640] is having a structure. I would like to introduce you to two of the most prevalent and popular and
+[2348.640 --> 2354.480] useful structures you can use to communicate a message in a spontaneous situation. But before we
+[2354.480 --> 2359.440] get there, we have to talk about the value of structure. It increases what is called processing
+[2359.440 --> 2366.400] fluency, the effectiveness with which we process information. We actually process
+[2366.400 --> 2372.640] structured information roughly 40% more effectively and efficiently than information that's not structured.
+[2373.600 --> 2378.800] I love looking out in this audience, because you will remember, as I remember, phone numbers, when you
+[2378.800 --> 2384.720] had to remember them if you wanted to call somebody. Young folks today don't need to remember phone
+[2384.720 --> 2388.240] numbers. They just need to look at a picture, push a button, and then the voice starts talking on
+[2388.240 --> 2392.560] the other end. Ten-digit phone numbers: it's actually hard to remember ten-digit phone numbers.
+[2392.560 --> 2398.800] How did you do it? You chunked it into a structure. Three, three, and four. Structure helps us remember.
+[2400.080 --> 2404.960] The same is true when speaking spontaneously or in a planned situation. So let me introduce you
+[2404.960 --> 2409.440] to two useful structures. The first useful structure you have probably heard or used in some
+[2409.440 --> 2415.680] incarnation: it is the problem-solution-benefit structure. You start by talking about what the issue
+[2415.680 --> 2421.440] is, the problem. You then talk about a way of solving it, and then you talk about the benefits of
+[2421.440 --> 2426.320] following through on it. Very persuasive, very effective. It helps you as the speaker remember, and it
+[2426.320 --> 2431.920] helps your audience know where you're going with it. When I was a tour guide on this campus many,
+[2432.320 --> 2437.680] many years ago, what do you think was the single most important thing they drilled into our heads? It
+[2437.680 --> 2442.880] took a full quarter, by the way, to train to be a tour guide here. They used to line us up at one
+[2442.880 --> 2448.000] end of the quad and have us walk backwards straight, and if you failed you had to start over.
To this
+[2448.000 --> 2452.560] day I can walk backwards in a straight line because of that. As part of that training, what do you
+[2452.560 --> 2462.240] think the most important thing they taught us was? Never lose your tour group. That's right:
+[2462.240 --> 2469.440] never lose your tour group. The same is true as a presenter. Never lose your audience. The way
+[2469.440 --> 2474.080] you keep your audience on track is by providing structure. None of you would go on a tour with me
+[2474.080 --> 2480.000] if I said, hi, my name is Matt, let's go. You want to know where you're going, why you're going
+[2480.160 --> 2484.240] there, how long it's going to take. You need to set expectations, and structure does that.
+[2484.240 --> 2489.760] Problem-solution-benefit is a wonderful structure to have in your back pocket. It's something
+[2489.760 --> 2495.920] that you can use quickly when you're in the moment. It can be reframed so it's not always a problem
+[2495.920 --> 2499.920] you're talking about. Maybe it's an opportunity. Maybe there's a market opportunity you want to go
+[2499.920 --> 2504.160] out and capture. It's not a problem that we're not doing it, but maybe we'd be better off if we did.
+[2504.160 --> 2509.280] So it becomes opportunity, solution (the steps to achieve it), and then the benefit.
+[2510.880 --> 2520.320] Another structure which works equally well is the what, so what, now what structure. You start by
+[2520.320 --> 2526.880] talking about what it is. Then you talk about why it's important, and then what the next steps are.
+[2527.840 --> 2535.840] This is a wonderful formula for answering questions, for introducing people. So if, in the moment,
+[2535.840 --> 2540.080] somebody asks me to introduce somebody, I change the what to who. I say who they are, why they're
+[2540.080 --> 2543.840] important, and what we're going to do next. Maybe listen to them, maybe drink our wine, whatever.
+[2545.280 --> 2549.840] What, so what, now what. The reality is this: when you are in a spontaneous speaking situation,
+[2549.840 --> 2555.200] you have to do two things simultaneously. You have to figure out what to say and how to say it.
+[2555.200 --> 2558.960] These structures help you by telling you how to say it.
+[2561.840 --> 2566.240] If you can become comfortable with these structures, you can be in a situation where you can
+[2566.240 --> 2573.040] respond very ably to spontaneous speaking situations. We're going to practice, because that's what we do.
+[2574.000 --> 2577.440] Here's the situation. Is everybody familiar with this child's toy? It's a Slinky.
+[2577.760 --> 2587.120] You are going to sell this Slinky to your partner using either problem-solution-benefit or
+[2587.120 --> 2593.120] opportunity-solution-benefit. What does the Slinky provide you? Or you could use what, so what,
+[2593.120 --> 2596.640] now what. What is it? Why is it important? The next steps might be to buy it.
+[2597.440 --> 2602.480] By using that structure, see how it already helps you? It helps you focus.
+[2603.440 --> 2608.400] We're only going to have one partner sell to the other partner.
+[2609.840 --> 2613.840] So get with your partner. One of you will volunteer to sell to the other.
+[2614.880 --> 2620.880] Sell a Slinky using problem-solution-benefit or what, so what, now what. Please begin.
+[2632.480 --> 2642.400] So we have the handouts, but I'm also going to be doing the microphone.
+[2642.400 --> 2646.240] So when I debrief this, you can go ahead and pass them out. Does that make sense?
+[2647.440 --> 2649.600] No, no, after this activity.
+[2692.480 --> 2710.000] 30 more seconds please.
+[2710.000 --> 2726.000] Excellent. Let's all close the deal, seal the deal.
+[2731.040 --> 2735.920] I have never seen more people in one place doing this at the same time.
+[2735.920 --> 2741.680] I love it. I teach people to gesture and gesture big. It's great. I love it.
+[2741.680 --> 2748.160] So if you were the recipient of the sales pitch, thumbs up: did they do a good job?
+[2748.160 --> 2754.720] Did they use the structure? Awesome. I'm recruiting you all for my next business as my salespeople.
+[2754.720 --> 2761.040] Please try to ignore this, but as we're speaking, the handout I told you about is coming around.
+[2761.760 --> 2766.960] On the back of that handout, you are going to see a list of structures, the two we talked about,
+[2766.960 --> 2770.800] and several others that can help you in spontaneous speaking situations.
+[2771.440 --> 2776.720] These structures help because they help you understand how you're going to say what you say.
+[2777.440 --> 2781.680] Structure sets you free, and I know that's kind of ironic, but it's true. If you have that
+[2781.680 --> 2785.040] structure, then you're free to think about what it is you're going to say.
+[2786.160 --> 2790.800] It reduces the cognitive load of figuring out what you're saying and how you're going to say it.
+[2791.360 --> 2792.720] All of this is on that handout.
+[2795.520 --> 2800.400] So what does this all mean? It means that we have within our ability
+[2802.320 --> 2807.040] the tools and the approaches to help us in spontaneous speaking situations.
+[2807.040 --> 2812.000] The very first thing we have to do is manage our anxiety, because you can't be an effective speaker
+[2812.720 --> 2818.320] if you don't have your anxiety under control. And we talked about how you can do that by greeting
+[2818.320 --> 2821.920] your anxiety, reframing as a conversation, and being in the present moment.
+[2823.440 --> 2830.240] Once you do that, you need to practice a series of four steps that will help you speak spontaneously.
+[2830.240 --> 2835.680] First, you get out of your own way. I would love it if all of you, on your way from here to the football
+[2835.680 --> 2841.920] game, point at things and call them the wrong name. It'll be fun. If most of us do it, then it won't
+[2841.920 --> 2848.960] be weird. If only one or two of us do it, it will be weird. Second, give gifts. By that I mean see
+[2848.960 --> 2856.240] your interactions as ones of opportunity, not challenges. Third, take the time to listen.
+[2857.600 --> 2864.800] Listen. And then finally, use structures. And you have to practice these structures. I practice
+[2864.800 --> 2868.640] these structures on my kids. I have two kids. When they ask me questions, I usually answer them
+[2868.640 --> 2874.080] in what, so what, now what. They don't know it. But when they go over to their friends' houses and
+[2874.080 --> 2878.480] they see their friends ask their dads questions, they don't get what, so what, now what. So, you know,
+[2878.480 --> 2882.160] you have to practice. The more you practice, the more comfortable you will become.
+[2883.600 --> 2888.320] Ultimately, you have the opportunity before you to become more compelling, more confident,
+[2888.320 --> 2895.440] more connected as a speaker if you leverage these techniques.
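An editor's aside for readers of this transcript: the two structures described above work like fill-in templates, which is why they reduce the load of deciding how to say something. Below is a minimal sketch in Python; the function names and sample strings are the editor's invention for illustration, not the speaker's material.

    def problem_solution_benefit(problem, solution, benefit):
        # Problem-solution-benefit: name the issue, a way to solve it, the payoff.
        return " ".join([problem, solution, benefit])

    def what_so_what_now_what(what, so_what, now_what):
        # What, so what, now what: the thing, why it matters, the next step.
        return " ".join([what, so_what, now_what])

    # A hypothetical Slinky pitch assembled from the template's three slots.
    print(problem_solution_benefit(
        "Desk cables tangle.",
        "A Slinky's coils hold them apart.",
        "You get a tidy desk plus a toy for idle hands."))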
If you're interested in learning
+[2895.440 --> 2900.000] more, this is where I do a little plug. I've written a book; many of the MBA students who take the
+[2900.000 --> 2904.160] strategic communication classes here that I and others teach read it. It's called Speaking Up
+[2904.160 --> 2909.840] Without Freaking Out. More importantly, there's a website here that I curate called No Freaking
+[2909.840 --> 2915.200] Speaking. And it has lots of information that I've written and others have written about how to
+[2915.200 --> 2920.720] become more effective at speaking. So that's the end of my plug. What I'd really like to do is enter
+[2920.720 --> 2926.160] into a spontaneous speaking situation with you. And I would love to entertain any questions that
+[2926.160 --> 2930.560] you have. There are two people who are running around with microphones. So for some of us who remember
+[2930.560 --> 2935.280] the Phil Donahue show, we're going to do a little bit of that. If you have a question, the microphone
+[2935.280 --> 2941.680] will come and I'm happy to answer it. I think you have to turn it on. Yes, yeah. Speak right in here.
+[2941.680 --> 2948.400] Can you talk about hostile situations? Hostile situations. Yes. So when you find yourself in a
+[2948.400 --> 2954.400] challenging situation, first, it should not be a surprise to you. It should not be a surprise.
+[2954.400 --> 2959.200] Before you ever speak, you should think about what the environment is going to be like. So it
+[2959.200 --> 2965.280] shouldn't surprise you that there might be some challenges in the room. When hostile
+[2965.280 --> 2970.960] situations do arise, you have to acknowledge them. So if somebody says, that's a ridiculous idea,
+[2970.960 --> 2975.680] why did you come up with that? To simply say, so the idea I came up with was, right?
+[2975.680 --> 2979.120] That doesn't acknowledge the emotion. And I recommend not naming the emotion:
+[2980.560 --> 2984.240] you sound really angry. I'm not angry. I'm frustrated. Now we're arguing over their
+[2984.240 --> 2989.120] mental state, their emotional state. So I say something like, I hear you have a lot of passion on this
+[2989.120 --> 2993.200] issue, or, I hear there's great concern from you. So you acknowledge the emotion, because otherwise
+[2993.200 --> 2998.800] it sits in the room, and then reframe and respond in the way that makes sense. So if somebody raises
+[2998.800 --> 3003.040] their hand and says, your product is ridiculously priced, why do you charge so much?
+[3003.440 --> 3009.280] I might say, I hear great concern, and what you're really asking about is the value of our product.
+[3009.280 --> 3013.040] And I would give my value proposition and then I would come back and say, and because of the value
+[3013.040 --> 3018.480] we provide, we believe it's priced fairly. So you answer the question about price, but you've
+[3018.480 --> 3025.760] reframed it in a way that you feel more comfortable answering it. So the way to do this is to practice
+[3025.760 --> 3031.200] all the skills we just talked about. The only skill that I'm adding to this is the awareness
+[3031.200 --> 3036.480] in advance that you might be in that situation. First, I have to truly listen to what I'm hearing.
+[3036.480 --> 3041.840] Right? It's very easy for me, when I hear a challenging question, to get all defensive and not
+[3041.840 --> 3047.040] hear what the person's asking. I see it as an opportunity to reframe and explain. Okay? So again,
+[3047.040 --> 3051.760] you have to practice, but that's how I think you address it. Other questions? I see a
+[3051.760 --> 3055.360] question back here. Yes, please. First of all, thank you very much. Great, great presentation.
+[3055.360 --> 3059.920] Thank you. For a lot of the speaking I do, I have remote audiences,
+[3059.920 --> 3065.680] audiences distributed all over the country via telecom. Any tips for those kinds of audiences?
+[3065.680 --> 3071.360] So, when you are speaking in a situation where not everybody is co-located, okay? In fact,
+[3071.360 --> 3074.960] right at this very moment, there are people watching this presentation remotely.
+[3076.480 --> 3083.360] What you need to do is be mindful of it. Second, try to include engagement techniques where the
+[3083.360 --> 3089.280] audience actually has to do something. So physical participation is what we did here through the
+[3089.280 --> 3094.480] games. You can ask your audience to imagine something. Imagine what it would be like if,
+[3094.480 --> 3098.000] when we try to achieve a goal. Rather than say, here's the goal we're trying to achieve,
+[3098.000 --> 3101.760] say, imagine what it would be like if. See what that does to you. It pulls you in.
+[3101.760 --> 3106.000] You can take polling questions. Most of the technology that you're referring to has some kind of
+[3106.000 --> 3111.760] polling feature. You can open up some kind of wiki or Google Doc or some collaborative tool
+[3111.760 --> 3116.000] where people can be doing things and you can be monitoring that while you're presenting.
+[3116.560 --> 3121.120] So I might take some breaks. I talk for 10, 15 minutes and say, okay, let's apply this and let's
+[3121.120 --> 3126.400] go into this Google Doc I've created, and I see what people are doing. So it's about variety
+[3126.400 --> 3131.280] and it's about engagement. Those are the ways that you really connect to people who are remote from
+[3131.280 --> 3137.280] you. Other questions? You're pointing out... I've got to look for where the mic is. Yes, please.
+[3137.280 --> 3142.000] This may be similar to the first question, but I do a lot of expert witness testimony.
+[3142.000 --> 3145.920] What's your recommendation for handling cross-examination? Specifically.
+[3146.960 --> 3149.040] Specifically, I feel like I'm being cross-examined. Right.
+[3150.560 --> 3156.080] So in any speaking situation that you go into that has some planned element to it,
+[3156.080 --> 3160.800] I recommend identifying certain themes that you think are important or believe need to come out.
+[3160.800 --> 3164.960] And then with each one of those themes, have some examples and concrete evidence that you can
+[3164.960 --> 3171.760] use to support it. You don't go in with memorized terms or ways of saying it. You just have ideas
+[3171.760 --> 3176.000] and themes, and then you put them together as necessary. So when I'm in a situation where people
+[3176.000 --> 3181.440] are interrogating me, I have certain themes that I want to get across, and I make sure that I can do
+[3181.440 --> 3189.760] that in a way that fits the needs in the moment. If it's hostile, again, the single best tool you have
+[3189.760 --> 3194.480] to buy yourself time and to help you answer a question efficiently is paraphrasing.
+[3194.480 --> 3200.080] The paraphrase is like the Swiss Army knife of communication. If you remember the show MacGyver,
+[3200.080 --> 3206.320] it's your MacGyver tool. Right.
So when a question comes in, the way you paraphrase it allows you the +[3206.320 --> 3213.600] opportunity to reframe it, to think about your answer, and to pause and make sure you got it right. +[3213.600 --> 3216.880] So when you're in those situations, if you have the opportunity to paraphrase, say, so what you're +[3216.880 --> 3223.120] really asking about is x, y, and z, that gives you the opportunity to employ one of these techniques. +[3223.120 --> 3228.320] Now I've never been an expert witness because I'm not an expert on anything, but those tools I believe +[3228.400 --> 3234.000] could be helpful. The microphone is back there. Thank you. Thank you so much. This has been so +[3234.000 --> 3238.640] helpful and enjoyable this morning. Thank you. Would you please show the last screen so we can get +[3238.640 --> 3243.680] down the name of the book that you've written and the information? Absolutely. Thank you. +[3244.400 --> 3248.240] I think they actually, you might even have an opportunity to, you know, it's on the sheet too. +[3248.240 --> 3252.480] Everything I said is on the back of that sheet, but I'm happy to have this behind me while I talk. +[3253.200 --> 3262.240] Other questions? Yes, please. Yes. I work with groups that represent many different cultural +[3262.240 --> 3268.160] backgrounds. So are there any caveats or is this a universal strategy? +[3269.280 --> 3275.680] So in terms of, from your perspective as the speaker, I believe this applies. But whenever you +[3275.680 --> 3281.360] communicate, part of the listening aspect is also thinking about, who is my audience +[3281.360 --> 3285.680] and what are their expectations? So what are the cultural expectations of the audience that I'm +[3285.680 --> 3291.360] presenting to? So there might be certain norms and rules that are expected. So when I travel and do +[3291.360 --> 3298.080] talks, I have to take into account where I'm doing the presentation. So I help present in the +[3298.080 --> 3303.200] Ignite program. And if you have not heard about the Ignite program here at the GSB, it's fantastic. +[3303.200 --> 3307.840] And I just did a presentation standing in one of these awesome classrooms that have all these +[3307.840 --> 3315.440] cameras. And I just taught 35 people in Santiago, Chile. And I needed to understand the cultural +[3315.440 --> 3321.280] expectations of that area and what they expect and what they're willing to do when I ask them to +[3321.280 --> 3326.640] participate. So it's part of that listening step where you reflect on what are the expectations of +[3326.640 --> 3330.720] the audience. I think we have time for two more questions. And then I'm going to hang around afterwards +[3330.720 --> 3334.880] if anybody has individual questions. But some of these folks really want me to keep on schedule. +[3334.880 --> 3338.160] Yes, please. I wanted to ask a question. One of the things that you've done effectively in your +[3338.160 --> 3343.360] talk, and I've seen other effective speakers do, is interject humor in their talk. What are +[3343.360 --> 3348.480] the risks and rewards of trying to do that? Well, first, thank you. And I appreciate all of you laughing. +[3348.480 --> 3352.640] Those are the sum total of all my jokes; you've heard them. I am not funny beyond those jokes. +[3353.520 --> 3359.120] So humor is wonderfully connecting. It's wonderfully connecting. It's a great tool for connection. +[3359.120 --> 3365.680] It is very, very risky.
Cultural reasons get in the way. Sometimes what you think is funny isn't +[3365.680 --> 3372.000] funny to other people. What research tells us is that if you're going to try to be funny, self-deprecating +[3372.000 --> 3378.960] humor is your best bet. Because it is the least risky. There is nothing worse than putting out a joke +[3378.960 --> 3385.040] and having no response. It actually sets you back farther than where you would have gotten +[3385.040 --> 3390.560] if the joke had hit. So basic fundamentals you need to think about with humor. +[3390.560 --> 3396.960] One, is it funny? How do I know? I ask other people first. Second, what happens if it doesn't work? +[3396.960 --> 3402.320] Have a backup plan. And then third, if you're worried about the answers to those first two, +[3402.320 --> 3407.440] don't do it. One last question, please. The microphone is right here. And then like I said, +[3407.440 --> 3413.360] I will hang around afterwards. Yes, please. I am sort of on the opposite side of this since I am a +[3413.360 --> 3419.200] journalist and I frequently have to ask spontaneous questions of people who have been through media +[3419.200 --> 3431.680] training. Yes. So any tips for chinks in the armor? A way to ask a question without being antagonistic, +[3431.680 --> 3438.000] but get a facsimile of a straight answer. Well, so let me give you two answers. One is I have young +[3438.160 --> 3444.560] boys and the power of the why is great. Just ask why a couple times and you can get through those first two +[3444.560 --> 3450.800] layers of training. Why do you say that? How do you feel about that? The second bit is +[3452.400 --> 3456.720] what I have found successful in getting people to answer in a more +[3456.720 --> 3461.840] authentic way. What I'll do is I'll ask them to give advice. So what advice would you give +[3461.840 --> 3466.480] somebody who's challenged with this or what advice would you give to somebody in this situation? +[3466.480 --> 3471.520] And by asking for the advice, it changes the relationship they have to me as the question +[3471.520 --> 3476.560] asker and I often get much richer, more detailed information. So the power of the why, and then put +[3476.560 --> 3483.040] them in a position of providing guidance, and that can really work. With that, I am going to thank +[3483.040 --> 3488.560] you very much. I welcome you to ask questions later and enjoy the rest of your reunion weekend. Thank +[3488.560 --> 3494.780] you. diff --git a/transcript/allocentric_HlEWIAiqSoc.txt b/transcript/allocentric_HlEWIAiqSoc.txt new file mode 100644 index 0000000000000000000000000000000000000000..47472e6e8fffa784ee01f6f3b246df2293797873 --- /dev/null +++ b/transcript/allocentric_HlEWIAiqSoc.txt @@ -0,0 +1,160 @@ +[0.000 --> 4.680] Hang on a second. +[4.680 --> 7.200] I think that's an autistic person. +[7.200 --> 15.640] Alrighty then, these are the top signs and traits to look out for if you think an adult +[15.640 --> 17.600] in your life may be autistic. +[17.600 --> 22.800] The first sign to spot an autistic adult is that they prefer alone time rather than +[22.800 --> 24.600] the company of others. +[24.600 --> 28.960] So while they may like spending time with you, you might be their partner or their friend, +[28.960 --> 33.000] they prefer not to entertain others in their own home.
+[33.000 --> 36.560] As an autistic adult, our home really is our safe space and it's no different for +[36.560 --> 37.560] kids. +[37.560 --> 43.000] But as you get older and there's more stresses thrown upon you, more demands placed upon +[43.000 --> 46.000] you, your home really becomes this fortress of solitude. +[46.000 --> 51.680] I'd also say autistic adults, including me, can be very protective in maintaining our +[51.680 --> 53.200] safe place. +[53.200 --> 56.680] And I'd go as far as to say, to the detriment of others. +[56.840 --> 59.080] Now you might think, what? +[59.080 --> 64.760] If this is our safe place and other people want to come into that, it doesn't really matter +[64.760 --> 68.480] what effect we have on them to make that go away. +[68.480 --> 73.680] You know, we're super protective of this safe zone, to the detriment of others, which really +[73.680 --> 76.120] doesn't even appear on our radar. +[76.120 --> 79.880] And the last thing I'd say about safe zones, or your home, for an autistic adult or someone +[79.880 --> 86.640] you think may be an autistic adult, is this disproportionate response, this overreaction +[86.640 --> 91.120] in your mind to the simplest things, like a door knock or an uninvited guest. +[91.120 --> 93.840] And for me, you could throw in just too many people in my home. +[93.840 --> 96.760] These are the things that you might think, who cares, someone's at the door, someone just +[96.760 --> 99.840] rocked up to say hello or, you know, there's lots of people here and we're all having +[99.840 --> 100.840] fun. +[100.840 --> 102.480] You might think that; for me, that's not the case. +[102.480 --> 105.240] This is not anything mere. +[105.240 --> 108.480] This is a major intrusion on my safe zone. +[108.480 --> 111.880] So yeah, there's going to be different reactions and they're going to seem disproportionate. +[111.880 --> 117.600] Another sign to spot an autistic adult in your life is, do they have communication challenges +[117.600 --> 120.160] or do they communicate in a very different way? +[120.160 --> 125.480] Like I do. Do you find them constantly asking questions or interrupting you +[125.480 --> 127.080] while you're trying to tell them something? +[127.080 --> 132.960] Do you find yourself being peppered with follow-up questions that aren't always even relevant +[132.960 --> 134.560] to the topic of the conversation? +[134.560 --> 140.360] Autistic adults often like to question every point of a conversation, dissecting every +[140.360 --> 141.360] last word. +[141.720 --> 143.600] I do this to my wife all the time. +[143.600 --> 151.120] I do it to process what I'm hearing so I can understand it and I can contribute. +[151.120 --> 156.120] Of course that doesn't mean it's not incredibly frustrating for the people in the conversation +[156.120 --> 157.120] with me. +[157.120 --> 158.120] I get that. +[158.120 --> 164.280] But critically, without the endless questions, for the most part, autistic people will +[164.280 --> 167.440] tend to simply misinterpret what you're saying. +[167.440 --> 172.120] So but for all these endless questions, we may never interpret correctly what you're +[172.120 --> 173.760] trying to convey to us. +[173.760 --> 176.400] So there's a point to them, even if it's frustrating.
+[176.400 --> 179.920] So as an autistic adult, let's say with my wife, if I'm having a conversation or she's +[179.920 --> 184.320] trying to tell me something and let's say I decide, I'm just going to listen from start +[184.320 --> 190.760] to finish, suppress all urges, the chances are I'll misinterpret what she says and I'll +[190.760 --> 193.120] launch some sort of counterattack. +[193.120 --> 196.160] So I'll take it the wrong way and attack. +[196.160 --> 200.080] Seeing what I don't understand as a personal attack on me that I must attack back against, or I'll +[200.080 --> 202.680] just go off on a tangent that's completely irrelevant. +[202.680 --> 206.480] Autistic adults can also become uninterested in conversations really quickly. +[206.480 --> 211.120] We can lose focus and patience and honestly sometimes I'll just say to my wife, can you +[211.120 --> 212.120] just get to the point? +[212.120 --> 213.400] What are you trying to tell me? +[213.400 --> 215.040] Can you just tell me what you're trying to tell me? +[215.040 --> 216.880] And often there is no point. +[216.880 --> 221.280] See, as an autistic person, it doesn't occur to me that people would talk when they have +[221.280 --> 222.720] no point to make. +[222.720 --> 223.800] They would just talk. +[223.800 --> 228.880] My wife is entitled to just vent, to just debrief, to just bitch. +[228.880 --> 231.720] She's entitled to just tell me a story. +[231.720 --> 233.920] No point, just a story she wants to share. +[233.920 --> 236.400] For an autistic person, this can be very confusing. +[236.400 --> 240.840] So it works both ways; you have to understand where it's coming from on both sides. +[240.840 --> 248.040] The next sign to spot an autistic adult is that they seem to focus their time and energy +[248.040 --> 253.760] inwardly, an inward focus, rather than, say, focusing outwardly, +[253.760 --> 256.920] like many neurotypical, non-autistic people. +[256.920 --> 262.960] It's been said that women focus on people, while men focus on things, and that may be +[262.960 --> 267.360] right or wrong, but for autistic people it's even more specific than that. +[267.360 --> 272.520] Autistic adults tend to spend a lot of their time, if not all their time, focusing on their +[272.520 --> 275.200] passions, their special interests. +[275.200 --> 279.720] In other words, we adopt an inward focus by default. +[279.720 --> 281.640] It's not something we've chosen to do. +[281.640 --> 285.800] We just wake up and, by default, focus inwardly. +[285.800 --> 291.040] So there's a clear favoring of our passions, our interests over everything else. +[291.040 --> 298.080] And part of that inward focus is a tendency to mask or suppress our true selves and our +[298.080 --> 301.000] true emotions and feelings, to keep them inside, +[301.000 --> 306.440] while at the same time struggling to interpret, to process and deal with these +[306.440 --> 310.360] emotions and feelings that we're trying to hide. +[310.360 --> 315.080] This next sign to spot an autistic adult is, do they seem to live in a world of their +[315.080 --> 316.080] own? +[316.080 --> 319.960] Autistic adults can sometimes just appear clueless to what's happening around them. +[319.960 --> 321.920] Unaware of what's going on around them. +[321.920 --> 323.000] Stuck in their own little world. +[323.000 --> 327.440] I absolutely can struggle with the awareness of others around me or the awareness of others +[327.440 --> 328.440] in general.
+[328.440 --> 333.920] And this would include a lack of awareness of the presence, wants, needs, feelings, and +[333.920 --> 336.200] intentions of people we're spending our time with. +[336.200 --> 341.960] We can also lack an awareness of time and space, our surroundings, environment and our +[341.960 --> 343.240] own personal needs. +[343.240 --> 347.520] We can also appear to be living in a world of our own because we can really struggle with +[347.520 --> 354.120] identifying body language, verbal and nonverbal cues, voice tone, and just generally language, +[354.120 --> 358.680] which can make us feel like we're an alien living on a foreign planet. +[358.680 --> 364.920] The next sign to spot an autistic adult is that they tend to struggle with multitasking, +[364.920 --> 368.600] so managing multiple tasks, demands or even interactions. +[368.600 --> 375.240] For me as an autistic adult, I have a strong urge or need that I must complete a task before +[375.240 --> 378.240] moving on to another task. +[378.240 --> 382.360] And there may not be any logical reason why one task is more important than another to other +[382.360 --> 383.360] people. +[383.360 --> 385.840] But for me, this must be done before I can do this. +[385.840 --> 388.480] And I would put this sign under the banner of executive function. +[388.480 --> 392.080] Okay, so we have executive function challenges. +[392.080 --> 397.960] Like for example, in my case, not being able to appropriately prioritize tasks. +[397.960 --> 401.400] So an example for me is I can put certain tasks first. +[401.400 --> 403.880] I can make them a priority. +[403.880 --> 407.080] While to others, they're not actually important or the priority. +[407.080 --> 408.280] But in my mind, they are. +[408.280 --> 413.920] I can also feel a strong sense of resentment towards other people or other tasks, +[413.920 --> 417.240] things that are not remotely connected to my interest or passion, +[417.240 --> 423.840] being in the way of me doing tasks that are connected to my interests, passions, that +[423.840 --> 426.160] are a priority of mine. +[426.160 --> 428.280] People, tasks come up and get in the way. +[428.280 --> 430.160] I'm doing what I want to do. +[430.160 --> 431.160] Bad. +[431.160 --> 434.400] At uni, when I was studying law, and this was obviously very bad, +[434.400 --> 439.480] I had to complete one assessment or essay or whatever you want to call those kinds of +[439.480 --> 442.920] in-semester assessments one at a time. +[442.920 --> 448.600] It didn't matter if multiple assessments were due at the same time. +[448.600 --> 452.760] I could only work on one at a time before moving on to the next assessment. +[452.760 --> 457.480] I guess I struggled to switch between thoughts and themes and I thought, well, if I'm doing +[457.480 --> 463.440] an assessment on criminal law, how could I possibly concurrently do an assessment on +[463.440 --> 465.160] property law? +[465.160 --> 468.720] I can't, whoa, that's, no, that doesn't compute. +[468.720 --> 470.440] Not even if they were due at the same time. +[470.440 --> 475.440] Another sign to spot an autistic adult in your life is that they appear just generally super +[475.440 --> 480.280] sensitive to things like smells and tastes and noises and lights. +[480.280 --> 486.440] And I'm talking sensitive to a level that doesn't seem right to you or other people.
+[486.440 --> 491.800] In other words, they may be sensitive to smells or tastes or noises or lights that don't +[491.800 --> 492.800] bother anyone else. +[492.800 --> 498.040] So on the surface, it can seem unbelievable, disproportionate, just plain made up. +[498.040 --> 505.040] But sensory processing challenges and hypersensitivity to senses like smell, touch, taste, noise, +[505.040 --> 506.040] light, +[506.040 --> 511.480] these are very common challenges for autistic people. +[511.480 --> 516.480] A particular paradox that can really frustrate my family is I can be really hypersensitive +[516.480 --> 520.880] to noises, so I can get really startled so quickly. +[520.880 --> 526.040] I get startled all the time and a lot of times I end up just putting my hands on my ears +[526.040 --> 530.680] because I can't hear this noise anymore or I don't know how to get past this noise. +[530.680 --> 536.080] But the paradox: being hypersensitive to noise, but why are you so bloody loud, Ryan? +[536.080 --> 540.160] You're always talking loud, you're so loud, you're banging and clanging. It's funny, +[540.160 --> 541.560] it's a paradox, I guess. +[541.560 --> 546.200] It's interesting and I think it's pretty common: as an autistic person, I am really super sensitive +[546.200 --> 550.160] to banging and clanging and noises, but I am that person. +[550.160 --> 558.120] Also, and this is a sign you may have noticed, certain voices or noises or actions can set +[558.120 --> 562.720] off an autistic person straight away out of nowhere and it just makes no sense how that's +[562.720 --> 563.720] possible. +[563.720 --> 566.160] For me, a squeaky door. +[566.160 --> 569.080] Loud eaters. Can't be in the room with loud eaters. +[569.080 --> 570.800] You know what's worse? +[570.800 --> 571.800] Sloppy drinkers. +[571.800 --> 573.320] Do you know what's worse than that? +[573.320 --> 577.880] I'm a loud eater and I'm a sloppy drinker. +[577.880 --> 581.080] You being part of this community means so much to me, so thank you +[581.080 --> 585.720] for clicking subscribe, joining the community and supporting me. I'm Ryan Kelly, that autistic +[585.720 --> 587.920] guy, and till my next video, we'll talk soon. diff --git a/transcript/allocentric_I2azLvESwDY.txt b/transcript/allocentric_I2azLvESwDY.txt new file mode 100644 index 0000000000000000000000000000000000000000..832f751751030831216055d54d76959d99d7f58c --- /dev/null +++ b/transcript/allocentric_I2azLvESwDY.txt @@ -0,0 +1,355 @@ +[60.000 --> 68.440] While mobility techniques themselves are fairly standard, some modifications might be necessary +[68.440 --> 70.240] for deafblind children. +[70.240 --> 74.280] Although the techniques are similar to those used by blind youngsters, the manner in which +[74.280 --> 79.320] these techniques are taught will differ considerably, in that an instructor may have to rely on +[79.320 --> 84.160] a far greater nonverbal component of instruction when working with the deafblind. +[90.000 --> 98.260] During the pre-cane phase of training, a student learns various forms of protective +[98.260 --> 100.940] arm techniques in a familiar area. +[100.940 --> 105.080] The various trailing techniques will be used to develop a good line of travel with a fast +[105.080 --> 114.680] and effective speed so there won't be too great a tendency to veer.
+[114.680 --> 118.580] With the knowledge of their own bodies and the means to move through space, students +[118.580 --> 127.500] can not only move purposefully, but protect themselves appropriately as well. +[127.500 --> 132.280] Deafblind students need to learn the concepts of the sighted guide technique very early +[132.280 --> 135.620] and become sensitive to the movements of their guides. +[135.620 --> 141.040] Most of this is done nonverbally, the child relying on a developing sense of touch and +[141.040 --> 150.760] ability to respond to the movements of the guide. +[150.760 --> 155.240] The specific age at which cane training should begin varies considerably. +[155.240 --> 159.480] There are a number of factors involved related to the individual student. +[159.480 --> 163.680] Rather than citing a chronological age, the instructor might consider a student's attitude +[163.680 --> 169.440] and interest in the cane, the student's level of maturity, the residual vision and hearing, +[169.440 --> 174.040] balance and coordination, their level of self-awareness and body image and their need +[174.040 --> 176.200] to learn the skill. +[176.200 --> 180.240] Sometimes an adventitiously deafblind child might have a real potential for adopting the +[180.240 --> 185.360] cane and learning some basic skills rather quickly, while on the other hand a congenitally +[185.360 --> 190.040] blind child, or one who has a great deal of difficulty with travel, might take longer to +[190.040 --> 195.400] learn an appropriate technique but might have a real need to travel. +[195.400 --> 200.280] The students learn that the cane is an extension of their own tactile sense and gradually learn +[200.280 --> 203.960] to trust the cues they receive through the cane. +[203.960 --> 209.600] Young children need to learn that the cane is a tool instead of a toy and use it in context +[209.600 --> 216.000] with route travel and various recurring routes such as going to recess, lunch, the toilet, +[216.000 --> 220.160] the play yard, the swimming pool. +[220.160 --> 224.280] A great deal of the usefulness of the cane lies in its coordination with the movement of +[224.280 --> 229.720] the feet, being in step with the cane so that the cane tip covers the area where the +[229.720 --> 231.920] next footstep will fall. +[231.920 --> 235.720] Even from the beginning it's advisable to have the students in step with the movements +[235.720 --> 237.040] of the cane. +[237.040 --> 241.840] The instructor might start the student with one foot back and as the rear foot comes forward +[241.840 --> 245.360] the instructor moves the cane across the student's body. +[245.360 --> 249.760] Then as the student steps again the instructor moves the cane back. +[249.760 --> 253.360] Relaxation and smoothness are real factors here. +[253.360 --> 258.440] An instructor should be interested in smoothness, not a jerky robot-like motion which has a tendency +[258.440 --> 262.600] to tighten the arm and stiffen the student's entire body. +[262.600 --> 267.200] A stiff arm receives fewer and weaker cues through the cane. +[267.200 --> 273.760] The student has to feel what it's like to be in step and begin to internalize that feeling. +[273.760 --> 279.200] When the student gets out of step, simply stop the student and begin again; skipping twice +[279.200 --> 283.760] or shifting the cane suddenly might have no meaning at all for a deafblind student with +[283.760 --> 287.080] little inner language or poor communication skills.
+[287.080 --> 292.160] For this type of student it's easier to teach the feeling of actually being in step +[292.160 --> 294.840] as opposed to making modifications. +[294.840 --> 300.280] The best means of correction is to stop, get the student into a starting position, and +[300.280 --> 305.200] begin again in step. +[305.200 --> 308.720] There are a number of ways to teach the length and movement of the arc. +[308.720 --> 311.120] One way is to use the auditory sense. +[311.120 --> 316.480] Even if students are profoundly deaf, most can hear and feel the sound generated by +[316.480 --> 318.760] clapping two wooden blocks together. +[318.760 --> 323.360] Another sound that most profoundly deafblind students can hear is two sections of metal +[323.360 --> 325.400] pipe banging together. +[325.400 --> 329.560] It's important for the students to realize the approximate width of the cane arc; the +[329.560 --> 334.240] sound can be used to indicate the far reaches of the arc to the student. +[334.240 --> 339.480] This means may be inappropriate for some students who may require a close hands-on approach, +[339.480 --> 344.680] but may work well for other students in the early stages of instruction in technique. +[344.680 --> 349.120] If the arc is wider on one side of the student's body than the other, he'll usually tend to +[349.120 --> 350.800] veer in that direction. +[350.800 --> 354.820] At this point the instructor may want to straighten the student's cane arm and reposition +[354.820 --> 358.400] his wrist in the center of his body to balance the arc. +[358.400 --> 362.180] This may have to be done frequently in the beginning of training because the student, +[362.180 --> 367.320] unused to holding his arm in this position for extended periods of time, tends to fatigue. +[367.320 --> 371.900] The student's arm relaxes and slumps closer to the body and the cane arc becomes more +[371.900 --> 374.260] pronounced to that side. +[374.260 --> 378.100] A student can learn to make his own center line check by grasping his own wrist with +[378.100 --> 379.780] his opposite hand. +[379.780 --> 383.620] An instructor can tap the student's wrist a couple of times as he makes this check to +[383.620 --> 387.700] foster an association between the tapping on the wrist and the need for a center line +[387.700 --> 388.820] check. +[388.820 --> 393.500] Communication and cues must develop concurrently with development of +[393.500 --> 398.480] cane technique. +[398.480 --> 403.540] The slide technique is so called because the cane tip slides along in constant contact +[403.540 --> 405.260] with the ground. +[405.260 --> 410.420] With a touch technique, the cane, moving laterally, may move off the edge of a curb at such an angle +[410.420 --> 414.820] that a deafblind student may not detect it, then suddenly trip off the curb. +[414.820 --> 419.060] The advantage of the slide technique is that the cane can detect a drop-off from any point +[419.060 --> 420.460] on the arc. +[420.460 --> 425.180] Even travelers with use of their hearing usually switch from a touch to a slide technique +[425.180 --> 429.900] when their auditory sense tells them that a corner is near. +[429.900 --> 434.580] In some cases, the students will tend to use the cane to trail along a wall, a raised +[434.580 --> 436.180] edge or curb.
+[436.180 --> 440.060] Deafblind students who have very little use of the auditory sense don't have the same +[440.060 --> 444.660] use of additional cues that would help them parallel such sounds as pedestrian traffic +[444.740 --> 446.820] or light vehicle flow. +[446.820 --> 451.780] Few deafblind travelers can use sound reflections from buildings and walls to keep a constant +[451.780 --> 453.140] distance. +[453.140 --> 457.980] This is especially true if they use only one hearing aid and the balance of the aided +[457.980 --> 460.780] and unaided ear is not very close. +[460.780 --> 465.740] Many deafblind travelers, especially those whose impairment is congenital, tend to stay close +[465.740 --> 468.220] to the security of a guiding edge. +[468.220 --> 471.380] This is a slower technique and it's not ideal. +[471.380 --> 476.180] There are places and times where this is extremely inconvenient, for instance a sidewalk during +[476.180 --> 480.900] heavy pedestrian use or near shopping areas where pedestrians are more interested in the +[480.900 --> 485.420] merchandise in the windows than the travelers walking near the walls. +[485.420 --> 490.420] The touch and drag technique is useful for finding the ends of walls, intersecting hallways +[490.420 --> 491.900] and paths. +[491.900 --> 495.780] The students must take care to keep the arc wide enough on the side opposite the wall +[495.780 --> 501.460] or edge to cover themselves from oncoming pedestrians or obstructions. +[501.460 --> 509.380] A mobility instructor working with deafblind travelers must necessarily remain closer, because +[509.380 --> 514.340] speech may not be the most effective means of teaching and monitoring a student's techniques. +[514.340 --> 518.500] While a teacher may begin by physically moving in and controlling the student's cane, the +[518.500 --> 523.260] touch becomes progressively lighter and the number of adjustments fewer as the instructor +[523.260 --> 525.700] moves gradually further away. +[525.700 --> 530.300] What began as a hand on the student's cane becomes later a hand pressing on the shoulders +[530.300 --> 534.140] and later perhaps a light touch to remind them that they have to make slight adjustments +[534.140 --> 535.660] in the technique. +[535.660 --> 540.620] This gradual moving away places a greater sense of control and responsibility into the +[540.620 --> 542.780] student's own hands. +[542.780 --> 547.420] Gestures and fingerspelling might be used to guide a student to make certain adjustments. +[547.420 --> 552.540] Still, some situations might require that the instructor be right there and make an immediate +[552.540 --> 555.660] check by taking a direct hand on the technique. +[555.660 --> 559.140] This is especially true in situations that are potentially dangerous. +[559.140 --> 563.140] The instructor should be in a position to ensure that the student navigates difficult +[563.140 --> 565.540] or dangerous areas safely. +[565.540 --> 573.900] Safety is always the primary concern. +[573.900 --> 576.820] Stairs pose a number of problems for any blind traveler. +[576.820 --> 580.980] The deafblind have additional problems in that they have difficulty hearing people coming +[580.980 --> 583.740] up or down stairs opposite them. +[583.740 --> 588.500] If two deafblind travelers meet on the stairs, the problems may be compounded.
+[588.500 --> 592.420] An instructor may want to have the deafblind traveler exaggerate the turning out of the +[592.420 --> 597.420] wrist used while on the stairs to provide that strength and leverage needed to protect +[597.420 --> 600.700] them against people bumping into them or falling over them. +[600.700 --> 605.140] The arm is in a better position to ward off people who veer into it and still provide a +[605.140 --> 607.740] good position for sensing the stairs. +[607.740 --> 615.260] The arm is stronger pushing when the wrist is turned out like this. +[615.260 --> 619.660] The student may choose to use side handrails or banisters in the middle or on either side +[619.660 --> 620.660] of the stairs. +[620.660 --> 625.660] An important consideration here is, will the line of travel from the stairs place the student +[625.660 --> 631.060] in a good position to contact the next landmark or continue that line of travel? +[631.060 --> 635.340] The instructor might want to consider that at the top or bottom of the stairs when deciding +[635.340 --> 644.140] which side to use when going up or down. +[644.140 --> 648.740] The touch and slide technique combines the advantages of the speed of the touch technique +[648.740 --> 652.340] with the sensitivity of the slide technique. +[652.340 --> 656.580] The cane remains in contact with the ground a bit longer at the extremes of the arc where +[656.580 --> 660.820] it touches down and raises very slightly for the movement across the arc in front of +[660.820 --> 662.260] the student. +[662.260 --> 666.740] Those blind travelers with hearing can hear the cane tip moving back and forth and can +[666.740 --> 676.940] make the necessary adjustments on the arc according to the sound. +[676.940 --> 679.380] Deafblind travelers must do it by feel. +[679.380 --> 683.300] The instructor can guide the student by light touch on the cane and provide frequent +[683.300 --> 687.860] checks until the student becomes proficient with the technique. +[687.860 --> 691.540] Gestures can be used to indicate the movements of the cane and the position at which the +[691.540 --> 695.180] cane touches. +[695.180 --> 701.540] The touch technique. +[701.540 --> 710.300] The slide technique. +[710.300 --> 722.540] The touch and slide technique. +[722.540 --> 727.580] The actual points of contact can be illustrated by a pen-tip marker attached to the tip of +[727.580 --> 731.140] a cane. +[731.140 --> 733.700] The touch technique. +[733.700 --> 736.460] The slide technique. +[736.460 --> 739.380] The touch and slide technique. +[739.380 --> 743.820] The three point technique is so called because the cane touches three times. +[743.820 --> 745.580] The first on the far side. +[745.580 --> 751.020] The second as it drags back across the student's body to find the curb, drain or other landmark. +[751.020 --> 754.740] The third time on the near side where it clears the area before resuming its arc on the +[754.740 --> 756.260] opposite side. +[756.260 --> 760.260] The three point is especially useful when the student is looking for some feature along +[760.260 --> 768.020] the edge of a sidewalk such as a landmark that would indicate a turn. +[768.020 --> 772.660] Spanning is a way to use one landmark, and a sense of relative direction, to find another +[772.660 --> 777.100] landmark or reference point within one or two cane lengths from the body and extended +[777.100 --> 778.100] arm.
+[778.100 --> 782.260] A student can either take a new line of direction from the reference point or use the reference +[782.260 --> 787.260] point to contact an additional landmark as a check on his position. +[787.260 --> 790.540] Cross spanning entails changing cane hands. +[790.540 --> 794.460] It can be used to find reference points which are further from the line of travel than +[794.540 --> 803.820] one cane length or to find the middle of two points to get a sense of relative position. +[803.820 --> 807.620] Squaring off is a technique used to initiate a straight line of travel. +[807.620 --> 812.060] A student may use the flat surface of a wall, balance the shoulder blades on it to get +[812.060 --> 817.300] flat against it, and move forward in a straight line. +[817.300 --> 821.900] A student might also use a pole and a curb, using the cane to make sure his line of direction +[821.900 --> 826.860] is straight and using the pole more for a positional reference than a directional one. +[826.860 --> 830.420] This technique may be used to ensure that the student is in the right position for a +[830.420 --> 834.340] street crossing or for crossing a wide area where there are few other landmarks or +[834.340 --> 836.300] positional checks. +[836.300 --> 840.340] When a student gets familiar with the crossing, he may not need to back up against the landmark +[840.340 --> 844.500] and square off, but might choose to use it merely as a reference point to get into position +[844.500 --> 846.300] for the crossing. +[846.300 --> 851.860] Just teaching shorelining, relating to walls and edges, is not a complete set of techniques. +[851.860 --> 857.220] Some congenitally deafblind students can't feel subtle changes in the shoreline and, without +[857.220 --> 862.420] clear landmarks to indicate specific turns, tend to veer and get lost. +[862.420 --> 866.620] They need to balance shorelining techniques with the touch technique and a sufficient rate +[866.620 --> 872.580] of speed to bridge open areas even if they tend to rely on shorelining. +[872.580 --> 876.900] The signal to speed up can be done with gentle pressure of the instructor's palm on the +[876.900 --> 878.420] student's back; +[878.420 --> 881.420] to slow down, gentle pressure on the chest. +[881.420 --> 885.660] It's useful to have the student get used to responding, so he'll react quickly to hand +[885.660 --> 889.500] pressure if there's an obstacle that could injure him. +[889.500 --> 896.060] If a deafblind student has a considerable amount of residual vision and can use it effectively +[896.060 --> 900.140] as a low vision traveler, a folding cane might be more appropriate. +[900.140 --> 905.220] The diagonal technique can be used as a backup sensory system, detecting curbs, stairs +[905.220 --> 909.020] or objects just out of the range of the student's peripheral vision. +[909.020 --> 915.660] This technique also serves as a double check on depth perception. +[915.660 --> 919.620] It can also call attention to the fact that the student has a visual impairment. +[919.620 --> 924.180] This may be critical at corners when a student undertakes a crossing and a car suddenly approaches +[924.180 --> 940.260] at high speed. +[940.260 --> 944.220] A low vision deafblind student can switch to a regular cane technique in unfamiliar +[944.220 --> 952.220] or ambiguous terrain and return to a diagonal technique when they get back to familiar territory.
+[982.220 --> 991.220] There is no absolutely ideal technique for every specific location or terrain. +[991.220 --> 998.220] A student might have a wide range of cane techniques or just enough simple ones for the few routes they travel. +[998.220 --> 1004.220] If the techniques get the students where they want to go and they feel comfortable with them, the techniques are effective. +[1004.220 --> 1011.220] Different cane techniques might be taught that are easier to use and afford the student less of a chance of getting lost or missing a landmark. +[1011.220 --> 1017.220] Techniques should be kept as simple as possible with the fewest number of modifications necessary. +[1017.220 --> 1026.220] If a student can master only one simple technique, he can practice it over a variety of different terrains and learn to maximize the effectiveness of that particular technique. +[1026.220 --> 1033.220] In many cases, a student makes modifications according to his own level of skill and his own specific needs. +[1033.220 --> 1040.220] An instructor should ensure that the modification affords sufficient protection for the student as well as serving effectively in a sensory function. +[1040.220 --> 1050.220] Techniques are building blocks of those skills that will enable the student to have access to his world and to become as much a part of that world as he can be. +[1050.220 --> 1062.220] The techniques should be taught in familiar areas so that the child will have a chance to work on those techniques in a meaningful context with a high enough rate of repetition to ensure they are internalized. +[1062.220 --> 1069.220] The techniques should fit into an orderly and consistent body of skills the student can master. +[1069.220 --> 1077.220] Techniques should fit the student's physical capabilities and needs. The student shouldn't be forced to learn a classic technique with absolute perfection. +[1077.220 --> 1084.220] The standard techniques are guides and capable of enormous modification while still retaining their usefulness. +[1084.220 --> 1089.220] The techniques should be fitted to the student rather than the student to the technique. +[1090.220 --> 1097.220] There is no ideal technique; what works best for the student in any particular circumstance or environment is the best technique. +[1097.220 --> 1103.220] There are no right or wrong techniques, only effective and ineffective ones. +[1103.220 --> 1118.220] Techniques are the means to use the long cane as a tool to enable students to use their orientation skills, their sensory training, their inner sense of direction to express their needs as travelers, to find their own way.
+[1233.220 --> 1238.220] A route is basically a travel path to an objective. +[1238.220 --> 1240.220] But it's more than that. +[1240.220 --> 1251.220] A route is an opportunity for students to leave home and school for a time and travel into the world, to make contact with members of the community and partake of the goods and services that fit their specific interests and needs. +[1252.220 --> 1260.220] A route provides the opportunities to use language and sensory skills and to enable the students to taste whatever measure of freedom lies within their capabilities. +[1260.220 --> 1263.220] A route is a way out and a way back. +[1263.220 --> 1270.220] And along that route are a number of experiences, many set up by the mobility instructor, others a function of chance. +[1270.220 --> 1275.220] Different types of routes provide the opportunities for learning different concepts and types of skills. +[1276.220 --> 1287.220] It's along the mobility route that the techniques will be taught, developed, practiced and honed into a workable system of skills that will enable a child to venture into the world successfully and with confidence. +[1296.220 --> 1299.220] Routes are necessary components of a student's program. +[1299.220 --> 1305.220] At certain times of day, a student goes to different locations. A route can be as simple as a trip to a bathroom. +[1305.220 --> 1312.220] A route incorporates a student's needs. It's a movement along a regular path to a destination and return. +[1312.220 --> 1321.220] On a daily trip to the snack room, the route is part of that schedule, part of the structure of the time-space continuum which breaks up a student's day. +[1321.220 --> 1330.220] It's along these routes that premobility skills, cane techniques and sensory training come into play and become allied with purpose and direction. +[1330.220 --> 1340.220] The routes themselves are part of the pattern of changes in location, direction, and duration of time that the student experiences on an ongoing basis. +[1340.220 --> 1346.220] The route entails a set of sensory and sensory-motor experiences, sequenced in memory. +[1347.220 --> 1354.220] The first basic routes may have to be repeated innumerable times, and the experiences fit into the student's personal needs. +[1354.220 --> 1362.220] The landmarks must be very easily distinguished. Later on, these experiences serve to divide the route into understandable segments. +[1362.220 --> 1369.220] The experiences themselves become part of the inner language that enables the child to structure movement and direction. +[1370.220 --> 1384.220] The first routes may be learned motorically: the first wall on the route is on the right side, turn at the corner, and the return is on the left side. +[1384.220 --> 1393.220] Even before the dichotomy of left and right is learned, the sensory-motor experiences have been presented. The feelings are there. +[1394.220 --> 1400.220] Trees and handrails occur on certain sides of a route and serve as guides toward objectives. +[1400.220 --> 1410.220] The features and objects on a route become references for structuring some pattern to where the child is and form the experiential basis for learning the language associated with those features.
+[1411.220 --> 1424.220] The first routes, moving out and away from a familiar, non-threatening point of origin and returning, make way for travel along edges. +[1424.220 --> 1432.220] Later the edges lead to turns. These memories of turns, both indoors and outside, are sets of sequential memory. +[1441.220 --> 1455.220] Things that the child can sense, such as the sun, detected visually or with the thermal sense in the skin, the wind, the scent of a certain tree or flower, the feelings of the terrain, the rough texture of the ground. +[1455.220 --> 1464.220] These are called cues. Cues call a student's attention to certain features of the route using the student's own sensory channels. +[1465.220 --> 1476.220] Objects or features on the route that the student can touch or contact with the cane are called landmarks. These serve to give a clue to relative direction or distance on a route. +[1476.220 --> 1489.220] There's an interplay of landmarks and cues on a route. The cues call a student's attention through sensory channels that certain features are in the immediate vicinity, and the landmarks act as guides and double checks on position. +[1490.220 --> 1502.220] Intervening landmarks are those which give a sense of progress along a route. Some routes, with very long stretches of straight sidewalk, have very few clues to indicate how far along a student has progressed. +[1502.220 --> 1512.220] An advanced traveler may internalize a sense of relative time and distance, but this may or may not develop in a congenitally deafblind student or one with some degree of retardation. +[1512.220 --> 1522.220] It's helpful to have as many intervening cues as possible to give some sense of where a student is on a route and how far it is to the destination. +[1522.220 --> 1538.220] It's ideal, but not always possible, to have three factors involved at each point on a route where a landmark is used: the terrain itself, flat, rough or hilly, in combinations with other features like a grass edge and concrete, a hedge and a driveway; +[1538.220 --> 1561.220] a specific permanent object, like a tree, lamppost, fire hydrant; and some environmental cue. If a student travels a certain route at the same time each day, there may be a prevailing wind, the sun may be on a certain side, there may be a scent from a bakery, a gas station, a delicatessen, there may be traffic sounds in a certain direction. +[1568.220 --> 1584.220] Wherever possible, the landmarks can be paired, so there's a double check on a landmark, a cluster of clues; a pole which is next to a tree gives more of a distinction than just the pole itself. +[1584.220 --> 1592.220] When the landmarks are within the span of a cane, the grouping will serve as ready identification of a specific place on the route. +[1592.220 --> 1602.220] As many factors as possible should be used to aid recognition. A student might break off a leaf from a hedge and smell it, take a piece of bark and rub it between his fingers. +[1602.220 --> 1608.220] A student might touch the landmark with the cane to generate a certain sound or vibration. +[1608.220 --> 1628.220] At several stages on a route, the student should take bearings as to just where he has come from and where he's going. The student might take bearings at certain points on a sidewalk by finding the curb or positioning whatever traffic sounds are heard by residual hearing on a certain side of his body. +[1639.220 --> 1649.220] A pivot point on a route is one from which the student can start a number of other routes.
This is the point at which a decision is made regarding the direction and the eventual destination. +[1649.220 --> 1657.220] At the pivot point, a mobility instructor may wish to periodically review the relative direction of a number of destinations. +[1668.220 --> 1672.220] If the student is engaged in a vocational program, a number of routes can be linked together. +[1688.220 --> 1696.220] Students learn the route to the job, learn to notice the day and time of getting paid, learn the route to and within a bank to engage in necessary banking skills. +[1696.220 --> 1703.220] Other routes to stores, post offices and shops can be learned, whatever suits their individual needs and interests. +[1703.220 --> 1716.220] If something unexpected occurs, or if a unique educational opportunity presents itself, something which will pique a student's curiosity about his environment, it may be a good idea to stop for a moment and explore. +[1716.220 --> 1721.220] Experiences and objects enrich a travel route and give meaning to the route for students. +[1722.220 --> 1731.220] The route is not only a memory of distance, duration of time and relative direction, but of the experiences along that route as well. +[1731.220 --> 1741.220] If there are problems with distance to objectives, lack of public transportation, or problems with bus schedules, it may be feasible to use a system of drop-offs. +[1742.220 --> 1752.220] This student has learned several routes within a shopping center, and has earned a mobility pass, and the right to board the shuttle bus to shop at stores of his own choice. +[1752.220 --> 1765.220] When the student is traveling during the hour allotted at the shopping center, he must draw upon protective skills, orientation and sensory skills, and use these to arrive at the destinations of his own choice. +[1765.220 --> 1776.220] Once in the stores, the student must combine communication and social skills and money management techniques to buy things of his own choosing. +[1776.220 --> 1785.220] In more complex travel environments, such as supermarkets, it may be more effective to ask for assistance in finding the items desired. +[1785.220 --> 1796.220] The mobility instructor, or the student, might in turn assist the clerks and managers by teaching them sighted guide techniques and calling their attention to the special needs of the blind. +[1796.220 --> 1804.220] The wide range of choices and the interaction with the sighted guide imparts a depth and richness to the special hour. +[1804.220 --> 1814.220] Sighted guides who work with our students a number of times often show a surprising sensitivity and make the mobility experience positive, successful, and enjoyable. +[1834.220 --> 1857.220] This type of semi-independent mobility, bounded only by prior instruction and the schedule of drop-offs and pickups, gives students an experience with freedom and a taste for independence. +[1857.220 --> 1868.220] It provides a valuable arena for mobility instructors to monitor and assess the student's skills and define areas that require further instruction. +[1868.220 --> 1880.220] From these outings, where purpose and destinations are chosen by the student, come the feelings of confidence, self-esteem, and the self-image of a successful traveler. +[1888.220 --> 1896.220] While route training is in progress, a mobility instructor can be working on the links to these routes simultaneously.
+[1896.220 --> 1907.220] The student can assist the instructor in devising a series of large cards to alert the drivers of particular buses of the destinations and the need for special assistance. +[1907.220 --> 1915.220] The signs can be laminated for durability and brailled so the student can tell which one to use at each stage of the trip. +[1915.220 --> 1944.220] The students need to have a great many successful experiences with public transportation to become comfortable enough to use buses on their own +[1944.220 --> 1972.220] after their period of mobility instruction ends. +[1972.220 --> 1980.220] To further ensure a successful bus trip, the letters in the sign should be large enough so the sign can be read by the bus driver as he opens the door. +[1980.220 --> 1989.220] Although many pedestrians may try to initiate contact, when the deafblind traveler does not respond, many may move away or board the bus alone. +[1989.220 --> 1995.220] It's often the driver who assists deafblind travelers to their seat and notes their destination. +[2002.220 --> 2017.220] Shopping malls, which in many areas have replaced small business districts and family-operated stores, provide a wide range of mobility features and a variety of links between routes. +[2032.220 --> 2061.220] Although shopping malls may be a considerable distance from the point of the route's origin, possibly requiring access by public transportation, they provide an area protected from traffic and feature a very high frequency of pedestrian movement, ensuring valuable assistance in travel and in making purchases. +[2103.220 --> 2133.200] Features which in one context may be bridges between routes +[2133.200 --> 2143.200] can in another be barriers to the deafblind travelers. +[2143.200 --> 2148.200] Deafness in combination with blindness imposes additional constraints on route travel. +[2148.200 --> 2155.200] Cues to obstructions that blind travelers can detect by hearing are not available to most hearing impaired blind travelers. +[2155.200 --> 2160.200] They must rely almost completely on tactile cues. +[2160.200 --> 2170.200] Signs, advertising boards, planters, things that are useful and pleasant for the sighted, cause serious problems for blind and deafblind travelers. +[2170.200 --> 2177.200] The mobility instructor must call special attention to obstructions that the cane may easily miss. +[2177.200 --> 2184.200] The traveler can be alerted to slow down, widen the cane arc, straighten the cane arm to provide more reaction time, +[2184.200 --> 2191.200] and possibly to ready a protective arm position to encounter the oncoming obstruction. +[2191.200 --> 2195.200] Wide open spaces are special problem areas for the deafblind. +[2195.200 --> 2208.200] If there are no easily detected, closely spaced landmarks, they must rely on edges, taking the long way around to be assured of reliable landmarks. +[2209.200 --> 2214.200] Route training entails learning a number of orientation skills during the course of travel. +[2214.200 --> 2224.200] Students with low vision must learn to use the residual vision to structure components of the route and to be able to recognize those components upon the return to the point of origin. +[2224.200 --> 2231.200] A department store affords considerable opportunity to use the techniques and travel skills learned in other settings.
+[2231.200 --> 2235.200] The departments and the merchandise in different sections serve as distinctive landmarks. +[2236.200 --> 2244.200] If a student can remember the words or signs associated with those landmarks, the words and signs serve as orientation sequence clues. +[2244.200 --> 2250.200] In one store, for example, the dresses might come before the book section, then the shoe department where a student might make a turn. +[2250.200 --> 2257.200] The student walks past the cases where wallets and purses are displayed and turns at the corner of the display cases at the escalator. +[2258.200 --> 2265.200] At the top of the escalators, the student makes another turn. +[2265.200 --> 2272.200] It's useful to stop the student at different points along the route to review the direction of origin and destination. +[2272.200 --> 2281.200] There is purpose on a route: to see someone, to communicate with somebody, or to carry out something specific, like buying a gift for a friend. +[2281.200 --> 2288.200] The route affords opportunity to practice money management and those numerous skills that are necessary to function effectively in life. +[2292.200 --> 2296.200] Street crossings are often an unavoidable link in numerous mobility routes. +[2296.200 --> 2302.200] This one aspect of deafblind mobility probably engenders more controversy than any other. +[2302.200 --> 2304.200] It's not a yes or no question. +[2304.200 --> 2311.200] The mobility instructor must carefully weigh the decision whether a child has the ability to make a particular crossing in relative safety. +[2316.200 --> 2319.200] Many of the students have considerable residual vision. +[2319.200 --> 2326.200] It may require a great deal of training, but can be very effective in providing enough visual information to cross the street safely. +[2326.200 --> 2333.200] If a child has low vision, it might be advisable to make a clockwise crossing at an intersection rather than a counterclockwise one. +[2334.200 --> 2346.200] Whereas blind students might prefer a counterclockwise crossing so as to go with the traffic sounds, deafblind travelers with some residual vision might want to get as close to the stopping cars as possible, +[2346.200 --> 2352.200] wait one light cycle, and actually see a car stop to get a clear signal to go. +[2352.200 --> 2367.200] This may require a greater distance to cross the street, but the signal to go is clearer and may be well worth the time in terms of safety. +[2367.200 --> 2384.200] As low vision travelers near the middle of the street during a crossing, they must learn to turn to the side to detect cars that will cross their path. +[2384.200 --> 2390.200] The travelers can either stop, signal, or vary their speed accordingly. +[2391.200 --> 2401.200] The age, size, maturity, appearance, intelligence and motivation of the student are further factors in safe street crossings. +[2401.200 --> 2409.200] If the child is obviously handicapped, or gives the appearance of being blind, a motorist might exhibit caution. +[2409.200 --> 2419.200] The color of the student's clothing, the time of day as far as the light is concerned, and even the weather are factors that affect the visibility of a traveler to motorists. +[2419.200 --> 2426.200] The combinations of residual vision and hearing and how the student uses them are critical factors involved in making safe street crossings.
+[2426.200 --> 2440.200] The student's level of understanding of concepts and their good sense in making decisions that affect their safety will weigh heavily in determining the safe limits of travel on the route. +[2441.200 --> 2447.200] The rate, speed, volume and grouping of traffic will be important factors. +[2447.200 --> 2453.200] These can vary according to time of day, or even time of year in some places with seasonal traffic. +[2453.200 --> 2458.200] Traffic itself is not intrinsically dangerous; uncontrolled traffic is. +[2458.200 --> 2469.200] In some cases, heavy traffic generates low frequency sound cues that can be instrumental in keeping a deafblind traveler oriented to direction. +[2469.200 --> 2486.200] Confusing configurations of crosswalks, islands and combinations of traffic controls may complicate crossing to such an extent that the only travelers who may safely negotiate the street are students with a great deal of residual vision or excellent use of what little vision they may possess. +[2486.200 --> 2497.200] So many factors interrelate, and the combinations of those factors are so unique, that there can be no final answer as to whether the deafblind can make street crossings. +[2498.200 --> 2518.200] It's up to the mobility instructor to carefully weigh all the pertinent factors involved with each individual student at every crossing in terms of the student's safety, and to minimize the dangers by judicious route design, timing of the crossing, and effective instruction involving the concepts of safety and danger. +[2519.200 --> 2526.200] If the student's residual senses are such that they can master a crossing, then it may become an integral part of the route. +[2526.200 --> 2539.200] That decision involves a tremendous responsibility on the part of the mobility instructor. Freedom always entails some degree of risk, but safety must always clearly and overwhelmingly outweigh any possible danger. +[2548.200 --> 2563.200] Communication skills might be employed, and signs of various kinds used to solicit pedestrian aid. This works well in areas of high pedestrian traffic. +[2564.200 --> 2578.200] If street crossings are to be integrated within the route, the child must have an extremely high degree of success with them. +[2579.200 --> 2595.200] The route is the proving ground for a number of sensory skills and techniques. When linked to the community through consumer lessons, the route serves as an arena to work on people skills as well. +[2595.200 --> 2605.200] There's a refreshing measure of confidence that develops from the ability to make a decision to go somewhere, travel with as little assistance as possible, and do something along the way. +[2606.200 --> 2625.200] The route is a channel through which a student travels toward people, toward experience. The route is a learning process in itself, a continuum of growth, change, and acquisition of abilities and purpose; a journey of self-discovery. A route implies direction, internal direction as well as geographical. +[2625.200 --> 2643.200] It might be said that the route is one of the most important fundamentals of mobility training, the crucible where the skills, techniques, and the willingness to use them come together. Mobility itself is about people, allowing those people the chance to find their own way.
diff --git a/transcript/allocentric_I6IAhXM-vps.txt b/transcript/allocentric_I6IAhXM-vps.txt new file mode 100644 index 0000000000000000000000000000000000000000..7389cbada098539edf6754fa4f5a44b77ca5ad71 --- /dev/null +++ b/transcript/allocentric_I6IAhXM-vps.txt @@ -0,0 +1,21 @@ +[0.000 --> 7.880] We all use words and language every day to interact with people at work. +[7.880 --> 11.340] But do we really communicate effectively? +[11.340 --> 14.920] Effective communication can be broken down into three parts. +[14.920 --> 17.560] Listening, understanding and responding. +[17.560 --> 21.240] Let's look at these one by one. +[21.240 --> 26.280] Listening involves hearing the words that are being said, taking in non-verbal cues, +[26.280 --> 32.400] such as body language and facial expressions, plus paying attention to voice modulation. +[32.400 --> 38.840] We then move on to the next stage, understanding or giving meaning to what we have heard. +[38.840 --> 43.520] Most communication breakdowns happen at this stage, because we often misunderstand or +[43.520 --> 46.080] misinterpret what is being said. +[46.080 --> 52.000] When we make errors in interpretation, we are likely to respond incorrectly as well. +[52.000 --> 58.560] For example, your boss asks you if the task that he assigned to you has been completed. +[58.560 --> 63.800] If you interpret that as the boss blaming you for not completing the task, you are likely +[63.800 --> 65.680] to respond with anger. +[65.680 --> 71.320] However, if you interpret that as your boss wanting to just know the status of the task, +[71.320 --> 75.080] you are likely to feel less angry and defensive. +[75.080 --> 80.880] How we interpret what we hear is affected by the thoughts that pop up in our minds when +[80.880 --> 82.680] we are listening. +[82.680 --> 88.840] At Way Forward, we help you catch these automatic thoughts so you can reduce communication errors +[88.840 --> 91.360] and be more productive at work. +[91.360 --> 96.800] For more information, reach out at www.wayforward.co.in diff --git a/transcript/allocentric_IhITqkNTaNo.txt b/transcript/allocentric_IhITqkNTaNo.txt new file mode 100644 index 0000000000000000000000000000000000000000..0efc3a60a4601866e97869f9b4f146e78ba68872 --- /dev/null +++ b/transcript/allocentric_IhITqkNTaNo.txt @@ -0,0 +1,4 @@ +[0.000 --> 10.180] slow disabilities. +[60.000 --> 80.080] lingon +[80.080 --> 82.740] All right. +[83.080 --> 86.560] Good diff --git a/transcript/allocentric_JFkHlqLIuD8.txt b/transcript/allocentric_JFkHlqLIuD8.txt new file mode 100644 index 0000000000000000000000000000000000000000..00b2e3cb6df233079509ad0be16206ece5958a8e --- /dev/null +++ b/transcript/allocentric_JFkHlqLIuD8.txt @@ -0,0 +1,1897 @@ +[0.000 --> 7.000] Welcome to Decoding Human Behaviour: Mastering Non-Verbal Communication, written by Mindful Literary. +[7.000 --> 15.000] In a world where words often conceal more than they reveal, the art of understanding non-verbal cues becomes paramount. +[15.000 --> 25.000] This comprehensive guide is your key to unraveling the intricate language of gestures, expressions, and tones that shape our daily interactions. +[25.000 --> 31.000] Chapter by chapter, we embark on a journey into the depths of human communication. +[31.000 --> 41.000] We begin by exploring the power of body language, deciphering the unspoken messages conveyed through facial expressions, gestures, and posture.
+[41.000 --> 52.000] As we move forward, we delve into the nuances of vocal cues, from subtle changes in tone to the rhythm of speech, uncovering layers of meaning hidden within every word. +[52.000 --> 68.000] But our exploration doesn't stop there. We dive into the realm of emotional signals, learning to identify micro-expressions, recognize emotional contagion, and decipher the silent language of emotions that colour our interactions. +[68.000 --> 77.000] Detecting deception becomes a skill within reach as we unravel the signs of lying and navigate the complexities of honesty and deception. +[77.000 --> 88.000] As we progress through the chapters, we also uncover the significance of personal style, cultural influences, and individual habits in shaping communication patterns. +[88.000 --> 99.000] We assess personality traits through non-verbal cues, understanding introversion, extroversion, dominance, agreeableness, and more through observable behaviours. +[99.000 --> 111.000] Reading relationships takes centre stage as we analyse power dynamics, decode relationship cues, and interpret the subtle dance of non-verbal signals in various contexts. +[111.000 --> 123.000] Ultimately, this audiobook equips you with practical tools to enhance communication, empathy, rapport building, negotiation, leadership, and influence skills. +[123.000 --> 130.000] Prepare to master the art of decoding human behaviour and unlock a deeper understanding of the world around you. +[130.000 --> 134.000] Let us embark on this enlightening journey together. +[134.000 --> 139.000] Chapter 1 Understanding Non-verbal Communication +[139.000 --> 150.000] The power of body language. Body language is a powerful form of non-verbal communication that can reveal a person's thoughts, feelings, and intentions. +[150.000 --> 158.000] It is often said that actions speak louder than words, and this is especially true when it comes to understanding others. +[158.000 --> 166.000] By learning to decode and interpret body language, you can gain valuable insights into people's true emotions and motivations. +[166.000 --> 170.000] Understanding Non-verbal Communication +[170.000 --> 180.000] Non-verbal communication refers to the messages we convey through facial expressions, gestures, posture, eye movements, and other physical cues. +[180.000 --> 192.000] While verbal communication relies on words, non-verbal communication provides additional layers of meaning and can often be more accurate in conveying emotions and attitudes. +[192.000 --> 199.000] The importance of body language. Body language plays a crucial role in our everyday interactions. +[199.000 --> 207.000] It can influence how others perceive us, shape the impression we make, and impact the success of our relationships. +[207.000 --> 216.000] By understanding and interpreting body language, you can enhance your communication skills, build rapport, and establish trust with others. +[216.000 --> 220.000] The universality of body language +[220.000 --> 225.000] One fascinating aspect of body language is its universality. +[225.000 --> 232.000] While verbal languages may differ across cultures, many non-verbal cues are universally understood. +[232.000 --> 240.000] For example, a smile is generally interpreted as a sign of happiness or friendliness, regardless of cultural background. +[240.000 --> 248.000] This universality makes body language a valuable tool for reading others, regardless of the language they speak.
+[248.000 --> 250.000] Facial expressions +[250.000 --> 256.000] Facial expressions are one of the most powerful forms of non-verbal communication. +[256.000 --> 265.000] Our faces can convey a wide range of emotions, including happiness, sadness, anger, fear, surprise, and disgust. +[265.000 --> 276.000] By paying attention to subtle changes in facial expressions, you can gain insights into a person's emotional state and their true feelings about a particular situation. +[277.000 --> 279.000] Gestures and posture +[279.000 --> 285.000] Gestures and posture also play a significant role in non-verbal communication. +[285.000 --> 291.000] The way we move our hands, arms, and body can convey meaning and intention. +[291.000 --> 300.000] For example, crossed arms may indicate defensiveness or resistance, while open and relaxed postures can signal openness and receptiveness. +[300.000 --> 308.000] By observing these gestures and postures, you can better understand a person's level of comfort, confidence and engagement. +[308.000 --> 310.000] Eye movements +[310.000 --> 315.000] The eyes are often referred to as the windows to the soul, and for a good reason. +[315.000 --> 321.000] Eye movements can reveal a wealth of information about a person's thoughts and emotions. +[322.000 --> 330.000] For instance, prolonged eye contact can indicate interest or attraction, while avoiding eye contact may suggest discomfort or dishonesty. +[330.000 --> 339.000] By analysing eye movements, you can gain insights into a person's level of engagement, truthfulness, and emotional state. +[339.000 --> 341.000] Micro-expressions +[341.000 --> 347.000] Micro-expressions are fleeting facial expressions that occur within a fraction of a second. +[347.000 --> 354.000] They are often unconscious and can reveal a person's true emotions, even when they are trying to conceal them. +[354.000 --> 364.000] By learning to identify and interpret micro-expressions, you can detect hidden emotions and gain a deeper understanding of a person's true feelings. +[364.000 --> 367.000] Body language clusters +[367.000 --> 376.000] While individual non-verbal cues can provide valuable information, it is essential to consider them in the context of clusters or patterns. +[376.000 --> 383.000] Body language clusters involve multiple non-verbal cues that occur simultaneously and reinforce each other. +[383.000 --> 392.000] For example, a person who is crossing their arms, avoiding eye contact and leaning away may be signaling discomfort or disagreement. +[392.000 --> 400.000] By analysing body language clusters, you can gain a more accurate understanding of a person's thoughts and emotions. +[400.000 --> 403.000] Cultural considerations +[403.000 --> 411.000] While many non-verbal cues are universal, it is crucial to consider cultural differences when interpreting body language. +[411.000 --> 418.000] Different cultures may have unique gestures, postures, and facial expressions that carry different meanings. +[418.000 --> 427.000] It is essential to be aware of these cultural variations and avoid making assumptions based solely on your own cultural background. +[427.000 --> 430.000] Practice and observation +[430.000 --> 435.000] Becoming proficient in reading body language requires practice and observation. +[435.000 --> 441.000] Start by paying attention to your own body language and how it may be perceived by others.
+[441.000 --> 449.000] Then begin observing the body language of those around you, both in everyday interactions and in more formal settings. +[449.000 --> 457.000] Look for patterns, clusters, and inconsistencies in non-verbal cues to develop your skills in decoding body language. +[458.000 --> 463.000] In conclusion, body language is a powerful tool for understanding others. +[463.000 --> 476.000] By learning to interpret facial expressions, gestures, posture, eye movements, and other non-verbal cues, you can gain valuable insights into people's thoughts, feelings, and intentions. +[476.000 --> 484.000] Developing your skills in reading body language can enhance your communication abilities, improve your relationships, +[484.000 --> 489.000] and help you navigate social interactions with greater empathy and understanding. +[489.000 --> 493.000] Interpreting facial expressions +[493.000 --> 498.000] Facial expressions are one of the most powerful forms of non-verbal communication. +[498.000 --> 504.000] They provide valuable insights into a person's emotions, thoughts, and intentions. +[504.000 --> 514.000] By understanding and interpreting facial expressions, you can gain a deeper understanding of others and enhance your ability to read people like a book. +[514.000 --> 518.000] The universality of facial expressions +[518.000 --> 525.000] Facial expressions are universal, meaning that they are recognised and understood across different cultures and societies. +[526.000 --> 540.000] Research has shown that people from various backgrounds can accurately identify and interpret basic emotions expressed through facial expressions, such as happiness, sadness, anger, fear, surprise, and disgust. +[540.000 --> 547.000] This universality of facial expressions suggests that they are innate and hardwired into our biology. +[547.000 --> 557.000] It also implies that facial expressions are an essential part of human communication, allowing us to convey and understand emotions without the need for verbal language. +[557.000 --> 561.000] The basic facial expressions +[561.000 --> 566.000] There are several basic facial expressions that are universally recognised. +[566.000 --> 574.000] These expressions are characterised by specific muscle movements and can provide valuable insights into a person's emotional state. +[575.000 --> 578.000] Here are the six basic facial expressions. +[578.000 --> 588.000] Happiness: a genuine smile involves the contraction of the muscles around the eyes, known as the Duchenne smile, and the lifting of the corners of the mouth. +[588.000 --> 592.000] It indicates a positive and content emotional state. +[592.000 --> 598.000] Sadness: the eyebrows are drawn together and the corners of the mouth are turned downward. +[598.000 --> 602.000] The eyes may appear watery or teary. +[602.000 --> 608.000] Sadness is often associated with feelings of loss, disappointment or grief. +[608.000 --> 614.000] Anger: the eyebrows are lowered and drawn together and the eyes may appear narrowed. +[614.000 --> 618.000] The lips may be pressed together or curled downward. +[618.000 --> 625.000] Anger is typically associated with feelings of frustration, annoyance or hostility. +[625.000 --> 630.000] Fear: the eyebrows are raised and drawn together and the eyes appear widened. +[631.000 --> 633.000] The mouth may be slightly open. +[633.000 --> 640.000] Fear is often associated with feelings of apprehension, anxiety or alarm.
+[640.000 --> 645.000] Surprise: the eyebrows are raised and the eyes appear widened. +[645.000 --> 648.000] The mouth may be slightly open. +[648.000 --> 653.000] Surprise is characterised by a sudden and unexpected reaction to something. +[653.000 --> 658.000] Disgust: the upper lip is raised and the nose is wrinkled. +[658.000 --> 661.000] The eyebrows may be lowered. +[661.000 --> 667.000] Disgust is often associated with feelings of aversion, revulsion or distaste. +[667.000 --> 670.000] Micro-expressions +[670.000 --> 675.000] In addition to the basic facial expressions, there are also micro-expressions, +[675.000 --> 680.000] which are brief and involuntary facial expressions that occur in response to an emotion. +[680.000 --> 685.000] Micro-expressions are often fleeting and can be challenging to detect, +[685.000 --> 691.000] but they can provide valuable clues about a person's true feelings or intentions. +[691.000 --> 695.000] Micro-expressions typically last for only a fraction of a second and occur +[695.000 --> 700.000] when a person tries to conceal or suppress their true emotions. +[700.000 --> 705.000] They can reveal underlying emotions that may contradict the person's verbal communication +[705.000 --> 708.000] or displayed facial expressions. +[708.000 --> 712.000] To detect micro-expressions, it is essential to pay close attention +[712.000 --> 717.000] to subtle changes in facial muscles, such as a quick twitch, a slight movement, +[717.000 --> 720.000] or a momentary change in expression. +[720.000 --> 725.000] Training yourself to recognise micro-expressions can significantly enhance your ability +[725.000 --> 727.000] to read people accurately. +[727.000 --> 730.000] Context and congruence +[730.000 --> 735.000] When interpreting facial expressions, it is crucial to consider the context and congruence +[735.000 --> 738.000] of the person's overall behaviour. +[738.000 --> 743.000] Facial expressions should be analysed in conjunction with other non-verbal cues, +[743.000 --> 747.000] such as body language, gestures and vocal tone. +[747.000 --> 751.000] For example, a person may display a smile on their face, +[751.000 --> 756.000] but their body language and vocal tone may indicate discomfort or unease. +[756.000 --> 762.000] In such cases, the facial expression may not accurately reflect their true emotions. +[762.000 --> 766.000] By considering the congruence of different non-verbal cues, +[766.000 --> 771.000] you can gain a more accurate understanding of a person's emotional state. +[771.000 --> 774.000] Cultural differences +[774.000 --> 780.000] While facial expressions are generally universal, it is important to note that there can be cultural variations +[780.000 --> 784.000] in the interpretation and display of emotions. +[784.000 --> 790.000] Different cultures may have specific facial expressions or gestures that convey unique meanings. +[790.000 --> 796.000] For example, in some cultures, the display of emotions may be more restrained or controlled, +[796.000 --> 799.000] while in others, it may be more expressive. +[799.000 --> 803.000] It is essential to be aware of these cultural differences +[803.000 --> 809.000] and adapt your interpretation accordingly when reading people from different cultural backgrounds. +[809.000 --> 812.000] Practice and observation +[812.000 --> 818.000] Interpreting facial expressions is a skill that can be developed and refined with practice.
+[818.000 --> 825.000] One effective way to improve your ability to read facial expressions is through observation and analysis. +[825.000 --> 833.000] Pay attention to the facial expressions of people around you, both in real life interactions and in various forms of media. +[833.000 --> 837.000] Observe how different emotions are expressed through facial expressions +[837.000 --> 844.000] and try to identify the specific muscle movements and changes in expression associated with each emotion. +[844.000 --> 851.000] With time and practice, you will become more adept at recognising and interpreting facial expressions accurately. +[851.000 --> 854.000] Conclusion +[854.000 --> 860.000] Interpreting facial expressions is a valuable skill that can help you understand others on a deeper level. +[860.000 --> 869.000] By recognising and understanding the basic facial expressions and micro-expressions, and considering the context and congruence of non-verbal cues, +[869.000 --> 876.000] you can gain valuable insights into a person's emotions, thoughts and intentions. +[876.000 --> 882.000] Remember that facial expressions are universal but can be influenced by cultural differences. +[882.000 --> 893.000] With practice and observation, you can enhance your ability to read people like a book and improve your communication, empathy and overall understanding of others. +[893.000 --> 896.000] Decoding gestures and posture +[896.000 --> 905.000] Gestures and posture are powerful non-verbal cues that can provide valuable insights into a person's thoughts, emotions and intentions. +[905.000 --> 914.000] By understanding and interpreting these signals, you can gain a deeper understanding of others and enhance your ability to read people like a book. +[914.000 --> 917.000] The language of gestures +[917.000 --> 927.000] Gestures are a form of non-verbal communication that involves the movement of different parts of the body, such as the hands, arms, head and legs. +[927.000 --> 935.000] They can convey a wide range of meanings and emotions, and understanding their significance is crucial in decoding people. +[935.000 --> 944.000] Hand gestures: the hands are one of the most expressive parts of the body, and their movements can reveal a lot about a person's thoughts and intentions. +[945.000 --> 954.000] For example, open palms are often associated with honesty and openness, while clenched fists may indicate anger or frustration. +[954.000 --> 964.000] Pay attention to gestures such as pointing, waving or tapping, as they can provide valuable clues about a person's emotions or emphasis on certain points. +[964.000 --> 971.000] Arm and body movements: the way a person uses their arms and body can also reveal important information. +[972.000 --> 982.000] Crossed arms, for instance, may indicate defensiveness or discomfort, while open and relaxed postures suggest a more welcoming and approachable demeanor. +[982.000 --> 989.000] Additionally, observing the direction of a person's body can give you insights into their interest or engagement. +[989.000 --> 997.000] If someone is leaning towards you, it may indicate attentiveness, while leaning away could suggest disinterest or discomfort. +[997.000 --> 1002.000] Head movements: the movements of the head can convey a variety of messages. +[1002.000 --> 1012.000] For example, nodding is often associated with agreement or understanding, while shaking the head from side to side indicates disagreement or disbelief.
+[1012.000 --> 1021.000] Pay attention to subtle head movements such as tilting or nodding slightly, as they can provide additional context to a person's verbal communication. +[1021.000 --> 1024.000] Posture and body alignment. +[1025.000 --> 1031.000] Posture refers to the way a person holds their body while standing, sitting or moving. +[1031.000 --> 1037.000] It can reveal a great deal about a person's confidence, mood and level of comfort. +[1037.000 --> 1044.000] By observing and interpreting posture, you can gain valuable insights into a person's state of mind. +[1044.000 --> 1052.000] Upright posture: a person with an upright posture typically conveys confidence, attentiveness and self-assurance. +[1052.000 --> 1057.000] They stand or sit tall with their shoulders back and their head held high. +[1057.000 --> 1063.000] This posture suggests that the person is engaged and open to communication. +[1063.000 --> 1072.000] Slouched posture: on the other hand, a slouched or hunched posture often indicates low confidence, disinterest or fatigue. +[1072.000 --> 1078.000] When someone slouches, their shoulders may be rounded and their head may be drooping. +[1078.000 --> 1084.000] This posture can suggest a lack of engagement or a desire to withdraw from social interaction. +[1084.000 --> 1094.000] Mirroring posture: mirroring is a phenomenon where people unconsciously mimic the body language and posture of those they feel connected to or comfortable with. +[1094.000 --> 1102.000] When two individuals have similar postures, it can indicate rapport, empathy and a positive connection between them. +[1102.000 --> 1109.000] Closed and open postures: the openness or closeness of a person's posture can also provide valuable insights. +[1109.000 --> 1118.000] Open postures, such as arms relaxed at the sides or open palms, suggest approachability and receptiveness. +[1118.000 --> 1128.000] In contrast, closed postures, such as crossed arms or legs, indicate defensiveness, discomfort or a desire to create a physical barrier. +[1128.000 --> 1132.000] Context and cultural considerations. +[1132.000 --> 1141.000] While gestures and posture can provide valuable information, it is essential to consider the context and cultural factors that may influence their meaning. +[1141.000 --> 1152.000] Different cultures may have varying interpretations of certain gestures, and what may be considered acceptable or appropriate in one culture may be perceived differently in another. +[1152.000 --> 1161.000] Additionally, it is crucial to consider the individual's baseline behaviour and personality when interpreting gestures and posture. +[1161.000 --> 1168.000] People have unique ways of expressing themselves, and what may be true for one person may not apply to another. +[1168.000 --> 1177.000] Therefore, it is essential to observe patterns and clusters of non-verbal cues rather than relying on a single gesture or posture alone. +[1177.000 --> 1180.000] Practice and observation. +[1180.000 --> 1186.000] Decoding gestures and posture requires practice and keen observation skills. +[1186.000 --> 1191.000] To enhance your ability to read people, consider the following tips. +[1191.000 --> 1198.000] Observe and analyse: pay close attention to the gestures and postures of the people around you. +[1198.000 --> 1204.000] Observe how they align with their verbal communication and the context of the situation. +[1204.000 --> 1208.000] Look for patterns and consistencies in their non-verbal cues.
+[1208.000 --> 1217.000] Be mindful of your own body language: understanding your own body language can help you become more aware of the signals you are sending to others. +[1217.000 --> 1224.000] Practice maintaining an open and confident posture to create a positive impression and encourage open communication. +[1224.000 --> 1231.000] Consider the cluster. Remember that individual gestures or postures may not provide the full picture. +[1231.000 --> 1239.000] Look for clusters of non-verbal cues that align with each other to gain a more accurate understanding of a person's thoughts and emotions. +[1240.000 --> 1247.000] Be respectful and culturally sensitive. Keep in mind that non-verbal cues can vary across cultures. +[1247.000 --> 1253.000] Be respectful and considerate of cultural differences when interpreting gestures and postures. +[1253.000 --> 1263.000] By developing your skills in decoding gestures and posture, you can gain valuable insights into the thoughts, emotions and intentions of others. +[1263.000 --> 1273.000] This understanding will enable you to navigate social interactions more effectively, build stronger relationships and enhance your overall communication skills. +[1273.000 --> 1276.000] Analysing eye movements. +[1276.000 --> 1281.000] The eyes are often referred to as the windows to the soul and for good reason. +[1281.000 --> 1288.000] They can reveal a wealth of information about a person's thoughts, emotions and intentions. +[1288.000 --> 1295.000] By understanding and analysing eye movements, you can gain valuable insights into someone's inner world. +[1295.000 --> 1298.000] The importance of eye movements. +[1298.000 --> 1309.000] Eye movements are a crucial aspect of non-verbal communication. They can provide clues about a person's level of interest, attention and engagement in a conversation. +[1310.000 --> 1318.000] By paying attention to someone's eye movements, you can gauge their level of comfort, honesty and even their thought processes. +[1318.000 --> 1320.000] Eye contact. +[1320.000 --> 1324.000] Eye contact is a fundamental aspect of communication. +[1324.000 --> 1330.000] It establishes a connection between individuals and conveys interest and attentiveness. +[1330.000 --> 1338.000] However, the amount and duration of eye contact can vary depending on cultural norms and personal preferences. +[1338.000 --> 1345.000] When analysing eye contact, it is important to consider both the frequency and duration of eye contact. +[1345.000 --> 1351.000] A person who maintains consistent eye contact is often seen as confident and trustworthy. +[1351.000 --> 1358.000] On the other hand, avoiding eye contact can indicate discomfort, shyness or even deception. +[1358.000 --> 1361.000] Eye movements and thought processes. +[1361.000 --> 1367.000] Eye movements can also provide insights into a person's thought processes. +[1367.000 --> 1380.000] The field of neuro-linguistic programming, NLP, suggests that eye movements are linked to specific cognitive processes, such as visual imagery, auditory processing and internal dialogue. +[1380.000 --> 1386.000] According to NLP, when a person looks up and to the left, they are accessing visual imagery. +[1386.000 --> 1392.000] This could indicate that they are constructing or recalling visual images in their mind. +[1392.000 --> 1402.000] Conversely, looking up and to the right is associated with auditory processing, suggesting that the person is accessing or constructing auditory information.
+[1402.000 --> 1409.000] Looking to the left and horizontally is often associated with accessing or constructing internal dialogue. +[1409.000 --> 1415.000] This suggests that the person is engaged in an internal conversation or self-reflection. +[1416.000 --> 1426.000] Looking to the right and horizontally, on the other hand, is linked to accessing or constructing kinesthetic information, such as emotions or physical sensations. +[1426.000 --> 1437.000] While these associations are not universally applicable, they can provide valuable insights into a person's thought processes when considered in conjunction with other non-verbal cues. +[1437.000 --> 1440.000] Pupil dilation. +[1440.000 --> 1446.000] The size of a person's pupils can also reveal important information about their emotional state. +[1446.000 --> 1454.000] Pupil dilation is an involuntary response that occurs when a person is experiencing heightened emotions or arousal. +[1454.000 --> 1462.000] When someone is interested, excited or attracted to something or someone, their pupils tend to dilate. +[1462.000 --> 1470.000] Conversely, when a person is feeling negative emotions such as fear, anger or sadness, their pupils may constrict. +[1470.000 --> 1475.000] Pupil dilation can also be an indicator of deception. +[1475.000 --> 1484.000] When someone is lying, their pupils may dilate due to the increased cognitive load and emotional arousal associated with deception. +[1484.000 --> 1494.000] However, it is important to note that pupil dilation alone is not a foolproof indicator of deception and should be considered in conjunction with other non-verbal cues. +[1494.000 --> 1498.000] Eye movements and emotional expression. +[1498.000 --> 1502.000] The eyes play a crucial role in conveying emotions. +[1502.000 --> 1508.000] Different emotions are associated with specific patterns of eye movements and expressions. +[1508.000 --> 1515.000] For example, when someone is happy, their eyes may crinkle at the corners and their gaze may be relaxed and open. +[1515.000 --> 1523.000] Conversely, when someone is angry, their eyes may narrow and their gaze may become intense and focused. +[1523.000 --> 1532.000] By observing these patterns, you can gain insights into a person's emotional state and better understand their feelings and intentions. +[1532.000 --> 1535.000] Cultural differences in eye movements. +[1535.000 --> 1541.000] It is important to note that eye movements and their interpretations can vary across different cultures. +[1541.000 --> 1551.000] In some cultures, direct eye contact is seen as a sign of respect and attentiveness, while in others, it may be considered rude or confrontational. +[1551.000 --> 1558.000] To accurately analyse eye movements, it is essential to consider cultural norms and individual differences. +[1559.000 --> 1565.000] Familiarise yourself with the cultural context and adapt your interpretations accordingly. +[1565.000 --> 1568.000] The limitations of eye movements. +[1568.000 --> 1575.000] While eye movements can provide valuable insights, it is important to remember that they are just one piece of the puzzle. +[1575.000 --> 1582.000] Non-verbal cues should always be considered in conjunction with verbal communication and other non-verbal behaviours. +[1583.000 --> 1589.000] Additionally, it is crucial to avoid making snap judgments based solely on eye movements. 
+[1589.000 --> 1600.000] People are complex, and their behaviours can be influenced by various factors, including individual differences, cultural norms and situational context. +[1600.000 --> 1602.000] Conclusion. +[1602.000 --> 1611.000] Analysing eye movements is a powerful tool in decoding people and understanding their thoughts, emotions and intentions. +[1611.000 --> 1621.000] By paying attention to eye contact, pupil dilation and patterns of eye movements, you can gain valuable insights into a person's non-verbal communication. +[1621.000 --> 1632.000] However, it is important to consider cultural differences, individual variations and other non-verbal cues to form a comprehensive understanding of someone's behaviour. +[1632.000 --> 1636.000] Chapter 2. Uncovering verbal cues. +[1636.000 --> 1639.000] Listening beyond words. +[1639.000 --> 1645.000] In our daily interactions, we often focus on the words that people say to us. +[1645.000 --> 1651.000] However, there is a wealth of information that can be gleaned from listening beyond words. +[1651.000 --> 1660.000] Non-verbal cues, vocal tone and speech patterns can provide valuable insights into a person's thoughts, emotions and intentions. +[1661.000 --> 1669.000] By honing our ability to listen beyond words, we can become more adept at decoding people and understanding them on a deeper level. +[1669.000 --> 1672.000] Non-verbal cues in communication. +[1672.000 --> 1677.000] Non-verbal cues play a significant role in communication. +[1677.000 --> 1683.000] They include facial expressions, body language, gestures and posture. +[1684.000 --> 1691.000] When we pay attention to these cues, we can gain a better understanding of a person's true feelings and intentions. +[1691.000 --> 1703.000] For example, crossed arms and a furrowed brow may indicate defensiveness or disagreement, while open body language and a genuine smile may suggest receptiveness and agreement. +[1703.000 --> 1710.000] To effectively read non-verbal cues, it is important to observe them in clusters rather than in isolation. +[1710.000 --> 1719.000] A single gesture or expression may not provide a complete picture, but when combined with other cues, it can reveal valuable insights. +[1719.000 --> 1729.000] Additionally, it is crucial to consider cultural differences in non-verbal communication, as gestures and expressions can vary across different cultures. +[1729.000 --> 1732.000] Vocal tone and pitch. +[1732.000 --> 1738.000] Beyond the words we speak, our vocal tone and pitch can convey a wealth of information. +[1738.000 --> 1743.000] The way we say something can often be more revealing than what we actually say. +[1743.000 --> 1754.000] For example, a hesitant or shaky voice may indicate nervousness or lack of confidence, while a firm and assertive tone may suggest self-assurance and conviction. +[1754.000 --> 1762.000] By paying attention to vocal cues, we can detect emotions such as anger, sadness, excitement or fear. +[1762.000 --> 1769.000] Changes in pitch, volume and rhythm can provide clues about a person's emotional state. +[1769.000 --> 1778.000] For instance, a high-pitched voice may indicate anxiety or excitement, while a low and monotone voice may suggest boredom or disinterest. +[1778.000 --> 1781.000] Speech patterns. +[1781.000 --> 1789.000] Speech patterns refer to the way people structure their sentences, use pauses and employ certain words or phrases.
+[1789.000 --> 1797.000] These patterns can reveal important information about a person's thought processes, personality traits and emotional state. +[1797.000 --> 1806.000] For example, someone who speaks rapidly and jumps from one topic to another may be experiencing anxiety or excitement. +[1806.000 --> 1813.000] On the other hand, someone who speaks slowly and thoughtfully may be more introspective or cautious. +[1813.000 --> 1819.000] Listening for speech patterns can also help identify patterns of deception or dishonesty. +[1819.000 --> 1828.000] Inconsistencies in the way a person speaks, or contradictions between their words and non-verbal cues, can be red flags for potential deception. +[1828.000 --> 1831.000] Verbal fillers and pauses. +[1831.000 --> 1840.000] Verbal fillers and pauses are common elements of speech that can provide valuable insights into a person's thoughts and emotions. +[1840.000 --> 1849.000] Verbal fillers, such as um, er or like, are often used when a person is searching for the right words or trying to buy time. +[1849.000 --> 1856.000] Pauses, on the other hand, can indicate hesitation, uncertainty or the need for reflection. +[1856.000 --> 1866.000] By paying attention to verbal fillers and pauses, we can gain a deeper understanding of a person's level of confidence, comfort and authenticity. +[1866.000 --> 1874.000] Excessive use of fillers or prolonged pauses may suggest nervousness, lack of preparation or an attempt to deceive. +[1874.000 --> 1876.000] Conclusion +[1876.000 --> 1882.000] Listening beyond words is a skill that can be developed with practice and awareness. +[1882.000 --> 1895.000] By paying attention to non-verbal cues, vocal tone and pitch, speech patterns, and verbal fillers and pauses, we can gain valuable insights into a person's thoughts, emotions and intentions. +[1896.000 --> 1906.000] This enhanced understanding allows us to communicate more effectively, build stronger relationships and navigate social interactions with greater empathy and insight. +[1906.000 --> 1913.000] So, let's sharpen our listening skills and unlock the hidden messages that lie beyond words. +[1913.000 --> 1916.000] Detecting vocal tone and pitch +[1916.000 --> 1923.000] When it comes to understanding others, non-verbal cues are not the only indicators we can rely on. +[1923.000 --> 1933.000] Verbal cues such as vocal tone and pitch can also provide valuable insights into a person's emotions, intentions and personality traits. +[1933.000 --> 1941.000] Just like body language, vocal cues can reveal hidden messages that may not be expressed through words alone. +[1941.000 --> 1949.000] In this section, we will explore the importance of detecting vocal tone and pitch and how it can enhance our ability to decode people. +[1949.000 --> 1952.000] The power of vocal tone +[1952.000 --> 1958.000] Vocal tone refers to the quality, timbre and emotional resonance of a person's voice. +[1958.000 --> 1963.000] It plays a significant role in conveying emotions and attitudes. +[1963.000 --> 1971.000] By paying attention to someone's vocal tone, we can gain a deeper understanding of their underlying feelings and intentions. +[1971.000 --> 1976.000] Pitch: pitch refers to the highness or lowness of a person's voice. +[1976.000 --> 1982.000] It can vary depending on factors such as age, gender and cultural background.
+[1982.000 --> 1994.000] Generally, a higher pitch is associated with excitement, enthusiasm or anxiety, while a lower pitch is often linked to confidence, authority or seriousness. +[1994.000 --> 2001.000] However, it is essential to consider individual differences and cultural norms when interpreting pitch. +[2002.000 --> 2009.000] Volume: the volume of someone's voice can indicate their level of confidence, assertiveness or even aggression. +[2009.000 --> 2019.000] A louder voice may suggest dominance or a desire to be heard, while a softer voice can indicate shyness, submissiveness or a need for privacy. +[2019.000 --> 2028.000] It is crucial to consider the context and cultural norms when interpreting volume, as some cultures value softer speech as a sign of respect. +[2028.000 --> 2034.000] Rhythm: the rhythm of speech refers to the speed, cadence and pauses between words. +[2034.000 --> 2045.000] A fast-paced rhythm may indicate excitement, impatience or nervousness, while a slower rhythm can suggest thoughtfulness, calmness or even boredom. +[2045.000 --> 2053.000] Pauses in speech can also convey meaning, such as hesitation, uncertainty or the need to gather one's thoughts. +[2054.000 --> 2059.000] Melody: the melodic quality of someone's voice can reveal their emotional state. +[2059.000 --> 2065.000] A monotonous or flat tone may indicate boredom, disinterest or even depression. +[2065.000 --> 2073.000] On the other hand, a varied and expressive melody can suggest enthusiasm, engagement or happiness. +[2073.000 --> 2080.000] Paying attention to the rise and fall of pitch within a sentence can provide valuable clues about a person's emotional state. +[2081.000 --> 2083.000] Interpreting vocal cues. +[2083.000 --> 2092.000] Now that we understand the significance of vocal tone, let's explore how we can interpret vocal cues to gain insights into others. +[2092.000 --> 2100.000] Emotional state: vocal tone can reveal a person's emotional state even when their words may suggest otherwise. +[2100.000 --> 2107.000] For example, a person may say they are fine, but a strained or shaky tone may indicate distress or sadness. +[2108.000 --> 2117.000] By listening to the emotional resonance in someone's voice, we can better understand their true feelings and respond with empathy and support. +[2117.000 --> 2121.000] Deception: vocal cues can also help us detect deception. +[2121.000 --> 2126.000] When someone is lying, their vocal tone may change subtly. +[2126.000 --> 2133.000] They may sound more hesitant, have a higher pitch or exhibit inconsistencies in their speech patterns. +[2133.000 --> 2140.000] By paying attention to these cues, we can identify potential deception and further investigate the situation. +[2140.000 --> 2147.000] Personality traits: vocal cues can provide insights into a person's personality traits. +[2147.000 --> 2158.000] For example, individuals with a dominant personality may have a louder and more assertive tone, while those who are introverted may speak softly and have a more reserved tone. +[2159.000 --> 2167.000] By analysing vocal cues, we can gain a better understanding of someone's character and adjust our communication style accordingly. +[2167.000 --> 2173.000] Intentions: vocal cues can also reveal a person's intentions or motivations. +[2173.000 --> 2181.000] For instance, a persuasive tone with a rhythmic cadence may indicate someone's attempt to influence or manipulate others.
+[2181.000 --> 2188.000] By being aware of these cues, we can better navigate social interactions and make informed decisions. +[2188.000 --> 2191.000] Developing your skills. +[2191.000 --> 2197.000] To improve your ability to detect vocal cues effectively, consider the following tips. +[2197.000 --> 2204.000] Active listening: practice active listening by focusing on the speaker's vocal tone rather than just their words. +[2205.000 --> 2212.000] Pay attention to the nuances in their voice, such as changes in pitch, volume and rhythm. +[2212.000 --> 2217.000] This will help you develop a more intuitive understanding of vocal cues. +[2217.000 --> 2223.000] Record and analyse: record conversations or speeches and listen back to them. +[2223.000 --> 2227.000] Pay attention to your own vocal cues and those of others. +[2228.000 --> 2234.000] Analyse the patterns and correlations between vocal tone and the speaker's emotions or intentions. +[2234.000 --> 2240.000] This practice will help you become more attuned to vocal cues in real-time conversations. +[2240.000 --> 2246.000] Cultural sensitivity: remember that vocal cues can vary across cultures. +[2246.000 --> 2252.000] What may be considered normal or appropriate in one culture may be perceived differently in another. +[2252.000 --> 2258.000] Be mindful of cultural differences and adapt your interpretation of vocal cues accordingly. +[2258.000 --> 2265.000] Practice empathy: developing empathy is crucial for accurately interpreting vocal cues. +[2265.000 --> 2272.000] Put yourself in the speaker's shoes and try to understand their emotions and intentions based on their vocal tone. +[2272.000 --> 2277.000] This will help you build stronger connections and communicate more effectively. +[2278.000 --> 2288.000] By honing your skills in detecting vocal tone and pitch, you will be able to read people more accurately and understand the hidden messages behind their words. +[2288.000 --> 2299.000] Remember, vocal cues are just one piece of the puzzle, and it is essential to consider them in conjunction with other non-verbal and verbal cues to gain a comprehensive understanding of others. +[2299.000 --> 2302.000] Analysing speech patterns +[2302.000 --> 2307.000] Speech is one of the most powerful tools we have for communication. +[2307.000 --> 2315.000] It not only conveys information but also reveals a great deal about a person's thoughts, emotions and personality. +[2315.000 --> 2322.000] By analysing speech patterns, we can gain valuable insights into a person's mindset and intentions. +[2322.000 --> 2329.000] In this section, we will explore the various aspects of speech that can be decoded to better understand others. +[2330.000 --> 2332.000] Pace and rhythm +[2332.000 --> 2338.000] The pace and rhythm of someone's speech can provide important clues about their state of mind. +[2338.000 --> 2349.000] A fast-paced speech may indicate excitement, enthusiasm or nervousness, while a slow-paced speech may suggest calmness, thoughtfulness or even boredom. +[2349.000 --> 2357.000] Pay attention to sudden changes in pace, as they can reveal shifts in emotions or the importance of the topic being discussed. +[2358.000 --> 2365.000] When analysing speech patterns, it is essential to consider the context in which the conversation is taking place. +[2365.000 --> 2376.000] For example, a fast-paced speech during a heated argument may indicate anger or frustration, while the same pace during a lively discussion may simply reflect enthusiasm.
+[2376.000 --> 2385.000] By observing the pace and rhythm of someone's speech, you can gain a deeper understanding of their emotional state and engagement level. +[2385.000 --> 2387.000] Volume and intensity +[2387.000 --> 2395.000] The volume and intensity of someone's speech can also provide valuable insights into their personality and emotions. +[2395.000 --> 2408.000] A loud and intense voice may indicate confidence, assertiveness or even aggression, while a soft and gentle voice may suggest shyness, introversion or a desire to avoid confrontation. +[2408.000 --> 2415.000] It's important to note that cultural and individual differences can influence the perception of volume and intensity. +[2415.000 --> 2421.000] What may be considered normal in one culture may be perceived as loud or quiet in another. +[2421.000 --> 2427.000] Therefore, it's crucial to take these factors into account when analysing speech patterns. +[2427.000 --> 2429.000] Tone and pitch +[2429.000 --> 2436.000] The tone and pitch of someone's voice can reveal a wealth of information about their emotions and attitudes. +[2436.000 --> 2447.000] A high-pitched voice may indicate excitement, nervousness or even anxiety, while a low-pitched voice may suggest confidence, authority or seriousness. +[2447.000 --> 2457.000] In addition to pitch, the tone of someone's voice can convey various emotions such as happiness, sadness, anger or sarcasm. +[2457.000 --> 2466.000] For example, a cheerful and upbeat tone may indicate a positive mood, while a flat and monotonous tone may suggest boredom or disinterest. +[2466.000 --> 2472.000] When analysing tone and pitch, it's important to consider the overall context of the conversation. +[2472.000 --> 2479.000] A sudden change in tone or pitch may indicate a shift in emotions or the introduction of a new topic. +[2479.000 --> 2487.000] By paying attention to these subtle cues, you can gain a deeper understanding of a person's emotional state and intentions. +[2487.000 --> 2490.000] Word choice and language patterns +[2490.000 --> 2499.000] The words we choose and the way we structure our sentences can reveal a great deal about our thoughts, beliefs and personality. +[2500.000 --> 2510.000] By analysing someone's word choice and language patterns, we can gain insights into their level of education, cultural background and even their emotional state. +[2510.000 --> 2520.000] For example, someone who frequently uses complex and technical vocabulary may be highly educated or knowledgeable in a specific field. +[2520.000 --> 2528.000] On the other hand, someone who uses simple and straightforward language may prefer to communicate in a more accessible manner. +[2528.000 --> 2539.000] Language patterns, such as the use of metaphors, analogies or storytelling, can also provide insights into a person's thought processes and communication style. +[2539.000 --> 2547.000] Some individuals may rely heavily on metaphors to convey their ideas, while others may prefer a more direct and logical approach. +[2547.000 --> 2550.000] Verbal fillers and pauses +[2550.000 --> 2556.000] Verbal fillers, such as um, er or like, are common in everyday speech. +[2557.000 --> 2565.000] While they may seem insignificant, they can reveal important information about a person's thought processes and level of confidence. +[2565.000 --> 2572.000] Frequent use of verbal fillers may suggest a lack of preparation or uncertainty about the topic being discussed.
+[2572.000 --> 2579.000] On the other hand, minimal use of fillers may indicate a well-prepared and confident speaker. +[2579.000 --> 2582.000] Pauses in speech can also be revealing. +[2582.000 --> 2593.000] A brief pause before answering a question may indicate careful consideration, while a long pause may suggest hesitation or the need for more time to formulate a response. +[2593.000 --> 2602.000] Pay attention to the timing and duration of pauses, as they can provide valuable insights into a person's thought processes and decision making. +[2602.000 --> 2605.000] Cultural and individual differences +[2606.000 --> 2612.000] When analysing speech patterns, it's important to consider cultural and individual differences. +[2612.000 --> 2620.000] Different cultures have unique communication styles and norms, which can influence the way people speak and express themselves. +[2620.000 --> 2629.000] For example, some cultures may value direct and assertive communication, while others may prefer a more indirect and polite approach. +[2630.000 --> 2637.000] Individual differences, such as personality traits and communication styles, can also impact speech patterns. +[2638.000 --> 2648.000] Introverted individuals may speak more softly and use fewer words, while extroverted individuals may speak more loudly and use more expressive language. +[2648.000 --> 2657.000] By being aware of these cultural and individual differences, you can avoid making assumptions or misinterpreting someone's speech patterns. +[2657.000 --> 2666.000] It's important to approach the analysis of speech with an open mind and consider the broader context in which the conversation is taking place. +[2666.000 --> 2675.000] In conclusion, analysing speech patterns can provide valuable insights into a person's thoughts, emotions and personality. +[2675.000 --> 2686.000] By paying attention to the pace, volume, tone, word choice and fillers in someone's speech, you can gain a deeper understanding of their mindset and intentions. +[2686.000 --> 2693.000] However, it's crucial to consider cultural and individual differences to avoid misinterpretation. +[2693.000 --> 2699.000] Developing the skill of analysing speech patterns can greatly enhance your ability to read others like a book. +[2699.000 --> 2703.000] Deciphering verbal fillers and pauses. +[2703.000 --> 2709.000] Verbal communication is not just about the words we speak, it also includes the way we speak them. +[2710.000 --> 2719.000] Verbal fillers and pauses are important cues that can provide valuable insights into a person's thoughts, emotions and intentions. +[2719.000 --> 2728.000] In this section, we will explore the significance of verbal fillers and pauses and how to decipher them to gain a deeper understanding of others. +[2728.000 --> 2731.000] The role of verbal fillers. +[2731.000 --> 2738.000] Verbal fillers are words or phrases that people use to fill gaps in their speech or to buy themselves time to think. +[2738.000 --> 2744.000] These fillers can include words like um, er, like, you know and so. +[2744.000 --> 2752.000] While they may seem insignificant, they can reveal a lot about a person's thought process and level of confidence. +[2752.000 --> 2758.000] Uncertainty and hesitation: verbal fillers often indicate uncertainty or hesitation. +[2759.000 --> 2767.000] When someone uses fillers excessively, it may suggest that they are unsure about what they are saying or lack confidence in their own words.
+[2767.000 --> 2773.000] Pay attention to the frequency and intensity of fillers to gauge the level of uncertainty. +[2773.000 --> 2780.000] Gathering thoughts: fillers can also indicate that a person is gathering their thoughts or organising their ideas. +[2781.000 --> 2788.000] In some cases, people use fillers as a way to maintain the flow of conversation while they formulate their response. +[2788.000 --> 2795.000] This can be particularly common in situations where the person is asked a complex or unexpected question. +[2795.000 --> 2801.000] Nervousness or anxiety: fillers can also be a sign of nervousness or anxiety. +[2801.000 --> 2808.000] When people feel anxious or under pressure, they may rely on fillers as a way to cope with their discomfort. +[2808.000 --> 2815.000] These fillers can act as a buffer, allowing the person to collect their thoughts and manage their anxiety. +[2815.000 --> 2817.000] Interpreting pauses. +[2817.000 --> 2822.000] Pauses in speech are another important aspect of verbal communication. +[2822.000 --> 2829.000] They can convey various meanings depending on their duration and placement within a conversation. +[2829.000 --> 2833.000] Here are some key factors to consider when interpreting pauses. +[2834.000 --> 2840.000] Reflective pauses: pauses can indicate that a person is taking a moment to reflect on what has been said. +[2840.000 --> 2847.000] These pauses are often brief and occur after a particularly important or thought-provoking statement. +[2847.000 --> 2854.000] Reflective pauses suggest that the person is actively processing the information and considering their response. +[2854.000 --> 2860.000] Emotional pauses: pauses can also be a sign of emotional processing. +[2860.000 --> 2869.000] When someone experiences a strong emotion, they may pause to gather themselves and regain control before continuing to speak. +[2869.000 --> 2876.000] These pauses can be longer and more pronounced, indicating the intensity of the emotion being felt. +[2876.000 --> 2883.000] Power dynamics: pauses can also be used strategically to assert power or dominance in a conversation. +[2884.000 --> 2892.000] When someone intentionally pauses before responding, it can create tension and anticipation, giving them an advantage in the interaction. +[2892.000 --> 2899.000] These pauses can be a deliberate tactic to control the flow of conversation and assert authority. +[2899.000 --> 2905.000] Deception and evasion: pauses can also be a red flag for deception or evasion. +[2905.000 --> 2913.000] When someone is being dishonest or trying to avoid a topic, they may pause before responding as they mentally construct their answer. +[2913.000 --> 2921.000] These pauses can be longer and more noticeable as the person tries to buy time and come up with a plausible explanation. +[2921.000 --> 2924.000] Analysing patterns and context +[2924.000 --> 2932.000] To effectively decipher verbal fillers and pauses, it is essential to consider the patterns and context in which they occur. +[2932.000 --> 2937.000] Here are some strategies to help you analyse and interpret these cues. +[2937.000 --> 2946.000] Observe baseline behaviour: pay attention to a person's typical speech patterns and use of fillers and pauses in everyday conversations. +[2946.000 --> 2953.000] This will provide you with a baseline against which you can compare their behaviour in different situations. +[2953.000 --> 2959.000] Deviations from their baseline behaviour can indicate changes in their thoughts or emotions.
+[2959.000 --> 2968.000] Consider cultural and individual differences, keep in mind that the use of fillers and pauses can vary across cultures and individuals.
+[2968.000 --> 2979.000] Some cultures may have different norms and expectations regarding pauses and fillers, so it is important to consider these cultural differences when interpreting these cues.
+[2979.000 --> 2987.000] Additionally, individuals may have their own unique speech patterns and habits that influence their use of fillers and pauses.
+[2987.000 --> 2996.000] Look for clusters of cues, verbal fillers and pauses should be considered in conjunction with other non-verbal cues and verbal content.
+[2996.000 --> 3004.000] Look for clusters of cues that align or contradict each other to gain a more comprehensive understanding of the person's thoughts and emotions.
+[3004.000 --> 3013.000] For example, if someone uses fillers while displaying signs of nervousness, it may indicate a lack of confidence or discomfort with the topic.
+[3014.000 --> 3021.000] Consider the context, the context in which the conversation takes place is crucial for accurate interpretation.
+[3021.000 --> 3031.000] Different situations can elicit different levels of uncertainty, anxiety or emotional processing which can influence the use of fillers and pauses.
+[3031.000 --> 3041.000] Consider the topic of conversation, the relationship between the individuals and any external factors that may impact the person's speech patterns.
+[3041.000 --> 3050.000] By paying attention to verbal fillers and pauses, you can gain valuable insights into a person's thoughts, emotions and intentions.
+[3050.000 --> 3058.000] Remember to consider individual differences and the context in which these cues occur to ensure accurate interpretation.
+[3058.000 --> 3069.000] Developing the skill to decipher these cues will enhance your ability to read others like a book and improve your overall communication and understanding of those around you.
+[3069.000 --> 3073.000] Chapter 3. Reading Emotional Signals
+[3073.000 --> 3076.000] Identifying micro-expressions
+[3076.000 --> 3084.000] Micro-expressions are fleeting facial expressions that occur involuntarily and reveal a person's true emotions.
+[3084.000 --> 3091.000] They are brief and often go unnoticed, but they can provide valuable insights into a person's thoughts and feelings.
+[3092.000 --> 3100.000] In this section, we will explore the art of identifying micro-expressions and how they can help you decode people more effectively.
+[3100.000 --> 3103.000] What are micro-expressions?
+[3103.000 --> 3109.000] Micro-expressions are tiny facial movements that last for just a fraction of a second.
+[3109.000 --> 3118.000] They occur when a person tries to conceal or suppress their true emotions, but their facial muscles involuntarily reveal their underlying feelings.
+[3119.000 --> 3125.000] These micro-expressions are universal and can be observed across different cultures and backgrounds.
+[3125.000 --> 3129.000] The Seven Universal Micro-Expressions
+[3129.000 --> 3138.000] Psychologist Paul Ekman identified seven universal micro-expressions that are present in all humans, regardless of cultural background.
+[3138.000 --> 3144.000] These micro-expressions represent the basic emotions that we all experience.
+[3144.000 --> 3146.000] They include
+[3146.000 --> 3152.000] Happiness, a genuine smile that involves the corners of the mouth lifting and the eyes crinkling.
+[3152.000 --> 3159.000] Sadness, a downward turn of the mouth, eyebrows pulled together and a slight drooping of the eyelids.
+[3159.000 --> 3165.000] Anger, eyebrows lowered and drawn together, eyes narrowed and lips pressed together.
+[3165.000 --> 3172.000] Fear, eyebrows raised and drawn together, wide open eyes and a slightly open mouth.
+[3173.000 --> 3179.000] Disgust, a wrinkling of the nose, a raised upper lip and a narrowing of the eyes.
+[3179.000 --> 3185.000] Surprise, eyebrows raised, eyes widened and mouth slightly open.
+[3185.000 --> 3192.000] Contempt, a one-sided curl of the lip often accompanied by a slight raising of one eyebrow.
+[3192.000 --> 3196.000] The importance of micro-expressions
+[3196.000 --> 3203.000] Micro-expressions are crucial in understanding a person's true emotions because they occur involuntarily and are difficult to fake.
+[3203.000 --> 3208.000] While people can consciously control their facial expressions to some extent,
+[3208.000 --> 3213.000] micro-expressions reveal their genuine feelings even if they try to hide them.
+[3213.000 --> 3222.000] By learning to identify micro-expressions, you can gain deeper insights into a person's emotional state and intentions.
+[3222.000 --> 3225.000] How to identify micro-expressions?
+[3225.000 --> 3231.000] Identifying micro-expressions requires keen observation and practice.
+[3231.000 --> 3237.000] Here are some steps to help you develop your skills in recognising micro-expressions.
+[3237.000 --> 3245.000] Pay attention to the face, focus on the person's face, particularly the eyes, eyebrows, mouth and forehead.
+[3245.000 --> 3250.000] These areas often display the most noticeable micro-expressions.
+[3250.000 --> 3257.000] Observe the timing, micro-expressions are incredibly brief, lasting only a fraction of a second.
+[3257.000 --> 3265.000] Train yourself to notice these fleeting expressions by practising with videos or images that capture micro-expressions.
+[3265.000 --> 3271.000] Look for inconsistencies, compare the person's micro-expression with their overall demeanour.
+[3271.000 --> 3279.000] If their micro-expression contradicts their verbal or non-verbal cues, it may indicate that they are hiding their true emotions.
+[3280.000 --> 3287.000] Consider the context, take into account the situation and the person's background when interpreting micro-expressions.
+[3287.000 --> 3296.000] Cultural differences and individual personality traits can influence the intensity and frequency of micro-expressions.
+[3296.000 --> 3302.000] Practice empathy, put yourself in the other person's shoes and try to understand their emotions.
+[3303.000 --> 3309.000] This can help you connect with them on a deeper level and interpret their micro-expressions more accurately.
+[3309.000 --> 3313.000] Common challenges in identifying micro-expressions.
+[3313.000 --> 3321.000] While identifying micro-expressions can be a valuable skill, it is important to be aware of the challenges that may arise.
+[3321.000 --> 3327.000] Speed, micro-expressions occur rapidly, making them difficult to catch.
+[3327.000 --> 3333.000] It takes practice and training to develop the ability to recognise them in real-time.
+[3333.000 --> 3340.000] Subtlety, micro-expressions are subtle and can be easily missed, especially if you are not actively looking for them.
+[3340.000 --> 3345.000] Training your observation skills is essential to overcome this challenge.
+[3345.000 --> 3352.000] Cultural differences, while the basic emotions expressed through micro-expressions are universal,
+[3352.000 --> 3358.000] cultural differences can influence the intensity and frequency of these expressions.
+[3358.000 --> 3363.000] Be mindful of cultural variations when interpreting micro-expressions.
+[3363.000 --> 3373.000] Individual differences, each person has their own unique way of expressing emotions and some individuals may have less pronounced micro-expressions.
+[3373.000 --> 3379.000] It is important to consider individual differences when analysing micro-expressions.
+[3379.000 --> 3381.000] Conclusion
+[3381.000 --> 3388.000] Identifying micro-expressions is a valuable skill that can enhance your ability to read others accurately.
+[3388.000 --> 3399.000] By understanding the seven universal micro-expressions and practising keen observation, you can gain deeper insights into a person's true emotions and intentions.
+[3399.000 --> 3408.000] Remember to consider the context, be aware of cultural and individual differences and practice empathy to interpret micro-expressions effectively.
+[3408.000 --> 3415.000] With time and practice, you will become more proficient in decoding people through their micro-expressions.
+[3415.000 --> 3418.000] Understanding emotional contagion
+[3418.000 --> 3427.000] Emotional contagion is a fascinating phenomenon that occurs when individuals unconsciously mimic and synchronise their emotions with those around them.
+[3427.000 --> 3435.000] It is the process by which emotions are transferred from one person to another, often without conscious awareness.
+[3435.000 --> 3442.000] Understanding emotional contagion is crucial for decoding people and gaining insight into their emotional states.
+[3442.000 --> 3446.000] The science behind emotional contagion
+[3446.000 --> 3451.000] Emotional contagion is rooted in our innate ability to empathise with others.
+[3451.000 --> 3460.000] It is believed to be a result of mirror neurons in our brains, which are responsible for imitating and mirroring the actions and emotions of others.
+[3461.000 --> 3470.000] When we observe someone experiencing an emotion, our mirror neurons fire, causing us to experience a similar emotional state.
+[3470.000 --> 3481.000] Research has shown that emotional contagion can occur through various channels, including facial expressions, body language, vocal tone and even through the release of pheromones.
+[3482.000 --> 3493.000] It can happen in both positive and negative contexts and the intensity of the emotional contagion can vary depending on the strength of the emotional connection between individuals.
+[3493.000 --> 3496.000] Recognising emotional contagion
+[3496.000 --> 3502.000] Being able to recognise emotional contagion is a valuable skill in understanding others.
+[3502.000 --> 3508.000] Here are some signs that can help you identify when emotional contagion is taking place.
+[3508.000 --> 3519.000] Mirroring, when someone unconsciously mimics the facial expressions, gestures or body language of another person, it is a strong indication of emotional contagion.
+[3519.000 --> 3529.000] For example, if you notice that someone starts smiling when you smile or adopts a similar posture to you, it suggests that they are experiencing emotional contagion.
+[3529.000 --> 3535.000] Rapid emotional shifts, emotional contagion can cause rapid shifts in emotions.
+[3536.000 --> 3546.000] If you observe someone suddenly experiencing a change in their emotional state after interacting with another person, it is likely a result of emotional contagion.
+[3546.000 --> 3555.000] For instance, if someone goes from being happy to sad after spending time with a person who is feeling down, it indicates emotional contagion.
+[3555.000 --> 3562.000] Shared emotional experience, emotional contagion often leads to a shared emotional experience.
+[3562.000 --> 3569.000] When emotional contagion occurs between individuals, they tend to experience similar emotions simultaneously.
+[3569.000 --> 3580.000] For example, if you find yourself feeling anxious or excited in a group setting where others are also experiencing the same emotions, it is likely due to emotional contagion.
+[3580.000 --> 3587.000] Empathy and compassion, emotional contagion can evoke feelings of empathy and compassion towards others.
+[3588.000 --> 3598.000] If you notice yourself feeling a strong sense of empathy or compassion towards someone who is expressing intense emotions, it is a sign that emotional contagion is occurring.
+[3598.000 --> 3601.000] The impact of emotional contagion.
+[3601.000 --> 3607.000] Emotional contagion can have a profound impact on individuals and their relationships.
+[3607.000 --> 3610.000] Here are some key aspects to consider.
+[3611.000 --> 3618.000] Emotional atmosphere, emotional contagion can create a shared emotional atmosphere within a group or social setting.
+[3618.000 --> 3629.000] For example, if one person in a meeting is feeling stressed or anxious, it can quickly spread to others, affecting the overall mood and productivity of the group.
+[3629.000 --> 3635.000] Relationship dynamics, emotional contagion can influence the dynamics of relationships.
+[3636.000 --> 3644.000] When individuals experience emotional contagion, it can lead to a deeper sense of connection and understanding between them.
+[3644.000 --> 3651.000] On the other hand, if negative emotions are contagious, it can strain relationships and create tension.
+[3651.000 --> 3658.000] Leadership and influence, emotional contagion plays a significant role in leadership and influence.
+[3658.000 --> 3668.000] Leaders who are aware of emotional contagion can use it to their advantage by intentionally spreading positive emotions and creating a motivating environment.
+[3668.000 --> 3675.000] Similarly, individuals with strong emotional contagion can influence others' emotions and behaviours.
+[3675.000 --> 3681.000] Emotional wellbeing, emotional contagion can impact an individual's emotional wellbeing.
+[3681.000 --> 3688.000] Being surrounded by positive emotions can uplift one's mood and enhance overall wellbeing.
+[3688.000 --> 3695.000] Conversely, being exposed to negative emotions can lead to increased stress and emotional distress.
+[3695.000 --> 3698.000] Managing emotional contagion
+[3698.000 --> 3705.000] While emotional contagion is a natural and often unconscious process, there are ways to manage its impact.
+[3705.000 --> 3712.000] Self-awareness, developing self-awareness is crucial in recognising when emotional contagion is occurring.
+[3712.000 --> 3720.000] By being mindful of your own emotions and reactions, you can better understand how others' emotions may be influencing you.
+[3720.000 --> 3727.000] Emotional boundaries, setting emotional boundaries can help protect yourself from negative emotional contagion.
+[3728.000 --> 3736.000] It involves being aware of your emotional limits and consciously choosing not to absorb or mirror others' negative emotions.
+[3736.000 --> 3744.000] Positive influence, being aware of your own emotional contagion can allow you to intentionally spread positive emotions to others.
+[3744.000 --> 3751.000] By being a source of positivity and support, you can create a more uplifting and harmonious environment.
+[3751.000 --> 3758.000] Empathy and compassion, practising empathy and compassion towards others can help manage emotional contagion.
+[3758.000 --> 3766.000] By understanding and acknowledging others' emotions without absorbing them, you can maintain a healthy emotional balance.
+[3766.000 --> 3773.000] Understanding emotional contagion is a powerful tool in decoding people and building stronger connections.
+[3774.000 --> 3783.000] By recognising the signs of emotional contagion and managing its impact, you can navigate social interactions with greater insight and empathy.
+[3783.000 --> 3786.000] Recognising emotional leakage.
+[3786.000 --> 3796.000] Emotions are an integral part of human communication. They play a significant role in how we interact with others and convey our thoughts and feelings.
+[3797.000 --> 3807.000] While some people may be skilled at hiding their emotions, there are often subtle signs that leak out, providing valuable insights into their true emotional state.
+[3807.000 --> 3816.000] Recognising these emotional leaks can help you gain a deeper understanding of others and enhance your ability to read people effectively.
+[3816.000 --> 3819.000] Understanding emotional leakage.
+[3819.000 --> 3829.000] Emotional leakage refers to the unintentional display of emotions through non-verbal cues such as facial expressions, body language, and vocal tone.
+[3829.000 --> 3839.000] These leaks occur when individuals are unable to completely control or suppress their emotions, allowing glimpses of their true feelings to surface.
+[3839.000 --> 3848.000] While they may try to mask their emotions, these leaks can reveal their underlying thoughts and emotions, providing you with valuable information.
+[3848.000 --> 3850.000] Facial expressions.
+[3850.000 --> 3855.000] Facial expressions are one of the most prominent channels for emotional leakage.
+[3855.000 --> 3866.000] The face is incredibly expressive and can reveal a wide range of emotions, including happiness, sadness, anger, fear, surprise, and disgust.
+[3866.000 --> 3876.000] While some people may be skilled at masking their emotions, micro-expressions, which are brief and involuntary facial expressions, can still leak out.
+[3877.000 --> 3887.000] To recognise emotional leakage through facial expressions, pay attention to subtle changes in the eyebrows, eyes, mouth, and overall facial muscle tension.
+[3887.000 --> 3895.000] For example, a slight furrowing of the brows or a tightening of the lips may indicate anger or frustration.
+[3895.000 --> 3902.000] Similarly, a sudden widening of the eyes or a raised eyebrow may suggest surprise or disbelief.
+[3902.000 --> 3909.000] By observing these micro-expressions, you can gain insights into a person's true emotional state.
+[3909.000 --> 3911.000] Body language.
+[3911.000 --> 3916.000] Body language is another powerful indicator of emotional leakage.
+[3916.000 --> 3924.000] The way a person holds themselves, their posture, and their gestures can provide valuable clues about their emotional state.
+[3924.000 --> 3931.000] For example, crossed arms and a tense body posture may indicate defensiveness or discomfort.
+[3931.000 --> 3937.000] On the other hand, open and relaxed body language may suggest a sense of ease and comfort.
+[3937.000 --> 3945.000] Pay attention to subtle changes in body language, such as fidgeting, shifting weight, or avoiding eye contact.
+[3945.000 --> 3951.000] These behaviours can indicate nervousness, anxiety, or even deception.
+[3951.000 --> 3960.000] Additionally, observe the overall body movements and gestures as they can reveal a person's level of confidence, engagement, or disinterest.
+[3960.000 --> 3969.000] By analysing these non-verbal cues, you can uncover emotional leakage and gain a deeper understanding of a person's true feelings.
+[3969.000 --> 3972.000] Vocal tone and pitch.
+[3972.000 --> 3977.000] The way a person speaks can also provide insights into their emotional state.
+[3977.000 --> 3987.000] Vocal tone, pitch, and inflection can reveal underlying emotions, such as excitement, sadness, anger, or nervousness.
+[3987.000 --> 3996.000] Pay attention to changes in the volume, speed, and rhythm of their speech, as well as any noticeable shifts in their vocal tone.
+[3996.000 --> 4006.000] For example, a trembling or shaky voice may indicate fear or anxiety, while a raised or aggressive tone may suggest anger or frustration.
+[4006.000 --> 4012.000] Similarly, a monotone voice with little variation may indicate boredom or disinterest.
+[4012.000 --> 4021.000] By listening closely to these vocal cues, you can detect emotional leakage and gain a better understanding of a person's true emotions.
+[4021.000 --> 4025.000] Incongruence between verbal and non-verbal cues.
+[4025.000 --> 4032.000] One of the key indicators of emotional leakage is the incongruence between a person's verbal and non-verbal cues.
+[4033.000 --> 4042.000] When someone is trying to hide their true emotions, there may be inconsistencies between what they say and how they express themselves non-verbally.
+[4042.000 --> 4050.000] For example, someone may say they are happy, but their facial expression and body language may suggest otherwise.
+[4050.000 --> 4057.000] Pay attention to these inconsistencies as they can provide valuable insights into a person's true emotional state.
+[4057.000 --> 4065.000] When you notice such incongruence, it is essential to trust your intuition and dig deeper to uncover the underlying emotions.
+[4065.000 --> 4072.000] By recognising these leaks, you can gain a more accurate understanding of a person's true feelings and thoughts.
+[4072.000 --> 4075.000] Context and baseline behaviour.
+[4075.000 --> 4084.000] To effectively recognise emotional leakage, it is crucial to consider the context and establish a baseline behaviour for comparison.
+[4085.000 --> 4093.000] People's emotional expressions can vary depending on the situation, cultural background and individual personality.
+[4093.000 --> 4098.000] What may be considered a leak in one context may be a normal expression in another.
+[4098.000 --> 4107.000] By observing a person's behaviour over time and in different situations, you can establish a baseline for their typical emotional expressions.
+[4107.000 --> 4113.000] This baseline will help you identify any deviations or leaks that occur.
+[4113.000 --> 4122.000] Additionally, considering the context in which the leakage occurs can provide valuable insights into the underlying emotions and motivations.
+[4122.000 --> 4125.000] Practice and observation. +[4125.000 --> 4130.000] Recognising emotional leakage requires practice and keen observation skills. +[4130.000 --> 4135.000] It is essential to be present and fully engaged when interacting with others. +[4135.000 --> 4141.000] Pay attention to the subtle cues and signals that may indicate emotional leakage. +[4141.000 --> 4148.000] The more you practice, the better you will become at reading people and understanding their true emotions. +[4148.000 --> 4154.000] Additionally, it is crucial to remember that emotional leakage is not an exact science. +[4154.000 --> 4159.000] People are complex and their emotions can be influenced by various factors. +[4159.000 --> 4167.000] Therefore, it is essential to approach people reading with empathy, understanding and an open mind. +[4167.000 --> 4177.000] Use your observations as a starting point for deeper conversations and connections, rather than making assumptions or judgments based solely on emotional leakage. +[4177.000 --> 4185.000] In conclusion, recognising emotional leakage is a valuable skill that can enhance your ability to read people effectively. +[4185.000 --> 4198.000] By paying attention to facial expressions, body language, vocal cues and inconsistencies between verbal and non-verbal cues, you can gain insights into a person's true emotional state. +[4198.000 --> 4203.000] Remember to consider the context and establish a baseline behaviour for comparison. +[4203.000 --> 4212.000] With practice and observation, you can become more proficient at recognising emotional leakage and understanding others on a deeper level. +[4212.000 --> 4215.000] Interpreting emotional displays. +[4215.000 --> 4228.000] Emotions play a significant role in human communication and being able to interpret emotional displays accurately can provide valuable insights into a person's thoughts, feelings and intentions. +[4228.000 --> 4235.000] In this section, we will explore various aspects of emotional displays and how to interpret them effectively. +[4235.000 --> 4238.000] Facial expressions +[4238.000 --> 4244.000] Facial expressions are one of the most powerful and universal ways to convey emotions. +[4244.000 --> 4252.000] The face is incredibly expressive and different facial muscles work together to create a wide range of emotional displays. +[4252.000 --> 4260.000] Understanding and interpreting these expressions can help you gain a deeper understanding of a person's emotional state. +[4260.000 --> 4268.000] When interpreting facial expressions, it is essential to consider both the individual features and the overall expression. +[4268.000 --> 4277.000] For example, a raised eyebrow may indicate surprise or skepticism while a furrowed brow may suggest anger or confusion. +[4277.000 --> 4286.000] Similarly, a smile can convey happiness, but the presence of tension in the muscles around the eyes may indicate a forced or insincere smile. +[4286.000 --> 4297.000] It is crucial to remember that facial expressions can vary across cultures, so it is essential to consider cultural differences when interpreting emotional displays. +[4297.000 --> 4307.000] For example, in some cultures, showing emotions openly may be considered inappropriate or disrespectful, leading individuals to mask their true feelings. +[4307.000 --> 4310.000] Vocal cues +[4310.000 --> 4317.000] In addition to facial expressions, vocal cues can provide valuable insights into a person's emotional state. 
+[4317.000 --> 4328.000] The tone, pitch and volume of someone's voice can convey a range of emotions, including happiness, sadness, anger, fear and surprise. +[4328.000 --> 4334.000] When interpreting vocal cues, it is essential to pay attention to changes in pitch and tone. +[4334.000 --> 4343.000] For example, a high-pitched voice may indicate excitement or nervousness while a low-pitched voice may suggest anger or sadness. +[4343.000 --> 4348.000] Similarly, a monotone voice may indicate boredom or lack of interest. +[4348.000 --> 4354.000] It is also crucial to consider the context in which the vocal cues are expressed. +[4354.000 --> 4362.000] For example, a person may use sarcasm or irony to convey a different emotion than what their words may suggest. +[4362.000 --> 4370.000] By paying attention to vocal cues, you can gain a deeper understanding of a person's emotional state and intentions. +[4370.000 --> 4372.000] Body language +[4372.000 --> 4379.000] Body language refers to the non-verbal signals that we convey through our posture, gestures and movements. +[4379.000 --> 4384.000] It can provide valuable insights into a person's emotional state and intentions. +[4384.000 --> 4393.000] When interpreting emotional displays through body language, it is essential to consider both individual gestures and the overall body posture. +[4393.000 --> 4402.000] Gestures such as crossed arms, clenched fists or tapping fingers may indicate frustration, defensiveness or anger. +[4402.000 --> 4412.000] On the other hand, open and relaxed gestures such as open palms or uncrossed legs may suggest comfort, openness and confidence. +[4412.000 --> 4418.000] Body posture can also provide valuable insights into a person's emotional state. +[4418.000 --> 4429.000] For example, slumped shoulders and a lowered head may indicate sadness or defeat while an upright posture with a lifted chin may suggest confidence or assertiveness. +[4429.000 --> 4441.000] It is important to note that body language can vary across individuals and cultures, so it is crucial to consider individual differences and cultural norms when interpreting emotional displays through body language. +[4441.000 --> 4444.000] Micro-expressions +[4444.000 --> 4451.000] Micro-expressions are brief, involuntary facial expressions that occur within a fraction of a second. +[4451.000 --> 4458.000] They often reveal a person's true emotions even when they are trying to conceal or mask them. +[4458.000 --> 4465.000] Micro-expressions can be challenging to detect, but with practice you can become more adept at recognising them. +[4466.000 --> 4475.000] Common micro-expressions include a fleeting flash of anger, fear, surprise, disgust, happiness or sadness. +[4475.000 --> 4483.000] These expressions are often subtle and may only last for a fraction of a second before the person regains control over their facial muscles. +[4483.000 --> 4491.000] To interpret micro-expressions accurately, it is essential to pay close attention to the subtle changes in facial muscles. +[4492.000 --> 4498.000] The eyebrows, eyes, nose, mouth and chin are key areas to focus on. +[4498.000 --> 4508.000] By training yourself to recognise these micro-expressions, you can gain valuable insights into a person's true emotions even when they are trying to hide them. +[4508.000 --> 4511.000] Emotional leakage +[4511.000 --> 4519.000] Emotional leakage refers to the unintentional display of emotions that occur when a person is trying to conceal or suppress them. 
+[4519.000 --> 4529.000] Despite their best efforts, emotions can leak through various non-verbal cues such as facial expressions, body language and vocal cues. +[4529.000 --> 4537.000] When interpreting emotional leakage, it is important to look for inconsistencies between a person's verbal and non-verbal cues. +[4537.000 --> 4547.000] For example, if someone claims to be happy but their facial expression and body language suggest otherwise, there may be emotional leakage. +[4547.000 --> 4552.000] It is also crucial to consider the context in which emotional leakage occurs. +[4552.000 --> 4559.000] For example, a person may display emotional leakage when discussing a sensitive or personal topic. +[4559.000 --> 4567.000] By paying attention to these subtle cues, you can gain a deeper understanding of a person's true emotions and thoughts. +[4567.000 --> 4569.000] Conclusion +[4569.000 --> 4576.000] Interpreting emotional displays is a valuable skill that can help you understand others on a deeper level. +[4576.000 --> 4589.000] By paying attention to facial expressions, vocal cues, body language, micro-expressions and emotional leakage, you can gain valuable insights into a person's emotional state and intentions. +[4589.000 --> 4599.000] Remember to consider individual differences and cultural norms when interpreting emotional displays as these factors can influence how emotions are expressed. +[4600.000 --> 4606.000] With practice and observation, you can become more adept at reading and understanding others like a book. +[4606.000 --> 4609.000] Decoding emotional body language +[4609.000 --> 4617.000] Emotions play a significant role in our daily interactions and can greatly influence our relationships and communication. +[4617.000 --> 4626.000] While verbal cues provide valuable information about a person's emotional state, body language can often reveal even more. +[4626.000 --> 4635.000] Decoding emotional body language is a skill that can help you better understand others and enhance your ability to connect with them on a deeper level. +[4635.000 --> 4645.000] In this section, we will explore various aspects of emotional body language and provide you with practical tips on how to decipher and interpret these signals. +[4645.000 --> 4648.000] Facial expressions +[4648.000 --> 4658.000] The face is a powerful tool for expressing emotions and being able to read facial expressions can give you valuable insights into a person's emotional state. +[4658.000 --> 4666.000] The six universal facial expressions are happiness, sadness, anger, fear, surprise and disgust. +[4666.000 --> 4675.000] However, it's important to note that cultural differences and individual variations can influence the way people express their emotions. +[4675.000 --> 4680.000] When decoding facial expressions, pay attention to the following cues. +[4680.000 --> 4685.000] Eyes, the eyes are often referred to as the window to the soul. +[4685.000 --> 4690.000] They can reveal a person's true emotions even when they are trying to hide them. +[4690.000 --> 4698.000] Look for changes in eye contact, pupil dilation and eyebrow movements as these can indicate various emotions. +[4699.000 --> 4705.000] Mouth, the mouth can provide valuable clues about a person's emotional state. +[4705.000 --> 4712.000] Pay attention to the curvature of the lips as well as any tension or relaxation in the jaw muscles. 
+[4712.000 --> 4720.000] Smiles can be genuine or fake, so look for signs of authenticity such as the presence of crow's feet around the eyes.
+[4720.000 --> 4725.000] Brows, the eyebrows can convey a wide range of emotions.
+[4725.000 --> 4733.000] Raised eyebrows can indicate surprise or disbelief while furrowed brows can signal anger or confusion.
+[4733.000 --> 4739.000] Pay attention to any asymmetry or rapid movements as these can reveal underlying emotions.
+[4739.000 --> 4742.000] Posture and gestures.
+[4742.000 --> 4749.000] Body posture and gestures can also provide valuable insights into a person's emotional state.
+[4749.000 --> 4754.000] Pay attention to the following cues when decoding emotional body language.
+[4754.000 --> 4762.000] Openness versus closedness, an open posture with relaxed arms and legs indicates a sense of comfort and openness.
+[4762.000 --> 4769.000] Conversely, crossed arms or legs and a hunched posture can indicate defensiveness or discomfort.
+[4769.000 --> 4776.000] Gestures and movements, gestures can reveal a person's level of engagement and enthusiasm.
+[4776.000 --> 4781.000] Pay attention to the speed, direction and intensity of their gestures.
+[4782.000 --> 4791.000] For example, rapid and expansive gestures can indicate excitement while slow and restrained movements can suggest caution or hesitation.
+[4791.000 --> 4800.000] Micro-expressions are brief facial expressions that occur involuntarily and can reveal a person's true emotions.
+[4800.000 --> 4809.000] These fleeting expressions can last for just a fraction of a second, so it's important to pay close attention to subtle changes in the face.
+[4809.000 --> 4812.000] Body language clusters.
+[4812.000 --> 4821.000] To accurately decode emotional body language, it's essential to look for clusters of cues rather than relying on individual signals alone.
+[4821.000 --> 4832.000] A single gesture or expression may not provide a complete picture of a person's emotional state, but when combined with other cues, it can help you gain a deeper understanding.
+[4832.000 --> 4843.000] For example, if someone is crossing their arms, avoiding eye contact and displaying tense facial muscles, it may indicate that they are feeling defensive or uncomfortable.
+[4843.000 --> 4854.000] However, if they are leaning forward, maintaining eye contact and displaying open body language, it may suggest that they are engaged and interested in the conversation.
+[4854.000 --> 4857.000] Context and cultural considerations.
+[4857.000 --> 4866.000] When decoding emotional body language, it's important to consider the context and cultural factors that may influence a person's behaviour.
+[4866.000 --> 4875.000] Different cultures have varying norms and expectations regarding the display of emotions, which can impact how people express themselves non-verbally.
+[4875.000 --> 4883.000] Additionally, it's crucial to remember that body language cues can be subjective and may vary from person to person.
+[4883.000 --> 4896.000] While certain gestures or expressions may generally indicate a particular emotion, it's essential to consider individual differences and avoid making assumptions based solely on non-verbal cues.
+[4896.000 --> 4899.000] Practice and observation.
+[4899.000 --> 4905.000] Decoding emotional body language is a skill that can be honed with practice and observation.
+[4905.000 --> 4910.000] To improve your ability to read others, try the following exercises.
+[4910.000 --> 4918.000] People watching, observe people in different settings, such as cafes, parks or public transportation.
+[4918.000 --> 4925.000] Pay attention to their body language and try to identify the emotions they may be experiencing.
+[4925.000 --> 4931.000] Mirror and mimic, practice mirroring the body language of others in a subtle and respectful way.
+[4931.000 --> 4938.000] This exercise can help you develop empathy and enhance your ability to understand and connect with others.
+[4938.000 --> 4944.000] Self-awareness. Pay attention to your own body language and how it reflects your emotions.
+[4944.000 --> 4951.000] By becoming more aware of your own non-verbal cues, you can better understand how others may be feeling.
+[4951.000 --> 4961.000] Remember, decoding emotional body language is not an exact science and it's important to consider multiple factors when interpreting non-verbal cues.
+[4962.000 --> 4970.000] With practice and observation, you can develop a greater understanding of others and improve your communication and empathy skills.
+[4970.000 --> 4973.000] Analysing emotional responses.
+[4973.000 --> 4983.000] Emotions play a significant role in our daily interactions and can provide valuable insights into a person's thoughts, feelings and intentions.
+[4983.000 --> 4990.000] Analysing emotional responses is a crucial skill in decoding people and understanding their true emotions.
+[4991.000 --> 5000.000] By observing and interpreting emotional cues, you can gain a deeper understanding of someone's emotional state and respond appropriately.
+[5000.000 --> 5007.000] In this section, we will explore various techniques and strategies for analysing emotional responses.
+[5007.000 --> 5014.000] Facial expressions. Facial expressions are one of the most powerful indicators of emotions.
+[5014.000 --> 5020.000] The face is a canvas that reflects a person's inner feelings often involuntarily.
+[5020.000 --> 5028.000] By paying attention to subtle changes in facial expressions, you can gain valuable insights into a person's emotional state.
+[5028.000 --> 5035.000] Micro-expressions are fleeting facial expressions that occur within a fraction of a second.
+[5035.000 --> 5042.000] They are often unconscious and can reveal a person's true emotions, even when they are trying to conceal them.
+[5043.000 --> 5051.000] Pay attention to subtle movements in the eyebrows, eyes, mouth and other facial muscles to identify micro-expressions.
+[5051.000 --> 5059.000] Macro-expressions, unlike micro-expressions, macro-expressions are more prolonged and easier to detect.
+[5059.000 --> 5065.000] These expressions can provide valuable information about a person's overall emotional state.
+[5065.000 --> 5072.000] Look for changes in the shape of the mouth, the position of the eyebrows and the overall tension in the face.
+[5072.000 --> 5080.000] Contextual analysis, when analysing facial expressions, it is essential to consider the context in which they occur.
+[5080.000 --> 5088.000] A smile, for example, can indicate happiness, but it can also be a social mask to hide true emotions.
+[5088.000 --> 5096.000] By considering the situation and the person's overall behaviour, you can better understand the meaning behind their facial expressions.
+[5096.000 --> 5098.000] Vocal cues.
+[5098.000 --> 5106.000] In addition to facial expressions, vocal cues can also provide valuable insights into a person's emotional state.
+[5106.000 --> 5113.000] The tone, pitch and rhythm of someone's voice can reveal their underlying emotions and attitudes.
+[5113.000 --> 5118.000] Here are some techniques for analysing emotional responses through vocal cues.
+[5118.000 --> 5123.000] Tone of voice, pay attention to the overall tone of someone's voice.
+[5123.000 --> 5132.000] A calm and steady tone may indicate confidence or contentment, while a tense or shaky tone may suggest anxiety or fear.
+[5132.000 --> 5138.000] Changes in pitch and volume can also provide clues about a person's emotional state.
+[5138.000 --> 5144.000] Speech rate, the speed at which someone speaks can reveal their level of excitement or agitation.
+[5144.000 --> 5152.000] Rapid speech may indicate enthusiasm or nervousness, while slow speech may suggest sadness or contemplation.
+[5152.000 --> 5157.000] Pay attention to any changes in speech rate during a conversation.
+[5157.000 --> 5166.000] Emphasis and intonation, the way someone emphasises certain words or phrases can provide insights into their emotional state.
+[5166.000 --> 5172.000] For example, a person may emphasise certain words when they are angry or frustrated.
+[5172.000 --> 5179.000] Similarly, the intonation of their voice can reveal their level of interest or engagement in a conversation.
+[5179.000 --> 5182.000] Body language.
+[5182.000 --> 5187.000] Body language is another essential aspect of analysing emotional responses.
+[5187.000 --> 5196.000] The way someone holds themselves, their gestures and their overall posture can reveal a wealth of information about their emotional state.
+[5196.000 --> 5200.000] Here are some key body language cues to consider.
+[5200.000 --> 5204.000] Posture, pay attention to how someone holds themselves.
+[5204.000 --> 5214.000] A slouched or closed-off posture may indicate low confidence or discomfort, while an open and upright posture may suggest confidence and openness.
+[5214.000 --> 5221.000] Changes in posture during a conversation can also provide insights into a person's emotional state.
+[5221.000 --> 5229.000] Gestures, hand movements, facial gestures and other non-verbal cues can reveal a person's emotional state.
+[5229.000 --> 5238.000] For example, clenched fists may indicate anger or frustration, while fidgeting or tapping fingers may suggest nervousness or impatience.
+[5238.000 --> 5244.000] Pay attention to any repetitive or exaggerated gestures that may indicate heightened emotions.
+[5244.000 --> 5251.000] Proximity, the distance someone maintains from others can also provide insights into their emotional state.
+[5251.000 --> 5261.000] A person who invades personal space may be displaying dominance or aggression, while someone who keeps their distance may be indicating discomfort or a desire for privacy.
+[5261.000 --> 5266.000] Pay attention to any changes in proximity during a conversation.
+[5267.000 --> 5269.000] Emotional leakage
+[5269.000 --> 5276.000] Emotional leakage refers to the unintentional display of emotions that a person is trying to conceal.
+[5276.000 --> 5283.000] Despite their best efforts, people often leak subtle emotional cues that can be detected by a keen observer.
+[5283.000 --> 5288.000] Here are some signs of emotional leakage to watch out for.
+[5288.000 --> 5297.000] Micro-expressions, as mentioned earlier, micro-expressions can reveal a person's true emotions, even when they are trying to hide them.
+[5297.000 --> 5305.000] Look for fleeting expressions of sadness, anger, fear, or disgust that may contradict their verbal or non-verbal cues.
+[5305.000 --> 5312.000] Incongruence, pay attention to any inconsistencies between a person's verbal and non-verbal cues.
+[5313.000 --> 5322.000] For example, if someone claims to be happy but their facial expression or body language suggests otherwise, there may be emotional leakage.
+[5322.000 --> 5333.000] Subtle changes, emotional leakage can also manifest as subtle changes in behaviour, such as shifts in eye contact, changes in voice tone or sudden pauses in speech.
+[5333.000 --> 5339.000] These subtle cues can provide valuable insights into a person's true emotions.
+[5339.000 --> 5342.000] Contextual analysis
+[5342.000 --> 5348.000] When analysing emotional responses, it is crucial to consider the context in which they occur.
+[5348.000 --> 5357.000] People's emotions can be influenced by various factors, including the situation, their relationship with others, and their personal history.
+[5357.000 --> 5364.000] By considering the broader context, you can gain a more accurate understanding of someone's emotional state.
+[5365.000 --> 5373.000] Environmental factors, pay attention to the physical environment and any external factors that may influence a person's emotions.
+[5373.000 --> 5380.000] For example, a noisy or crowded room may contribute to feelings of stress or discomfort.
+[5380.000 --> 5387.000] Social dynamics, consider the dynamics between individuals and how they may impact emotions.
+[5387.000 --> 5396.000] Power imbalances, social norms, and personal relationships can all influence how someone expresses and conceals their emotions.
+[5396.000 --> 5403.000] Personal history, people's past experiences and traumas can shape their emotional responses.
+[5403.000 --> 5410.000] Be mindful that individuals may have unique triggers or sensitivities based on their personal history.
+[5410.000 --> 5419.000] By combining your observations of facial expressions, vocal cues, body language, emotional leakage, and contextual analysis,
+[5419.000 --> 5424.000] you can develop a more comprehensive understanding of a person's emotional responses.
+[5424.000 --> 5434.000] Remember that no single cue can provide a complete picture, but by considering multiple factors, you can enhance your ability to read and understand others.
+[5434.000 --> 5438.000] Chapter 4. Detecting Deception
+[5438.000 --> 5441.000] Spotting signs of lying.
+[5441.000 --> 5453.000] Lying is a common human behaviour that can occur in various situations, whether it's in personal relationships, professional settings, or even during casual conversations.
+[5453.000 --> 5459.000] As a skilled reader of people, it is essential to be able to spot signs of lying accurately.
+[5459.000 --> 5468.000] By understanding the subtle cues and behaviours associated with deception, you can become more adept at deciphering the truth from the lies.
+[5468.000 --> 5474.000] In this section, we will explore some key indicators that can help you spot signs of lying.
+[5474.000 --> 5477.000] Facial expressions
+[5477.000 --> 5486.000] Facial expressions can provide valuable insights into a person's emotional state and can be particularly revealing when it comes to detecting deception.
+[5486.000 --> 5495.000] While it is important to remember that everyone's facial expressions can vary, there are some common signs to look out for when trying to spot signs of lying.
+[5495.000 --> 5501.000] One of the most well-known indicators of deception is the presence of micro-expressions.
+[5501.000 --> 5508.000] These are fleeting facial expressions that occur involuntarily and can reveal a person's true emotions.
+[5508.000 --> 5519.000] When someone is lying, they may display micro-expressions of fear, surprise, or contempt, which can contradict the emotions they are trying to convey.
+[5519.000 --> 5523.000] Another facial cue to watch for is the suppression of emotions.
+[5523.000 --> 5529.000] Liars may try to control their facial expressions to appear more composed and less emotional.
+[5530.000 --> 5541.000] However, this can result in subtle inconsistencies such as a slight delay in their emotional response or a lack of congruence between their words and facial expressions.
+[5541.000 --> 5543.000] Body language
+[5543.000 --> 5551.000] Body language plays a significant role in communication and can provide valuable clues when it comes to detecting deception.
+[5551.000 --> 5558.000] When someone is lying, they may exhibit certain body language signals that can indicate their discomfort or unease.
+[5558.000 --> 5563.000] One common sign of lying is increased fidgeting or restlessness.
+[5563.000 --> 5571.000] Liars may exhibit nervous behaviours such as tapping their fingers, shifting their weight, or playing with objects.
+[5571.000 --> 5577.000] These actions can be a result of the anxiety and stress associated with deceiving others.
+[5577.000 --> 5582.000] Another body language cue to look out for is the avoidance of eye contact.
+[5582.000 --> 5590.000] While it is not always an indicator of lying on its own, it can be a sign of discomfort or a desire to avoid detection.
+[5590.000 --> 5600.000] Liars may also engage in excessive blinking or rapid eye movements as they try to think of plausible explanations or divert attention away from their deception.
+[5600.000 --> 5603.000] Verbal cues
+[5603.000 --> 5609.000] Verbal cues can provide valuable insights into a person's truthfulness or deception.
+[5609.000 --> 5617.000] While it is important to consider the context and individual differences, there are some common verbal cues that can indicate lying.
+[5617.000 --> 5622.000] One such cue is the use of vague language or evasive answers.
+[5622.000 --> 5630.000] Liars may try to avoid providing specific details or may use ambiguous language to create a sense of uncertainty.
+[5630.000 --> 5638.000] They may also exhibit a higher pitch or a change in vocal tone, which can be indicative of nervousness or anxiety.
+[5638.000 --> 5643.000] Another verbal cue to watch for is the presence of inconsistencies in their story.
+[5643.000 --> 5651.000] Liars may struggle to maintain a consistent narrative and their words may contradict known facts or previous statements.
+[5651.000 --> 5658.000] Pay attention to any sudden changes in their story or the use of excessive qualifiers and justifications.
+[5658.000 --> 5661.000] Micro-expressions of deception
+[5661.000 --> 5669.000] Micro-expressions are brief facial expressions that occur within a fraction of a second and can reveal a person's true emotions.
+[5669.000 --> 5679.000] When it comes to detecting deception, micro-expressions can be particularly useful as they often occur involuntarily and can be difficult to control.
+[5679.000 --> 5684.000] There are several micro-expressions that are commonly associated with deception.
+[5684.000 --> 5692.000] One of these is the micro-expression of contempt, which involves a slight curling of the lip on one side of the face.
+[5692.000 --> 5699.000] This expression can indicate a hidden feeling of superiority or disdain, which may be present when someone is lying.
+[5699.000 --> 5704.000] Another micro-expression to watch for is the micro-expression of fear.
+[5704.000 --> 5710.000] This expression involves a widening of the eyes and a slight raising of the eyebrows.
+[5710.000 --> 5721.000] While fear can be a genuine emotion in certain situations, it can also be a sign of deception when it occurs in response to specific questions or topics.
+[5721.000 --> 5724.000] Verbal and non-verbal inconsistencies
+[5724.000 --> 5731.000] When someone is lying, there is often a mismatch between their verbal and non-verbal cues.
+[5731.000 --> 5737.000] Paying attention to these inconsistencies can help you spot signs of deception more effectively.
+[5737.000 --> 5749.000] For example, if someone claims to be confident and sure of their statements but displays signs of nervousness such as fidgeting or avoiding eye contact, it may indicate that they are not being entirely truthful.
+[5749.000 --> 5760.000] Similarly, if their words and body language do not align, such as saying, I'm fine, while displaying tense body posture, it can be a red flag for deception.
+[5760.000 --> 5767.000] It is important to note that these inconsistencies should be considered in the context of the individual's baseline behaviour.
+[5767.000 --> 5779.000] Some people naturally display more nervousness or have different communication styles, so it is crucial to establish a baseline for comparison before making judgments about their truthfulness.
+[5779.000 --> 5783.000] Identifying manipulative behaviours
+[5783.000 --> 5790.000] In addition to spotting signs of lying, it is also important to be able to identify manipulative behaviours.
+[5790.000 --> 5800.000] Manipulation can involve tactics such as gaslighting, guilt tripping, or playing the victim to deceive others and gain control over a situation.
+[5800.000 --> 5805.000] One key indicator of manipulative behaviour is a lack of accountability.
+[5805.000 --> 5814.000] Manipulators often deflect blame onto others or make excuses for their actions, avoiding taking responsibility for their behaviour.
+[5814.000 --> 5820.000] They may also use charm and flattery to manipulate others into doing what they want.
+[5820.000 --> 5825.000] Another manipulative behaviour to watch for is the use of emotional manipulation.
+[5825.000 --> 5834.000] Manipulators may try to evoke guilt, pity, or sympathy in others to manipulate their emotions and gain their compliance.
+[5834.000 --> 5842.000] They may also employ tactics such as exaggeration or selective disclosure of information to manipulate the perception of a situation.
+[5842.000 --> 5850.000] By being aware of these manipulative behaviours, you can better protect yourself from being deceived and manipulated by others.
+[5850.000 --> 5853.000] Conclusion
+[5853.000 --> 5862.000] Spotting signs of lying is a valuable skill that can help you navigate various aspects of life, from personal relationships to professional interactions.
+[5862.000 --> 5874.000] By paying attention to facial expressions, body language, verbal cues, micro-expressions, and inconsistencies, you can become more adept at detecting deception.
+[5874.000 --> 5883.000] Additionally, being able to identify manipulative behaviours can help you protect yourself from being deceived and manipulated by others.
+[5883.000 --> 5893.000] Remember, while these cues can be indicative of deception, it is essential to consider the context and individual differences before making any judgments.
+[5893.000 --> 5902.000] With practice and observation, you can enhance your ability to read people like a book and gain a deeper understanding of their true intentions and emotions.
+[5902.000 --> 5906.000] Understanding micro-expressions of deception
+[5906.000 --> 5916.000] Micro-expressions are brief, involuntary facial expressions that occur when a person is trying to conceal their true emotions or intentions.
+[5916.000 --> 5924.000] These fleeting expressions can provide valuable insights into a person's true feelings, especially when it comes to detecting deception.
+[5924.000 --> 5932.000] In this section, we will explore the fascinating world of micro-expressions and how they can help you uncover hidden truths.
+[5932.000 --> 5935.000] What are micro-expressions?
+[5935.000 --> 5945.000] Micro-expressions are facial expressions that last for just a fraction of a second, often occurring in response to an emotion that the person is trying to hide.
+[5945.000 --> 5953.000] These expressions are automatic and uncontrollable, making them a reliable indicator of a person's true emotional state.
+[5953.000 --> 5961.000] While they may be difficult to detect with the naked eye, with practice and observation, you can learn to spot the subtle cues.
+[5961.000 --> 5965.000] The Seven Universal Micro-Expressions
+[5965.000 --> 5975.000] Research has identified seven universal micro-expressions that are present across different cultures and are associated with specific emotions.
+[5975.000 --> 5978.000] These micro-expressions include
+[5978.000 --> 5986.000] Happiness, a genuine smile that involves the corners of the mouth lifting, the cheeks raising and the eyes crinkling.
+[5986.000 --> 5993.000] Sadness, a downward turn of the lips, eyebrows pulled together and a slight drooping of the eyelids.
+[5993.000 --> 5999.000] Anger, a furrowed brow, narrowed eyes and lips pressed together.
+[5999.000 --> 6004.000] Fear, wide eyes, raised eyebrows and a slightly open mouth.
+[6004.000 --> 6010.000] Disgust, a wrinkled nose, a raised upper lip and narrowed eyes.
+[6010.000 --> 6016.000] Surprise, raised eyebrows, widened eyes and an open mouth.
+[6016.000 --> 6020.000] Contempt, a slight curl of the lip on one side of the mouth.
+[6020.000 --> 6032.000] By familiarising yourself with these universal micro-expressions, you can begin to recognise when someone is experiencing a particular emotion, even if they are trying to hide it.
+[6032.000 --> 6036.000] Detecting deception through micro-expressions.
+[6036.000 --> 6041.000] Micro-expressions can be particularly useful in detecting deception.
+[6041.000 --> 6051.000] When someone is lying, they often experience conflicting emotions, which can manifest as micro-expressions that contradict their verbal statements.
+[6051.000 --> 6057.000] By paying close attention to these subtle cues, you can increase your ability to spot deception.
+[6057.000 --> 6063.000] Here are some key points to keep in mind when using micro-expressions to detect deception.
+[6063.000 --> 6066.000] Timing is crucial.
+[6066.000 --> 6074.000] Micro-expressions occur very quickly, usually lasting for only one twenty-fifth to one fifteenth of a second, roughly 40 to 67 milliseconds.
+[6074.000 --> 6082.000] To detect them, you need to be observant and attentive as they can easily go unnoticed if you blink or look away at the wrong moment.
+[6082.000 --> 6084.000] Look for incongruence.
+[6084.000 --> 6091.000] When someone is lying, their micro-expressions may not align with their verbal statements.
+[6091.000 --> 6097.000] For example, they may display a micro-expression of fear while claiming to be calm and confident.
+[6097.000 --> 6103.000] Look for these inconsistencies between their words and their facial expressions.
+[6103.000 --> 6107.000] Pay attention to micro-expression clusters.
+[6107.000 --> 6113.000] A single micro-expression may not provide enough information to determine if someone is lying.
+[6113.000 --> 6119.000] Instead, look for clusters of micro-expressions that occur within a short period.
+[6119.000 --> 6126.000] Multiple micro-expressions of fear, for instance, could indicate that the person is feeling anxious or deceptive.
+[6126.000 --> 6129.000] Context matters.
+[6129.000 --> 6133.000] Consider the context in which the micro-expression occurs.
+[6133.000 --> 6138.000] Is there a specific question or topic that triggers the micro-expression?
+[6138.000 --> 6145.000] Understanding the context can help you interpret the meaning behind the micro-expression more accurately.
+[6145.000 --> 6149.000] Developing your micro-expression reading skills.
+[6149.000 --> 6155.000] Reading micro-expressions accurately requires practice and honing your observation skills.
+[6155.000 --> 6161.000] Here are some strategies to help you develop your micro-expression reading skills.
+[6161.000 --> 6164.000] Study facial expressions.
+[6164.000 --> 6170.000] Take the time to study and familiarise yourself with the seven universal micro-expressions.
+[6170.000 --> 6176.000] Look for examples in movies, TV shows or real-life situations.
+[6176.000 --> 6181.000] Pay attention to the subtle changes in facial muscles that accompany each emotion.
+[6181.000 --> 6184.000] Practice observation.
+[6184.000 --> 6189.000] Observe people's facial expressions in various situations.
+[6189.000 --> 6193.000] Pay attention to the small, fleeting changes in their faces.
+[6193.000 --> 6201.000] Practice identifying micro-expressions in real time and try to determine the underlying emotions they represent.
+[6201.000 --> 6204.000] Use video resources.
+[6204.000 --> 6210.000] There are numerous video resources available that provide training in reading micro-expressions.
+[6210.000 --> 6220.000] These resources often include slow-motion footage and detailed explanations to help you identify and interpret micro-expressions accurately.
+[6220.000 --> 6222.000] Seek feedback.
+[6222.000 --> 6227.000] Ask for feedback from others who are skilled in reading micro-expressions.
+[6227.000 --> 6233.000] They can provide valuable insights and help you refine your observation skills.
+[6233.000 --> 6238.000] Consider joining a study group or seeking guidance from experts in the field.
+[6238.000 --> 6241.000] Ethical considerations.
+[6241.000 --> 6250.000] While micro-expressions can be a powerful tool for understanding others, it is essential to use this knowledge responsibly and ethically.
+[6250.000 --> 6255.000] Avoid using micro-expressions to manipulate or deceive others.
+[6255.000 --> 6264.000] Instead, focus on using your skills to enhance communication, build rapport and gain a deeper understanding of those around you.
+[6264.000 --> 6270.000] Remember that micro expressions are just one piece of the puzzle when it comes to reading others.
+[6270.000 --> 6281.000] It is crucial to consider other non-verbal cues, verbal cues and contextual factors to form a comprehensive understanding of a person's emotions and intentions.
+[6281.000 --> 6294.000] By understanding micro expressions and incorporating them into your people reading skills, you can become more adept at detecting deception and gaining valuable insights into the true emotions of those around you.
+[6294.000 --> 6298.000] Analyzing verbal and non-verbal inconsistencies.
+[6298.000 --> 6310.000] In the previous sections, we explored the various aspects of decoding people, including non-verbal communication, verbal cues, emotional signals and detecting deception.
+[6310.000 --> 6317.000] Now, we will delve into the fascinating world of analysing verbal and non-verbal inconsistencies.
+[6317.000 --> 6325.000] When it comes to understanding others, it is essential to pay attention to both what is being said and how it is being said.
+[6325.000 --> 6334.000] Verbal and non-verbal inconsistencies can provide valuable insights into a person's true thoughts, feelings and intentions.
+[6334.000 --> 6345.000] These inconsistencies occur when there is a mismatch between what someone says and how they express themselves through their body language, tone of voice or facial expressions.
+[6345.000 --> 6354.000] By learning to recognise and analyse these inconsistencies, you can gain a deeper understanding of a person's true emotions and motivations.
+[6354.000 --> 6358.000] Contradictions between words and body language.
+[6358.000 --> 6367.000] One of the most common forms of verbal and non-verbal inconsistencies is the contradiction between a person's words and their body language.
+[6367.000 --> 6376.000] For example, someone may say they are happy and excited about something, but their facial expressions and body posture may indicate otherwise.
+[6376.000 --> 6383.000] These contradictions can be subtle, but they can reveal a person's true feelings and intentions.
+[6383.000 --> 6390.000] To analyse these inconsistencies, it is crucial to observe a person's body language while they are speaking.
+[6390.000 --> 6395.000] Pay attention to their facial expressions, gestures and posture.
+[6395.000 --> 6402.000] Look for signs of discomfort such as fidgeting, crossed arms or avoiding eye contact.
+[6402.000 --> 6411.000] These non-verbal cues can indicate that the person is not being entirely truthful or that they may have conflicting emotions about what they are saying.
+[6411.000 --> 6417.000] Additionally, inconsistencies can also be detected through changes in vocal tone and pitch.
+[6417.000 --> 6424.000] If someone's voice suddenly becomes higher or lower, shaky or strained while discussing a particular topic,
+[6424.000 --> 6430.000] it may indicate that they are not being entirely honest or that they are uncomfortable with the subject matter.
+[6430.000 --> 6433.000] Incongruence in emotional displays.
+[6433.000 --> 6440.000] Another form of verbal and non-verbal inconsistency is incongruence in emotional displays.
+[6440.000 --> 6450.000] This occurs when a person's words do not align with the emotions they are expressing through their facial expressions, body language or tone of voice.
+[6450.000 --> 6458.000] For example, someone may say they are happy, but their facial expressions may show signs of sadness or anger.
+[6458.000 --> 6466.000] To analyse these inconsistencies, it is essential to pay attention to the congruence between a person's words and their emotional displays.
+[6467.000 --> 6474.000] Look for micro expressions, which are fleeting facial expressions that reveal a person's true emotions.
+[6474.000 --> 6484.000] These micro expressions can occur within a fraction of a second and are often involuntary, making them a reliable indicator of a person's true feelings.
+[6484.000 --> 6488.000] Additionally, observe a person's body language and vocal tone.
+[6488.000 --> 6494.000] Are they displaying open and relaxed body language while expressing positive emotions?
+[6494.000 --> 6500.000] Or are they exhibiting closed-off postures and tense vocal tones while claiming to be happy?
+[6500.000 --> 6506.000] These incongruences can provide valuable insights into a person's true emotional state.
+[6506.000 --> 6511.000] Detecting inconsistencies in verbal fillers and pauses.
+[6511.000 --> 6517.000] Verbal fillers and pauses are another area where inconsistencies can be detected.
+[6517.000 --> 6526.000] Verbal fillers, such as um, er or like, are often used when a person is searching for the right words or trying to buy time.
+[6526.000 --> 6532.000] Pauses, on the other hand, can indicate hesitation or the need to gather one's thoughts.
+[6532.000 --> 6539.000] Analyzing these verbal cues can help uncover inconsistencies in a person's speech.
+[6539.000 --> 6544.000] Pay attention to the frequency and timing of verbal fillers and pauses.
+[6544.000 --> 6548.000] Are they more prevalent when discussing certain topics?
+[6548.000 --> 6552.000] Do they occur when answering specific questions?
+[6552.000 --> 6559.000] These patterns can indicate discomfort, uncertainty or a lack of confidence in what is being said.
+[6559.000 --> 6562.000] Interpreting mixed messages.
+[6562.000 --> 6569.000] In some cases, people may intentionally send mixed messages to confuse or deceive others.
+[6569.000 --> 6578.000] These mixed messages can be challenging to decipher as they involve deliberate inconsistencies in both verbal and nonverbal communication.
+[6578.000 --> 6585.000] To interpret mixed messages, it is crucial to consider the context and the person's overall behaviour.
+[6585.000 --> 6589.000] Look for patterns of inconsistency in their communication style.
+[6589.000 --> 6594.000] Do they frequently contradict themselves or change their story?
+[6594.000 --> 6602.000] Are there inconsistencies in their body language, tone of voice or facial expressions when discussing certain topics?
+[6602.000 --> 6607.000] These inconsistencies can be indicators of deception or manipulation.
+[6607.000 --> 6613.000] However, it is essential to approach the interpretation of mixed messages with caution.
+[6613.000 --> 6620.000] Sometimes, inconsistencies can be the result of genuine confusion or conflicting emotions.
+[6620.000 --> 6628.000] It is crucial to gather additional information and consider the person's behaviour over time before drawing any conclusions.
+[6628.000 --> 6632.000] The importance of context and baseline behaviour.
+[6632.000 --> 6640.000] When analysing verbal and nonverbal inconsistencies, it is essential to consider the context and the person's baseline behaviour.
+[6640.000 --> 6652.000] Contextual factors such as the environment, the relationship between individuals and cultural influences can impact a person's communication style and behaviour.
+[6652.000 --> 6657.000] What may seem inconsistent in one context may be entirely normal in another.
+[6657.000 --> 6663.000] Additionally, establishing a baseline behaviour for an individual is crucial.
+[6663.000 --> 6671.000] By observing their typical patterns of communication and behaviour, you can better identify inconsistencies when they occur.
+[6671.000 --> 6681.000] Understanding a person's baseline behaviour allows you to differentiate between genuine inconsistencies and temporary deviations from their usual patterns.
+[6681.000 --> 6689.000] In conclusion, analysing verbal and nonverbal inconsistencies is a valuable skill in decoding people.
+[6689.000 --> 6707.000] By paying attention to contradictions between words and body language, incongruence in emotional displays, inconsistencies in verbal fillers and pauses, and mixed messages, you can gain deeper insights into a person's true thoughts, feelings, and intentions.
+[6707.000 --> 6714.000] However, it is essential to consider the context and the person's baseline behaviour to avoid misinterpretation.
+[6714.000 --> 6722.000] With practice and observation, you can become adept at reading these inconsistencies and understanding others on a deeper level.
+[6722.000 --> 6726.000] Identifying manipulative behaviours
+[6726.000 --> 6732.000] Manipulation is a common tactic used by individuals to influence and control others for personal gain.
+[6732.000 --> 6742.000] Being able to identify manipulative behaviours is an essential skill in decoding people and protecting yourself from being taken advantage of.
+[6742.000 --> 6751.000] In this section, we will explore various manipulative behaviours and provide you with the tools to recognise and respond to them effectively.
+[6751.000 --> 6754.000] Understanding manipulation
+[6754.000 --> 6759.000] Manipulation can take many forms and can be both subtle and overt.
+[6759.000 --> 6767.000] It involves using tactics to deceive, exploit, or influence others without their knowledge or consent.
+[6767.000 --> 6775.000] Manipulative individuals often have a hidden agenda and use psychological strategies to achieve their desired outcomes.
+[6775.000 --> 6783.000] By understanding the different types of manipulative behaviours, you can become more adept at recognising them in various situations.
+[6783.000 --> 6787.000] Recognising manipulative tactics
+[6787.000 --> 6792.000] Manipulative individuals employ a range of tactics to achieve their goals.
+[6792.000 --> 6796.000] Here are some common manipulative behaviours to watch out for.
+[6796.000 --> 6799.000] Gaslighting
+[6799.000 --> 6807.000] Gaslighting is a manipulative tactic where the individual makes the other person doubt their own perceptions, memories, and sanity.
+[6807.000 --> 6813.000] They may deny or distort the truth, making the victim question their own reality.
+[6813.000 --> 6821.000] Gaslighting can be emotionally and psychologically damaging as it undermines the victim's confidence and sense of self.
+[6821.000 --> 6823.000] Guilt tripping
+[6823.000 --> 6832.000] Guilt tripping involves manipulating someone by making them feel guilty for not meeting the manipulator's expectations or desires.
+[6832.000 --> 6842.000] The manipulator may use emotional blackmail or passive-aggressive behaviour to make the other person feel responsible for their unhappiness or dissatisfaction.
+[6842.000 --> 6845.000] Emotional manipulation
+[6845.000 --> 6852.000] Emotional manipulation involves exploiting someone's emotions to gain control or influence over them.
+[6852.000 --> 6862.000] Manipulators may use tactics such as playing the victim, using emotional blackmail or using excessive flattery to manipulate others' feelings and actions.
+[6862.000 --> 6865.000] Deception and lies
+[6865.000 --> 6871.000] Manipulative individuals often resort to deception and lies to achieve their goals.
+[6871.000 --> 6879.000] They may present false information, withhold important details or twist the truth to manipulate others into doing what they want.
+[6879.000 --> 6887.000] Detecting inconsistencies in their stories or noticing a pattern of dishonesty can help you identify manipulative behaviour.
+[6887.000 --> 6889.000] Isolation
+[6889.000 --> 6897.000] Manipulators may try to isolate their victims from friends, family or support networks to gain more control over them.
+[6897.000 --> 6908.000] By cutting off their sources of support and influence, manipulators can increase their power and make it harder for their victims to seek help or escape the manipulative relationship.
+[6908.000 --> 6911.000] Charm and manipulative persuasion
+[6911.000 --> 6917.000] Some manipulators are skilled at using charm and persuasive tactics to manipulate others.
+[6917.000 --> 6924.000] They may use flattery, charisma or seduction to win people over and gain their trust.
+[6924.000 --> 6930.000] It is important to be aware of these tactics and not be swayed solely by surface charm.
+[6930.000 --> 6934.000] Signs of manipulative behaviours
+[6934.000 --> 6942.000] Recognising manipulative behaviours can be challenging as manipulators are often skilled at hiding their true intentions.
+[6942.000 --> 6948.000] However, there are some signs that can help you identify manipulative behaviours.
+[6948.000 --> 6952.000] Inconsistencies in words and actions
+[6952.000 --> 6960.000] Manipulators may say one thing but do another. They may make promises they don't keep or give mixed messages.
+[6960.000 --> 6967.000] Pay attention to inconsistencies between their words and actions as this can be a sign of manipulative behaviour.
+[6967.000 --> 6969.000] Lack of empathy
+[6969.000 --> 6976.000] Manipulative individuals often lack empathy and disregard the feelings and needs of others.
+[6976.000 --> 6983.000] They may show little concern for the well-being of others and prioritize their own interests above all else.
+[6983.000 --> 6985.000] Emotional manipulation
+[6985.000 --> 6990.000] Manipulators may use emotional manipulation tactics to control others.
+[6990.000 --> 6997.000] They may try to make you feel guilty, ashamed or responsible for their emotions or actions.
+[6997.000 --> 7003.000] They may also use emotional outbursts or tantrums to manipulate your behaviour.
+[7003.000 --> 7006.000] Controlling behaviour
+[7006.000 --> 7013.000] Manipulators often exhibit controlling behaviour, seeking to control various aspects of your life.
+[7013.000 --> 7019.000] They may try to dictate your choices, isolate you from others or limit your independence.
+[7019.000 --> 7025.000] Pay attention to signs of controlling behaviour as it can be a red flag for manipulation.
+[7025.000 --> 7028.000] Lack of accountability
+[7028.000 --> 7033.000] Manipulative individuals often avoid taking responsibility for their actions.
+[7033.000 --> 7038.000] They may shift blame onto others or make excuses for their behaviour.
+[7038.000 --> 7043.000] They may also deny or minimise their actions when confronted.
+[7043.000 --> 7047.000] Responding to manipulative behaviours
+[7047.000 --> 7055.000] When you identify manipulative behaviours, it is important to respond in a way that protects your well-being and boundaries.
+[7055.000 --> 7058.000] Here are some strategies to consider.
+[7058.000 --> 7060.000] Set boundaries
+[7060.000 --> 7064.000] Establish clear boundaries and communicate them assertively.
+[7064.000 --> 7073.000] Let the manipulator know what behaviours are unacceptable and enforce consequences if they continue to manipulate or cross your boundaries.
+[7073.000 --> 7076.000] Trust your instincts.
+[7076.000 --> 7081.000] If something feels off or you sense manipulation, trust your instincts.
+[7081.000 --> 7088.000] Your intuition can often pick up on subtle cues and warning signs that may not be immediately apparent.
+[7088.000 --> 7091.000] Seek support
+[7091.000 --> 7100.000] If you find yourself in a manipulative relationship or situation, reach out to trusted friends, family or professionals for support.
+[7100.000 --> 7107.000] They can provide guidance and perspective and help you navigate the challenges of dealing with manipulative individuals.
+[7107.000 --> 7110.000] Educate yourself
+[7110.000 --> 7115.000] Continue to educate yourself about manipulative behaviours and tactics.
+[7115.000 --> 7122.000] The more you understand about manipulation, the better equipped you will be to recognise and respond to it effectively.
+[7122.000 --> 7125.000] Practice self-care
+[7125.000 --> 7132.000] Taking care of your physical, emotional and mental well-being is crucial when dealing with manipulative individuals.
+[7132.000 --> 7139.000] Engage in activities that bring you joy, practice self-care and prioritise your needs.
+[7139.000 --> 7141.000] Conclusion
+[7141.000 --> 7150.000] Identifying manipulative behaviours is an important skill in decoding people and protecting yourself from being manipulated.
+[7150.000 --> 7161.000] By understanding the different types of manipulative tactics and recognising the signs, you can respond effectively and maintain healthy boundaries in your relationships and interactions.
+[7161.000 --> 7168.000] Remember to trust your instincts, seek support when needed and prioritise your well-being.
+[7168.000 --> 7172.000] Chapter 5 Interpreting personal style
+[7172.000 --> 7176.000] Recognising clothing and appearance cues
+[7176.000 --> 7185.000] In the art of decoding people, one of the key aspects to consider is the way individuals present themselves through their clothing and appearance.
+[7185.000 --> 7194.000] Our clothing choices and personal style can reveal a great deal about our personality, values and even our emotional state.
+[7194.000 --> 7203.000] By paying attention to these cues, we can gain valuable insights into a person's character and better understand their motivations and intentions.
+[7203.000 --> 7206.000] Dressing for identity
+[7206.000 --> 7212.000] The way we dress often reflects our sense of identity and how we want to be perceived by others.
+[7212.000 --> 7221.000] Clothing can be a powerful tool for self-expression, allowing us to communicate our values, interests and social affiliations.
+[7221.000 --> 7235.000] For example, someone who dresses in a professional and polished manner may be seen as ambitious and detail-oriented, while someone who prefers a more casual and relaxed style may be perceived as laid back and approachable.
+[7235.000 --> 7242.000] When reading others, it is important to consider how their clothing aligns with their personal and professional goals.
+[7242.000 --> 7250.000] Are they dressing to conform to societal norms and expectations, or are they intentionally challenging those norms?
+[7250.000 --> 7258.000] By understanding the motivations behind their clothing choices, we can gain a deeper understanding of their personality and values.
+[7258.000 --> 7261.000] Cultural influences
+[7261.000 --> 7267.000] Clothing and appearance cues are heavily influenced by cultural norms and traditions.
+[7267.000 --> 7274.000] Different cultures have distinct styles of dress that reflect their history, beliefs and social structures.
+[7274.000 --> 7284.000] When decoding people, it is crucial to be aware of these cultural influences and avoid making assumptions based on our own cultural biases.
+[7284.000 --> 7292.000] For example, in some cultures, modesty is highly valued and individuals may choose to dress in a more conservative manner.
+[7292.000 --> 7298.000] In contrast, other cultures may embrace more revealing clothing styles.
+[7298.000 --> 7308.000] By understanding and respecting these cultural differences, we can avoid misinterpreting someone's intentions or character based solely on their appearance.
+[7308.000 --> 7312.000] Personal style and self-expression
+[7312.000 --> 7322.000] While cultural influences play a significant role in our clothing choices, personal style allows individuals to express their unique personality and preferences.
+[7322.000 --> 7331.000] Personal style encompasses not only the type of clothing we wear but also the colors, patterns and accessories we choose.
+[7331.000 --> 7337.000] When reading others, pay attention to the consistency and coherence of their personal style.
+[7337.000 --> 7344.000] Do they have a signature look or a specific color palette they consistently gravitate towards?
+[7344.000 --> 7349.000] These choices can provide insights into their personality traits.
+[7349.000 --> 7361.000] For example, someone who consistently wears bold and vibrant colors may be seen as confident and outgoing, while someone who prefers neutral tones may be perceived as more reserved and introspective.
+[7361.000 --> 7364.000] Grooming and attention to detail
+[7364.000 --> 7372.000] In addition to clothing, grooming and attention to detail in appearance can also reveal important cues about a person.
+[7372.000 --> 7382.000] How someone presents themselves in terms of personal hygiene, hairstyle and overall grooming can indicate their level of self-care and attention to detail.
+[7382.000 --> 7389.000] For example, someone who takes meticulous care of their appearance may be seen as organized and detail-oriented.
+[7389.000 --> 7399.000] On the other hand, someone who appears disheveled or unkempt may be perceived as more laid back or less concerned with societal expectations.
+[7399.000 --> 7402.000] Emotional state and non-verbal cues
+[7402.000 --> 7408.000] Clothing and appearance cues can also provide insights into a person's emotional state.
+[7408.000 --> 7418.000] When individuals are experiencing strong emotions, they may unconsciously reflect these emotions through their clothing choices and overall appearance.
+[7418.000 --> 7430.000] For example, someone who is feeling confident and empowered may choose to wear bold and assertive clothing, while someone who is feeling down or anxious may opt for more subdued and comfortable attire.
+[7430.000 --> 7438.000] By paying attention to these cues, we can gain a better understanding of a person's emotional state and respond accordingly. +[7438.000 --> 7442.000] Context and individual differences +[7442.000 --> 7453.000] It is important to note that while clothing and appearance cues can provide valuable insights, they should always be considered within the context of the individual and their unique circumstances. +[7453.000 --> 7462.000] People have different preferences, cultural backgrounds and personal styles which can influence their clothing choices and appearance. +[7462.000 --> 7471.000] When decoding people, it is crucial to avoid making snap judgments or generalizations based solely on clothing and appearance cues. +[7471.000 --> 7477.000] Instead, use these cues as a starting point for further observation and understanding. +[7477.000 --> 7484.000] Combine them with other non-verbal and verbal cues to form a more comprehensive picture of the individual. +[7484.000 --> 7486.000] Conclusion +[7486.000 --> 7492.000] Recognizing clothing and appearance cues is an essential skill in decoding people. +[7492.000 --> 7506.000] By paying attention to how individuals present themselves through their clothing choices, personal style, grooming and attention to detail, we can gain valuable insights into their personality, values and emotional state. +[7506.000 --> 7515.000] However, it is important to consider cultural influences, individual differences and the context in which these cues are observed. +[7516.000 --> 7527.000] By combining clothing and appearance cues with other non-verbal and verbal cues, we can develop a more accurate understanding of others and enhance our ability to read people like a book. +[7527.000 --> 7530.000] Understanding cultural influences +[7530.000 --> 7537.000] Culture plays a significant role in shaping an individual's behaviour, beliefs and values. +[7537.000 --> 7548.000] When it comes to decoding people, it is crucial to understand the cultural influences that can impact their non-verbal cues, communication styles and personal preferences. +[7548.000 --> 7557.000] Cultural differences can greatly affect how people express themselves, interpret social cues and establish relationships. +[7558.000 --> 7567.000] In this section, we will explore the importance of understanding cultural influences and how they can impact our ability to read others accurately. +[7567.000 --> 7571.000] Cultural norms and non-verbal communication +[7571.000 --> 7580.000] Non-verbal communication is a universal aspect of human interaction, but the way it is expressed can vary across different cultures. +[7580.000 --> 7589.000] Gestures, facial expressions and body language can have different meanings and interpretations depending on the cultural context. +[7589.000 --> 7600.000] For example, while direct eye contact is considered a sign of attentiveness and respect in Western cultures, it may be seen as disrespectful or confrontational in some Asian cultures. +[7600.000 --> 7606.000] Understanding cultural norms is essential to accurately interpret non-verbal cues. +[7606.000 --> 7612.000] It is important to be aware of the cultural context in which you are observing someone's behaviour. +[7612.000 --> 7624.000] By familiarising yourself with the cultural norms of a particular group, you can avoid misinterpreting non-verbal signals and make more accurate assessments of a person's intentions and emotions. 
+[7624.000 --> 7628.000] Communication styles and language
+[7628.000 --> 7635.000] Language is a fundamental aspect of culture, and different cultures have distinct communication styles.
+[7635.000 --> 7643.000] Some cultures value direct and explicit communication while others prefer indirect and implicit communication.
+[7643.000 --> 7650.000] These differences can greatly impact how people express their thoughts, emotions and intentions.
+[7650.000 --> 7662.000] In high-context cultures, such as many Asian and Middle Eastern cultures, communication is often indirect and relies heavily on non-verbal cues and contextual information.
+[7662.000 --> 7674.000] In contrast, low-context cultures, like many Western cultures, tend to value direct and explicit communication, focusing more on the words spoken rather than the underlying context.
+[7674.000 --> 7683.000] Understanding these differences in communication styles is crucial for accurately decoding people from different cultural backgrounds.
+[7683.000 --> 7692.000] It is important to be mindful of the cultural context and adapt your communication style accordingly to ensure effective and respectful communication.
+[7692.000 --> 7695.000] Personal space and territory
+[7695.000 --> 7701.000] The concept of personal space and territorial boundaries can also vary across cultures.
+[7701.000 --> 7714.000] Some cultures have a larger personal space bubble and prefer more physical distance between individuals during interactions, while others have a smaller personal space bubble and are comfortable with closer proximity.
+[7714.000 --> 7724.000] For example, in many Western cultures, people generally prefer a larger personal space bubble and may feel uncomfortable if someone stands too close.
+[7724.000 --> 7736.000] In contrast, in many Latin American or Middle Eastern cultures, people tend to have a smaller personal space bubble and may stand closer to each other during conversations.
+[7736.000 --> 7744.000] Understanding these cultural differences in personal space and territorial boundaries is essential for reading people accurately.
+[7744.000 --> 7754.000] It is important to respect and adapt to the cultural norms of personal space to avoid making others feel uncomfortable or invading their personal boundaries.
+[7754.000 --> 7758.000] Cultural influences on expressing emotions
+[7758.000 --> 7764.000] The way people express and interpret emotions can also be influenced by culture.
+[7764.000 --> 7771.000] Some cultures encourage the open expression of emotions while others value emotional restraint and control.
+[7771.000 --> 7777.000] These cultural differences can impact how people display and interpret emotional cues.
+[7777.000 --> 7785.000] For example, in some Western cultures, it is common for individuals to express their emotions openly and directly.
+[7785.000 --> 7793.000] However, in many Asian cultures, individuals may be more reserved and display emotions in a more subtle and controlled manner.
+[7793.000 --> 7801.000] Understanding these cultural influences on emotional expression is crucial for accurately decoding people's emotions.
+[7801.000 --> 7814.000] It is important to consider the cultural context and individual differences when interpreting emotional cues, as what may be considered an appropriate display of emotion in one culture may be perceived differently in another.
+[7814.000 --> 7818.000] Stereotypes and cultural sensitivity
+[7818.000 --> 7828.000] While understanding cultural influences is important, it is essential to approach cultural differences with sensitivity and avoid falling into stereotypes.
+[7828.000 --> 7837.000] Each individual is unique and cultural norms should not be used as a blanket assumption about someone's behaviour or personality.
+[7837.000 --> 7850.000] It is important to recognise that cultural influences are just one aspect of a person's identity and should be considered alongside other factors such as individual personality, upbringing and personal experiences.
+[7850.000 --> 7859.000] By being culturally sensitive and open-minded, we can avoid making assumptions and judgments based solely on cultural differences.
+[7859.000 --> 7861.000] Conclusion
+[7862.000 --> 7867.000] Understanding cultural influences is a crucial aspect of decoding people accurately.
+[7867.000 --> 7880.000] Cultural norms, communication styles, personal space and emotional expression can all vary across different cultures, impacting how people express themselves and interpret social cues.
+[7881.000 --> 7892.000] By being aware of these cultural influences and approaching them with sensitivity, we can enhance our ability to read others and build effective cross-cultural relationships.
+[7892.000 --> 7895.000] Analyzing personal space and territory
+[7895.000 --> 7906.000] Personal space and territory are important aspects of non-verbal communication that can provide valuable insights into a person's mindset, comfort level and intentions.
+[7907.000 --> 7917.000] By understanding how individuals use and react to personal space, you can gain a deeper understanding of their emotions, attitudes and relationships.
+[7917.000 --> 7924.000] In this section, we will explore the significance of personal space and territory and how to analyse them effectively.
+[7924.000 --> 7927.000] The importance of personal space
+[7928.000 --> 7940.000] Personal space refers to the invisible boundary that individuals maintain around themselves, which varies depending on cultural norms, personal preferences and the nature of the relationship.
+[7940.000 --> 7948.000] It is an essential aspect of human interaction and plays a crucial role in establishing comfort, trust and intimacy.
+[7948.000 --> 7956.000] The size of personal space can vary from person to person, but generally it can be categorised into four zones.
+[7957.000 --> 7968.000] Intimate zone. This zone ranges from 0 to 18 inches and is reserved for close relationships, such as romantic partners, family members or close friends.
+[7968.000 --> 7972.000] In this zone, physical contact is expected and comfortable.
+[7972.000 --> 7981.000] Personal zone. The personal zone extends from 18 inches to 4 feet and is typically maintained in casual social interactions.
+[7982.000 --> 7988.000] It is the space where most conversations and interactions occur with acquaintances and colleagues.
+[7988.000 --> 7996.000] Social zone. The social zone spans from 4 to 12 feet and is maintained in formal or professional settings.
+[7996.000 --> 8004.000] It is the distance at which individuals feel comfortable engaging in public speaking, presentations or group discussions.
+[8005.000 --> 8016.000] Public zone. The public zone extends beyond 12 feet and is used in situations where individuals address large audiences or engage in public performances.
+[8016.000 --> 8019.000] Analyzing personal space
+[8019.000 --> 8027.000] Analyzing personal space involves observing how individuals react to the invasion or expansion of their personal space.
+[8027.000 --> 8032.000] Here are some key factors to consider when interpreting personal space.
+[8032.000 --> 8038.000] Comfort level. Pay attention to how individuals respond when their personal space is invaded.
+[8038.000 --> 8044.000] Some people may become visibly uncomfortable while others may not react at all.
+[8044.000 --> 8050.000] These reactions can provide insights into their level of comfort, trust and boundaries.
+[8050.000 --> 8057.000] Distance maintenance. Observe how individuals maintain their personal space during interactions.
+[8057.000 --> 8066.000] Some individuals may consistently maintain a larger personal space, indicating a need for more privacy or a preference for personal boundaries.
+[8066.000 --> 8072.000] Others may have a smaller personal space, suggesting a more open and approachable demeanor.
+[8072.000 --> 8083.000] Territorial behaviour. People often exhibit territorial behaviour by marking their personal space with objects or by using body language to assert dominance or ownership.
+[8084.000 --> 8094.000] Look for signs of territorial behaviour such as placing personal belongings to claim space or using expansive gestures to establish dominance.
+[8094.000 --> 8100.000] Cultural differences. Keep in mind that personal space norms can vary across cultures.
+[8100.000 --> 8107.000] Some cultures may have smaller personal space distances while others may have larger distances.
+[8107.000 --> 8114.000] It is essential to consider cultural context when analysing personal space to avoid misinterpretation.
+[8114.000 --> 8118.000] Interpreting personal space in different situations.
+[8118.000 --> 8125.000] The interpretation of personal space can vary depending on the context and the relationship between individuals.
+[8125.000 --> 8131.000] Here are some common situations where personal space can provide valuable insights.
+[8132.000 --> 8140.000] Conversations. During one-on-one conversations, observe how individuals position themselves in relation to each other.
+[8140.000 --> 8148.000] If one person consistently invades the other's personal space, it may indicate dominance or a lack of respect for boundaries.
+[8148.000 --> 8156.000] On the other hand, if both individuals maintain a comfortable distance, it suggests a balanced and respectful interaction.
+[8156.000 --> 8166.000] Crowded environments. In crowded environments such as public transportation or crowded events, personal space is often compromised.
+[8166.000 --> 8171.000] Observe how individuals react to the invasion of their personal space.
+[8171.000 --> 8178.000] Some may become visibly agitated or defensive while others may adapt and tolerate the close proximity.
+[8178.000 --> 8185.000] Workspaces. Pay attention to how individuals organise and personalise their workspaces.
+[8185.000 --> 8193.000] Some individuals may keep their workspace open and accessible, indicating a willingness to collaborate and engage with others.
+[8193.000 --> 8202.000] Others may create physical barriers or keep their workspace more private, suggesting a need for personal boundaries or concentration.
+[8202.000 --> 8210.000] Social gatherings. In social gatherings, personal space can provide insights into the dynamics of relationships.
+[8211.000 --> 8215.000] Observe how individuals position themselves in relation to others.
+[8215.000 --> 8225.000] Close proximity may indicate familiarity and comfort, while maintaining a larger personal space may suggest a more reserved or cautious demeanor.
+[8225.000 --> 8228.000] Adapting to personal space preferences.
+[8228.000 --> 8236.000] Understanding personal space preferences can help you adapt your own behaviour to make others feel more comfortable and respected.
+[8237.000 --> 8241.000] Here are some tips for adapting to personal space preferences.
+[8241.000 --> 8250.000] Respect boundaries. Pay attention to cues and signals that indicate when someone is uncomfortable with their personal space being invaded.
+[8250.000 --> 8257.000] Respect their boundaries and maintain an appropriate distance to ensure a comfortable interaction.
+[8257.000 --> 8266.000] Mirror behaviour. When interacting with others, mirror their personal space preferences to establish rapport and create a sense of comfort.
+[8266.000 --> 8275.000] If someone maintains a larger personal space, try to match their distance to avoid making them feel crowded or uncomfortable.
+[8275.000 --> 8284.000] Cultural sensitivity. When interacting with individuals from different cultures, be aware of cultural differences in personal space.
+[8284.000 --> 8293.000] Research and understand the cultural norms to avoid unintentionally invading someone's personal space or making them feel uncomfortable.
+[8293.000 --> 8299.000] Flexibility. Recognise that personal space preferences can vary from person to person.
+[8299.000 --> 8308.000] Be flexible and adaptable in your interactions, allowing individuals to set their own boundaries and adjusting your behaviour accordingly.
+[8308.000 --> 8317.000] By analysing personal space and territory, you can gain valuable insights into a person's comfort level, boundaries and relationships.
+[8317.000 --> 8327.000] Understanding and respecting personal space preferences can enhance your communication skills, build rapport and create more meaningful connections with others.
+[8327.000 --> 8331.000] Decoding personal habits and routines.
+[8332.000 --> 8341.000] Understanding a person's habits and routines can provide valuable insights into their personality, preferences and priorities.
+[8341.000 --> 8349.000] Our daily habits and routines are often deeply ingrained and can reveal a great deal about who we are as individuals.
+[8349.000 --> 8356.000] By observing and decoding these patterns, we can gain a better understanding of someone's behaviour and motivations.
+[8356.000 --> 8364.000] In this section, we will explore how to decode personal habits and routines to enhance our ability to read others.
+[8364.000 --> 8367.000] Morning and evening routines.
+[8367.000 --> 8373.000] One of the most revealing aspects of a person's habits is their morning and evening routines.
+[8373.000 --> 8382.000] These routines can provide valuable clues about a person's priorities, organisation skills and overall mindset.
+[8382.000 --> 8389.000] Pay attention to how someone starts and ends their day as it can shed light on their approach to life.
+[8389.000 --> 8398.000] For example, someone who wakes up early and follows a structured morning routine may be disciplined, goal-oriented and focused.
+[8398.000 --> 8407.000] On the other hand, someone who struggles to get out of bed and has a chaotic morning may be more spontaneous and less concerned with structure.
+[8407.000 --> 8418.000] Similarly, observing someone's evening routine can provide insights into their relaxation techniques, self-care practices and overall work-life balance.
+[8418.000 --> 8427.000] Someone who prioritises winding down before bed with activities like reading or meditation may value self-care and stress management.
+[8427.000 --> 8434.000] On the other hand, someone who consistently works late into the night may be highly driven and ambitious.
+[8435.000 --> 8443.000] Eating habits. Our eating habits can reveal a lot about our personality, lifestyle and even our emotional state.
+[8443.000 --> 8449.000] Pay attention to how someone approaches food, their eating speed and their food choices.
+[8449.000 --> 8458.000] For example, someone who eats slowly and savours each bite may be someone who values mindfulness and enjoys the present moment.
+[8458.000 --> 8465.000] On the other hand, someone who eats quickly and without much thought may be more focused on efficiency and productivity.
+[8465.000 --> 8471.000] Food choices can also provide insights into a person's preferences and values.
+[8471.000 --> 8478.000] Someone who consistently chooses healthy, nutritious options may prioritise their physical well-being.
+[8478.000 --> 8487.000] On the other hand, someone who frequently indulges in comfort foods may seek emotional comfort or have a more relaxed approach to their diet.
+[8488.000 --> 8491.000] Exercise and physical activity.
+[8491.000 --> 8501.000] Observing someone's exercise and physical activity habits can provide valuable information about their energy levels, discipline and overall health consciousness.
+[8501.000 --> 8509.000] Pay attention to how often someone exercises, the type of activities they engage in and their approach to fitness.
+[8510.000 --> 8521.000] For example, someone who exercises regularly and engages in a variety of activities may be highly motivated, disciplined and open to new experiences.
+[8521.000 --> 8529.000] On the other hand, someone who rarely exercises or sticks to a rigid routine may be more sedentary or resistant to change.
+[8529.000 --> 8538.000] Additionally, the intensity and duration of someone's workouts can provide insights into their level of commitment and determination.
+[8538.000 --> 8546.000] Someone who pushes themselves to their limits during workouts may have a competitive nature and a strong drive for success.
+[8546.000 --> 8554.000] On the other hand, someone who prefers low intensity activities may prioritise relaxation and stress reduction.
+[8554.000 --> 8556.000] Work habits.
+[8556.000 --> 8565.000] Our work habits can reveal a great deal about our work ethic, organisational skills and overall approach to professional life.
+[8565.000 --> 8572.000] Pay attention to how someone manages their time, their level of productivity and their approach to problem solving.
+[8572.000 --> 8582.000] For example, someone who consistently arrives early to work, stays organised and meets deadlines may be highly conscientious and detail oriented.
+[8582.000 --> 8594.000] On the other hand, someone who frequently procrastinates, struggles with time management or has a cluttered workspace may be more laid back or struggle with prioritisation.
+[8594.000 --> 8602.000] Additionally, observing how someone handles challenges and problem solving can provide insights into their resilience and adaptability.
+[8602.000 --> 8610.000] Someone who approaches problems with a calm and logical mindset may be highly analytical and solution oriented.
+[8610.000 --> 8618.000] On the other hand, someone who becomes easily overwhelmed or avoids challenges may be more risk-averse or struggle with decision-making.
+[8619.000 --> 8621.000] Leisure and hobbies.
+[8621.000 --> 8631.000] Our leisure activities and hobbies can provide valuable insights into our interests, passions and overall approach to life outside of work.
+[8631.000 --> 8639.000] Pay attention to how someone spends their free time, the activities they engage in and their level of enthusiasm.
+[8639.000 --> 8650.000] For example, someone who consistently pursues creative hobbies like painting or writing may have a strong need for self-expression and enjoy exploring their imagination.
+[8650.000 --> 8659.000] On the other hand, someone who engages in competitive sports or activities may have a strong drive for achievement and enjoy the thrill of competition.
+[8659.000 --> 8671.000] Additionally, observing how someone balances their leisure activities with other aspects of their life can provide insights into their ability to prioritise and maintain a healthy work-life balance.
+[8671.000 --> 8679.000] Someone who consistently makes time for their hobbies and leisure activities may prioritise self-care and personal fulfilment.
+[8680.000 --> 8690.000] On the other hand, someone who neglects their hobbies or consistently overworks themselves may struggle with boundaries or have a strong work-centric mindset.
+[8690.000 --> 8699.000] By decoding personal habits and routines, we can gain a deeper understanding of someone's personality, values and priorities.
+[8700.000 --> 8709.000] However, it is important to remember that these observations should be made in conjunction with other non-verbal and verbal cues to form a more accurate assessment.
+[8709.000 --> 8715.000] People are complex beings and no single aspect of their behaviour can fully define them.
+[8716.000 --> 8720.000] Chapter 6. Assessing Personality Traits
+[8720.000 --> 8724.000] Identifying Introversion and Extroversion
+[8724.000 --> 8733.000] Understanding the personality traits of introversion and extroversion can provide valuable insights into how individuals interact with the world around them.
+[8733.000 --> 8742.000] Introversion and extroversion are two fundamental dimensions of personality that describe how people gain energy and process information.
+[8742.000 --> 8751.000] By identifying these traits in others, you can better understand their preferences, communication styles and social behaviours.
+[8752.000 --> 8755.000] Introversion, the Quiet Observers
+[8755.000 --> 8763.000] Introversion is a personality trait characterised by a preference for solitude and a need for quiet and calm environments.
+[8763.000 --> 8770.000] Introverts tend to be more reserved and introspective, often preferring to spend time alone or in small groups.
+[8770.000 --> 8776.000] They gain energy from internal reflection and may find social interactions draining.
+[8777.000 --> 8782.000] When reading someone's introversion, there are several cues to look out for.
+[8782.000 --> 8790.000] Introverts often exhibit a preference for listening rather than speaking, taking their time to process information before responding.
+[8790.000 --> 8800.000] They may also display a more reserved body language, such as crossed arms or a closed posture, as they tend to be more cautious and guarded in their interactions.
+[8801.000 --> 8806.000] Introverts may also show a preference for written communication over verbal communication.
+[8806.000 --> 8817.000] They may be more comfortable expressing themselves through writing or prefer to communicate through email or text messages rather than face-to-face conversations.
+[8817.000 --> 8824.000] Additionally, introverts may seek out quieter environments and avoid large social gatherings or events.
+[8825.000 --> 8828.000] Extroversion, the Social Energisers.
+[8828.000 --> 8837.000] Extroversion, on the other hand, is a personality trait characterised by a preference for social interaction and external stimulation.
+[8837.000 --> 8843.000] Extroverts thrive in social settings and gain energy from being around others.
+[8843.000 --> 8848.000] They tend to be outgoing, talkative and enjoy being the centre of attention.
+[8849.000 --> 8854.000] When reading someone's extroversion, there are several cues to consider.
+[8854.000 --> 8861.000] Extroverts often display open and expansive body language, such as open arms and a relaxed posture.
+[8861.000 --> 8866.000] They may also engage in more frequent and animated gestures while speaking.
+[8866.000 --> 8873.000] Extroverts are typically more comfortable with small talk and enjoy initiating conversations with others.
+[8874.000 --> 8880.000] Extroverts may also exhibit a preference for verbal communication and thrive in group settings.
+[8880.000 --> 8888.000] They may enjoy brainstorming sessions, team meetings and social events where they can interact with a large number of people.
+[8888.000 --> 8897.000] Extroverts may also seek out external stimulation and may become restless or bored in quiet or solitary environments.
+[8898.000 --> 8901.000] Ambiversion, the balance of both.
+[8901.000 --> 8907.000] It's important to note that not everyone falls strictly into the categories of introversion or extroversion.
+[8907.000 --> 8913.000] Some individuals exhibit a balance of both traits, known as ambiversion.
+[8913.000 --> 8922.000] Ambiverts can adapt their behaviour to different situations and may display introverted or extroverted tendencies depending on the context.
+[8922.000 --> 8928.000] When reading someone's ambiversion, it can be helpful to observe their behaviour in various settings.
+[8928.000 --> 8937.000] Ambiverts may display a mix of introverted and extroverted cues depending on their comfort level and the specific situation.
+[8937.000 --> 8943.000] They may enjoy socialising in small groups but also appreciate quiet time alone.
+[8944.000 --> 8953.000] Ambiverts may be more adaptable and flexible in their communication styles, able to engage in both deep conversations and lighthearted banter.
+[8953.000 --> 8958.000] The importance of understanding introversion and extroversion.
+[8958.000 --> 8965.000] Identifying introversion and extroversion in others can be beneficial in various personal and professional contexts.
+[8966.000 --> 8974.000] Understanding someone's preference for introversion or extroversion can help you tailor your communication style to better connect with them.
+[8974.000 --> 8980.000] For introverts, providing them with space and time to process information can be crucial.
+[8980.000 --> 8990.000] Avoiding overwhelming them with excessive social interactions or high-pressure situations can help create a more comfortable environment for them to express themselves.
+[8991.000 --> 8997.000] Active listening and allowing them to contribute at their own pace can also foster better communication.
+[8997.000 --> 9004.000] For extroverts, providing opportunities for social interaction and external stimulation can be important.
+[9004.000 --> 9012.000] Engaging them in group activities or brainstorming sessions can help them thrive and contribute their ideas.
+[9013.000 --> 9021.000] Allowing them to express themselves verbally and providing a platform for them to share their thoughts and opinions can also enhance their engagement.
+[9021.000 --> 9030.000] By understanding and respecting the introversion and extroversion of others, you can create more harmonious and effective interactions.
+[9030.000 --> 9039.000] Remember that these traits exist on a spectrum and individuals may display a combination of both introverted and extroverted behaviours.
+[9039.000 --> 9047.000] Being mindful of these differences can lead to better communication, stronger relationships and a deeper understanding of others.
+[9047.000 --> 9051.000] Analyzing dominance and submissiveness
+[9051.000 --> 9058.000] Understanding the dynamics of dominance and submissiveness is crucial when it comes to decoding people and their behaviour.
+[9058.000 --> 9068.000] Dominance and submissiveness are personality traits that can greatly influence how individuals interact with others and navigate social situations.
+[9068.000 --> 9077.000] By analysing these traits, you can gain valuable insights into a person's communication style, decision-making process and overall demeanor.
+[9077.000 --> 9080.000] The nature of dominance
+[9080.000 --> 9088.000] Dominance is a personality trait characterized by assertiveness, confidence and a desire to control or influence others.
+[9088.000 --> 9097.000] Dominant individuals tend to take charge in social situations, express their opinions openly and assert their authority.
+[9097.000 --> 9106.000] They often display strong body language, such as standing tall, making direct eye contact and using expansive gestures.
+[9106.000 --> 9112.000] When analysing dominance, it is important to consider both verbal and non-verbal cues.
+[9112.000 --> 9119.000] Verbal cues may include speaking loudly, interrupting others and using assertive language.
+[9119.000 --> 9127.000] Non-verbal cues may include a firm handshake, a strong and steady gaze and a relaxed and open posture.
+[9127.000 --> 9133.000] Dominant individuals often strive for power and control, seeking to lead and influence others.
+[9133.000 --> 9138.000] They are typically confident in their abilities and may exhibit a competitive nature.
+[9138.000 --> 9146.000] They are comfortable taking risks and making decisions, often displaying a high level of self-assurance.
+[9146.000 --> 9149.000] Recognising submissiveness
+[9149.000 --> 9156.000] Submissiveness, on the other hand, is a personality trait characterized by a more passive and accommodating nature.
+[9156.000 --> 9164.000] Submissive individuals tend to be more reserved, compliant and willing to yield to others' opinions or desires.
+[9164.000 --> 9173.000] They may display more submissive body language, such as avoiding eye contact, crossing their arms or adopting a closed-off posture.
+[9173.000 --> 9179.000] When analysing submissiveness, it is important to consider both verbal and non-verbal cues.
+[9179.000 --> 9186.000] Verbal cues may include speaking softly, using tentative language and avoiding confrontation.
+[9186.000 --> 9195.000] Non-verbal cues may include a weak handshake, avoiding direct eye contact and displaying a tense or hunched posture.
+[9195.000 --> 9200.000] Submissive individuals often prioritize harmony and avoiding conflict.
+[9201.000 --> 9208.000] They may be more inclined to follow rather than lead and they may struggle with making decisions or asserting themselves.
+[9208.000 --> 9216.000] They may exhibit a more cautious and risk-averse approach to life, often seeking approval and validation from others.
+[9216.000 --> 9219.000] The interplay of dominance and submissiveness.
+[9219.000 --> 9228.000] In social interactions, the interplay between dominance and submissiveness can greatly impact the dynamics and outcomes.
+[9228.000 --> 9236.000] Dominant individuals may naturally gravitate towards leadership roles and may assert their opinions and desires more forcefully.
+[9236.000 --> 9246.000] Submissive individuals, on the other hand, may be more comfortable in supportive roles and may be more willing to accommodate the needs and preferences of others.
+[9246.000 --> 9252.000] It is important to note that dominance and submissiveness are not fixed traits but exist on a spectrum.
+[9253.000 --> 9261.000] Individuals may display varying degrees of dominance or submissiveness depending on the context and the people they are interacting with.
+[9261.000 --> 9269.000] Some individuals may exhibit dominant behaviour in certain situations while displaying more submissive behaviour in others.
+[9269.000 --> 9273.000] Analyzing dominance-submissiveness imbalances.
+[9273.000 --> 9280.000] When analysing dominance and submissiveness, it is important to be aware of potential imbalances in relationships.
+[9281.000 --> 9290.000] Power imbalances can occur when one person consistently dominates the interaction, leaving the other person feeling powerless or unheard.
+[9290.000 --> 9296.000] This can lead to strained relationships, lack of trust and communication breakdowns.
+[9296.000 --> 9306.000] In some cases, individuals may adopt a submissive role to avoid conflict or maintain harmony, even if it goes against their own desires or needs.
+[9307.000 --> 9312.000] This can result in feelings of resentment, frustration and a lack of fulfilment.
+[9312.000 --> 9319.000] It is important to recognise and address these imbalances to ensure healthy and balanced relationships.
+[9319.000 --> 9323.000] Strategies for balancing dominance and submissiveness.
+[9323.000 --> 9331.000] Balancing dominance and submissiveness in relationships is essential for effective communication and healthy dynamics.
+[9331.000 --> 9334.000] Here are some strategies to consider.
+[9334.000 --> 9342.000] Self-awareness. Recognise your own dominant or submissive tendencies and how they may impact your interactions with others.
+[9342.000 --> 9348.000] Reflect on your communication style and be open to adjusting it when necessary.
+[9348.000 --> 9356.000] Active listening. Practice active listening by giving others the opportunity to express their thoughts and opinions without interruption.
+[9357.000 --> 9361.000] Show genuine interest and empathy towards their perspective.
+[9361.000 --> 9372.000] Assertiveness training. If you tend to be more submissive, consider assertiveness training to develop the skills to express your needs and opinions confidently and respectfully.
+[9372.000 --> 9378.000] Collaboration and compromise. Strive for collaboration and compromise in relationships.
+[9378.000 --> 9384.000] Seek win-win solutions that take into account the needs and desires of all parties involved.
+[9384.000 --> 9392.000] Equal participation.
Encourage equal participation in conversations and decision-making processes.
+[9392.000 --> 9397.000] Create an environment where everyone's voice is valued and respected.
+[9397.000 --> 9406.000] Open communication. Foster open and honest communication by creating a safe space for expressing thoughts, feelings and concerns.
+[9406.000 --> 9412.000] Encourage feedback and address any power imbalances that may arise.
+[9412.000 --> 9420.000] By analysing dominance and submissiveness, you can gain a deeper understanding of individuals' communication styles and behaviours.
+[9420.000 --> 9430.000] This knowledge can help you navigate social interactions more effectively, build stronger relationships and enhance your overall people reading skills.
+[9430.000 --> 9434.000] Recognising openness and conscientiousness.
+[9435.000 --> 9443.000] Understanding a person's personality traits can provide valuable insights into their behaviour, motivations and preferences.
+[9443.000 --> 9448.000] Two important traits to consider are openness and conscientiousness.
+[9448.000 --> 9456.000] Openness refers to a person's willingness to experience new things, embrace new ideas and be open-minded.
+[9456.000 --> 9465.000] Conscientiousness, on the other hand, relates to a person's level of organisation, responsibility and attention to detail.
+[9465.000 --> 9474.000] By recognising these traits in others, you can better understand how they approach tasks, make decisions and interact with the world around them.
+[9474.000 --> 9476.000] Openness.
+[9476.000 --> 9485.000] Openness is a personality trait that reflects a person's receptiveness to new experiences, ideas and perspectives.
+[9485.000 --> 9491.000] Individuals who score high in openness tend to be curious, imaginative and creative.
+[9491.000 --> 9498.000] They are open to new possibilities and enjoy exploring different ways of thinking and doing things.
+[9498.000 --> 9507.000] On the other hand, individuals who score low in openness may be more traditional, resistant to change and prefer routine and familiarity.
+[9507.000 --> 9510.000] Signs of high openness.
+[9510.000 --> 9518.000] People who are high in openness often exhibit certain behaviours and characteristics that can help you recognise this trait in them.
+[9518.000 --> 9526.000] Curiosity and intellectual interests. Open individuals are naturally curious and have a thirst for knowledge.
+[9526.000 --> 9533.000] They enjoy learning about various subjects and may engage in intellectual discussions and debates.
+[9533.000 --> 9544.000] Imagination and creativity. Open individuals have a rich imagination and often express their creativity through various outlets such as art, writing or music.
+[9544.000 --> 9549.000] They may have a unique and unconventional approach to problem-solving.
+[9549.000 --> 9556.000] Adventurousness. Open individuals are more likely to seek out new experiences and take risks.
+[9556.000 --> 9564.000] They may enjoy travelling to unfamiliar places, trying new cuisines or participating in adventurous activities.
+[9564.000 --> 9571.000] Tolerance for ambiguity. Open individuals are comfortable with uncertainty and ambiguity.
+[9571.000 --> 9579.000] They can handle situations where there are no clear-cut answers and are open to multiple interpretations and perspectives.
+[9579.000 --> 9582.000] Signs of low openness.
+[9582.000 --> 9589.000] Individuals who score low in openness may exhibit the following behaviours and characteristics.
+[9589.000 --> 9595.000] Resistance to change. People low in openness tend to prefer routine and familiarity. +[9595.000 --> 9602.000] They may be resistant to change and find it challenging to adapt to new situations or ideas. +[9602.000 --> 9612.000] Traditional and conventional thinking. Individuals low in openness may have a more traditional mindset and be less receptive to new or unconventional ideas. +[9612.000 --> 9616.000] They may prefer sticking to established norms and traditions. +[9616.000 --> 9627.000] Narrow interests. People low in openness may have limited interests and may not actively seek out new experiences or knowledge outside of their comfort zone. +[9628.000 --> 9634.000] Preference for structure. Individuals low in openness may prefer structure and predictability. +[9634.000 --> 9639.000] They may feel more comfortable when things are well-defined and organised. +[9639.000 --> 9642.000] Conscientiousness. +[9642.000 --> 9651.000] Conscientiousness is a personality trait that reflects a person's level of organisation, responsibility and attention to detail. +[9651.000 --> 9657.000] Individuals who score high in conscientiousness tend to be diligent, reliable and organised. +[9657.000 --> 9663.000] They have a strong work ethic and strive for excellence in their endeavours. +[9663.000 --> 9672.000] On the other hand, individuals who score low in conscientiousness may be more spontaneous, flexible and less focused on structure and planning. +[9672.000 --> 9675.000] Signs of high conscientiousness. +[9675.000 --> 9684.000] People who are high in conscientiousness often display certain behaviours and characteristics that can help you identify this trait in them. +[9684.000 --> 9691.000] Organisational skills. Conscientious individuals are highly organised and value structure and order. +[9691.000 --> 9698.000] They tend to keep their physical and digital spaces tidy and have well-planned schedules and routines. +[9698.000 --> 9706.000] Reliability and dependability. Individuals high in conscientiousness are known for their reliability and dependability. +[9706.000 --> 9712.000] They fulfil their commitments, meet deadlines and take their responsibilities seriously. +[9712.000 --> 9720.000] Attention to detail. Conscientious individuals pay close attention to detail and strive for accuracy in their work. +[9720.000 --> 9726.000] They are meticulous and thorough in their approach, ensuring that everything is done correctly. +[9726.000 --> 9733.000] Goal-oriented. People high in conscientiousness are driven by goals and are motivated to achieve them. +[9733.000 --> 9741.000] They set clear objectives, create action plans and work diligently towards their desired outcomes. +[9741.000 --> 9744.000] Signs of low conscientiousness. +[9744.000 --> 9751.000] Individuals who score low in conscientiousness may exhibit the following behaviours and characteristics. +[9752.000 --> 9761.000] Spontaneity. People low in conscientiousness may be more spontaneous and flexible in their approach to tasks and responsibilities. +[9761.000 --> 9765.000] They may be less concerned with strict schedules and deadlines. +[9765.000 --> 9776.000] Lack of organisation. Individuals low in conscientiousness may struggle with organisation and may have a more relaxed attitude towards structure and planning. +[9776.000 --> 9786.000] Procrastination. People low in conscientiousness may be more prone to procrastination and may struggle with initiating and completing tasks in a timely manner.
+[9786.000 --> 9798.000] Less attention to detail. Individuals low in conscientiousness may have a more relaxed attitude towards details and may not prioritise accuracy and precision in their work. +[9798.000 --> 9806.000] Recognising openness and conscientiousness in others can help you understand their approach to work, decision making and problem solving. +[9806.000 --> 9816.000] By understanding these traits, you can adapt your communication and interaction style to better connect with individuals and build effective relationships. +[9816.000 --> 9825.000] Remember that personality traits exist on a spectrum and individuals may exhibit a combination of different traits to varying degrees. +[9826.000 --> 9829.000] Deciphering agreeableness and neuroticism. +[9829.000 --> 9835.000] Understanding a person's personality traits is crucial when it comes to decoding people. +[9835.000 --> 9842.000] In this section, we will explore two important dimensions of personality, agreeableness and neuroticism. +[9842.000 --> 9850.000] By deciphering these traits, you will gain valuable insights into how individuals interact with others and handle emotions. +[9851.000 --> 9853.000] Decoding agreeableness. +[9853.000 --> 9862.000] Agreeableness is a personality trait that reflects an individual's tendency to be cooperative, compassionate and considerate towards others. +[9862.000 --> 9869.000] People who score high in agreeableness are often warm, empathetic and willing to help others. +[9869.000 --> 9878.000] On the other hand, individuals with lower agreeableness may be more competitive, skeptical and less concerned about the needs of others. +[9878.000 --> 9881.000] Signs of higher agreeableness. +[9881.000 --> 9888.000] When interacting with someone who exhibits higher agreeableness, there are several cues to look out for. +[9888.000 --> 9894.000] Friendly and approachable. Agreeable individuals tend to have a warm and friendly demeanor. +[9894.000 --> 9899.000] They are often open to new experiences and enjoy socializing with others. +[9899.000 --> 9907.000] Empathetic and compassionate. People high in agreeableness are often empathetic and show genuine concern for the well-being of others. +[9907.000 --> 9912.000] They are more likely to offer support and help when needed. +[9912.000 --> 9918.000] Cooperative and team-oriented. Agreeable individuals thrive in collaborative environments. +[9918.000 --> 9925.000] They are willing to work together with others, value teamwork and strive for harmony within groups. +[9925.000 --> 9933.000] Avoidance of conflict. Those high in agreeableness tend to avoid confrontations and prefer peaceful resolutions. +[9933.000 --> 9939.000] They may go to great lengths to maintain harmony and avoid causing discomfort to others. +[9939.000 --> 9942.000] Signs of lower agreeableness. +[9942.000 --> 9948.000] Conversely, individuals with lower agreeableness may exhibit the following behaviours. +[9948.000 --> 9956.000] Competitive and assertive. People low in agreeableness may be more assertive and competitive in their interactions. +[9956.000 --> 9961.000] They may prioritize their own needs and goals over the needs of others. +[9961.000 --> 9968.000] Skeptical and critical. Those with lower agreeableness may be more skeptical of others' intentions and motivations. +[9968.000 --> 9974.000] They may question the trustworthiness of others and be more critical in their judgments.
+[9974.000 --> 9981.000] Less concerned about others. Individuals low in agreeableness may be less concerned about the well-being of others. +[9981.000 --> 9987.000] They may prioritize their own interests and be less willing to offer help or support. +[9988.000 --> 9997.000] Tendency for conflict. People with lower agreeableness may be more comfortable with conflict and may engage in arguments or disagreements more readily. +[9997.000 --> 10000.000] Deciphering neuroticism. +[10000.000 --> 10007.000] Neuroticism is a personality trait that reflects an individual's emotional stability or instability. +[10007.000 --> 10018.000] Those high in neuroticism tend to experience negative emotions more frequently and intensely, while those low in neuroticism are generally more emotionally stable and resilient. +[10018.000 --> 10021.000] Signs of high neuroticism. +[10021.000 --> 10027.000] When trying to decipher high neuroticism in an individual, watch out for the following indicators. +[10027.000 --> 10034.000] Emotional sensitivity. People high in neuroticism may be more sensitive to emotional stimuli. +[10034.000 --> 10042.000] They may react strongly to stressors and experience intense emotions such as anxiety, sadness or anger. +[10042.000 --> 10052.000] Tendency for worry and rumination. Individuals with high neuroticism may have a tendency to worry excessively and ruminate on negative thoughts. +[10052.000 --> 10058.000] They may find it challenging to let go of past events or move on from negative experiences. +[10059.000 --> 10065.000] Mood swings. Those high in neuroticism may experience frequent and unpredictable mood swings. +[10065.000 --> 10072.000] Their emotions may fluctuate rapidly, leading to sudden shifts in behaviour and reactions. +[10072.000 --> 10080.000] Heightened self-consciousness. People with high neuroticism may be more self-conscious and concerned about how others perceive them. +[10080.000 --> 10085.000] They may worry about making mistakes or being judged by others. +[10085.000 --> 10088.000] Signs of low neuroticism. +[10088.000 --> 10094.000] Conversely, individuals with low neuroticism may exhibit the following characteristics. +[10094.000 --> 10101.000] Emotional stability. People low in neuroticism tend to be emotionally stable and resilient. +[10101.000 --> 10108.000] They are less likely to be overwhelmed by negative emotions and can bounce back quickly from setbacks. +[10108.000 --> 10113.000] Positive outlook. Those low in neuroticism often have a positive outlook on life. +[10114.000 --> 10121.000] They may be more optimistic, hopeful and able to maintain a sense of calm even in challenging situations. +[10121.000 --> 10129.000] Ability to handle stress. Individuals low in neuroticism are better equipped to handle stress and cope with adversity. +[10129.000 --> 10135.000] They may have effective strategies for managing their emotions and maintaining a sense of balance. +[10136.000 --> 10144.000] Less self-conscious. People with low neuroticism are generally less self-conscious and less concerned about how others perceive them. +[10144.000 --> 10150.000] They may feel more comfortable being themselves without worrying excessively about judgment. +[10150.000 --> 10160.000] Understanding the dimensions of agreeableness and neuroticism can provide valuable insights into how individuals interact with others and handle emotions.
+[10160.000 --> 10167.000] By deciphering these traits, you can enhance your ability to read people and navigate social interactions more effectively. +[10167.000 --> 10174.000] Remember, personality traits are not fixed and can vary across different situations and contexts. +[10174.000 --> 10182.000] Therefore, it is important to consider multiple cues and factors when decoding people's personalities. +[10182.000 --> 10186.000] Chapter 7. Reading relationships. +[10187.000 --> 10190.000] Analysing interactions and dynamics. +[10190.000 --> 10198.000] Understanding the dynamics of interpersonal interactions is crucial when it comes to decoding people and reading them like a book. +[10198.000 --> 10207.000] Every interaction between individuals is a complex dance of verbal and non-verbal cues, power dynamics and relationship dynamics. +[10208.000 --> 10217.000] In this section, we will explore the various aspects of analysing interactions and dynamics to help you gain deeper insights into the people you encounter. +[10217.000 --> 10221.000] Verbal and non-verbal synchronisation. +[10221.000 --> 10228.000] One of the key elements to analysing any interaction is the level of synchronisation between verbal and non-verbal cues. +[10228.000 --> 10235.000] When a person's words align with their body language, it indicates authenticity and congruence. +[10235.000 --> 10243.000] On the other hand, inconsistencies between verbal and non-verbal signals may suggest hidden intentions or discomfort. +[10243.000 --> 10250.000] Pay attention to the tone of voice, facial expressions and body language of the person you are observing. +[10250.000 --> 10253.000] Are they smiling while talking about something sad? +[10253.000 --> 10257.000] Do their gestures match the emotions they are expressing? +[10257.000 --> 10264.000] These subtle cues can provide valuable insights into a person's true feelings and intentions. +[10264.000 --> 10266.000] Power dynamics. +[10266.000 --> 10271.000] Power dynamics play a significant role in any relationship or interaction. +[10271.000 --> 10279.000] Analyzing power imbalances can help you understand the underlying dynamics and motivations of individuals involved. +[10279.000 --> 10287.000] Power can manifest in various ways such as through social status, authority or control over resources. +[10287.000 --> 10293.000] Observe how individuals assert their power or submit to others in a given interaction. +[10293.000 --> 10300.000] Look for signs of dominance such as assertive body language, interrupting others or speaking loudly. +[10300.000 --> 10309.000] Conversely, signs of submissiveness may include avoiding eye contact, speaking softly or displaying closed body language. +[10309.000 --> 10319.000] Understanding power dynamics can help you navigate social situations more effectively and identify potential conflicts or power struggles. +[10320.000 --> 10322.000] Relationship cues. +[10322.000 --> 10328.000] Interactions are heavily influenced by the nature of the relationship between individuals. +[10328.000 --> 10337.000] Whether it's a romantic relationship, a friendship or a professional setting, the dynamics and expectations vary significantly. +[10337.000 --> 10343.000] Pay attention to the level of familiarity, comfort and trust between individuals. +[10343.000 --> 10348.000] Are they maintaining eye contact and engaging in open body language? +[10348.000 --> 10352.000] Do they use intimate gestures like touching or leaning in?
+[10352.000 --> 10356.000] These cues can indicate a close relationship. +[10356.000 --> 10367.000] Conversely, if individuals maintain more distance, avoid physical contact or display guarded body language, it may suggest a more formal or distant relationship. +[10367.000 --> 10371.000] Nonverbal signals in relationships. +[10371.000 --> 10380.000] Nonverbal signals play a crucial role in relationships, often conveying emotions and intentions more accurately than words alone. +[10380.000 --> 10387.000] Analyzing these signals can provide valuable insights into the dynamics and health of a relationship. +[10387.000 --> 10391.000] Observe how individuals interact with each other. +[10391.000 --> 10395.000] Are they mirroring each other's body language and gestures? +[10395.000 --> 10398.000] Mirroring is a sign of rapport and connection. +[10398.000 --> 10410.000] Conversely, if individuals display closed-off body language, avoid eye contact or exhibit defensive gestures, it may indicate tension or conflict within the relationship. +[10410.000 --> 10415.000] Pay attention to the frequency and quality of touch between individuals. +[10415.000 --> 10419.000] Touch can range from casual and friendly to intimate and affectionate. +[10419.000 --> 10426.000] The absence of touch or a lack of physical proximity may suggest a more distant or strained relationship. +[10427.000 --> 10430.000] Emotional dynamics. +[10430.000 --> 10434.000] Emotions play a significant role in interpersonal interactions. +[10434.000 --> 10441.000] Analyzing emotional dynamics can help you understand the underlying motivations and reactions of individuals. +[10441.000 --> 10446.000] Observe the emotional displays of individuals during an interaction. +[10446.000 --> 10452.000] Are they expressing genuine emotions or are they suppressing their true feelings? +[10452.000 --> 10459.000] Look for micro-expressions, subtle changes in facial expressions that reveal true emotions. +[10459.000 --> 10468.000] Pay attention to emotional leakage, which occurs when individuals unintentionally display their true emotions through non-verbal cues. +[10468.000 --> 10474.000] These cues can include changes in voice tone, facial expressions or body language. +[10474.000 --> 10482.000] Understanding the emotional dynamics in an interaction can help you respond appropriately and build stronger connections with others. +[10482.000 --> 10485.000] Cultural influences. +[10485.000 --> 10491.000] Cultural influences shape the way individuals communicate and interact with each other. +[10491.000 --> 10500.000] Analyzing these influences can provide valuable insights into the behaviour and expectations of individuals from different cultural backgrounds. +[10500.000 --> 10507.000] Be aware of cultural norms regarding personal space, eye contact, touch and gestures. +[10507.000 --> 10514.000] These norms can vary significantly across cultures and may impact the dynamics of an interaction. +[10514.000 --> 10523.000] Avoid making assumptions based on your own cultural background and be open to learning about and respecting the cultural differences of others. +[10523.000 --> 10530.000] By analysing interactions and dynamics, you can gain a deeper understanding of the people you encounter. +[10530.000 --> 10540.000] Paying attention to verbal and non-verbal synchronisation, power dynamics, relationship cues, non-verbal signals, emotional dynamics
+[10540.000 --> 10548.000] and cultural influences will help you read people like a book and navigate social situations with greater insight and empathy. +[10548.000 --> 10551.000] Detecting power imbalances. +[10551.000 --> 10556.000] Power imbalances are a fundamental aspect of human relationships. +[10556.000 --> 10565.000] Whether in personal or professional settings, power dynamics play a significant role in how individuals interact with one another. +[10565.000 --> 10575.000] Being able to detect power imbalances can provide valuable insights into the dynamics of a relationship and help you navigate social situations more effectively. +[10576.000 --> 10579.000] Understanding power dynamics. +[10579.000 --> 10584.000] Power dynamics refer to the distribution of power and control within a relationship. +[10584.000 --> 10593.000] Power can manifest in various forms such as physical strength, social status, wealth, knowledge or authority. +[10593.000 --> 10604.000] Imbalances in power can significantly impact the dynamics between individuals, influencing how they communicate, make decisions and assert their needs and desires. +[10605.000 --> 10612.000] In any relationship, there is usually a power dynamic at play, whether it is explicit or subtle. +[10612.000 --> 10621.000] Power imbalances can be asymmetrical, with one person holding more power than the other, or they can be symmetrical, with power being shared equally. +[10621.000 --> 10628.000] Understanding power dynamics is crucial for interpreting non-verbal cues and accurately reading others. +[10628.000 --> 10632.000] Non-verbal indicators of power imbalances. +[10632.000 --> 10638.000] Non-verbal cues can provide valuable insights into power dynamics within a relationship. +[10638.000 --> 10647.000] By observing body language, facial expressions and other non-verbal signals, you can detect signs of power imbalances. +[10647.000 --> 10651.000] Here are some non-verbal indicators to look out for. +[10651.000 --> 10654.000] Dominant body language. +[10654.000 --> 10658.000] Individuals with more power often display dominant body language. +[10659.000 --> 10664.000] They may stand tall, take up more space and use expansive gestures. +[10664.000 --> 10673.000] They may also exhibit confident and assertive postures such as leaning forward, making direct eye contact and using firm handshakes. +[10673.000 --> 10684.000] On the other hand, individuals with less power may display submissive body language such as slouching, avoiding eye contact and using closed-off postures. +[10684.000 --> 10687.000] Vocal tone and volume. +[10687.000 --> 10692.000] Power imbalances can also be reflected in vocal cues. +[10692.000 --> 10697.000] Those with more power tend to have a louder and more assertive vocal tone. +[10697.000 --> 10703.000] They may speak with confidence and authority using clear and concise language. +[10703.000 --> 10709.000] Conversely, individuals with less power may have a softer and more hesitant vocal tone. +[10709.000 --> 10715.000] They may speak in a more submissive manner using qualifiers and hesitations. +[10715.000 --> 10718.000] Interruptions and speaking time. +[10718.000 --> 10725.000] Observing how individuals interact during conversations can provide insights into power dynamics. +[10725.000 --> 10732.000] Those with more power often interrupt and dominate conversations, speaking for longer periods. +[10732.000 --> 10736.000] They may also dismiss or ignore the opinions of others.
+[10736.000 --> 10743.000] Conversely, individuals with less power may be interrupted more frequently and have less speaking time. +[10743.000 --> 10748.000] They may also exhibit more deference and agreement with the opinions of those in power. +[10748.000 --> 10752.000] Personal space and proximity. +[10752.000 --> 10758.000] Power imbalances can also be reflected in the way individuals navigate personal space. +[10758.000 --> 10765.000] Those with more power may invade the personal space of others, standing closer and disregarding boundaries. +[10765.000 --> 10769.000] They may also use physical touch to assert dominance. +[10769.000 --> 10779.000] Conversely, individuals with less power may maintain a greater distance and exhibit more submissive behaviour, respecting personal boundaries. +[10779.000 --> 10782.000] Verbal indicators of power imbalances. +[10782.000 --> 10790.000] In addition to non-verbal cues, verbal indicators can also reveal power imbalances within a relationship. +[10790.000 --> 10798.000] Paying attention to the language used, the tone of voice and the way individuals communicate can provide valuable insights. +[10798.000 --> 10801.000] Here are some verbal indicators to consider. +[10801.000 --> 10804.000] Use of directives and commands. +[10804.000 --> 10810.000] Individuals with more power often use directives and commands to assert their authority. +[10810.000 --> 10816.000] They may give orders, make demands, or use language that implies control. +[10816.000 --> 10826.000] Conversely, individuals with less power may use more polite and deferential language, seeking permission or using indirect requests. +[10827.000 --> 10830.000] Interruptions and dominance in conversation. +[10830.000 --> 10836.000] Power imbalances can also be reflected in the way individuals engage in conversation. +[10836.000 --> 10844.000] Those with more power may interrupt and dominate conversations, steering the discussion towards their own agenda. +[10844.000 --> 10848.000] They may also dismiss or invalidate the opinions of others. +[10848.000 --> 10855.000] Conversely, individuals with less power may be interrupted more frequently and have their contributions devalued. +[10856.000 --> 10859.000] Use of persuasive techniques. +[10859.000 --> 10865.000] Individuals with more power often employ persuasive techniques to influence others. +[10865.000 --> 10872.000] They may use rhetoric, logical arguments, or emotional appeals to sway opinions and gain compliance. +[10872.000 --> 10881.000] Conversely, individuals with less power may use more passive and submissive language, seeking agreement and avoiding confrontation. +[10881.000 --> 10884.000] Contextual factors. +[10884.000 --> 10890.000] It is essential to consider contextual factors when detecting power imbalances. +[10890.000 --> 10898.000] The dynamics of power can vary depending on the specific situation, cultural norms, and the individuals involved. +[10898.000 --> 10907.000] Factors such as social status, gender, age, and professional hierarchy can all influence power dynamics within a relationship. +[10907.000 --> 10914.000] It is crucial to be mindful of these contextual factors when interpreting non-verbal and verbal cues. +[10914.000 --> 10921.000] Understanding power imbalances and being able to detect them can help you navigate relationships more effectively.
+[10921.000 --> 10931.000] By recognising the dynamics at play, you can adjust your communication style, assert your needs, and navigate power imbalances with greater awareness. +[10931.000 --> 10938.000] Developing these skills will enhance your ability to read others and build more meaningful and balanced relationships. +[10938.000 --> 10942.000] Understanding relationship cues. +[10942.000 --> 10951.000] In order to truly understand and decode people, it is essential to pay attention to the relationship cues that are present in their interactions. +[10951.000 --> 10965.000] Relationships play a significant role in shaping our behaviour and communication patterns, and by observing these cues, we can gain valuable insights into the dynamics of the relationships people have with others. +[10965.000 --> 10973.000] This section will explore the various relationship cues that can help us better understand the individuals we interact with. +[10973.000 --> 10976.000] Verbal and non-verbal synchrony. +[10976.000 --> 10984.000] One important relationship cue to observe is the level of synchrony between individuals in their verbal and non-verbal communication. +[10984.000 --> 10991.000] Synchrony refers to the degree to which two or more people are in harmony or alignment with each other. +[10991.000 --> 10999.000] When individuals are in a close and positive relationship, they tend to exhibit high levels of synchrony in their communication. +[10999.000 --> 11007.000] This can be observed through mirroring of body language, similar speech patterns, and even matching vocal tone and pitch. +[11007.000 --> 11018.000] On the other hand, individuals who are not in sync may display conflicting non-verbal cues, interruptions in speech, or a lack of shared emotional expressions. +[11018.000 --> 11021.000] Proximity and personal space. +[11021.000 --> 11028.000] Another important relationship cue is the proximity and personal space individuals maintain with each other. +[11028.000 --> 11034.000] The distance we keep from others can reveal a lot about the nature of our relationship with them. +[11034.000 --> 11046.000] In intimate relationships, such as close friendships or romantic partnerships, individuals tend to maintain a smaller personal space and may engage in frequent physical contact. +[11046.000 --> 11055.000] In more formal or distant relationships, the personal space is typically larger and individuals may maintain a greater physical distance. +[11056.000 --> 11066.000] By observing the proximity and personal space between individuals, we can gain insights into the level of comfort, trust, and intimacy in their relationship. +[11066.000 --> 11069.000] Body language and touch. +[11069.000 --> 11077.000] Body language and touch are powerful relationship cues that can provide valuable information about the nature of a relationship. +[11077.000 --> 11087.000] The way individuals interact physically, such as hugging, holding hands, or touching each other's arms, can indicate a close and affectionate relationship. +[11087.000 --> 11096.000] Conversely, a lack of physical contact or the presence of rigid and distant body language may suggest a more formal or distant relationship. +[11096.000 --> 11109.000] It is important to note that cultural norms and personal boundaries can influence the interpretation of body language and touch, so it is crucial to consider these factors when analysing relationship cues. +[11109.000 --> 11112.000] Emotional expressions.
+[11112.000 --> 11119.000] Emotional expressions are key relationship cues that can reveal the depth and quality of a relationship. +[11119.000 --> 11131.000] When individuals are in a positive and supportive relationship, they are more likely to display genuine and positive emotional expressions, such as smiles, laughter, and expressions of joy. +[11131.000 --> 11141.000] Conversely, negative emotional expressions, such as frowns, anger, or sadness, may indicate tension or conflict within the relationship. +[11141.000 --> 11151.000] By observing the emotional expressions of individuals in their interactions, we can gain insights into the overall emotional climate of their relationship. +[11151.000 --> 11154.000] Communication patterns. +[11154.000 --> 11161.000] The communication patterns between individuals can provide valuable clues about the dynamics of their relationship. +[11161.000 --> 11170.000] In healthy relationships, individuals tend to engage in open and respectful communication, where both parties feel heard and understood. +[11170.000 --> 11176.000] They may take turns speaking, actively listen to each other and show empathy and support. +[11176.000 --> 11189.000] On the other hand, in dysfunctional or toxic relationships, communication patterns may be characterized by frequent interruptions, defensiveness, criticism, or a lack of active listening. +[11189.000 --> 11199.000] By analysing the communication patterns between individuals, we can gain insights into the level of trust, respect, and mutual understanding in their relationship. +[11200.000 --> 11202.000] Power dynamics. +[11202.000 --> 11210.000] Power dynamics are an important aspect of relationships and can greatly influence the way individuals interact with each other. +[11210.000 --> 11223.000] Power imbalances can be observed through verbal and non-verbal cues, such as dominant body language, interrupting or talking over others, or using manipulative tactics to control the conversation. +[11224.000 --> 11235.000] Individuals in positions of power may display more confident and assertive behaviour, while those in more subordinate positions may exhibit more submissive or deferential behaviour. +[11235.000 --> 11244.000] By understanding the power dynamics at play in a relationship, we can better interpret the behaviours and communication patterns of individuals involved. +[11244.000 --> 11247.000] Trust and intimacy. +[11247.000 --> 11256.000] Trust and intimacy are fundamental components of healthy relationships and their presence or absence can be observed through various cues. +[11256.000 --> 11265.000] In trusting relationships, individuals tend to display open body language, maintain eye contact, and engage in active listening. +[11265.000 --> 11271.000] They may also share personal information, thoughts, and feelings with each other. +[11271.000 --> 11282.000] Conversely, in relationships where trust is lacking, individuals may exhibit closed-off body language, avoid eye contact, and display defensive or guarded behaviour. +[11282.000 --> 11292.000] By observing the level of trust and intimacy in a relationship, we can gain insights into the overall health and strength of the connection between individuals. +[11292.000 --> 11301.000] Understanding relationship cues is crucial for decoding people and gaining a deeper understanding of their behaviour and communication patterns.
+[11301.000 --> 11320.000] By paying attention to verbal and non-verbal synchrony, proximity and personal space, body language and touch, emotional expressions, communication patterns, power dynamics, and trust and intimacy, we can develop a more comprehensive understanding of the relationships individuals have with others. +[11320.000 --> 11328.000] This knowledge can help us navigate our own relationships more effectively and build stronger connections with those around us. +[11329.000 --> 11344.000] Interpreting non-verbal signals in relationships. In any relationship, whether it be romantic, familial, or professional, non-verbal signals play a crucial role in understanding the dynamics and emotions between individuals. +[11344.000 --> 11354.000] These signals can provide valuable insights into a person's thoughts, feelings, and intentions, allowing you to navigate the relationship more effectively. +[11355.000 --> 11363.000] In this section, we will explore the various non-verbal signals that can be observed in relationships and how to interpret them. +[11363.000 --> 11373.000] Body language. Body language is a powerful form of non-verbal communication that can reveal a person's true feelings and attitudes. +[11373.000 --> 11382.000] In relationships, paying attention to body language can help you understand the level of comfort, trust, and engagement between individuals. +[11382.000 --> 11393.000] For example, crossed arms and a tense posture may indicate defensiveness or discomfort, while open and relaxed body language suggests a sense of ease and openness. +[11393.000 --> 11400.000] Gestures and facial expressions also play a significant role in conveying emotions and intentions. +[11400.000 --> 11408.000] A smile, for instance, can indicate happiness or friendliness, while a furrowed brow may signal confusion or concern. +[11409.000 --> 11415.000] It is important to consider the context and cluster of non-verbal signals to accurately interpret their meaning. +[11415.000 --> 11424.000] For instance, a person who is smiling but has crossed arms and avoids eye contact may be masking their true emotions. +[11424.000 --> 11433.000] Eye contact. Eye contact is a powerful non-verbal signal that can convey a range of emotions and intentions. +[11433.000 --> 11439.000] In relationships, eye contact can indicate interest, attentiveness and sincerity. +[11439.000 --> 11447.000] When someone maintains steady eye contact, it suggests that they are actively listening and engaged in the conversation. +[11447.000 --> 11454.000] On the other hand, avoiding eye contact may indicate discomfort, dishonesty, or disinterest. +[11454.000 --> 11460.000] It is essential to consider cultural and individual differences when interpreting eye contact. +[11461.000 --> 11470.000] In some cultures, prolonged eye contact may be seen as disrespectful or confrontational, while in others, it may be a sign of trust and respect. +[11470.000 --> 11479.000] Additionally, some individuals may naturally have difficulty maintaining eye contact due to shyness or social anxiety. +[11479.000 --> 11489.000] Therefore, it is crucial to consider other non-verbal cues in conjunction with eye contact to gain a comprehensive understanding of the person's emotions and intentions. +[11489.000 --> 11500.000] Proximity and touch. The proximity between individuals and their comfort with physical touch can provide valuable insights into the nature of their relationship.
+[11500.000 --> 11512.000] In intimate relationships, such as romantic partnerships or close friendships, individuals tend to maintain closer physical proximity and engage in more frequent and affectionate touch. +[11512.000 --> 11523.000] On the other hand, in professional or formal relationships, individuals typically maintain a greater physical distance and engage in less physical contact. +[11523.000 --> 11532.000] Observing the level of comfort and boundaries regarding physical touch can help you understand the dynamics and level of intimacy in a relationship. +[11532.000 --> 11543.000] For example, if someone consistently avoids physical contact or appears uncomfortable when touched, it may indicate a need for personal space or a lack of trust. +[11543.000 --> 11550.000] Conversely, individuals who initiate and welcome physical touch may feel more connected and comfortable with each other. +[11550.000 --> 11553.000] Vocal cues. +[11553.000 --> 11561.000] In addition to non-verbal cues, vocal cues can also provide valuable insights into a person's emotions and intentions. +[11561.000 --> 11570.000] The tone, pitch and volume of someone's voice can convey a range of emotions such as anger, excitement or sadness. +[11570.000 --> 11580.000] For example, a raised voice and aggressive tone may indicate frustration or anger, while a soft and soothing tone may convey comfort or empathy. +[11580.000 --> 11585.000] It is important to pay attention to changes in vocal cues during a conversation. +[11585.000 --> 11592.000] Sudden shifts in tone or volume may indicate a change in emotions or the presence of underlying tension. +[11592.000 --> 11601.000] Additionally, the pace and rhythm of speech can also provide insights into a person's level of confidence, nervousness or excitement. +[11601.000 --> 11604.000] Consistency and congruence. +[11604.000 --> 11612.000] When interpreting non-verbal signals in relationships, it is crucial to consider the consistency and congruence of these signals. +[11613.000 --> 11619.000] Consistency refers to the alignment between a person's verbal and non-verbal cues. +[11619.000 --> 11630.000] For example, if someone says they are happy but their facial expression and body language suggest otherwise, there may be a discrepancy between their words and their true emotions. +[11630.000 --> 11637.000] Congruence, on the other hand, refers to the alignment between different non-verbal signals. +[11637.000 --> 11648.000] For instance, if someone is smiling, maintaining eye contact and leaning in during a conversation, these signals are congruent and suggest interest and engagement. +[11648.000 --> 11657.000] However, if their arms are crossed and they are avoiding eye contact, these signals may be incongruent and indicate discomfort or disinterest. +[11657.000 --> 11667.000] By considering both consistency and congruence, you can gain a more accurate understanding of a person's true thoughts, feelings and intentions. +[11667.000 --> 11677.000] It is important to remember that non-verbal signals should be interpreted in conjunction with verbal communication and the context of the relationship to avoid misinterpretation. +[11677.000 --> 11679.000] Conclusion. +[11680.000 --> 11689.000] Interpreting non-verbal signals in relationships is a valuable skill that can enhance your understanding of others and improve your communication and empathy.
+[11689.000 --> 11705.000] By paying attention to body language, eye contact, proximity, touch, vocal cues and the consistency and congruence of these signals, you can gain valuable insights into a person's emotions, intentions and the dynamics of the relationship. +[11706.000 --> 11718.000] Remember to consider individual and cultural differences and to interpret non-verbal signals in conjunction with verbal communication to gain a comprehensive understanding of the person and the relationship. +[11718.000 --> 11723.000] Chapter 8. Applying people reading skills. +[11723.000 --> 11726.000] Enhancing communication and empathy. +[11726.000 --> 11737.000] In the previous chapters, we have explored various aspects of decoding people, from understanding non-verbal communication to detecting deception and interpreting personal style. +[11737.000 --> 11744.000] Now, let's delve into how we can apply these people reading skills to enhance communication and empathy. +[11744.000 --> 11747.000] Active listening. +[11747.000 --> 11753.000] One of the most important skills in enhancing communication and empathy is active listening. +[11753.000 --> 11763.000] Active listening involves fully engaging with the speaker, not just hearing their words, but also paying attention to their non-verbal cues and emotions. +[11763.000 --> 11770.000] By actively listening, you can gain a deeper understanding of the speaker's thoughts, feelings and intentions. +[11770.000 --> 11776.000] To practice active listening, start by giving your full attention to the speaker. +[11776.000 --> 11786.000] Maintain eye contact, nod your head to show understanding and provide verbal cues such as 'I see' or 'go on' to encourage them to share more. +[11786.000 --> 11791.000] Avoid interrupting or formulating your response while the speaker is talking. +[11791.000 --> 11797.000] Instead, focus on understanding their perspective and empathizing with their emotions. +[11797.000 --> 11800.000] Empathy and perspective taking. +[11800.000 --> 11805.000] Empathy is the ability to understand and share the feelings of another person. +[11805.000 --> 11811.000] It is a crucial skill in building strong relationships and effective communication. +[11811.000 --> 11819.000] By putting yourself in someone else's shoes and seeing the world from their perspective, you can develop a deeper sense of empathy. +[11819.000 --> 11824.000] To enhance your empathy skills, practice perspective taking. +[11824.000 --> 11829.000] Try to imagine how the other person might be feeling in a given situation. +[11830.000 --> 11837.000] Consider their background, experiences and beliefs that may influence their emotions and actions. +[11837.000 --> 11843.000] By understanding their perspective, you can respond in a more empathetic and compassionate manner. +[11843.000 --> 11846.000] Emotional intelligence. +[11846.000 --> 11854.000] Emotional intelligence is the ability to recognize, understand and manage your own emotions as well as the emotions of others. +[11854.000 --> 11860.000] It plays a vital role in effective communication and building strong relationships. +[11860.000 --> 11868.000] By developing your emotional intelligence, you can better navigate social interactions and respond appropriately to others' emotions. +[11868.000 --> 11875.000] To enhance your emotional intelligence, start by becoming more aware of your own emotions. +[11875.000 --> 11882.000] Pay attention to how certain situations or interactions make you feel and how you typically respond.
+[11882.000 --> 11887.000] This self-awareness will help you better understand and regulate your emotions. +[11887.000 --> 11894.000] Additionally, practice empathy by actively observing and interpreting the emotions of others. +[11894.000 --> 11899.000] Notice their facial expressions, body language and tone of voice. +[11899.000 --> 11907.000] Try to identify the underlying emotions they may be experiencing and respond with empathy and understanding. +[11907.000 --> 11910.000] Nonverbal communication. +[11910.000 --> 11916.000] Nonverbal communication plays a significant role in enhancing communication and empathy. +[11916.000 --> 11925.000] By paying attention to nonverbal cues, you can gain valuable insights into a person's thoughts, feelings and intentions. +[11925.000 --> 11930.000] Observe the speaker's body language, facial expressions and gestures. +[11930.000 --> 11937.000] These nonverbal cues can provide clues about their level of comfort, engagement and emotional state. +[11937.000 --> 11947.000] For example, crossed arms may indicate defensiveness or discomfort, while open and relaxed body language may suggest receptiveness and trust. +[11947.000 --> 11954.000] By aligning your own nonverbal cues with the speaker's, you can create a sense of rapport and understanding. +[11954.000 --> 11962.000] Mirror their body language and facial expressions to establish a connection and show that you are actively engaged in the conversation. +[11962.000 --> 11965.000] Effective questioning and clarification. +[11966.000 --> 11973.000] As you engage in conversations, it is essential to ask effective questions and seek clarification when needed. +[11973.000 --> 11980.000] This demonstrates your interest in understanding the speaker's perspective and encourages them to share more information. +[11980.000 --> 11986.000] Ask open-ended questions that require more than a simple yes or no answer. +[11986.000 --> 11993.000] These questions invite the speaker to elaborate and provide deeper insights into their thoughts and feelings. +[11993.000 --> 11999.000] Additionally, use clarifying questions to ensure you have understood their message correctly. +[11999.000 --> 12006.000] Repeat back what you have understood and ask if your interpretation aligns with their intended meaning. +[12006.000 --> 12009.000] Cultivating trust and rapport. +[12009.000 --> 12014.000] Building trust and rapport is crucial for effective communication and empathy. +[12014.000 --> 12021.000] When people feel comfortable and trust you, they are more likely to open up and share their thoughts and emotions. +[12021.000 --> 12027.000] To cultivate trust and rapport, be genuine and authentic in your interactions. +[12027.000 --> 12032.000] Show empathy, understanding and respect for the other person's perspective. +[12032.000 --> 12036.000] Be reliable and follow through on your commitments. +[12036.000 --> 12044.000] By consistently demonstrating these qualities, you can establish a foundation of trust and rapport in your relationships. +[12044.000 --> 12048.000] Flexibility and adaptability. +[12048.000 --> 12054.000] Enhancing communication and empathy also requires flexibility and adaptability. +[12054.000 --> 12059.000] Different people have unique communication styles and preferences. +[12059.000 --> 12066.000] By being flexible in your approach, you can adjust your communication style to meet the needs of others.
+[12066.000 --> 12071.000] Pay attention to the verbal and non-verbal cues of the person you are communicating with. +[12071.000 --> 12077.000] Adapt your tone, pace and style of communication to match theirs. +[12077.000 --> 12083.000] This flexibility demonstrates your willingness to connect and understand them on their terms. +[12083.000 --> 12086.000] Conflict resolution and negotiation. +[12086.000 --> 12092.000] Effective communication and empathy are essential in conflict resolution and negotiation. +[12092.000 --> 12102.000] By understanding the perspectives and emotions of all parties involved, you can find common ground and work towards a mutually beneficial solution. +[12102.000 --> 12106.000] Practice active listening and empathy during conflicts. +[12106.000 --> 12112.000] Allow each person to express their thoughts and emotions without interruption. +[12112.000 --> 12116.000] Seek to understand their underlying needs and concerns. +[12116.000 --> 12125.000] By acknowledging and validating their feelings, you can create a more collaborative and empathetic environment for resolving conflicts. +[12125.000 --> 12128.000] Continuous learning and improvement. +[12128.000 --> 12133.000] Enhancing communication and empathy is an ongoing process. +[12133.000 --> 12136.000] It requires continuous learning and improvement. +[12136.000 --> 12142.000] Take the time to reflect on your interactions and identify areas for growth. +[12142.000 --> 12149.000] Seek feedback from others to gain insights into how your communication style and empathy skills are perceived. +[12149.000 --> 12155.000] Actively listen to their feedback and use it as an opportunity for self-improvement. +[12155.000 --> 12162.000] Additionally, continue to educate yourself on the latest research and practices in communication and empathy. +[12162.000 --> 12171.000] By committing to continuous learning and improvement, you can enhance your communication skills and deepen your empathy towards others. +[12171.000 --> 12177.000] In conclusion, enhancing communication and empathy is a vital aspect of decoding people. +[12177.000 --> 12188.000] By practicing active listening, empathy, emotional intelligence and nonverbal communication, you can build stronger relationships and communicate more effectively. +[12189.000 --> 12200.000] Additionally, by cultivating trust, being flexible and continuously learning and improving, you can navigate conflicts and negotiations with empathy and understanding. +[12200.000 --> 12207.000] Remember, people reading is not about manipulation, but about fostering genuine connections and understanding. +[12207.000 --> 12210.000] Building rapport and trust. +[12210.000 --> 12217.000] Building rapport and trust is essential in any relationship, whether it's personal or professional. +[12217.000 --> 12224.000] When you can read people like a book, you have a better understanding of their needs, desires and motivations. +[12224.000 --> 12231.000] This knowledge allows you to connect with them on a deeper level and establish a strong foundation of trust. +[12231.000 --> 12238.000] In this section, we will explore strategies and techniques to help you build rapport and trust with others. +[12238.000 --> 12241.000] Establishing a connection. +[12241.000 --> 12247.000] To build rapport and trust, it's important to establish a genuine connection with the other person. +[12247.000 --> 12251.000] Here are some strategies to help you do that. 
+[12251.000 --> 12257.000] Active listening. Show genuine interest in what the other person is saying by actively listening. +[12257.000 --> 12264.000] Maintain eye contact, nod your head and provide verbal cues to show that you are engaged in the conversation. +[12264.000 --> 12268.000] Avoid interrupting or jumping to conclusions. +[12268.000 --> 12274.000] Empathy. Put yourself in the other person's shoes and try to understand their perspective. +[12274.000 --> 12280.000] Show empathy by acknowledging their feelings and validating their experiences. +[12280.000 --> 12284.000] This will help create a sense of trust and understanding. +[12284.000 --> 12291.000] Find common ground. Look for shared interests, experiences or values that you can connect on. +[12291.000 --> 12300.000] Finding common ground helps to establish a sense of familiarity and similarity, which can strengthen the bond between you and the other person. +[12300.000 --> 12307.000] Use open and positive body language. Your body language plays a crucial role in building rapport. +[12307.000 --> 12314.000] Maintain an open posture, smile genuinely and use appropriate gestures to convey warmth and friendliness. +[12314.000 --> 12319.000] Avoid crossing your arms or displaying defensive body language. +[12319.000 --> 12322.000] Building trust. +[12322.000 --> 12326.000] Trust is the foundation of any successful relationship. +[12326.000 --> 12330.000] Here are some strategies to help you build trust with others. +[12330.000 --> 12335.000] Be reliable. Consistently follow through on your commitments and promises. +[12335.000 --> 12341.000] When others see that you are reliable and dependable, they are more likely to trust you. +[12341.000 --> 12347.000] Be authentic. Be true to yourself and show authenticity in your interactions. +[12347.000 --> 12352.000] People are more likely to trust someone who is genuine and transparent. +[12352.000 --> 12358.000] Maintain confidentiality. Respect the privacy and confidentiality of others. +[12358.000 --> 12364.000] When people feel that their personal information is safe with you, they are more likely to trust you. +[12364.000 --> 12370.000] Demonstrate competence. Show that you are knowledgeable and skilled in your area of expertise. +[12370.000 --> 12377.000] When others see that you are competent, they are more likely to trust your judgement and rely on your expertise. +[12377.000 --> 12382.000] Be consistent. Consistency is key in building trust. +[12382.000 --> 12386.000] Be consistent in your words, actions and behaviours. +[12386.000 --> 12393.000] When people can predict how you will respond in different situations, they feel more secure and trusting. +[12394.000 --> 12401.000] Show empathy and understanding. Demonstrate empathy and understanding towards others' feelings and experiences. +[12401.000 --> 12406.000] When people feel heard and understood, they are more likely to trust you. +[12406.000 --> 12414.000] Effective communication. Effective communication is crucial in building rapport and trust. +[12414.000 --> 12420.000] Here are some communication strategies to help you establish a strong connection with others. +[12420.000 --> 12427.000] Use active listening. Practice active listening by fully focusing on the speaker and avoiding distractions. +[12427.000 --> 12434.000] Show that you are engaged by paraphrasing, asking clarifying questions and providing feedback.
+[12434.000 --> 12441.000] Use clear and concise language. Communicate your thoughts and ideas clearly and concisely. +[12441.000 --> 12446.000] Avoid using jargon or complex language that may confuse the other person. +[12447.000 --> 12452.000] Use simple and straightforward language to ensure effective communication. +[12452.000 --> 12462.000] Be mindful of non-verbal cues. Pay attention to your non-verbal cues, such as facial expressions, body language and tone of voice. +[12462.000 --> 12468.000] Ensure that your non-verbal cues align with your verbal message to avoid any miscommunication. +[12468.000 --> 12475.000] Practice empathy. Put yourself in the other person's shoes and try to understand their perspective. +[12475.000 --> 12480.000] Show empathy by acknowledging their emotions and validating their experiences. +[12480.000 --> 12486.000] This will help create a safe and trusting environment for open communication. +[12486.000 --> 12491.000] Be open to feedback. Encourage open and honest feedback from others. +[12491.000 --> 12500.000] When people feel that their opinions are valued and respected, they are more likely to trust you and engage in meaningful conversations. +[12500.000 --> 12509.000] Building rapport in different settings. Building rapport and trust can vary depending on the setting and context. +[12509.000 --> 12513.000] Here are some tips for building rapport in different settings. +[12513.000 --> 12520.000] Professional settings. In professional settings, focus on establishing credibility and expertise. +[12520.000 --> 12526.000] Demonstrate professionalism, be punctual and deliver on your commitments. +[12526.000 --> 12531.000] Show respect for others' opinions and be open to collaboration. +[12531.000 --> 12539.000] Social settings. In social settings, focus on finding common interests and engaging in meaningful conversations. +[12539.000 --> 12545.000] Be a good listener, show genuine interest in others and be respectful of their boundaries. +[12545.000 --> 12551.000] Use humour and positivity to create a relaxed and enjoyable atmosphere. +[12551.000 --> 12558.000] Personal relationships. In personal relationships, focus on building emotional connections. +[12558.000 --> 12564.000] Show vulnerability, share personal experiences and be supportive of the other person. +[12564.000 --> 12570.000] Invest time and effort in nurturing the relationship and creating shared memories. +[12570.000 --> 12575.000] Building rapport and trust takes time and effort, but the rewards are invaluable. +[12575.000 --> 12587.000] By understanding and applying the principles of reading others, you can establish strong connections, foster trust and create meaningful relationships in all aspects of your life. +[12587.000 --> 12597.000] Negotiating and persuading effectively. Negotiating and persuading effectively are essential skills in both personal and professional settings. +[12597.000 --> 12606.000] Whether you are trying to close a business deal, convince someone to see things from your perspective or simply navigate a difficult conversation, +[12606.000 --> 12613.000] understanding the art of reading others can greatly enhance your ability to negotiate and persuade successfully. +[12613.000 --> 12621.000] In this section, we will explore how you can apply your people reading skills to become a more effective negotiator and persuader. +[12621.000 --> 12625.000] Understanding the other person's perspective.
+[12625.000 --> 12632.000] One of the key aspects of negotiating and persuading effectively is understanding the other person's perspective. +[12632.000 --> 12640.000] By reading their non-verbal cues, you can gain valuable insights into their thoughts, emotions and motivations. +[12640.000 --> 12649.000] Pay attention to their body language, facial expressions and gestures to gauge their level of interest, agreement or resistance. +[12649.000 --> 12661.000] For example, if you notice that the person you are negotiating with is leaning forward, maintaining eye contact and nodding their head, it indicates that they are engaged and receptive to your ideas. +[12661.000 --> 12670.000] On the other hand, crossed arms, a furrowed brow or avoiding eye contact may suggest skepticism or disagreement. +[12670.000 --> 12678.000] By being aware of these signals, you can adjust your approach and tailor your arguments to address their concerns effectively. +[12678.000 --> 12687.000] Building rapport and trust. Building rapport and trust is crucial in any negotiation or persuasion attempt. +[12687.000 --> 12692.000] People are more likely to be persuaded by someone they trust and feel a connection with. +[12692.000 --> 12699.000] By reading the other person's non-verbal cues, you can establish rapport more effectively. +[12699.000 --> 12705.000] Mirror their body language and gestures subtly to create a sense of familiarity and similarity. +[12705.000 --> 12712.000] This technique, known as mirroring, can help establish a subconscious connection and build rapport. +[12712.000 --> 12720.000] However, it is important to be genuine and not overdo it, as people can sense when someone is being insincere. +[12720.000 --> 12724.000] Additionally, pay attention to their vocal tone and pitch. +[12724.000 --> 12730.000] A warm and friendly tone can help create a positive atmosphere and foster trust. +[12730.000 --> 12737.000] By matching their tone and pace of speech, you can establish a sense of harmony and understanding. +[12737.000 --> 12746.000] Active listening and empathy. Active listening and empathy are essential skills when it comes to negotiating and persuading effectively. +[12746.000 --> 12756.000] By truly listening to the other person's concerns, needs and desires, you can tailor your arguments and proposals to address their specific interests. +[12756.000 --> 12763.000] Non-verbal cues can provide valuable insights into the other person's emotions and underlying motivations. +[12763.000 --> 12771.000] For example, if you notice signs of frustration or impatience, it may indicate that they feel unheard or misunderstood. +[12771.000 --> 12781.000] By acknowledging their emotions and demonstrating empathy, you can build a stronger connection and increase the likelihood of a successful negotiation or persuasion. +[12782.000 --> 12794.000] Adapting your communication style. Effective negotiators and persuaders understand the importance of adapting their communication style to suit the other person's preferences. +[12794.000 --> 12802.000] By reading their non-verbal cues, you can gauge their preferred communication style and adjust your approach accordingly. +[12802.000 --> 12812.000] For example, some individuals may prefer a direct and assertive communication style, while others may respond better to a more collaborative and cooperative approach.
+[12812.000 --> 12823.000] By observing their body language and listening to their speech patterns, you can tailor your communication style to match their preferences, increasing the chances of a positive outcome.
+[12823.000 --> 12827.000] Using persuasive language and techniques.
+[12827.000 --> 12837.000] In addition to reading non-verbal cues, understanding persuasive language and techniques can greatly enhance your negotiation and persuasion skills.
+[12837.000 --> 12843.000] By using the right words and techniques, you can influence the other person's thoughts and decisions.
+[12843.000 --> 12847.000] One effective technique is the use of storytelling.
+[12847.000 --> 12855.000] By sharing relevant and compelling stories, you can engage the other person emotionally and make your arguments more memorable.
+[12855.000 --> 12866.000] Additionally, using persuasive language such as positive framing, emphasizing benefits and addressing objections can help sway the other person's opinion in your favour.
+[12866.000 --> 12872.000] However, it is important to use these techniques ethically and responsibly.
+[12872.000 --> 12881.000] Manipulative tactics can damage relationships and trust, ultimately undermining your ability to negotiate and persuade effectively in the long run.
+[12881.000 --> 12891.000] Managing conflict and finding win-win solutions. Negotiation and persuasion often involve some level of conflict or disagreement.
+[12891.000 --> 12899.000] By reading the other person's non-verbal cues, you can better manage conflict and find win-win solutions.
+[12899.000 --> 12908.000] Pay attention to signs of frustration, anger or defensiveness, as they may indicate that the other person feels threatened or unheard.
+[12908.000 --> 12915.000] By acknowledging their emotions and concerns, you can defuse tension and create a more collaborative atmosphere.
+[12915.000 --> 12924.000] Additionally, by understanding the other person's needs and interests, you can work towards finding mutually beneficial solutions.
+[12924.000 --> 12934.000] This approach, known as integrative negotiation, focuses on creating value for both parties and can lead to more sustainable and satisfying outcomes.
+[12935.000 --> 12944.000] Practicing patience and persistence. Negotiating and persuading effectively often require patience and persistence.
+[12944.000 --> 12951.000] People may not be easily swayed and it may take time to build trust and reach a mutually beneficial agreement.
+[12951.000 --> 12959.000] By reading the other person's non-verbal cues, you can gauge their level of receptiveness and adjust your approach accordingly.
+[12959.000 --> 12970.000] If you notice signs of resistance or disagreement, it may be necessary to reframe your arguments, provide additional evidence or address their concerns more directly.
+[12970.000 --> 12978.000] Remember that negotiation and persuasion are iterative processes and it may take multiple attempts to achieve your desired outcome.
+[12978.000 --> 12987.000] By staying patient and persistent, you can increase your chances of success. Improving leadership and influence skills.
+[12988.000 --> 12996.000] In the previous chapters, we have explored various aspects of decoding people and understanding their non-verbal and verbal cues.
+[12996.000 --> 13004.000] Now, let's delve into how you can apply these people reading skills to improve your leadership and influence abilities.
+[13004.000 --> 13007.000] Building trust and credibility.
+[13007.000 --> 13015.000] As a leader, one of the most important qualities you can possess is the ability to build trust and credibility with your team.
+[13015.000 --> 13022.000] When your team members trust you, they are more likely to follow your lead and be influenced by your decisions.
+[13022.000 --> 13033.000] By reading people effectively, you can gain insights into their thoughts, emotions and motivations, which can help you build stronger relationships based on trust.
+[13033.000 --> 13038.000] To build trust and credibility, start by actively listening to your team members.
+[13038.000 --> 13044.000] Pay attention not only to their words but also to their non-verbal cues.
+[13044.000 --> 13051.000] Are they displaying signs of discomfort or agreement? Are they maintaining eye contact or avoiding it?
+[13051.000 --> 13059.000] By observing these cues, you can better understand their level of engagement and adjust your communication style accordingly.
+[13059.000 --> 13064.000] Additionally, be aware of your own non-verbal signals.
+[13064.000 --> 13071.000] Your body language, facial expressions and tone of voice can greatly impact how others perceive you.
+[13071.000 --> 13079.000] By aligning your non-verbal cues with your verbal messages, you can enhance your credibility and build trust with your team.
+[13079.000 --> 13082.000] Effective communication and empathy.
+[13082.000 --> 13090.000] Leadership is not just about giving orders, it's about effective communication and understanding the needs of your team members.
+[13090.000 --> 13100.000] By honing your people reading skills, you can become a more empathetic leader who can effectively communicate with and understand the emotions of your team.
+[13100.000 --> 13107.000] When communicating with your team, pay attention to their non-verbal cues to gauge their level of understanding and engagement.
+[13107.000 --> 13112.000] Are they nodding in agreement or showing signs of confusion?
+[13112.000 --> 13118.000] Adjust your communication style accordingly to ensure that your message is being received and understood.
+[13118.000 --> 13122.000] Empathy is another crucial skill for effective leadership.
+[13123.000 --> 13132.000] By reading people's emotions and understanding their perspectives, you can respond in a way that shows you genuinely care about their well-being.
+[13132.000 --> 13140.000] This can foster a positive and supportive work environment, leading to increased productivity and employee satisfaction.
+[13140.000 --> 13143.000] Influencing and persuading others.
+[13143.000 --> 13149.000] Leadership often involves influencing and persuading others to achieve common goals.
+[13149.000 --> 13158.000] By understanding people's motivations, preferences and decision-making processes, you can become a more persuasive and influential leader.
+[13158.000 --> 13166.000] When trying to influence others, it is important to tailor your approach to their individual needs and communication styles.
+[13166.000 --> 13174.000] Some people may respond better to logical arguments while others may be more influenced by emotional appeals.
+[13174.000 --> 13183.000] By reading people's verbal and non-verbal cues, you can adapt your communication style to effectively persuade and influence them.
+[13183.000 --> 13190.000] Additionally, understanding people's values and beliefs can help you frame your message in a way that resonates with them.
+[13190.000 --> 13200.000] By appealing to their core values and showing how your ideas align with their interests, you can increase the likelihood of them being influenced by your suggestions.
+[13200.000 --> 13203.000] Developing effective leadership styles.
+[13203.000 --> 13214.000] Every leader has their own unique leadership style, but by reading people effectively, you can adapt your style to suit different situations and individuals.
+[13214.000 --> 13224.000] Understanding the personalities, preferences and communication styles of your team members can help you tailor your leadership approach for maximum effectiveness.
+[13224.000 --> 13235.000] For example, some team members may thrive under a more hands-on and directive leadership style, while others may prefer a more collaborative and empowering approach.
+[13235.000 --> 13243.000] By reading people's cues and understanding their needs, you can adjust your leadership style to bring out the best in each individual.
+[13243.000 --> 13250.000] Furthermore, effective leaders are able to recognise and leverage the strengths of their team members.
+[13250.000 --> 13262.000] By understanding people's skills, talents and interests, you can assign tasks and responsibilities that align with their abilities, leading to increased motivation and productivity.
+[13262.000 --> 13266.000] Emotional intelligence and self-awareness.
+[13266.000 --> 13273.000] Improving your leadership and influence skills also requires developing emotional intelligence and self-awareness.
+[13274.000 --> 13283.000] By understanding your own emotions and how they impact your behaviour, you can better manage your reactions and make more informed decisions.
+[13283.000 --> 13289.000] Reading people effectively can help you gain insights into your own emotional triggers and biases.
+[13289.000 --> 13297.000] By recognising these patterns, you can work on managing them and responding in a more constructive and empathetic manner.
+[13297.000 --> 13303.000] Additionally, being self-aware allows you to recognise the impact of your words and actions on others.
+[13303.000 --> 13314.000] By reading people's reactions and adjusting your behaviour accordingly, you can create a positive and inclusive work environment that fosters collaboration and productivity.
+[13314.000 --> 13322.000] In conclusion, improving your leadership and influence skills requires the ability to read people effectively.
+[13322.000 --> 13336.000] By understanding non-verbal and verbal cues, building trust and credibility, practicing effective communication and empathy and adapting your leadership style, you can become a more influential and successful leader.
+[13336.000 --> 13344.000] Developing emotional intelligence and self-awareness further enhances your ability to lead with empathy and make informed decisions.
+[13344.000 --> 13351.000] So, continue honing your people reading skills and watch as your leadership abilities soar.
+[13351.000 --> 13358.000] In conclusion, the ability to read others effectively is a valuable skill when it comes to negotiating and persuading.
+[13358.000 --> 13378.000] By understanding the other person's perspective, building rapport and trust, practicing active listening and empathy, adapting your communication style, using persuasive language and techniques, managing conflict and being patient and persistent, you can greatly enhance your negotiation and persuasion skills.
+[13378.000 --> 13388.000] By applying these people reading skills, you can increase your chances of achieving successful outcomes in various personal and professional situations.
+[13388.000 --> 13401.000] Closing the chapters of Decoding Human Behaviour: Mastering Non-Verbal Communication, written by Mindful Literary, marks the beginning of your enriched journey in understanding the intricacies of human interaction.
+[13402.000 --> 13412.000] Armed with the insights and techniques shared within these pages, you are now equipped to navigate the diverse landscapes of communication with newfound clarity and proficiency.
+[13412.000 --> 13423.000] Remember, decoding non-verbal cues is not merely about observation, it's about fostering deeper connections, building empathy and honing your leadership skills.
+[13424.000 --> 13433.000] As you apply these learnings in your personal and professional spheres, may you continue to evolve as a perceptive communicator and an empathetic individual.
+[13433.000 --> 13438.000] Thank you for embarking on this transformative exploration with us.
+[13438.000 --> 13451.000] Keep practicing and embodying the principles of mindful communication and may your journey toward mastering non-verbal communication be filled with growth, understanding and meaningful connections.
+[13451.000 --> 13457.000] Wishing you continued success and fulfilment on your path of decoding human behaviour. diff --git a/transcript/allocentric_Ks-_Mh1QhMc.txt b/transcript/allocentric_Ks-_Mh1QhMc.txt new file mode 100644 index 0000000000000000000000000000000000000000..924c000ee0c7d5e5bdf7e60b6f859aee7784b38b --- /dev/null +++ b/transcript/allocentric_Ks-_Mh1QhMc.txt @@ -0,0 +1,210 @@ +[0.000 --> 23.840] So I want to start by offering you a free, no-tech life hack, and all it requires of you
+[23.840 --> 30.960] is this: that you change your posture for two minutes. But before I give it away, I want to ask you to
+[30.960 --> 36.080] right now do a little audit of your body and what you're doing with your body. So how many of you
+[36.080 --> 41.200] are sort of making yourself smaller, maybe you're hunching, crossing your legs, maybe wrapping your
+[41.200 --> 52.080] ankles, sometimes we hold onto our arms like this, sometimes we spread out. I see you. So I want you
+[52.080 --> 56.160] to pay attention to what you're doing right now. We're going to come back to that in a few minutes
+[56.160 --> 61.040] and I'm hoping that if you sort of learn to tweak this a little bit, it could significantly change
+[61.040 --> 68.720] the way your life unfolds. So we're really fascinated with body language and we're particularly
+[68.720 --> 73.920] interested in other people's body language. You know, we're interested in, like, you know,
+[73.920 --> 85.360] an awkward interaction or a smile or a contemptuous glance or maybe a very awkward wink or maybe even
+[85.360 --> 91.120] something like a handshake. Here they are arriving at Number 10 and look at this lucky policeman
+[91.120 --> 95.680] gets to shake hands with the president of the United States. Oh, here comes the prime minister.
+[95.920 --> 96.880] No.
+[102.880 --> 109.440] So a handshake or the lack of a handshake can have us talking for weeks and weeks and weeks,
+[109.440 --> 115.760] even the BBC and the New York Times. So obviously when we think about nonverbal behavior, or body
+[115.760 --> 121.200] language, but we call it nonverbals as social scientists, it's language. So we think about communication.
+[121.200 --> 125.520] When we think about communication, we think about interactions. So what is your body language
+[125.520 --> 132.080] communicating to me? What's mine communicating to you? And there's a lot of reason to believe that
+[132.080 --> 137.040] this is a valid way to look at this. So social scientists have spent a lot of time looking at the
+[137.040 --> 142.480] effects of our body language or other people's body language on judgments, and we make sweeping
+[142.480 --> 148.480] judgments and inferences from body language, and those judgments can predict really meaningful
+[148.480 --> 153.600] life outcomes like who we hire or promote, who we ask out on a date. For example,
+[155.280 --> 161.840] Nalini Ambady, a researcher at Tufts University, shows that when people watch 30-second soundless
+[161.840 --> 167.440] clips of real physician-patient interactions, their judgments of the physician's niceness
+[168.160 --> 172.480] predict whether or not that physician will be sued. So it doesn't have to do so much with whether
+[172.480 --> 176.240] or not that physician was incompetent, but do we like that person and how they interacted?
+[176.480 --> 183.760] Even more dramatic, Alex Todorov at Princeton has shown us that judgments of political candidates'
+[183.760 --> 193.360] faces in just one second predict 70% of US Senate and gubernatorial race outcomes. And even,
+[193.360 --> 199.920] let's go digital, emoticons used well in online negotiations can lead you to claim more value
+[199.920 --> 206.880] from that negotiation. If you use them poorly, bad idea. So when we think of non-verbals, we think
+[206.880 --> 211.600] of how we judge others, how they judge us and what the outcomes are. We tend to forget the
+[211.600 --> 217.920] other audience that's influenced by our non-verbals, and that's ourselves. We are also influenced by
+[217.920 --> 223.760] our non-verbals, our thoughts and our feelings and our physiology. So what non-verbals am I talking
+[223.760 --> 230.320] about? I'm a social psychologist, I study prejudice, and I teach at a competitive business school.
+[230.320 --> 237.280] So it was inevitable that I would become interested in power dynamics. I became especially interested in
+[237.280 --> 243.040] non-verbal expressions of power and dominance. And what are non-verbal expressions of power and
+[243.040 --> 248.800] dominance? Well, this is what they are. So in the animal kingdom, they are about expanding. So you
+[248.800 --> 255.440] make yourself big, you stretch out, you take up space, you're basically opening up, it's about opening
+[255.440 --> 262.480] up. And this is true across the animal kingdom, it's not just limited to primates, and humans do the
+[262.480 --> 269.040] same thing. So they do this both when they have power sort of chronically and also when they're
+[269.040 --> 274.160] feeling powerful in the moment. And this one is especially interesting because it really shows us
+[274.240 --> 280.560] how universal and old these expressions of power are. This expression, which is known as pride,
+[281.200 --> 286.720] Jessica Tracy has studied. She shows that people who are born with sight and people who are
+[286.720 --> 292.080] congenitally blind do this when they win at a physical competition. So when they cross the
+[292.080 --> 296.960] finish line and they've won, it doesn't matter if they've never seen anyone do it, they do this. So the
+[296.960 --> 302.000] arms up in the V, the chin is slightly lifted. What are we doing when we feel powerless? We do
+[302.000 --> 307.920] exactly the opposite. We close up, we wrap ourselves up, we make ourselves small, we don't want to
+[307.920 --> 313.760] bump into the person next to us. So again, both animals and humans do the same thing. And this is
+[313.760 --> 319.840] what happens when you put together high and low power. So what we tend to do when it comes to power
+[319.840 --> 324.800] is that we complement the other's non-verbals. So if someone's being really powerful with us,
+[324.800 --> 329.440] we tend to make ourselves smaller. We don't mirror them, we do the opposite of them. So
+[330.320 --> 336.000] I'm watching this behavior in the classroom. And what do I notice? I notice that
+[337.840 --> 343.520] MBA students really exhibit the full range of power non-verbals. So you have people who are like
+[343.520 --> 347.840] caricatures of alphas, really coming into the room, they get right into the middle of the room,
+[348.400 --> 353.360] before class even starts, like they really want to occupy space. When they sit down, they're sort of
+[353.360 --> 358.800] spread out, they raise their hands like this. You have other people who are virtually collapsing when
+[358.800 --> 362.960] they come in, as soon as they come in, you see it. You see it on their faces and their bodies,
+[362.960 --> 367.520] and they sit in their chair and they make themselves tiny, and they go like this when they raise their hand.
+[368.560 --> 372.960] I notice a couple things about this. One, you're not going to be surprised. It seems to be related
+[372.960 --> 381.280] to gender. So women are much more likely to do this kind of thing than men. Women feel chronically
+[381.280 --> 386.560] less powerful than men, so this is not surprising. But the other thing I noticed is that it also
+[386.560 --> 391.760] seemed to be related to the extent to which the students were participating and how well they
+[391.760 --> 396.800] were participating. And this is really important in the MBA classroom because participation counts
+[396.800 --> 402.880] for half the grade. So, business schools have been struggling with this gender grade gap. You get
+[402.880 --> 408.000] these equally qualified women and men coming in, and then you get these differences in grades,
+[408.000 --> 413.600] and it seems to be partly attributable to participation. So I started to wonder, you know, okay,
+[414.080 --> 418.640] so you have these people coming in like this and they're participating. Is it possible that we
+[418.640 --> 424.240] could get people to fake it and would it lead them to participate more? So my main collaborator,
+[424.240 --> 430.640] Dana Carney, who's at Berkeley, and I really wanted to know, can you fake it till you make it?
+[430.640 --> 435.520] Like, can you do this just for a little while and actually experience a behavioral outcome that
+[435.520 --> 441.120] makes you seem more powerful? So we know that our non-verbals govern how other people think and
+[441.120 --> 445.920] feel about us. There's a lot of evidence, but our question really was, do our non-verbals
+[445.920 --> 452.880] govern how we think and feel about ourselves? There's some evidence that they do. So, for example,
+[453.920 --> 459.200] we smile when we feel happy, but also when we're forced to smile by holding a pen in our
+[459.200 --> 465.440] teeth like this, it makes us feel happy. So it goes both ways. When it comes to power,
+[466.400 --> 473.120] it also goes both ways. So when you feel powerful, you're more likely to do this, but it's also
+[473.120 --> 481.840] possible that when you pretend to be powerful, you are more likely to actually feel powerful.
+[482.800 --> 488.240] So the second question really was, you know, so we know that our minds change our bodies,
+[488.240 --> 494.560] but is it also true that our bodies change our minds? And when I say minds in the case of the
+[494.560 --> 499.680] powerful, what am I talking about? So I'm talking about thoughts and feelings and the sort of
+[499.680 --> 504.080] physiological things that make up our thoughts and feelings. And in my case, that's hormones.
+[504.080 --> 509.120] I look at hormones. So what do the minds of the powerful versus the powerless look like?
+[510.080 --> 516.080] So powerful people tend to be, not surprisingly, more assertive and more confident,
+[516.640 --> 520.800] more optimistic. They actually feel that they're going to win even at games of chance.
+[521.280 --> 526.880] They also tend to be able to think more abstractly. So there are a lot of differences.
+[526.880 --> 530.400] They take more risks. There are a lot of differences between powerful and powerless people.
+[531.040 --> 537.360] Physiologically, there are also differences on two key hormones: testosterone, which is the
+[537.360 --> 543.920] dominance hormone, and cortisol, which is the stress hormone. So what we find is that
+[544.080 --> 551.360] high power alpha males in primate hierarchies have high testosterone and low cortisol.
+[552.240 --> 558.880] And powerful and effective leaders also have high testosterone and low cortisol. So what does
+[558.880 --> 562.800] that mean? When you think about power, people tended to think only about testosterone,
+[562.800 --> 567.920] because that was about dominance. But really, power is also about how you react to stress.
+[567.920 --> 573.040] So do you want the high power leader that's dominant, high on testosterone, but really
+[573.040 --> 578.960] stress reactive? Probably not. You want the person who's powerful and assertive and dominant,
+[578.960 --> 586.800] but not very stress reactive. The person who's laid back. So we know that in primate hierarchies,
+[587.280 --> 592.880] if an individual needs to take over an alpha role
+[592.880 --> 598.720] sort of suddenly, within a few days, that individual's testosterone has gone up significantly,
+[598.720 --> 604.480] and cortisol has dropped significantly. So we have this evidence, both that the body can shape the
+[604.480 --> 611.600] mind, at least at the facial level, and also that role changes can shape the mind. So what happens?
+[611.600 --> 616.560] Okay, you take a role change. What happens if you do that at a really minimal level? Like this
+[616.560 --> 621.200] tiny manipulation, this tiny intervention, for two minutes, you say, I want you to stand
+[621.200 --> 628.000] like this and it's going to make you feel more powerful. So this is what we did. We decided to
+[628.000 --> 634.720] bring people into the lab and run a little experiment. And these people adopted for two minutes,
+[634.720 --> 640.400] either high power poses or low power poses. And I'm just going to show you five of the poses,
+[640.400 --> 649.920] although they took on only two. So here's one, a couple more. This one has been dubbed the Wonder Woman
+[650.000 --> 655.360] by the media. Here are a couple more. So you can be standing or you can be sitting.
+[656.320 --> 659.600] Here are the low power poses. So you're folding up, you're making yourself small.
+[662.080 --> 666.720] This one is very low power. When you're touching your neck, you're really kind of protecting yourself.
+[667.600 --> 673.920] So this is what happens. They come in, they spit into a vial. For two minutes, we say, you need to do
+[673.920 --> 677.680] this or this. They don't look at pictures of the poses. We don't want to prime them with a concept
+[677.680 --> 683.680] of power. We want them to be feeling power. So two minutes, they do this. We then ask them how
+[683.680 --> 688.160] powerful do you feel on a series of items. And then we give them an opportunity to gamble.
+[688.880 --> 693.680] And then we take another saliva sample. That's it. That's the whole experiment. So this is what we
+[693.680 --> 699.280] find. Risk tolerance, which is the gambling. What we find is that when you're in the high power
+[699.280 --> 705.920] pose condition, 86% of you will gamble. When you're in the low power pose condition, only 60%.
+[705.920 --> 710.240] And that's a pretty whopping significant difference. Here's what we find on testosterone.
+[711.360 --> 716.640] From their baseline, when they come in, high power people experience about a 20% increase.
+[718.000 --> 724.000] And low power people experience about a 10% decrease. So again, two minutes and you get these changes.
+[724.560 --> 729.760] Here's what you get on cortisol. High power people experience about a 25% decrease.
+[730.720 --> 736.080] And the low power people experience about a 15% increase. So two minutes leads to these
+[736.080 --> 743.280] hormonal changes that configure your brain to basically be either sort of confident and comfortable,
+[743.280 --> 749.760] or really stress reactive. And, you know, feeling sort of shut down. And we've all had that feeling,
+[749.760 --> 756.320] right? So it seems that our non-verbals do govern how we think and feel about ourselves. So it's
+[756.320 --> 762.320] not just others, but it's also ourselves. Also, our bodies change our minds. But the next
+[762.320 --> 767.360] question, of course, is can power posing for a few minutes really change your life in meaningful
+[767.360 --> 772.240] ways? So this is in the lab. It's this little task. It's just a couple of minutes. Where can you
+[772.240 --> 779.200] actually apply this? Which is what we cared about, of course. And so we think it really matters.
+[779.200 --> 784.720] I mean, where you want to use this is evaluative situations, like social threat situations, where
+[784.720 --> 790.320] you're being evaluated, either by your friends, like for teenagers at the lunchroom table. It could be,
+[790.320 --> 796.240] you know, for some people speaking at a school board meeting, it might be giving a pitch or giving
+[796.240 --> 802.400] a talk like this or doing a job interview. We decided that the one that most people could relate
+[802.400 --> 808.400] to, because most people had been through it, was the job interview. So we published these findings
+[808.400 --> 812.800] and the media are all over it and they say, okay, so this is what you do when you go in for the job
+[812.880 --> 818.800] interview, right? So we were, of course, horrified and said, oh my god, no, no, no, that's not what we
+[818.800 --> 824.400] meant at all, for numerous reasons. No, no, no, don't do that. Again, this is not about you talking to
+[824.400 --> 828.640] other people. It's you talking to yourself. What do you do before you go into a job interview? You do
+[828.640 --> 833.200] this, right? You're sitting down. You're looking at your iPhone or your Android, not trying to
+[833.200 --> 838.160] leave anyone out. You are, you know, you're looking at your notes. You're hunching up, making yourself small.
+[838.160 --> 843.200] And really what you should be doing maybe is this, like in the bathroom, right? Do that, find two
+[843.200 --> 848.480] minutes. So that's what we want to test, okay? So we bring people into a lab and
+[848.480 --> 853.600] they do either high or low power poses again. They go through a very stressful job interview.
+[853.600 --> 861.360] It's five minutes long. They are being recorded. They're being judged also, and the judges are trained
+[861.360 --> 866.080] to give no nonverbal feedback. So they look like this. Like imagine this is the person
+[866.080 --> 873.200] interviewing you. So for five minutes, nothing. And this is worse than being heckled. People hate
+[873.200 --> 878.800] this. It's what Marianne LaFrance calls standing in social quicksand. So this really spikes your
+[878.800 --> 882.720] cortisol. So this is the job interview we put them through because we really wanted to see what
+[882.720 --> 888.960] happened. We then have these coders look at these tapes. Four of them. They're blind to the hypothesis.
+[888.960 --> 894.240] They're blind to the conditions. They have no idea who's been posing in what pose. And they,
+[894.800 --> 900.160] they end up looking at these sets of tapes and they say, oh, we want to hire these people,
+[900.160 --> 905.600] all the high power posers. We don't want to hire these people. They also evaluate these people much
+[905.600 --> 911.600] more positively overall. But what's driving it? It's not about the content of the speech. It's
+[911.600 --> 915.600] about the presence that they're bringing to the speech. We also, because we rate them on all
+[915.600 --> 919.920] these variables related to sort of competence, like how well structured is the speech,
+[920.000 --> 925.040] how good is it, what are their qualifications. No effect on those things. This is what's affected.
+[925.040 --> 929.920] These kinds of things. People are bringing their true selves, basically. They're bringing themselves.
+[929.920 --> 936.480] They bring their ideas, but as themselves, with no residue over them. So this is what's driving
+[936.480 --> 944.080] the effect, or mediating the effect. So when I tell people about this, that our bodies change
+[944.080 --> 948.080] our minds and our minds can change our behavior and our behavior can change our outcomes, they say to
+[948.080 --> 954.480] me, I don't, it feels fake, right? So I said, fake it till you make it. Like, I don't, it's not me.
+[954.480 --> 958.720] Like, I don't want to get there and then still feel like a fraud. I don't want to feel like an
+[958.720 --> 965.040] imposter. I don't want to get there only to feel like I'm not supposed to be here. And that really
+[965.040 --> 969.600] resonated with me because I want to tell you a little story about being an imposter and feeling like
+[969.600 --> 975.600] I'm not supposed to be here. When I was 19, I was in a really bad car accident. I was thrown out of a car,
+[975.760 --> 983.200] rolled several times, and I woke up in a head injury rehab ward and I had
+[983.200 --> 990.240] been withdrawn from college. And I learned that my IQ had dropped by two standard deviations,
+[990.960 --> 996.480] which was very traumatic. I knew my IQ because I had identified with being smart and I had been
+[996.480 --> 1001.760] called gifted as a child. So I'm taken out of college. I keep trying to go back. They say, you're
+[1001.760 --> 1006.960] not going to finish college. There are other things for you to do, but that's not going to work out
+[1006.960 --> 1013.360] for you. So I really struggled with this, and I have to say, having your identity taken from you,
+[1013.360 --> 1018.320] your core identity, and for me it was being smart, having that taken from you, there's nothing that
+[1018.320 --> 1023.280] leaves you feeling more powerless than that. So I felt entirely powerless. I worked and worked and
+[1023.280 --> 1027.840] worked and I got lucky and worked and got lucky and worked. Eventually I graduated from college.
+[1028.560 --> 1035.520] Took me four years longer than my peers, and I convinced someone, my angel advisor, Susan Fiske,
+[1035.520 --> 1041.120] to take me on. And so I ended up at Princeton and I was like, I am not supposed to be here. I am
+[1041.120 --> 1045.520] an imposter. And the night before my first-year talk, and the first-year talk at Princeton is
+[1045.520 --> 1051.760] a 20-minute talk to 20 people. That's it. I was so afraid of being found out the next day
+[1051.760 --> 1057.280] that I called her and said, I'm quitting. She was like, you are not quitting, because I took a gamble
+[1057.600 --> 1061.600] on you and you're staying. You're going to stay and this is what you're going to do. You're going
+[1061.600 --> 1066.880] to fake it. You're going to do every talk that you ever get asked to do. You're just going to do it
+[1066.880 --> 1072.800] and do it and do it, even if you're terrified and just paralyzed and having an out-of-body experience,
+[1072.800 --> 1078.800] until you have this moment where you say, oh my gosh, I'm doing it. I have become this. I am actually
+[1078.800 --> 1083.520] doing this. So that's what I did. Five years in grad school. A few years, I'm at Northwestern,
+[1083.520 --> 1088.640] I moved to Harvard. I'm at Harvard. I'm not really thinking about it anymore. But for a long time,
+[1088.640 --> 1093.120] I had been thinking, not supposed to be here. I'm not supposed to be here. So the end of my first year
+[1093.120 --> 1100.080] at Harvard, a student who had not talked in class the entire semester, who I had said, look, you've
+[1100.080 --> 1104.240] got to participate or else you're going to fail, came into my office. I really didn't know her at all.
+[1104.960 --> 1110.960] And she said, she came in totally defeated, and she said, I'm not supposed to be here.
+[1114.160 --> 1121.120] And that was the moment for me, because two things happened. One was that I realized, oh my gosh,
+[1121.120 --> 1126.160] I don't feel like that anymore. I don't feel that anymore, but she does, and I get that feeling.
+[1126.160 --> 1131.360] And the second one, she is supposed to be here. Like, she can fake it. She can become it. So I was like,
+[1131.920 --> 1136.240] yes, you are. You are supposed to be here. And tomorrow you're going to fake it. You're going to
+[1136.240 --> 1139.680] make yourself powerful. And you're going to.
+[1143.920 --> 1150.400] And you're going to go into the classroom and you are going to give the best comment ever.
+[1151.520 --> 1155.520] And she gave the best comment ever. And people turned around and they were like, oh my god,
+[1155.520 --> 1161.360] I didn't even notice her sitting there. She comes back to me months later, and I realized that she
+[1161.360 --> 1166.960] had not just faked it till she made it. She had actually faked it till she became it. So she had
+[1166.960 --> 1173.280] changed. And so I want to say to you, don't fake it till you make it. Fake it till you become it.
+[1174.400 --> 1179.120] Do it enough until you actually become it and internalize it. The last thing I'm going to
+[1179.120 --> 1188.640] leave you with is this: tiny tweaks can lead to big changes. So this is two minutes, two minutes,
+[1188.640 --> 1193.040] two minutes, two minutes. Before you go into the next stressful evaluative situation,
+[1193.040 --> 1198.720] for two minutes, try doing this in the elevator, in a bathroom stall, at your desk behind closed doors.
+[1198.720 --> 1203.360] That's what you want to do. Configure your brain to cope the best in that situation.
+[1203.360 --> 1208.080] Get your testosterone up. Get your cortisol down. Don't leave that situation feeling like,
+[1208.080 --> 1212.800] oh, I didn't show them who I am. Leave that situation feeling like, oh, I really feel like I got to
+[1212.800 --> 1219.920] say who I am and show who I am. So I want to ask you first, you know, both to try power posing.
+[1220.800 --> 1227.040] And also I want to ask you to share this science, because this is simple. I don't have ego involved in
+[1227.040 --> 1231.440] this. Give it away. Like, share it with people, because the people who can use it the most are the
+[1231.440 --> 1238.880] ones with no resources and no technology and no status and no power. Give it to them because they
+[1238.880 --> 1244.480] can do it in private. They need their bodies, privacy and two minutes, and it can significantly change
+[1244.480 --> 1254.480] the outcomes of their life. Thank you. diff --git a/transcript/allocentric_M5i5c9kNbOQ.txt b/transcript/allocentric_M5i5c9kNbOQ.txt new file mode 100644 index 0000000000000000000000000000000000000000..21253140f82b12b604907d59450c60557cbd1c73 --- /dev/null +++ b/transcript/allocentric_M5i5c9kNbOQ.txt @@ -0,0 +1,46 @@ +[0.000 --> 7.120] April is Autism Awareness Month and in that spirit we want to introduce you to a very special young man named Chase.
+[7.120 --> 13.920] Action News reporter Kim Russell took to Twitter asking if you would like to see more happy and positive stories right here on the news.
+[15.760 --> 24.560] This one is about Chase, the little boy who couldn't communicate with the world. His parents trying one last therapy in hopes of change.
+[24.560 --> 29.920] The results? Happy birthday to you. Happy birthday to you.
+[29.920 --> 36.800] Here's how it happened. Imagine living every day not only unable to speak but unable even to make sounds.
+[36.800 --> 38.480] All through this.
+[38.480 --> 45.440] This is video of therapists giving Chase, a little boy from Macomb County, a way to communicate with technology when he was three years old.
+[45.440 --> 47.440] Switch, switch, switch, no.
+[49.440 --> 50.640] Oh, good job.
+[51.440 --> 56.400] The idea is once you teach someone how to communicate one way, sometimes speech follows.
+[56.400 --> 57.440] Cookie.
+[57.440 --> 64.800] A year later Chase had made some progress but not as much as hoped. His parents had been told by some that it was going to be too late for him to learn.
+[64.800 --> 69.520] There was a point when we thought at four, if he's not talking, there's a good chance he's not going to talk.
+[69.520 --> 71.520] We're like, well, he's going to be nonverbal.
+[71.520 --> 76.960] They decided to try one more thing. They came here to the Kaufman Children's Center for an evaluation.
+[77.200 --> 81.360] Experts here recognized Chase didn't just have autism.
+[81.360 --> 87.840] He had severe apraxia. That meant his brain could actually organize the words but he didn't have the motor ability to say them.
+[87.840 --> 93.440] This called for different treatments. They taught him sign language in a fun way to expand his ability to communicate.
+[96.320 --> 98.960] At the same time they treated his apraxia.
+[98.960 --> 102.320] They had to actually put tools in his mouth to get the sound.
+[102.960 --> 106.720] You know, to even get the position of his jaw correct.
+[106.720 --> 112.160] When Chase first came to us he had no vocal imitation. He couldn't imitate any sounds.
+[112.160 --> 112.960] Hamsters.
+[112.960 --> 113.760] Hamsters.
+[113.760 --> 119.520] Now to see him today, he can talk in sentences. He has a sense of humor.
+[119.520 --> 122.640] Happy birthday to you. Happy birthday.
+[122.640 --> 125.520] It's, it's just like a miracle.
+[125.520 --> 131.920] Nancy Kaufman owns the Kaufman Children's Center. She says miraculous things happen when the right therapies are brought together.
+[131.920 --> 138.720] It's about the techniques, and so we may be doing something that isn't working for the child
+[138.720 --> 143.280] and we may not have known that there was another way to approach the issues.
+[143.280 --> 148.480] Chase's parents are sharing their story because they want other parents of children who are nonverbal to know
+[148.480 --> 152.640] that sometimes a change of therapy can make all the difference.
+[152.640 --> 154.400] His journey has just been amazing.
+[154.400 --> 157.840] The band by Lake TLC.
+[157.840 --> 162.080] I see a web or a truck around me.
+[162.080 --> 164.400] It's long. It's tough.
+[164.400 --> 165.600] But you got to stay with it.
+[165.600 --> 167.760] He surprises us all the time.
+[167.760 --> 170.320] So you got to stay positive.
+[170.320 --> 173.280] In West Bloomfield, Kim Russell, Seven Action News.
+[173.280 --> 179.040] Wow, and you can see the tears in their eyes, and boy, I can't wait until that young man grows up.
+[179.040 --> 182.320] I mean he's a little boy now, but I mean with the progress, who knows what will happen.
+[182.320 --> 188.320] Thank you. And very special, right? And happy for that family to see this progress, and thanks to
+[188.320 --> 190.400] Kim Russell for bringing that to us.
+[190.400 --> 193.440] For sure, I'm sure you inspired a lot of people today.
+[193.440 --> 193.840] The recent diff --git a/transcript/allocentric_MuRVOQY8KoY.txt b/transcript/allocentric_MuRVOQY8KoY.txt new file mode 100644 index 0000000000000000000000000000000000000000..3b14a451501c2f18e29c0649aeabe6685c85c393 --- /dev/null +++ b/transcript/allocentric_MuRVOQY8KoY.txt @@ -0,0 +1,1376 @@ +[0.000 --> 12.000] Here's the agenda for today.
+[12.000 --> 17.240] As usual, a bunch of announcements in red. Assignment 4 was graded.
+[17.240 --> 23.080] There will be comments showing up online on Stellar soon if any of you didn't get a near
+[23.080 --> 25.080] perfect score on it.
+[25.080 --> 29.920] And I'll also be going over a little bit of it in a moment.
+[29.920 --> 33.560] And then once we do that, we're going to talk about navigation, how we know where we are
+[33.560 --> 38.040] and how to get from here to some place else, which is much more awesome than it sounds
+[38.040 --> 40.320] at first, as you will see.
+[40.320 --> 43.320] Okay, so quick review.
+[43.320 --> 45.680] Okay, so what was the key point?
+[45.680 --> 49.520] Why did I assign the Haxby 2001 article for you guys to read?
+[49.520 --> 54.560] It presents this important challenge to the functional specificity of the face area and
+[54.560 --> 55.560] the place area.
+[55.560 --> 57.200] What was that challenge?
+[57.200 --> 59.200] What was Haxby's key point?
+[59.200 --> 61.200] Yes?
+[61.200 --> 72.200] So whether the PPA just has a preference for the rectilinearity of scenes, but
+[72.200 --> 74.200] it's not actually selective for scenes.
+[74.200 --> 76.200] It's not truly scene-selective.
+[76.200 --> 83.400] Yeah, he wasn't worrying about rectilinearity so much back then, but his point was that
+[83.400 --> 88.880] we shouldn't care just about the overall magnitude of response of a region.
+[88.880 --> 94.160] Like, okay, it's nice if the face area responds like this to faces and like that to objects,
+[94.160 --> 101.800] but even if it responds low and the same to cars and chairs, it might still have information
+[101.800 --> 106.560] to enable you to distinguish cars from chairs if the pattern of response across voxels in
+[106.560 --> 110.640] that region was stably different for cars and chairs.
+[110.640 --> 112.080] Okay, that's really key.
+[112.080 --> 114.680] We'll go over it a few more times, but that's essential.
+[114.680 --> 118.720] Right, a lot of the details that I'm going to teach you that go by in class don't matter,
+[118.720 --> 122.080] but I really want you guys to understand MVPA and that's the nub of it.
+[122.080 --> 129.120] Okay, so the idea is, his kind of claim is that selective regions like the
+[129.120 --> 134.400] face area contain information about non-preferred stimuli, that is, like non-faces for the face
+[134.400 --> 138.280] area or non-places for the place area.
+[138.280 --> 142.760] And because they contain information, those regions don't care only about their preferred
+[142.760 --> 143.760] category.
+[143.760 --> 148.480] So why does Kanwisher get off saying that the FFA is only about faces and the PPA is only
+[148.480 --> 152.040] about places if we can see information about other things in those regions?
+[152.040 --> 153.720] Okay, that's a really important critique.
+[153.720 --> 155.720] That's why we're spending time on it.
+[155.720 --> 157.400] Okay, okay.
+[157.400 --> 165.400] Next, what kind of empirical data might be an answer to Haxby's charge?
+[165.400 --> 170.040] I've presented at least three different kinds of data that can address this and say,
+[170.040 --> 174.760] hey, wait a minute, you know, you have a point, but what kind of data could speak to that
+[174.760 --> 177.440] and respond to Haxby?
+[177.440 --> 181.080] We didn't actually talk about this explicitly in class, but think about it.
+[181.080 --> 183.440] Here's the claim he makes.
+[183.440 --> 184.560] What might we say, right?
+[184.560 --> 186.440] So that's empirically true.
+[186.440 --> 192.560] Like you look in the FFA, even in my own data, I can distinguish chairs from shoes a little
+[192.560 --> 195.120] teeny bit in the FFA.
+[195.120 --> 198.000] Okay, so that empirical claim is true.
+[198.000 --> 204.600] Why might it nonetheless be the case that the face area is really only about face
+[204.600 --> 205.600] perception?
+[205.600 --> 210.280] What other data have you heard in here that might make you think that?
+[210.280 --> 211.280] Yes, Ben?
+[211.280 --> 218.520] The presence of the facial features, the presence of little stimuli that are generally
+[218.520 --> 224.520] in faces but are also scarcely present in chairs or cars.
+[224.520 --> 226.520] Absolutely.
+[226.520 --> 232.680] So yes, so put another way, even if you had a perfect coder for faces, you know, like
+[232.680 --> 237.840] take your best deep net for face recognition, VGG-Face, it can distinguish chairs and
+[237.840 --> 239.160] shoes too, right?
+[239.160 --> 244.500] The features that you use to represent faces will slightly discriminate between other
+[244.500 --> 245.500] non-face objects.
+[245.500 --> 250.880] So the fact that we can see that information in itself isn't strong evidence that that
+[250.880 --> 254.840] region isn't selective for face perception.
+[254.840 --> 255.840] Absolutely.
+[255.840 --> 257.840] What else?
+[257.840 --> 258.840] Yeah.
+[258.840 --> 259.840] Okay.
+[259.840 --> 265.280] So with transcranial magnetic stimulation, when you stimulate the FFA and look
+[265.280 --> 269.760] at a face, that affects it; when you apply it while looking at other objects, it's not going
+[269.760 --> 270.760] to affect those.
+[270.760 --> 271.760] Exactly.
+[271.760 --> 275.400] And so what does that tell you about, okay, so there's pattern information in there about
+[275.400 --> 283.240] other things beyond faces, but apparently it's not used, right?
+[283.240 --> 286.000] Now with every bit of evidence, you can always argue back.
+[286.000 --> 288.760] People will say, well, TMS, those effects are tiny.
+[288.760 --> 293.080] We didn't have the power to detect it, blah, blah, blah, blah, but at least, absolutely,
+[293.080 --> 294.960] you're right, TMS argues against that.
+[294.960 --> 295.960] What else?
+[295.960 --> 299.840] Or at least there's a way to argue against it, and the Pitcher paper that I assigned and
+[299.840 --> 304.440] other papers that we've talked about in here provide some evidence that actually, at least
+[304.440 --> 309.960] the occipital face area really is only causally involved in face perception, even if there's
+[309.960 --> 312.560] information in there about other things.
+[312.560 --> 313.560] Okay.
+[313.560 --> 314.560] What else?
+[314.560 --> 316.560] What other methods can address this?
+[316.560 --> 317.560] Yeah.
+[317.560 --> 324.560] So, with direct stimulation, even when you present a non-face,
+[324.560 --> 326.560] you actually perceive a face.
+[326.560 --> 327.560] Exactly.
+[327.560 --> 328.560] Exactly.
+[328.560 --> 329.560] So these are both causal tests, right?
+[329.560 --> 333.960] Okay, there's information in there, but is it causally used in behavior, right?
+[333.960 --> 340.440] TMS suggests not, and the little bit of direct intracranial stimulation data that I showed you
+[340.440 --> 345.800] also suggests the causal effects when you stimulate that region are specific to face
+[345.800 --> 346.800] perception.
+[346.800 --> 350.800] And suggesting that even if there's pattern information in there, it's not doing anything
+[350.800 --> 354.760] important, because we can mess it up and nothing happens to the perception of things that
+[354.760 --> 355.760] aren't faces.
+[355.760 --> 356.760] Absolutely.
+[356.760 --> 357.760] What else?
+[357.760 --> 362.800] We talked about it very briefly a few weeks ago.
+[362.800 --> 363.800] Yeah.
+[363.800 --> 369.280] So if you remove the region, or lesion it, it just completely makes a person incapable
+[369.280 --> 370.280] of perceiving faces.
+[370.280 --> 372.800] That is, like, prosopagnosia.
+[372.800 --> 376.200] Yes, but the crucial way, yes.
+[376.200 --> 381.200] And the crucial way to address Haxby would be what further aspect of that?
+[381.200 --> 382.200] Yes.
+[382.200 --> 386.960] And by the way, we don't remove the area in humans, but occasionally we find a human who
+[386.960 --> 390.040] had a lesion there due to a stroke and then we study them.
+[390.040 --> 392.200] So they can still do other categories?
+[392.200 --> 393.200] Exactly.
+[393.200 --> 394.200] Exactly.
+[394.200 --> 401.120] So all three lines of evidence, from studies of prosopagnosia, electrical stimulation
+[401.120 --> 405.760] directly on the brain and TMS, all can provide evidence to various degrees.
+[405.760 --> 410.040] Again, one can quibble about each of these particular studies, but all of those suggest
+[410.040 --> 414.600] that even though there's information in the pattern, Haxby's right, there's information
+[414.600 --> 419.480] in there about other things that aren't faces, the only causal effects when you mess
+[419.480 --> 423.240] with that region are on faces, not on other things.
+[423.240 --> 427.360] That suggests that pattern information is, what they sometimes say in philosophical circles,
+[427.360 --> 429.000] epiphenomenal.
+[429.000 --> 435.200] That is, it's just not related to behavior and perception.
+[435.200 --> 436.200] Does that make sense?
+[436.200 --> 437.200] Okay.
+[437.200 --> 438.800] Moving along.
+[438.800 --> 443.480] How can we then use Haxby's method to not just engage in this little fight about the
+[443.480 --> 449.680] FFA and how specific it is, but to harness this method and ask other
+[449.680 --> 452.760] interesting questions from functional MRI data?
+[452.760 --> 458.320] How can we use it to find out, for example, does the place area discriminate, say, beach
+[458.320 --> 459.640] scenes from city scenes?
+[459.640 --> 461.080] We want to know what's represented in there.
+[461.080 --> 464.760] How can we use this method to find out?
+[464.760 --> 465.760] Yes, Jimmy?
+[465.760 --> 471.760] You would want to, like, train a decoder and see if it can
+[471.760 --> 482.440] decode, like, the difference between the city scenes and the beach
+[482.440 --> 483.440] scenes?
+[483.440 --> 484.440] Exactly.
+[484.440 --> 485.440] Exactly.
+[485.440 --> 490.480] So, we talked about decoding methods last time as a way to use machine learning to look
+[490.480 --> 495.520] at the pattern of response in a region of the brain and train the decoder so it knows
+[495.520 --> 500.360] what the response looks like during viewing of beach scenes, train
+[500.360 --> 503.920] it so it knows what the response in that region looks like when you're looking at city
+[503.920 --> 508.440] scenes, and then take a new pattern and say, okay, is this more like the beach pattern or
+[508.440 --> 510.040] is it more like the city pattern?
+[510.040 --> 512.240] And that's how you could decode from that region.
+[512.240 --> 513.240] Yes?
+[513.240 --> 514.240] That doesn't tell us much, right?
+[514.240 --> 515.240] It doesn't tell.
+[515.240 --> 516.240] It's not telling the...
+[516.240 --> 522.160] I mean, we know that there's residual information; a discriminator can be better
+[522.160 --> 525.280] than chance in any region, considering any problem.
+[525.280 --> 526.280] So...
+[526.280 --> 528.040] We have a true nihilist here.
+[528.040 --> 531.160] No, it's a good question.
+[531.160 --> 536.440] It's not the case that you can discriminate anything based on any region of the brain.
+[536.440 --> 537.880] So there are some constraints.
+[537.880 --> 541.640] There's some things you can find in some places and other things you can find in other
+[541.640 --> 544.560] places, and they're not uniformly distributed over the brain.
+[544.560 --> 546.040] However, the fact we just...
+[546.040 --> 550.520] The point I just made about, yes, there's discriminative information in the face area
+[550.520 --> 557.440] about non-faces, but maybe it's not used, should raise a huge caveat about this whole method.
+[557.440 --> 558.720] How do we ever know?
+[558.720 --> 560.800] We see some discriminative information.
+[560.800 --> 564.840] How do we know whether it's actually used by the brain, part of the brain's own code
+[564.840 --> 570.440] for information, or just epiphenomenal garbage that's a byproduct of something else?
+[570.440 --> 573.560] It's a really important question about all of pattern analysis.
+[574.080 --> 578.760] We do it anyway because we're beggars, we can't be choosers in terms of methods with human
+[578.760 --> 579.920] cognitive neuroscience.
+[579.920 --> 583.160] And we want to know desperately what's represented in each region.
+[583.160 --> 584.160] So we do this.
+[584.160 --> 589.840] But whenever you see these lovely "I can decode X from Y" things, you should always be wondering:
+[589.840 --> 594.480] who knows if the fact that you, the scientist, can decode it from that region means the
+[594.480 --> 598.480] brain itself is reading that information out of that region.
+[598.480 --> 600.040] Big important question.
+[600.040 --> 601.040] Okay.
+[602.040 --> 604.280] All right, put another way.
+[604.280 --> 607.880] So Jimmy mentioned just decoding in general, and that's absolutely right.
+[607.880 --> 611.640] But to directly harness the Haxby version of this, what would we do?
+[611.640 --> 615.280] First, we would functionally localize the PPA.
+[615.280 --> 619.560] By scanning them, looking at scenes and objects, find that region in each subject.
+[619.560 --> 624.560] Then we would collect the pattern of response across voxels in the PPA while subjects were
+[624.560 --> 626.560] looking at, say, beach scenes.
+[626.560 --> 630.960] And so if this is the PPA, this is the pattern of response across voxels in that region.
+[630.960 --> 636.040] When they're looking at beach scenes, fake data, obviously, just to give you the idea.
+[636.040 --> 639.600] So we would split the data in half, even runs, odd runs.
+[639.600 --> 641.160] That would be like even runs.
+[641.160 --> 644.000] Then we get another pattern for odd runs.
+[644.000 --> 648.520] And then we get another pattern for when they're looking at city scenes with even runs
+[648.520 --> 653.800] and another pattern when they're looking at city scenes in odd runs.
+[653.800 --> 659.280] So then, once we have those four patterns, what is the key prediction?
+[659.280 --> 663.840] If using Haxby's correlation method, what is the key prediction?
+[663.840 --> 668.400] If the pattern of response in the PPA can discriminate beach scenes from city
+[668.400 --> 671.240] scenes, what should we see from these patterns?
+[671.240 --> 675.560] What's the key prediction?
+[675.560 --> 677.920] Claire.
+[677.920 --> 678.920] Key prediction.
+[678.920 --> 681.320] You have these four patterns in the PPA.
+[681.320 --> 685.480] Now you want to know, is there information in there that enables you to discriminate
+[685.480 --> 686.800] beach scenes from city scenes?
+[686.800 --> 692.040] Is it that, like, beach-even and beach-odd are more similar than beach-even and city-odd?
+[692.040 --> 693.040] Exactly.
+[693.040 --> 694.040] Exactly.
+[694.040 --> 698.880] It actually, it sounds all complicated and it's easy to get confused, but the nub of the
+[698.880 --> 700.320] idea is really simple.
+[700.320 --> 703.040] It just says, look, the beach patterns are stable.
+[703.040 --> 706.800] We do beach a few times, we get the same pattern, more or less.
+[706.800 --> 711.280] We do city, we get a different pattern, and we keep doing city, we get the same pattern,
+[711.280 --> 716.080] more or less, and the beach pattern and the city pattern are different.
+[716.080 --> 720.320] So that's the nub of the idea, and so you can implement it with decoding methods, or the
+[720.320 --> 727.360] Haxby version is just to ask whether the correlation between two beach patterns, beach-even and
+[727.360 --> 735.440] beach-odd, is higher than the correlation between one of the beaches and one of the cities.
+[735.440 --> 740.600] Just asking, are they more, are they stably similar within a category and stably different
+[740.600 --> 741.800] from another category?
+[741.800 --> 744.120] Does that make sense?
+[744.360 --> 745.360] Okay.
+[745.360 --> 750.160] Boom, this is just a variant of this thing I showed you guys before.
+[750.160 --> 755.080] We just harness this to ask whether that region can discriminate.
+[755.080 --> 757.160] Okay, and I just said all of this.
+[757.160 --> 761.440] Okay, if you still feel shaky on this, there's a few things you can do.
+[761.440 --> 767.600] A version of my little lecture on this method is here at my website.
+[767.600 --> 770.760] You can look at that, it's just like six minutes and it's basically what I did before, but
+[770.760 --> 773.120] if you want to go over it again, there it is.
+[773.120 --> 777.040] You can reread the Haxby paper, which I know is not super easy, but it's actually nicely
+[777.040 --> 780.600] written, and if you read it carefully, it explains the method pretty clearly.
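[Editor's illustration: to make the split-half logic above concrete, here is a minimal Python sketch of the correlation method just described. Everything in it is a hypothetical stand-in: the "voxel patterns" are simulated random vectors rather than real PPA responses, and the names, voxel count and noise level are illustrative assumptions, not part of the lecture materials (the course's own problem sets use MATLAB).]

```python
# Minimal sketch of the Haxby-style split-half correlation analysis,
# on simulated stand-in data. All names and numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 100

def noisy(signal):
    """One measured pattern: the category's true pattern plus run noise."""
    return signal + rng.normal(scale=0.5, size=n_voxels)

beach_signal = rng.normal(size=n_voxels)  # stable "beach" voxel pattern
city_signal = rng.normal(size=n_voxels)   # stable "city" voxel pattern

beach_even, beach_odd = noisy(beach_signal), noisy(beach_signal)
city_even, city_odd = noisy(city_signal), noisy(city_signal)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]  # Pearson correlation of two patterns

# Key prediction: within-category correlations beat between-category ones.
within = (corr(beach_even, beach_odd) + corr(city_even, city_odd)) / 2
between = (corr(beach_even, city_odd) + corr(city_even, beach_odd)) / 2
print(f"within={within:.2f}  between={between:.2f}  "
      f"discriminates={within > between}")

# The decoder version of the same idea: assign a new pattern to whichever
# category's training pattern it correlates with more.
new_pattern = noisy(beach_signal)
guess = "beach" if corr(new_pattern, beach_even) > corr(new_pattern, city_even) else "city"
print("decoded as:", guess)
```

[With noise this size the within-category correlations come out clearly higher than the between-category ones, which is exactly the "stably similar within a category, stably different from another category" signature described above.]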
+[780.600 --> 784.680] You can talk to me or a TA, and we'll get back to this question of whether we should do
+[784.680 --> 788.320] a whole MATLAB-based problem set on this.
+[788.320 --> 789.320] All right?
+[789.320 --> 793.320] Okay, let's move on and talk about navigation.
+[793.320 --> 797.920] Okay, this is a monarch butterfly.
+[797.920 --> 801.160] It weighs about half a gram.
+[801.160 --> 809.000] And yet, each fall, the monarch migrates over 2,000 miles from the USA and Canada down
+[809.000 --> 810.760] to Mexico.
+[810.760 --> 817.120] In fact, a single monarch flies 50 miles in a single day.
+[817.120 --> 821.440] That's pretty amazing for this tiny, beautiful, delicate little thing.
+[821.440 --> 826.160] Even more amazing, it flies to a very specific forest in Mexico.
+[826.160 --> 830.960] It's just a few acres in size, and it arrives at that particular forest.
+[830.960 --> 836.360] Now that's already amazing, but here's the part that is just totally mind-blowing.
+[836.360 --> 842.640] It flies back north in the spring, and this whole cycle
+[842.640 --> 846.080] takes four generations to complete.
+[846.080 --> 850.480] And that means that the monarch that starts up in Canada and flies down to that forest
+[850.480 --> 858.080] in Mexico, one monarch does that, is the great-great-grandkid of the ancestor that last
+[858.080 --> 860.600] went on that route.
+[860.600 --> 863.400] Put that in your head and smoke it.
+[863.400 --> 864.920] That's pretty amazing.
+[864.920 --> 865.920] Okay?
+[865.920 --> 869.120] All right.
+[869.120 --> 872.640] Consider the female loggerhead turtle.
+[872.640 --> 879.840] She hatches at a beach and goes out in the sea and swims around in the sea for 20 years
+[879.840 --> 884.680] before she comes back, 20 years later, for the first time, to the beach that she hatched
+[884.680 --> 885.680] at.
+[885.680 --> 888.760] Okay?
+[888.760 --> 893.040] Now that's pretty amazing, but some mothers miss by 20 miles.
+[893.040 --> 896.840] They go to the wrong island or the wrong beach on the same island.
+[896.840 --> 897.840] Okay?
+[897.840 --> 900.760] And so you might think, okay, it's pretty good.
+[900.760 --> 902.560] It's not amazing.
+[902.560 --> 904.200] But here's the thing.
+[904.200 --> 909.440] The wrong beach that those mothers go to is exactly the right beach, had the Earth's magnetic
+[909.440 --> 912.800] field not shifted slightly over those 20 years.
+[912.800 --> 917.120] They're precisely on target; they just don't compensate for the shift in the Earth's
+[917.120 --> 920.120] magnetic field.
+[920.120 --> 922.120] Okay?
+[922.120 --> 923.440] Here's a bat.
+[923.440 --> 930.000] This bat maintains its sense of direction even while it flies 30 to 50 miles in a single
+[930.000 --> 933.400] night in the dark, catching food.
+[933.400 --> 934.400] Okay?
+[934.400 --> 938.400] And it maintains its sense of direction even though it's flying around in all different
+[938.400 --> 941.240] orientations in three dimensions.
+[941.240 --> 949.120] And even as it flips over and lands to perch on the surface of a cave, it doesn't get confused
+[949.120 --> 951.400] by being upside down.
+[951.400 --> 953.680] Okay?
+[953.680 --> 956.920] This is Cataglyphis, the Tunisian desert ant.
+[956.920 --> 958.000] These guys are amazing.
+[958.000 --> 964.200] They crawl around on the surface of the Tunisian desert, where it's 140 degrees in the daytime.
+[964.200 --> 967.120] They have to crawl around up there to forage for food.
+[967.120 --> 971.080] And then, because it's so damn hot, as soon as they find food, they zoom back to their nest
+[971.080 --> 973.760] and go down in the nest where it's cooler.
+[973.760 --> 979.920] So here is a track of Cataglyphis starting at point A and foraging.
+[979.920 --> 986.160] He's meandering around looking for food, going along this whole crazy path to point B.
+[986.160 --> 992.440] And then, if he finds food at point B, boom, straight line back exactly to the nest.
+[992.440 --> 998.040] Now we might ask, how does Cataglyphis keep track, as he's doing all this stuff, of where
+[998.040 --> 1001.600] his heading is back to his nest?
+[1001.600 --> 1005.440] The first thing you might think of is things like what it looks like. Maybe it's landmarks,
+[1005.440 --> 1009.280] maybe it's odors. But no.
+[1009.280 --> 1010.840] He doesn't use any of those things.
+[1010.840 --> 1017.400] And we know that because when the scientists who have set up this measurement device capture
+[1017.400 --> 1022.400] Cataglyphis, after he goes out on this tortuous path and finds the feeding station, they
+[1022.400 --> 1026.680] capture him and move him across the desert, on which they've drawn all these grid lines
+[1026.680 --> 1028.600] for the convenience of their experiment.
+[1028.600 --> 1030.120] And they release him here.
+[1030.120 --> 1031.920] And what does Cataglyphis do?
+[1031.920 --> 1036.800] He goes on the exactly correct vector.
+[1036.800 --> 1039.800] No landmarks, no relevant odors.
+[1039.800 --> 1045.080] And yet he's obviously encoded the exact vector of how to get home.
+[1045.080 --> 1047.160] Think about what that entails and what's involved.
+[1047.160 --> 1048.160] Okay.
+[1048.160 --> 1050.160] The same vector with respect to...
+[1050.160 --> 1051.160] North?
+[1051.160 --> 1052.680] With respect to, yes.
+[1052.680 --> 1053.680] Yes.
+[1053.680 --> 1060.360] With respect to, like, absolute external direction, absolutely.
+[1060.360 --> 1062.640] Okay.
+[1062.640 --> 1065.680] So that's what I just said.
+[1065.680 --> 1070.920] So these feats of animal navigation are amazing.
+[1070.920 --> 1075.720] And animals have evolved ways to solve all these problems unique to their environment.
+[1075.720 --> 1080.920] They've evolved these abilities because they really have to be able to find food and
+[1080.920 --> 1084.360] mates and shelter.
+[1084.360 --> 1088.000] And this is not just esoterica in the natural world.
+[1088.000 --> 1095.800] MIT students too need to be able to find food and mates and shelter.
+[1095.800 --> 1101.120] So what is navigation anyway, and what does it entail?
+[1101.120 --> 1105.080] Well, I'll argue over the next two lectures that there are two fundamental questions that
+[1105.080 --> 1108.400] organisms need to solve to be able to navigate.
+[1108.400 --> 1110.840] The first one is: where am I?
+[1110.840 --> 1115.800] And the second one is: how do I get from here to there, A to B, wherever there is that
+[1115.800 --> 1116.920] you need to get?
+[1116.920 --> 1117.920] Okay.
+[1117.920 --> 1118.920] So we'll unpack this.
+[1118.920 --> 1121.200] There are many different facets of each.
+[1121.200 --> 1128.600] But, for example, if you see this image, you immediately know where you are.
+[1128.600 --> 1131.240] And you also know where to go.
+[1131.240 --> 1136.720] If, for example, it starts raining, you might rush into Lobby 7.
+[1136.720 --> 1143.280] Or if you're hungry, you might turn around and go back to the Student Center.
+[1143.280 --> 1144.360] Same deal here.
+[1144.360 --> 1148.760] If you see this, then you know where you are and where you would go to get to various
+[1148.760 --> 1150.560] things.
+[1150.560 --> 1151.560] Okay.
+[1151.560 --> 1156.400] Now these judgments rely on the specific knowledge you guys have of those particular places.
+[1156.400 --> 1158.360] You recognize that exact place.
+[1158.360 --> 1162.080] And you have some kind of map in your head, which we'll talk more about in a moment,
+[1162.080 --> 1165.640] that tells you where everything else is with respect to it.
+[1165.640 --> 1170.680] But even if you're in a place you don't know at all, you can still extract some information.
+[1170.680 --> 1176.720] So suppose you miraculously found yourself, boom, here. I wouldn't mind, actually, but
+[1176.720 --> 1179.200] that's not in the cards for a while.
+[1179.200 --> 1180.760] So you're here.
+[1180.760 --> 1184.720] Even if you've just hiked around the corner, even if you've never seen this place before, you
+[1184.720 --> 1187.960] have some kind of idea of what sort of place this is.
+[1187.960 --> 1190.320] Where would you pitch your tent?
+[1190.320 --> 1193.280] Where might you try to go to get out of this valley?
+[1193.280 --> 1197.840] I have friends who would go straight up there and try to drag me along,
+[1197.840 --> 1198.840] complaining.
+[1198.840 --> 1201.840] If it were me, I'd rather look for some other route.
+[1201.840 --> 1205.080] But you can tell all of that just by looking at this image.
+[1205.080 --> 1206.080] Okay.
+[1206.080 --> 1207.080] Okay.
+[1207.080 --> 1210.680] You can tell where you can go from there, not just what kind of a place it is, but what the possible
+[1210.680 --> 1212.680] routes are that you might take.
+[1212.680 --> 1213.680] Okay.
+[1213.680 --> 1217.960] So these fundamental problems that we solve in navigation, knowing where am I and
+[1217.960 --> 1221.120] how do I get from here to there,
+[1221.120 --> 1223.000] include multiple components.
+[1223.000 --> 1228.920] In terms of where am I, the first piece is recognizing a specific place you know.
+[1228.920 --> 1229.920] Okay.
+[1229.920 --> 1232.960] So you might open your eyes and say, okay, this is my living room.
+[1232.960 --> 1234.920] I know this particular place.
+[1234.920 --> 1235.920] Okay.
+[1235.920 --> 1240.320] But as I just pointed out, even if the place is unfamiliar, we can get a sense of what kind
+[1240.320 --> 1241.320] of place it is.
+[1241.320 --> 1242.320] Right?
+[1242.320 --> 1246.000] Am I in an urban environment, a natural environment, a living room, a bathroom? Where
+[1246.000 --> 1248.160] am I?
+[1248.160 --> 1250.160] A third aspect of where am I,
+[1250.160 --> 1254.760] a third way that we might answer that question, is something about the geometry of the environment
+[1254.760 --> 1256.000] we're in.
+[1256.000 --> 1258.760] So try this right now: close your eyes.
+[1258.760 --> 1259.760] Okay.
+[1259.760 --> 1263.760] Now think about how far the wall is in front of you.
+[1263.760 --> 1264.840] Don't open your eyes.
+[1264.840 --> 1266.880] Just think about how far away it is.
+[1266.880 --> 1270.640] How far away the left wall is, and the right wall.
+[1270.640 --> 1271.960] And how about the wall behind you?
+[1271.960 --> 1273.040] Don't open your eyes.
+[1273.040 --> 1276.840] How far back is the wall behind you from where you are right now?
+[1276.840 --> 1277.840] Okay.
+[1277.840 --> 1278.840] You can open your eyes.
+[1278.840 --> 1279.840] It's not rocket science.
+[1279.840 --> 1284.680] I just wanted you to intuit that even though you're presumably riveted by this lecture
+[1284.680 --> 1289.680] and thinking only about navigation, you sort of have a kind of situational awareness of
+[1289.680 --> 1292.720] the spatial layout of the space you're in.
+[1292.720 --> 1297.720] So you might have a sense of, okay, I'm in a space like this and I'm over here in it.
+[1297.720 --> 1298.720] Right?
+[1298.720 --> 1304.000] And we'll talk more about that exact kind of awareness of your position relative to the
+[1304.000 --> 1306.360] spatial layout of your immediate environment.
+[1306.360 --> 1310.480] It's something that's very important in navigation.
+[1310.480 --> 1315.240] And another part of that is, if you think how would I get out of here, if I'm seriously
+[1315.240 --> 1319.760] bored by the lecture or for any other reason urgently need to get out of here, you probably
+[1319.760 --> 1322.240] know exactly where the doors are in the space.
+[1322.240 --> 1327.040] It's just one of those things that we keep track of, okay?
+[1327.040 --> 1328.040] Okay.
+[1328.040 --> 1331.120] So those are aspects of "where am I in this place?"
+[1331.120 --> 1335.120] What are the things we need to know to know how we get from here to someplace else?
+[1335.120 --> 1336.120] Okay.
+[1336.200 --> 1341.960] Well, the simplest way to navigate to another location, another goal, is called beaconing.
+[1341.960 --> 1346.520] And this is the case where you can directly see or hear your target location.
+[1346.520 --> 1349.960] So you're sailing in the fog, you can't see a damn thing, but you hear the foghorn
+[1349.960 --> 1354.800] over there, and you know you're sailing to that point, so you just go toward the sound.
+[1354.800 --> 1355.800] Nice and simple.
+[1355.800 --> 1357.320] You don't need any broader map of anything else.
+[1357.320 --> 1361.200] You just hear it and head toward it.
+[1361.440 --> 1368.040] Or if you see this and your goal is to get to the Green Building, well, you know there's
+[1368.040 --> 1370.040] the Green Building and you just head that way.
+[1370.040 --> 1373.560] Now you're going to have to go around a little bit to get around those obstacles,
+[1373.560 --> 1376.840] but you know where to head because you can see your target directly.
+[1376.840 --> 1381.880] Okay, these are cases where you don't need broader long-term knowledge of the whole
+[1381.880 --> 1382.880] environment.
+[1382.880 --> 1385.680] If you can see your target, you just go straight for it.
+[1385.680 --> 1386.680] Okay.
+[1386.680 --> 1388.000] So that's beaconing.
+[1388.000 --> 1394.440] It's a kind of A-to-B navigation that requires no mental map, no kind of internal model of the whole
+[1394.440 --> 1396.680] world you're navigating in.
+[1396.680 --> 1402.040] But if you can't see the place you want to go, then you need some kind of mental map
+[1402.040 --> 1403.280] of the world.
+[1403.280 --> 1405.760] So what do we mean by a mental map of the world?
+[1405.760 --> 1412.040] Well, this idea was first articulated in a classic experiment way back in the 1940s.
+[1412.040 --> 1416.960] So this was actually one of the original experiments that launched the Cognitive Revolution,
+[1416.960 --> 1423.840] when we emerged from the scourge of behaviorism to realize it was actually okay, and indeed
+[1423.840 --> 1427.680] of the essence, to talk about what's going on in the mind.
+[1427.680 --> 1433.480] And a really influential study that launched the Cognitive Revolution, by Tolman, was done
+[1433.480 --> 1435.240] on rats, and it went like this.
+[1435.240 --> 1436.240] He trained rats.
+[1436.240 --> 1442.040] He put them down in this area and they had to learn that there would be food out there
+[1442.040 --> 1443.040] at the goal.
+[1443.040 --> 1447.120] So they just have to make this series of left and right turns to find the food.
+[1447.120 --> 1451.560] Okay, so you train them on that for a while till they're really good at it.
+[1451.560 --> 1454.360] And then he put the rats in this environment.
+[1454.360 --> 1460.400] Okay, now the environment is similar except there are multiple paths,
+[1460.400 --> 1463.320] one of which seems analogous to the old route.
+[1463.320 --> 1465.720] So what do the rats do in this situation?
+[1465.720 --> 1470.880] They run down here, they run into a wall, and they realize, okay, that's not going to work.
+[1470.880 --> 1473.280] Okay, no surprises yet.
+[1473.280 --> 1481.600] But then the rats immediately come back out and they go straight out that way.
+[1481.600 --> 1484.040] What does that tell you?
+[1484.040 --> 1485.640] What did they learn?
+[1485.640 --> 1489.280] Did they learn a series of, like, go straight and then left and then right and then right
+[1489.280 --> 1491.360] and then go for a long way?
+[1491.360 --> 1494.000] No, that wouldn't work over here.
+[1494.000 --> 1496.480] They learned something much more interesting.
+[1496.480 --> 1501.400] Even though they were only being trained on this task here, they learned some much more
+[1501.400 --> 1507.240] interesting thing: the kind of vector average of all of those turns.
+[1507.240 --> 1508.240] Everybody get this?
+[1508.240 --> 1511.000] It's really simple but really deep.
+[1511.000 --> 1516.000] Okay, so from this, Tolman and others started talking about cognitive maps:
+[1516.000 --> 1520.920] whatever it is you have to have learned in a situation like this so you can abstract
+[1520.920 --> 1522.280] the general direction.
+[1522.280 --> 1523.280] Okay?
+[1523.280 --> 1528.600] We don't just learn specific routes as a series of stimuli and responses.
+[1528.600 --> 1533.560] Okay, so there must be some kind of map in your head to be able to do this.
+[1533.560 --> 1536.840] And rats have that, and so do you.
+[1536.840 --> 1539.160] So let's consider this question right now.
+[1539.160 --> 1540.160] Where am I?
+[1540.160 --> 1542.280] Where are you?
+[1542.280 --> 1546.840] To answer that question to yourself, there's something like this in your head.
+[1546.840 --> 1551.240] And it probably doesn't look exactly like that in your head, but there's some version of
+[1551.240 --> 1556.440] this information that's in your head that you're using when you answer the question of
+[1556.440 --> 1558.040] where you are.
+[1558.040 --> 1559.280] Okay?
+[1559.280 --> 1565.640] And you have some way to say, in that map of the world, I know not just what the MIT
+[1565.640 --> 1569.600] campus looks like and how it's arranged, but I know where I am in it.
+[1569.600 --> 1572.800] Okay?
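One simple way to describe what Cataglyphis, and arguably Tolman's rats, must be computing is path integration: keep a running sum of your displacement vectors, and the reverse of that sum points straight back to the start. Here's a minimal sketch in Python; the function name and the toy path are made up for illustration.

```python
import math

def home_vector(steps):
    """Given (heading_degrees, distance) moves away from the nest, return the
    (heading_degrees, distance) of the straight-line path back to it."""
    x = y = 0.0
    for heading, dist in steps:
        # Accumulate each move as a displacement vector.
        x += dist * math.cos(math.radians(heading))
        y += dist * math.sin(math.radians(heading))
    # Point back at the origin: reverse the net displacement.
    heading_home = math.degrees(math.atan2(-y, -x)) % 360
    return heading_home, math.hypot(x, y)

# A meandering outbound foraging path: many turns, one running sum.
outbound = [(0, 3.0), (90, 2.0), (45, 4.0), (180, 1.0), (135, 2.5)]
print(home_vector(outbound))  # one straight vector back to the nest
```

Nothing about the individual turns needs to survive; only the running vector sum does, which is how a single homing vector can come out of an arbitrarily crazy outbound path.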
+[1572.800 --> 1576.800] Now if you want to know how to get somewhere else, like suppose you're hungry and you
+[1576.800 --> 1580.760] want to go over to the Stata cafeteria over there.
+[1580.760 --> 1585.280] What else do you need to know, besides knowledge of the map of your environment and where
+[1585.280 --> 1586.880] you are in it?
+[1586.880 --> 1591.680] What else do you need to know?
+[1591.680 --> 1595.880] You have this map, you know where you are, and you know where your goal is.
+[1595.880 --> 1597.800] Now you have to plan how to get over there.
+[1597.800 --> 1599.560] What else do you need to know?
+[1599.560 --> 1600.560] Yeah.
+[1600.560 --> 1604.320] You have to know which parts are, like, paths and which parts are buildings?
+[1604.320 --> 1605.320] Yes, exactly.
+[1605.320 --> 1606.800] Where can you go in there?
+[1606.800 --> 1609.360] Where can you actually physically get through?
+[1609.360 --> 1613.520] Like, actually, the vector is right over there, but you can't go that way because you can't
+[1613.520 --> 1616.800] go through that glass, even though you can see through it.
+[1616.800 --> 1622.120] So knowledge of physical barriers, and what's an actual path and what isn't, is crucial.
+[1622.120 --> 1625.240] What else do you need to know?
+[1625.240 --> 1629.840] Suppose we had a robot in this room, sitting right here facing the front of the room like you
+[1629.840 --> 1633.760] guys, and we're programming the robot on how to get over there.
+[1633.760 --> 1638.160] What are other things we'd have to tell the robot to get it to plan how to get over to
+[1638.160 --> 1640.600] the Stata cafeteria?
+[1640.600 --> 1642.600] Yeah.
+[1642.600 --> 1643.600] Absolutely.
+[1643.600 --> 1650.800] We'd have to know about obstacles, moving obstacles, not just fixed ones.
+[1650.800 --> 1651.800] Absolutely.
+[1651.800 --> 1652.800] What else?
+[1652.800 --> 1653.800] Yeah.
+[1653.800 --> 1654.800] Yes.
+[1654.800 --> 1656.800] Yes, you have to know which way it's headed.
+[1656.800 --> 1660.600] Right, you're going to give this robot instructions on which way to go.
+[1660.600 --> 1665.280] It matters a whole lot if the robot is starting like this or starting like that.
+[1665.280 --> 1669.600] The instructions are different in the two cases, and likewise, for you guys to plan a
+[1669.600 --> 1673.640] route, you need to know which way you're heading.
+[1673.640 --> 1678.080] Have you guys ever been in Manhattan, where you come up from the subway and you see the streets
+[1678.080 --> 1681.360] going like this, and you know it's north-south, but you don't know if you're heading south
+[1681.360 --> 1682.960] or north, right?
+[1682.960 --> 1683.960] Really common thing.
+[1683.960 --> 1684.960] Okay.
+[1684.960 --> 1689.760] It's not enough to know I'm at the junction of Fifth and Twenty-second.
+[1689.760 --> 1693.400] You need to know I'm facing south or north; otherwise you can't figure out which way to
+[1693.400 --> 1694.400] go.
+[1694.400 --> 1696.320] That's called heading direction.
+[1696.320 --> 1697.320] Okay.
+[1697.320 --> 1698.320] Okay.
+[1698.320 --> 1699.960] We just did all that.
+[1699.960 --> 1700.960] Okay.
+[1700.960 --> 1702.760] You need to know your current heading.
+[1702.760 --> 1703.760] Okay.
+[1703.760 --> 1709.240] You also need to know the direction of your goal in order to plan a route to it.
+[1709.240 --> 1710.240] Okay.
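To make the heading-direction point concrete: the instruction you'd give the robot (or yourself) depends on your current heading, not just your position. A minimal sketch, with made-up coordinates; positive turns are counterclockwise.

```python
import math

def turn_to_goal(pos, heading_deg, goal):
    # Bearing from the current position to the goal, in degrees.
    bearing = math.degrees(math.atan2(goal[1] - pos[1], goal[0] - pos[0]))
    # Signed turn needed to face the goal, wrapped into [-180, 180).
    return (bearing - heading_deg + 180) % 360 - 180

# Same position, same goal, two different headings -> opposite instructions,
# like coming up from the subway unsure whether you're facing north or south.
print(turn_to_goal((0, 0), 0, (0, 10)))    #  90.0: turn left
print(turn_to_goal((0, 0), 180, (0, 10)))  # -90.0: turn right
```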
+[1710.240 --> 1715.000] So in this kind of taxonomy of all the things you need to know to navigate, we've just
+[1715.000 --> 1719.920] added that if you're going to navigate in your known environment, you need to know
+[1720.000 --> 1725.600] not just where you are in it but which way you are facing in that mental map.
+[1725.600 --> 1726.600] Okay.
+[1726.600 --> 1730.040] And we also talked about this business of what routes are possible from here.
+[1730.040 --> 1732.680] How do we move around obstacles?
+[1732.680 --> 1734.160] Where are the doors?
+[1734.160 --> 1738.080] Where are the hazards, like cars, etc.?
+[1738.080 --> 1742.600] A final thing you need to know: even if you have a good system for all of these
+[1742.600 --> 1746.920] other bits, it's still possible to get lost in all kinds of ways.
+[1746.920 --> 1750.720] If you lose track, you get confused, you get lost.
+[1750.720 --> 1754.720] So we also need a way to reorient ourselves when we're lost.
+[1754.720 --> 1757.720] And we'll talk a lot about that in the next lecture.
+[1757.720 --> 1758.720] Okay.
+[1758.720 --> 1762.160] So this is just common sense; we're doing a kind of low-tech version of a Marr-style computational
+[1762.160 --> 1764.160] theory of navigation.
+[1764.160 --> 1767.960] Like, what are the things that we would need to know, or that a robot would need to know,
+[1767.960 --> 1769.320] to be able to navigate?
+[1769.320 --> 1770.320] Okay.
+[1770.320 --> 1772.480] Just thinking about the nature of the problem.
+[1772.480 --> 1774.040] All right.
+[1774.040 --> 1775.760] So that's what we need.
+[1775.760 --> 1778.360] What's the neural basis of all of this?
+[1778.360 --> 1779.360] All right.
+[1779.360 --> 1782.200] So I'm going to start right in with the parahippocampal place area.
+[1782.200 --> 1786.120] Not to imply it is the total neural basis of this whole thing.
+[1786.120 --> 1788.560] It's just one little piece of a much bigger puzzle.
+[1788.560 --> 1791.240] But we'll start in there because it's nice and concrete.
+[1791.240 --> 1792.240] Okay.
+[1792.240 --> 1793.240] All right.
+[1793.240 --> 1797.000] So this story starts, oh god, about 20 years ago.
+[1797.000 --> 1800.920] I think I mentioned some of this in the first class when I talked about the story of Bob.
+[1800.920 --> 1805.360] Russell Epstein was my postdoc, and he was doing nice behavioral
+[1805.360 --> 1809.480] experiments and thought it was trashy and cheap to mess around with brain imaging, and he
+[1809.480 --> 1813.280] was going to have none of it, until I said, Russell, just do one experiment.
+[1813.280 --> 1814.600] Scan subjects looking at scenes.
+[1814.600 --> 1819.240] I know it's kind of stupid, but just do it and you'll have a slide for your job talk.
+[1819.240 --> 1823.160] And he scanned subjects looking at scenes and looking at objects.
+[1823.160 --> 1827.320] And here is one of those early subjects, probably me, I don't remember, with a bunch
+[1827.320 --> 1832.280] of vertical slices through the brain, near the back of the brain down there, moving forward
+[1832.280 --> 1833.680] as we go up to here.
+[1833.680 --> 1835.080] Everybody oriented?
+[1835.080 --> 1836.080] Okay.
+[1836.080 --> 1838.960] So, sorry, it's not showing up very well in this lighting.
+[1838.960 --> 1844.280] But there's a little bilateral region right in the middle there that shows a stronger response
+[1844.280 --> 1848.760] when people look at pictures of scenes than when they look at pictures of objects.
+[1848.760 --> 1849.760] Okay.
+[1849.760 --> 1852.560] So we hadn't predicted this.
+[1852.560 --> 1853.560] Yeah.
+[1853.560 --> 1854.560] Is the pink higher?
+[1854.560 --> 1855.560] Yeah.
+[1855.560 --> 1856.560] Yeah.
+[1856.560 --> 1860.680] All the colors are significance maps, P levels.
+[1860.680 --> 1865.480] So pink is higher than blue, but blue is borderline significant.
+[1865.480 --> 1866.800] Okay.
+[1866.800 --> 1868.400] So, this is kind of dopey.
+[1868.400 --> 1870.520] We didn't actually predict it for any deep reason.
+[1870.520 --> 1873.600] We hadn't been thinking about theories of navigation or anything like that.
+[1873.600 --> 1878.480] It was just one of those dumb experiments where we found something and we followed the data.
+[1878.480 --> 1882.080] So we found this and it's like, okay, let's try some other subjects.
+[1882.080 --> 1885.000] So here are the first nine subjects we scanned.
+[1885.000 --> 1891.320] Every single subject had that kind of signature response in exactly the same place,
+[1891.320 --> 1892.320] okay,
+[1892.320 --> 1895.840] in a part of the brain called parahippocampal cortex.
+[1895.840 --> 1896.840] Okay.
+[1896.840 --> 1900.760] So this is very systematic, and there are lots of ways to make progress in science.
+[1900.760 --> 1906.040] One way is to have a big theory and use it to motivate brilliant, elegantly designed experiments.
+[1906.040 --> 1910.280] And another is you just see something salient and robust that you didn't predict, and you
+[1910.280 --> 1912.480] follow your nose and try to figure it out.
+[1912.480 --> 1913.480] So that's what we did in this case.
+[1913.480 --> 1916.000] It's like, okay, what the hell is that?
+[1916.000 --> 1917.040] All right.
+[1917.040 --> 1922.920] So, we eventually called it the parahippocampal place area, after
+[1922.920 --> 1924.680] a little more work.
+[1924.680 --> 1928.920] If you think about what we have so far, we've scanned people looking at pictures like this
+[1928.920 --> 1930.720] and pictures like that.
+[1930.720 --> 1934.320] And what we've shown is that little patch of brain responds a bunch more to these than
+[1934.320 --> 1936.160] those.
+[1936.160 --> 1942.760] So my first question is: is that a minimal pair?
+[1942.760 --> 1944.760] Tali, is that a minimal pair?
+[1944.760 --> 1946.880] Sorry to put you on the spot.
+[1946.880 --> 1947.880] Sorry.
+[1947.880 --> 1948.880] Simple, simple, simple.
+[1948.880 --> 1950.880] We're contrasting this with that.
+[1950.880 --> 1951.880] Okay.
+[1951.880 --> 1958.720] A minimal pair is this thing we aspire to in experimental design, where we have two conditions
+[1958.720 --> 1961.880] that are identical except for one little thing we're manipulating.
+[1961.880 --> 1968.320] Well, I don't really think it's a minimal pair, but I'm not really sure.
+[1968.320 --> 1971.480] Well, I even told you what we were designing it to manipulate.
+[1971.480 --> 1975.640] There seem to be, like, too many differences between the blender and the living room.
+[1975.640 --> 1977.160] It's ludicrous.
+[1977.160 --> 1978.160] Right?
+[1978.160 --> 1981.160] I mean, there are a million differences here, right?
+[1981.160 --> 1983.320] So we don't know that we have anything yet.
+[1983.320 --> 1987.160] There are all kinds of uninteresting accounts of the systematic activation in that part of
+[1987.160 --> 1988.760] the brain, right?
+[1988.760 --> 1993.720] So just to list a few that you've probably already noticed: these things have rich, high-
+[1993.720 --> 1996.240] level meaning and complexity, right?
+[1996.240 --> 2003.440] So you could think about living rooms, or where you might sit, or somebody's aesthetic of home
+[2003.440 --> 2007.240] design. There's all kinds of stuff to think about there.
+[2007.240 --> 2009.560] Much more than just, okay, it's a blender, right?
+[2009.560 --> 2013.400] So there's just complexity in every possible way.
+[2013.400 --> 2019.560] There are also lots of objects present here and only a single object over there.
+[2019.560 --> 2023.080] So maybe that region just represents objects, and if you have more objects you get a higher
+[2023.080 --> 2026.120] signal, right?
+[2026.120 --> 2031.760] There's another possibility, and that is that these images depict spatial layout and that
+[2031.760 --> 2032.760] one does not.
+[2032.760 --> 2037.560] Okay, so you have some sense of the walls and the floor and the layout of the local
+[2037.560 --> 2041.560] environment here that you don't have over there, all right?
+[2041.560 --> 2044.320] And we could probably list a million other things, okay?
+[2044.320 --> 2046.800] It's a very, very sloppy contrast.
+[2046.800 --> 2051.680] Okay, so how are we going to ask which of these things might be driving the response of
+[2051.680 --> 2053.640] that region?
+[2053.640 --> 2058.840] Well, a natural thing to do is just deconstruct the stimuli.
+[2058.840 --> 2060.640] So here's what we did.
+[2060.640 --> 2062.480] This was actually way back, 20 years ago.
+[2062.480 --> 2066.360] There were better methods at the time, but I didn't know them, so I actually drove around
+[2066.360 --> 2071.520] Cambridge, photographed my friends' apartments, left the camera on the same tripod, moved
+[2071.560 --> 2074.080] all the furniture out of the way, and photographed the space again.
+[2074.080 --> 2077.200] Ha, ha, I know.
+[2077.200 --> 2081.800] And then we cut out the objects with some horrific version of Adobe Photoshop that
+[2081.800 --> 2083.640] existed 20 years ago.
+[2083.640 --> 2089.160] Anyway, we deconstructed the scenes into their component objects and the bare spatial layout.
+[2089.160 --> 2093.880] Okay, everybody get the logic here? Just to try to make a big cut in this hypothesis space
+[2093.880 --> 2096.400] of what might be driving that region.
+[2096.400 --> 2097.920] Okay.
+[2097.920 --> 2101.720] So what do we predict? How will the PPA respond?
+[2101.720 --> 2105.680] How strongly will it respond?
+[2105.680 --> 2107.960] Oops.
+[2107.960 --> 2111.000] How strongly will it respond if these two things are true?
+[2111.000 --> 2117.440] If it's complexity or multiplicity of objects that's driving it, what do you predict
+[2117.440 --> 2118.440] we will see over there?
+[2118.440 --> 2120.720] We already know you get a high response here.
+[2120.720 --> 2124.520] What do we get over there?
+[2124.520 --> 2125.520] Yeah.
+[2126.520 --> 2127.520] Yeah.
+[2127.520 --> 2129.200] Probably a bigger response from the objects than the scene.
+[2129.200 --> 2131.640] Yeah, it'd respond more to this than that, right?
+[2131.640 --> 2135.520] It's really simple-minded, right?
+[2135.520 --> 2141.960] If instead it responds more to the spatial layout, what do we predict then?
+[2141.960 --> 2144.560] It's going to respond to the empty rooms more?
+[2144.560 --> 2146.800] Yeah.
+[2146.800 --> 2149.960] And that seems like a weird hypothesis, because these are really boring.
+[2149.960 --> 2153.280] There's kind of nothing going on here, and there's just lots of stuff going on here.
+[2153.280 --> 2157.160] I mean, it's not riveting, but it's a whole lot more interesting
+[2157.160 --> 2158.640] to look at these than those.
+[2158.640 --> 2161.880] Believe me, I got scanned for hours and hours looking at these things.
+[2161.880 --> 2165.600] And whenever the empty rooms came on, I was like, oh my god, I'm just so bored, right?
+[2165.600 --> 2166.600] There's just nothing here.
+[2166.600 --> 2170.360] Whereas here, at least there's stuff, right?
+[2170.360 --> 2172.600] But that's not what the PPA thinks.
+[2172.600 --> 2178.160] What the PPA does... oops, oops, we just did the localizer.
+[2178.160 --> 2179.160] Okay.
+[2179.160 --> 2180.640] It responds like this.
+[2180.640 --> 2187.200] This is percent signal change, a measure of magnitude of response: high to the full scenes,
+[2187.200 --> 2192.160] way down to less than half of that for all those objects, and almost the same response
+[2192.160 --> 2199.160] as to the original scenes when all you have is the bare spatial layout.
+[2199.160 --> 2201.280] Pretty surprising, isn't it?
+[2201.280 --> 2202.280] We were blown away.
+[2202.280 --> 2203.520] We were like, what?
+[2203.520 --> 2206.320] What?
+[2206.320 --> 2211.040] But can you see how even this really simple-minded experiment enables us to pretty much
+[2211.040 --> 2213.240] rule out that whole space of hypotheses?
+[2213.240 --> 2217.840] It's not about the richness or interest or multiplicity of objects.
+[2217.840 --> 2221.320] It's something much more like spatial layout, because that's kind of all there is in those
+[2221.320 --> 2223.120] empty rooms.
+[2223.120 --> 2226.160] I mean, it could be something like, you know, the texture of wood floors or something
+[2226.160 --> 2228.560] weird like that.
+[2228.560 --> 2231.040] But one's first guess is it's something about spatial layout.
+[2231.040 --> 2232.040] Does this make sense?
+[2232.040 --> 2238.480] It's just a way to take a big sloppy contrast and try to formulate initial hypotheses and
+[2238.480 --> 2240.960] knock out a whole big space of hypotheses.
+[2240.960 --> 2241.960] Yes.
+[2241.960 --> 2242.960] Is it Alana?
+[2242.960 --> 2243.960] Yeah.
+[2243.960 --> 2245.960] Sorry, I had a question here about this point.
+[2245.960 --> 2248.960] So when we're looking at the empty room,
+[2248.960 --> 2249.960] we've already seen
+[2249.960 --> 2251.960] the full room.
+[2251.960 --> 2252.960] Ah, good question.
+[2252.960 --> 2255.240] I skipped over all of that.
+[2255.240 --> 2256.240] We did.
+[2256.240 --> 2257.800] We did, yes, that's true.
+[2257.800 --> 2261.560] We did mush them all together, and one could worry about that.
+[2261.560 --> 2267.200] When you see this, you remember that that's a version of this, right?
+[2267.200 --> 2268.200] Absolutely.
+[2268.200 --> 2271.000] Absolutely.
+[2271.000 --> 2276.560] And so maybe, yes. Nonetheless, that's absolutely true.
+[2276.560 --> 2282.800] But if what you were doing here is kind of mentally recalling this, right?
+[2282.800 --> 2286.000] Then why couldn't you also do that here?
+[2286.000 --> 2288.120] Maybe you could.
+[2288.120 --> 2292.680] You might argue that this is more evocative of that than this is, but it's also got lots
+[2292.680 --> 2294.280] of relevant information.
+[2294.280 --> 2295.280] Okay?
+[2295.280 --> 2296.280] Yeah, Jimmy.
+[2296.280 --> 2301.520] For example, did you guys try placing the objects in, like, the same exact positions
+[2301.520 --> 2303.880] as in the scene, and seeing if that matters?
+[2303.880 --> 2304.880] We did both versions.
+[2304.880 --> 2309.360] For exactly the reasons you guys are pointing out, and it didn't make a difference.
+[2309.360 --> 2310.360] Yeah.
+[2310.360 --> 2311.360] Yeah.
+[2311.360 --> 2312.360] Sorry, quickly.
+[2312.360 --> 2317.920] Maybe you've pointed to this already, but
+[2317.920 --> 2321.000] there's more stuff,
+[2321.000 --> 2326.200] like in the scenes there's more background; more of the image is covered.
+[2326.200 --> 2327.200] Totally.
+[2327.200 --> 2328.200] You're absolutely right.
+[2328.200 --> 2331.720] This has taken us pretty far, but it's still pretty sloppy.
+[2331.720 --> 2334.640] This stuff goes all the way out to the edge of the frame, and here there's lots of empty
+[2334.640 --> 2335.640] space.
+[2335.640 --> 2336.640] Is that what you're getting at?
+[2336.640 --> 2337.640] Absolutely.
+[2337.640 --> 2342.080] I took out those slides because I didn't want to spend the entire lecture doing millions
+[2342.080 --> 2343.440] of control conditions on the PPA.
+[2343.440 --> 2349.360] I thought you'd get bored. But actually, another version that we did was we took all
+[2349.360 --> 2353.480] of these conditions and we chopped them into little bits and rearranged the bits, so that
+[2353.480 --> 2360.280] you have much more coverage of stuff in the chopped-up scenes than the chopped-up objects.
+[2360.280 --> 2363.360] And in the chopped-up versions, it doesn't respond differently at all.
+[2363.360 --> 2366.080] So it's not the amount of total spatial coverage.
+[2366.080 --> 2369.520] It's something more like the actual depiction of space.
+[2369.520 --> 2371.640] Was there a question over there?
+[2371.640 --> 2377.480] I was wondering if there would be any difference between seeing an image, a 2D picture of a
+[2377.480 --> 2382.680] 3D scene, and actually being there inside the 3D scene.
+[2382.680 --> 2383.680] Totally.
+[2383.680 --> 2384.680] Totally.
+[2384.680 --> 2385.680] It's a real challenge.
+[2385.680 --> 2389.080] Navigation is very much about being there and moving around in the
+[2389.080 --> 2390.600] space.
+[2390.600 --> 2394.040] And this is just a pretty rudimentary thing where you're lying in the scanner and these
+[2394.040 --> 2398.680] images are just flashing on, and you're doing some simple task like pressing a button
+[2398.680 --> 2400.680] when consecutive images are identical.
+[2400.680 --> 2402.440] It's not like moving around in the real world.
+[2402.440 --> 2404.840] You don't think you're actually there.
+[2404.840 --> 2409.080] But here's where video games and VR come in.
+[2409.080 --> 2414.200] Because actually, they produce a pretty powerful simulation of knowing your environment,
+[2414.200 --> 2416.440] feeling you're in a place in it.
+[2416.440 --> 2421.720] And so lots of studies have used those methods to give something closer to the actual experience
+[2421.720 --> 2424.400] of navigation.
+[2424.400 --> 2427.000] Okay.
+[2427.000 --> 2428.000] So.
+[2428.000 --> 2429.000] So where are we so far?
+[2429.000 --> 2433.600] We've said the PPA seems to be involved in recognizing a particular scene...
+[2433.600 --> 2438.760] well, so far the data just say it responds to scenes, and maybe to something about spatial layout.
+[2438.760 --> 2443.840] Does it care about that particular scene?
+[2443.840 --> 2448.080] Do you have to recognize that particular scene to be able to use the information?
+[2448.080 --> 2452.200] Now, our subjects mostly didn't know those particular scenes, but we wanted to do a tighter
+[2452.200 --> 2457.080] contrast asking if knowledge of the particular scene matters.
+[2457.080 --> 2462.360] So what we did was we took a bunch of pictures around the MIT campus, and we took a bunch of
+[2462.360 --> 2464.920] pictures around the Tufts campus.
+[2464.920 --> 2470.680] And we scanned MIT students looking at MIT pictures versus Tufts pictures.
+[2470.680 --> 2473.680] And then what else do we do?
+[2473.680 --> 2475.880] Get the Tufts students too.
+[2475.880 --> 2477.280] Yeah, why?
+[2477.280 --> 2482.320] Oh, just to make sure that it's not all about the weird architecture.
+[2482.320 --> 2483.320] Exactly.
+[2483.320 --> 2484.320] Exactly.
+[2484.320 --> 2487.080] So this is called counterbalancing. Whose architecture is weird?
+[2487.080 --> 2489.600] I think ours is weirder.
+[2489.600 --> 2493.880] So it's not just about the particular scenes or the particular subjects.
+[2493.880 --> 2498.320] So everybody get how, with that counterbalanced design, you can really pull out the essence
+[2498.320 --> 2502.840] of familiarity itself, unconfounded from the particular images?
+[2502.840 --> 2503.840] Okay.
+[2503.840 --> 2511.560] So when we did that, we found a very similar response magnitude in the PPA, for the MIT and the Tufts students,
+[2511.560 --> 2514.400] for the familiar and the unfamiliar scenes.
+[2514.400 --> 2515.400] Okay.
+[2515.400 --> 2517.880] It really didn't make much difference.
+[2517.880 --> 2518.880] Yeah.
+[2518.880 --> 2524.440] Taking a step back: so we started off with the one question of navigation, involving
+[2524.440 --> 2526.440] all these different components.
+[2526.440 --> 2528.800] I just wonder where this all fits in.
+[2528.800 --> 2529.800] We're getting there.
+[2529.800 --> 2530.800] We're getting there.
+[2530.800 --> 2531.800] There won't be, like, a perfect answer.
+[2531.800 --> 2535.400] We're not going to end up with that slide with the exact brain region for each of those
+[2535.400 --> 2536.400] things.
+[2536.400 --> 2539.880] We'll just get some vague sense of what this is.
+[2539.880 --> 2540.880] Yeah.
+[2541.280 --> 2541.880] Okay.
+[2541.880 --> 2548.080] So this tells us: whatever the PPA is responding to in a scene, it's not
+[2548.080 --> 2550.640] something that hinges on knowing that exact scene.
+[2550.640 --> 2553.960] So it can't be something like, okay, if I was here and I wanted to get coffee, what would
+[2553.960 --> 2557.880] my route from this location be, given my knowledge of the environment?
+[2557.880 --> 2560.240] Because otherwise we wouldn't get this result.
+[2560.240 --> 2564.520] So whatever it is, it's something more immediate and perceptual, to do with just seeing this
+[2564.520 --> 2565.520] place.
+[2565.520 --> 2566.520] Okay.
+[2566.520 --> 2567.520] All right.
+[2567.520 --> 2568.520] All right.
+[2569.520 --> 2570.760] So where are we?
+[2570.760 --> 2576.080] We've said that there's this region that responds more to scenes than objects.
+[2576.080 --> 2581.080] That when all the objects are removed from the scenes, the response, you know, barely drops.
+[2581.080 --> 2582.320] Okay.
+[2582.320 --> 2587.080] And its response is pretty much the same for familiar and unfamiliar scenes.
+[2587.080 --> 2591.080] So all of that suggests that it's involved in something like perceiving the shape of
+[2591.080 --> 2592.360] space around you.
+[2592.360 --> 2596.800] It doesn't nail it yet, but it kind of pushes you towards that hypothesis.
+[2596.800 --> 2597.800] Yeah.
+[2597.800 --> 2598.800] I'm going to go here a second.
+[2598.800 --> 2599.800] Go ahead.
+[2599.800 --> 2600.800] No.
+[2600.800 --> 2601.800] Okay.
+[2601.800 --> 2604.440] No, but is it actually... does it respond when you're looking at a place from above?
+[2604.440 --> 2606.240] Oh, great question.
+[2606.240 --> 2608.240] Not very much.
+[2608.240 --> 2609.240] Okay.
+[2609.240 --> 2610.240] Yeah.
+[2610.240 --> 2616.560] If you take pictures of places from above versus this kind of view, you get a response
+[2616.560 --> 2619.560] to this kind of view, but not from above.
+[2619.560 --> 2620.560] Yeah.
+[2620.560 --> 2623.560] Very telling.
+[2623.560 --> 2624.560] Yeah.
+[2624.560 --> 2625.560] Okay.
+[2625.560 --> 2627.120] So I'm going to skip.
+[2627.120 --> 2629.880] We're not going to do, like, you know, the thirty other experiments.
+[2629.880 --> 2633.760] We're going to skip to the general picture. Here's the PPA in four subjects, in this
+[2633.760 --> 2635.760] very stereotyped location.
+[2635.760 --> 2638.560] And here are some of the many conditions we've tested.
+[2638.560 --> 2640.960] It's not just, you know... abstract maps like this,
+[2640.960 --> 2642.440] they don't produce a strong response.
+[2642.440 --> 2644.880] Oh, this is an answer to Koolie's question way back.
+[2644.880 --> 2647.400] Here's the scrambled-up scene: much lower response.
+[2647.400 --> 2651.880] So it's not just coverage of visual junk, right?
+[2651.880 --> 2656.320] And it responds pretty strongly to scenes made out of Legos compared to objects made out
+[2656.320 --> 2660.160] of Legos, and various other silly things.
+[2660.160 --> 2661.160] Okay.
+[2661.160 --> 2665.560] So all of that seems to suggest that it's processing something like the shape or geometry
+[2665.560 --> 2670.160] of space around you: visible space in your immediate environment.
+[2670.160 --> 2671.160] Okay.
+[2671.160 --> 2675.240] Nonetheless, there's always pushback.
+[2675.240 --> 2676.840] And there's pushback on multiple fronts.
+[2676.840 --> 2679.040] And there should be; that's proper science.
+[2679.040 --> 2684.800] So one of the lines of pushback was this paper by Nasr et al.
+[2684.800 --> 2685.800] that I didn't assign.
+[2685.800 --> 2687.200] I assigned you the response to it.
+[2687.200 --> 2693.480] Anyway, what Nasr et al. did was scan people looking at rectilinear things like cubes
+[2693.480 --> 2698.920] and pyramids versus curvilinear, roundy things like cones and spheres.
+[2698.920 --> 2704.960] And what they showed is the PPA responds more to the rectilinear than the curvilinear
+[2704.960 --> 2706.360] shapes.
+[2706.360 --> 2708.320] Okay.
+[2708.320 --> 2711.040] And okay, that's the first thing.
+[2711.040 --> 2717.600] And so then they argue that, in general, scenes have more rectilinear structure than curvilinear
+[2717.600 --> 2718.600] structure.
+[2718.600 --> 2721.160] And they did a bunch of math to make that case.
+[2721.160 --> 2729.640] And so they argue that maybe the apparent scene selectivity of the PPA is due to a... what?
+[2729.640 --> 2732.800] ...of scenes with rectilinearity.
+[2732.800 --> 2733.800] Yeah.
+[2734.800 --> 2735.800] Confound?
+[2735.800 --> 2736.800] Yes.
+[2736.800 --> 2737.800] Exactly.
+[2737.800 --> 2738.800] A confound.
+[2738.800 --> 2742.240] This is exactly what a confound is:
+[2742.240 --> 2746.520] something else that co-varies with the manipulation you care about that gives you an alternative
+[2746.520 --> 2747.520] account.
+[2747.520 --> 2749.400] Namely, okay, it's not scene selectivity.
+[2749.400 --> 2751.400] It's just rectilinearity.
+[2751.400 --> 2754.560] I mean, that might be interesting to other people, but it would make it not very relevant
+[2754.560 --> 2758.480] to navigation, and much less interesting to me, at least, right?
+[2758.480 --> 2759.480] Okay.
+[2759.480 --> 2761.960] So that's an important criticism.
+[2761.960 --> 2766.200] And so then the Bryan et al. paper that you guys read starts from there and says, okay,
+[2766.200 --> 2767.200] let's take that seriously.
+[2767.200 --> 2768.800] Let's find out.
+[2768.800 --> 2774.360] And so you guys should have read all of this, but just to remind you, they have a nice
+[2774.360 --> 2776.160] little 2x2 design.
+[2776.160 --> 2780.400] Remember we talked about 2x2 designs? They manipulate whether the image has a lot
+[2780.400 --> 2785.080] of rectilinear structure or less rectilinear structure, and whether the image is a place
+[2785.080 --> 2786.280] or a face.
+[2786.280 --> 2789.280] Okay.
+[2789.280 --> 2795.160] And what they find in the PPA is the same response to these, and it's higher to the scenes
+[2795.160 --> 2800.360] than the faces, and rectilinearity didn't matter for the scenes.
+[2800.360 --> 2801.360] Okay.
+[2801.360 --> 2806.440] So evidently, even though rectilinearity does matter with those abstract shapes, in actual scenes
+[2806.440 --> 2809.000] and faces it doesn't seem to be doing much.
+[2809.000 --> 2811.000] It's not accounting for this difference.
+[2811.000 --> 2812.000] Okay.
+[2812.000 --> 2814.000] Everybody get that?
+[2814.000 --> 2815.000] Okay.
+[2815.000 --> 2816.000] Okay.
+[2816.000 --> 2817.000] Let's talk about this graph.
+[2817.000 --> 2819.240] Are there main effects or interactions here?
+[2819.240 --> 2824.000] And what are those main effects or interactions?
+[2824.000 --> 2827.000] Yes?
+[2827.000 --> 2829.000] There's a main effect.
+[2829.000 --> 2830.000] Yeah.
+[2830.000 --> 2833.000] Of category, scene versus face.
+[2833.000 --> 2834.000] Yeah.
+[2834.000 --> 2835.000] Anything else?
+[2835.000 --> 2842.000] What's plotted there?
+[2842.000 --> 2845.000] What's the first thing?
+[2845.000 --> 2847.000] What's the first thing?
+[2847.000 --> 2848.000] Wait.
+[2848.000 --> 2850.000] These are scenes and those are faces.
+[2850.000 --> 2851.000] Okay.
+[2851.000 --> 2852.000] And this is the code here:
+[2852.000 --> 2856.000] these are rectilinear versus curvilinear.
+[2856.000 --> 2860.000] Just one main effect, or is there an interaction or another main effect?
+[2860.000 --> 2861.000] No.
+[2861.000 --> 2862.000] Just one main effect.
+[2862.000 --> 2863.000] Okay.
+[2863.000 --> 2864.000] Right?
+[2864.000 --> 2865.000] These guys are higher than those guys.
+[2865.000 --> 2866.000] That's it.
+[2866.000 --> 2867.000] Okay.
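To make "main effect" and "interaction" concrete, here's a tiny sketch of how you read them off the four cell means of a 2x2 design. The numbers are invented to mimic the pattern in this graph: scenes above faces, no rectilinearity effect, no interaction.

```python
# Made-up cell means for a 2x2 design: category x rectilinearity.
means = {("scene", "rect"): 1.0, ("scene", "curv"): 1.0,
         ("face", "rect"): 0.3, ("face", "curv"): 0.3}

# Main effect of category: average over rectilinearity, then compare.
category_effect = ((means["scene", "rect"] + means["scene", "curv"]) / 2
                   - (means["face", "rect"] + means["face", "curv"]) / 2)

# Main effect of rectilinearity: average over category, then compare.
rect_effect = ((means["scene", "rect"] + means["face", "rect"]) / 2
               - (means["scene", "curv"] + means["face", "curv"]) / 2)

# Interaction: does the rectilinearity difference depend on category?
interaction = ((means["scene", "rect"] - means["scene", "curv"])
               - (means["face", "rect"] - means["face", "curv"]))

print(round(category_effect, 3), round(rect_effect, 3), round(interaction, 3))
# 0.7 0.0 0.0 -- one main effect, nothing else
```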
+[2867.000 --> 2872.000] So that just tells you there's nothing else going on in these data other than scene selectivity.
+[2872.000 --> 2873.000] Okay.
+[2873.000 --> 2879.000] Rectilinearity doesn't interact with or modify scene selectivity, and it doesn't have a separate effect.
+[2879.000 --> 2880.000] Okay.
+[2880.000 --> 2890.000] Nonetheless, as we've been arguing with the whole Haxby rigmarole: does the fact that there's no main effect of rectilinearity in here
+[2890.000 --> 2896.000] mean that the PPA doesn't have information about rectilinearity?
+[2896.000 --> 2897.000] No.
+[2897.000 --> 2898.000] Josh.
+[2898.000 --> 2899.000] Why?
+[2899.000 --> 2904.000] I mean, there could still be a tiny amount of it; it could be, you know, that this is not the right experiment.
+[2904.000 --> 2905.000] That's right.
+[2905.000 --> 2907.000] Well, it is the right experiment.
+[2907.000 --> 2909.000] It's not the right analysis, right?
+[2909.000 --> 2912.000] The big average responses are the same.
+[2912.000 --> 2914.000] But maybe the patterns are different.
+[2914.000 --> 2915.000] Okay.
+[2915.000 --> 2916.000] This analysis wouldn't directly engage with that.
+[2916.000 --> 2920.000] So what I want to know is: is there information in there about rectilinearity?
+[2920.000 --> 2922.000] Okay.
+[2922.000 --> 2925.000] So how would we find out?
+[2925.000 --> 2928.000] So this was your assignment, and I think most people got it right.
+[2928.000 --> 2935.000] But in case anybody missed it, we were zooming in on this figure four here.
+[2935.000 --> 2939.000] So again, this is just the same basic design as experiment two.
+[2939.000 --> 2942.000] And now let's consider what's going on here.
+[2942.000 --> 2946.000] So you guys read the paper and you understood what was going on here.
+[2946.000 --> 2950.000] What's represented in that cell right there?
+[2950.000 --> 2952.000] What is the point of this diagram?
+[2952.000 --> 2959.000] What are they doing here, and what does that cell mean in that matrix?
+[2959.000 --> 2963.000] You can't understand the paper without knowing that.
+[2963.000 --> 2964.000] Is it Ollie?
+[2964.000 --> 2965.000] No.
+[2965.000 --> 2966.000] Sorry.
+[2966.000 --> 2967.000] What's your name?
+[2967.000 --> 2968.000] Shardun.
+[2968.000 --> 2969.000] I've only asked you like six times.
+[2969.000 --> 2970.000] Yeah, go ahead.
+[2971.000 --> 2979.000] So they want to see whether the activation patterns can better discriminate between
+[2979.000 --> 2986.000] rectilinearity for the same category of things, or between categories of things with the same rectilinearity.
+[2986.000 --> 2995.000] So the first thing I said is on the left and the second one is on the right.
+[2995.000 --> 2996.000] And they...
+[2996.000 --> 2997.000] Sorry.
+[2997.000 --> 2998.000] Wait, here and here.
+[2998.000 --> 2999.000] No.
+[2999.000 --> 3000.000] Right side.
+[3000.000 --> 3001.000] Yeah.
+[3001.000 --> 3008.000] So this part is discriminating between rectilinearity, and that side is discriminating between categories.
+[3008.000 --> 3011.000] And they take the differences of... well, not the differences.
+[3011.000 --> 3018.000] They take how well it can distinguish between each of those and plot them down there.
+[3018.000 --> 3019.000] Right.
+[3019.000 --> 3020.000] Okay.
+[3020.000 --> 3021.000] That's exactly right.
+[3021.000 --> 3024.000] So this is how well it can discriminate, plotted down here,
+[3024.000 --> 3027.000] based on an analysis that follows this scheme.
+[3027.000 --> 3030.000] So what does that cell in there represent?
+[3030.000 --> 3031.000] That dark green cell.
+[3031.000 --> 3032.000] What are they...?
+[3032.000 --> 3042.000] What is the number that's going to be calculated from the data corresponding to that cell?
+[3042.000 --> 3046.000] The similarity of patterns with the same rectilinearity?
+[3046.000 --> 3047.000] Exactly.
+[3047.000 --> 3048.000] Exactly.
+[3048.000 --> 3052.000] So just as if you wanted to distinguish chairs from cars or something else:
+[3052.000 --> 3056.000] if you want to know, is there information about rectilinearity in there,
+[3056.000 --> 3060.000] you take these two cases, which are the same in rectilinearity,
+[3060.000 --> 3065.000] both high rectilinear, or both low rectilinear, for run one and run two.
+[3065.000 --> 3069.000] And that's the correlation between run one and run two for those cells.
+[3069.000 --> 3072.000] That's the within-rectilinearity case.
+[3072.000 --> 3073.000] Right.
+[3073.000 --> 3080.000] And if there's information about rectilinearity, the prediction is those within correlations are higher than the between correlations.
+[3080.000 --> 3085.000] Just as we argued a bit back with the beaches and cities and everything else.
+[3085.000 --> 3086.000] Same argument.
+[3086.000 --> 3090.000] This is just presenting the data in terms of run one and run two,
+[3090.000 --> 3095.000] and which cells we grab to do this computation.
+[3095.000 --> 3097.000] Okay.
+[3097.000 --> 3101.000] So for each of the cells in there,
+[3101.000 --> 3106.000] we're going to calculate an r value of how similar those patterns are.
+[3106.000 --> 3108.000] Okay.
+[3108.000 --> 3114.000] You know, there's a pattern for rectilinear scenes in run two,
+[3114.000 --> 3116.000] a pattern for rectilinear scenes in run one;
+[3116.000 --> 3120.000] this cell is the correlation between those two patterns.
+[3120.000 --> 3123.000] How stable is that pattern across repeated measures?
+[3123.000 --> 3125.000] Okay.
+[3125.000 --> 3126.000] All right.
+[3126.000 --> 3129.000] So that's what that r value is.
+[3129.000 --> 3135.000] The two darker blue squares here are the r values
+[3135.000 --> 3139.000] for stimuli that differ in rectilinearity.
+[3139.000 --> 3144.000] And remember that the essence of the Haxby-style pattern analysis
+[3144.000 --> 3151.000] is to see if the within correlations are higher than the between correlations.
+[3151.000 --> 3158.000] In this case, the within correlations are within rectilinearity, versus between rectilinearity.
+[3158.000 --> 3161.000] Okay.
+[3161.000 --> 3162.000] All right.
+[3162.000 --> 3167.000] And so then they calculate all those correlation differences
+[3167.000 --> 3172.000] and they plot them as discrimination abilities.
+[3172.000 --> 3178.000] And what this is showing us here is that actually the PPA doesn't have any information
+[3178.000 --> 3183.000] in its pattern of response about the rectilinearity of the scene.
+[3183.000 --> 3189.000] However, if we take the same data and now choose within category versus between category,
+[3189.000 --> 3196.000] ignoring rectilinearity, and we get the same kind of selectivity correlation difference,
+[3196.000 --> 3201.000] within versus between for category, there's heaps of information about category.
+[3201.000 --> 3204.000] Does that make sense?
+[3204.000 --> 3205.000] Okay.
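And here is the same within-minus-between logic from the beach/city sketch earlier, run both ways over the same four conditions. The patterns are synthetic, deliberately built so that category is encoded and rectilinearity is not, just to show how the two groupings can come apart in a single data set; none of this is the paper's actual code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 100

# Synthetic PPA-like patterns: each condition's pattern is its category's
# base pattern plus noise, so rectilinearity leaves no trace by construction.
base = {"scene": rng.normal(size=n_voxels), "face": rng.normal(size=n_voxels)}
conds = [(c, r) for c in ("scene", "face") for r in ("rect", "curv")]
run1 = {cond: base[cond[0]] + rng.normal(scale=0.5, size=n_voxels) for cond in conds}
run2 = {cond: base[cond[0]] + rng.normal(scale=0.5, size=n_voxels) for cond in conds}

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def info(dimension):
    # Mean within-group minus between-group correlation across runs, grouping
    # conditions by category (index 0) or by rectilinearity (index 1).
    i = 0 if dimension == "category" else 1
    within = [corr(run1[a], run2[b]) for a in conds for b in conds if a[i] == b[i]]
    between = [corr(run1[a], run2[b]) for a in conds for b in conds if a[i] != b[i]]
    return np.mean(within) - np.mean(between)

print("category information:      ", round(info("category"), 2))        # clearly > 0
print("rectilinearity information:", round(info("rectilinearity"), 2))  # roughly 0
```

The point of the exercise: one and the same correlation matrix can show heaps of information about one dimension and none about the other, depending only on which cells you call "within" and which "between".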
+[3205.000 --> 3208.000] Again, if you're fuzzy about this, look back at that slide.
+[3208.000 --> 3211.000] I have lots of suggestions for how to unfuzzle yourself on it.
+[3211.000 --> 3213.000] Okay.
+[3213.000 --> 3214.000] All right.
+[3214.000 --> 3218.000] So, interim summary: the PPA responds more to scenes than objects.
+[3218.000 --> 3222.000] It seems to like spatial layout in particular.
+[3222.000 --> 3230.000] It does respond more to boxes than circles, but that rectilinearity bias can't account for its scene selectivity.
+[3230.000 --> 3232.000] That's all very nice.
+[3232.000 --> 3239.000] But what is a whole other kind of fundamental question we haven't yet asked about the PPA?
+[3239.000 --> 3243.000] So we've been messing around with functional MRI, measuring magnitudes of response,
+[3243.000 --> 3250.000] trying to test these kind of vague, you know, general hypotheses about what it might be responding to.
+[3250.000 --> 3251.000] Yes.
+[3251.000 --> 3252.000] Causation?
+[3252.000 --> 3253.000] Yes.
+[3253.000 --> 3257.000] What causation in particular?
+[3257.000 --> 3266.000] I guess, like, whether the PPA plays a causal role in a person perceiving places, seeing anything.
+[3266.000 --> 3267.000] Exactly.
+[3267.000 --> 3268.000] Exactly.
+[3268.000 --> 3271.000] Again, we can test the causal role of a stimulus on the PPA.
+[3271.000 --> 3272.000] I talked about that.
+[3272.000 --> 3273.000] Manipulate the stimulus,
+[3273.000 --> 3275.000] find different PPA responses.
+[3275.000 --> 3285.000] But what we haven't done yet is ask: what is the causal relationship, if any, between activity in the PPA and perception of scenes, or navigation?
+[3285.000 --> 3286.000] Okay.
+[3286.000 --> 3288.000] So, so far this is all just suggestive.
+[3288.000 --> 3291.000] We have no causal evidence for its role in navigation.
+[3291.000 --> 3292.000] Right?
+[3292.000 --> 3293.000] Or perception.
+[3293.000 --> 3294.000] All right.
+[3294.000 --> 3295.000] So, let's get some.
+[3295.000 --> 3297.000] I'll show you a few examples.
+[3298.000 --> 3304.000] So, one, as you guys have learned by now, is these rare cases where there's direct electrical stimulation of a region.
+[3304.000 --> 3309.000] And there's one patient in whom this is reported.
+[3309.000 --> 3313.000] This patient, again, is being mapped out before neurosurgery.
+[3313.000 --> 3316.000] They did functional MRI in the patient first.
+[3316.000 --> 3320.000] This is his functional MRI response to, I think, houses versus objects.
+[3321.000 --> 3325.000] Houses are not as strong an activator as scenes for the PPA, but they're pretty good.
+[3325.000 --> 3328.000] The PPA responds much more to houses than other objects.
+[3328.000 --> 3331.000] And so, that's a nice activation map showing the PPA.
+[3331.000 --> 3335.000] And those little circles are where the electrodes are, the little black circles.
+[3335.000 --> 3336.000] Okay.
+[3336.000 --> 3342.000] So, they know they're in the PPA because they did functional MRI first to localize that region.
+[3342.000 --> 3344.000] Now those electrodes are sitting there.
+[3344.000 --> 3348.000] And so, the first thing they did is record responses.
+[3348.000 --> 3354.000] They flash up a bunch of different kinds of images and they measure the response at those electrodes.
+[3354.000 --> 3359.000] And so, what you see is, in those electrodes right over there, one, two, three, that correspond to the PPA,
+[3359.000 --> 3364.000] you see a higher response to house images than to any of the other images.
+[3364.000 --> 3368.000] And you see the time course here over a few seconds.
+[3368.000 --> 3371.000] Okay. Everybody clear? This is not causal evidence yet.
+[3371.000 --> 3375.000] It's just amazing direct intracranial recordings from the PPA.
+[3375.000 --> 3380.000] I think the only time this was ever done, because it's pretty rare to have the electrodes right there
+[3380.000 --> 3384.000] and a patient who's willing to look at your silly pictures and all of that.
+[3384.000 --> 3385.000] Right? Okay.
+[3385.000 --> 3389.000] But now, what happens when they stimulate there?
+[3389.000 --> 3394.000] Okay. So, let's look at what happens when they stimulate on these sites,
+[3394.000 --> 3399.000] four and three, that are off to the side of the scene selectivity.
+[3399.000 --> 3401.000] And this is just a dialogue.
+[3401.000 --> 3407.000] We don't have a video, unfortunately, the videos are more fun, but this is just a dialogue between the neurologist and the patient.
+[3407.000 --> 3412.000] And the neurologist electrically stimulates that region and says,
+[3412.000 --> 3414.000] did you see anything there?
+[3414.000 --> 3417.000] Patient says, I don't know. I started feeling something.
+[3417.000 --> 3421.000] I don't know. It's probably just me. Oh, no, it's not you.
+[3421.000 --> 3426.000] And then they stimulate again. Anything there? No. Anything here? No.
+[3426.000 --> 3430.000] Okay. So, that's right next to the scene-selective electrodes, right next door.
+[3430.000 --> 3435.000] A few millimeters away. Then they move their stimulator over here.
+[3435.000 --> 3438.000] They don't move anything. They just control where they're going to stimulate.
+[3438.000 --> 3440.000] Patient, of course, has no idea.
+[3440.000 --> 3443.000] The neurologist says, anything here? Do you see anything?
+[3443.000 --> 3450.000] Feel anything? Patient says, yeah. He looks perplexed, puts hand to forehead.
+[3450.000 --> 3454.000] I feel like I saw some other site.
+[3454.000 --> 3456.000] We were at the train station.
+[3457.000 --> 3461.000] The neurologist cleverly says, so it feels like you're at a train station.
+[3461.000 --> 3465.000] Patient says, yeah. Outside the train station.
+[3465.000 --> 3469.000] Neurologist: let me know if you get any sensation like that again.
+[3469.000 --> 3474.000] Stimulates, do you feel anything here? No.
+[3474.000 --> 3479.000] And then does it again. Did you see the train station, or,
+[3479.000 --> 3484.000] oh, did you see the train station or did it feel like you were at the train station?
+[3484.000 --> 3487.000] Patient, I saw it.
+[3487.000 --> 3491.000] These are very sparse, precious data, but that's so telling.
+[3491.000 --> 3494.000] It's not that he knew he was at the train station abstractly.
+[3494.000 --> 3497.000] He saw it.
+[3497.000 --> 3502.000] So then they stimulate again, right on those scene-selective regions.
+[3502.000 --> 3507.000] Patient says, again, I saw almost like, I don't know, like I saw.
+[3507.000 --> 3510.000] It was very brief. The neurologist says, I'm going to show it to you one more time.
+[3510.000 --> 3513.000] Really what he means is I'm going to stimulate you in the same place one more time.
+[3513.000 --> 3516.000] See if you can describe it any further.
+[3516.000 --> 3522.000] I'm going to give you one last time. What do you think?
+[3522.000 --> 3528.000] I don't really know what to make of it, but I saw like another staircase.
+[3528.000 --> 3532.000] The rest I couldn't make out, but I saw a closet space.
+[3532.000 --> 3536.000] But not this one. He points to a closet door in the room.
+[3536.000 --> 3539.000] That one was stuffed and it was blue.
+[3539.000 --> 3541.000] Have you seen it before? asks the neurologist.
+[3541.000 --> 3544.000] Have you seen it before at some point in your life?
+[3544.000 --> 3547.000] Yeah, I mean when I saw the train station.
+[3547.000 --> 3550.000] A train station you've been at.
+[3550.000 --> 3552.000] Yeah, et cetera, et cetera.
+[3552.000 --> 3555.000] So it's not a lot of data, but it's very compelling.
+[3555.000 --> 3557.000] What is the patient describing?
+[3557.000 --> 3563.000] Places that he's in, that he sees, and then he describes this closet space.
+[3563.000 --> 3566.000] And its colors. Interestingly, color regions are right next to scene regions.
+[3566.000 --> 3569.000] So that's kind of cool too.
+[3569.000 --> 3572.000] So it's causal evidence. It's sparse.
+[3572.000 --> 3576.000] Ideally, we'd like more in science, but it's pretty cool.
+[3576.000 --> 3579.000] And what was the patient looking at during the stimulation?
+[3579.000 --> 3582.000] You know, I actually forget in the paper. I've got to go look that up.
+[3582.000 --> 3584.000] I forget exactly what the patient was doing.
+[3584.000 --> 3587.000] I think he's just in the room looking out.
+[3587.000 --> 3590.000] Usually they don't control it that much because it's done kind of for clinical reasons.
+[3590.000 --> 3593.000] And the patient is in their hospital bed and they're just stimulating.
+[3593.000 --> 3595.000] So he's probably just looking out at the space he's in.
+[3595.000 --> 3600.000] He must have been, because at one point he says the closet, not like that one over there.
+[3600.000 --> 3606.000] So if he was staring at a blank thing, he was also looking out at his room.
+[3606.000 --> 3609.000] Okay, so yeah.
+[3609.000 --> 3615.000] So the region of color perception is very close to this.
+[3615.000 --> 3621.000] Is there any relationship between like functional proximity and?
+[3621.000 --> 3625.000] That's a great question. Nobody in the field has an answer to this.
+[3625.000 --> 3629.000] People often make hay about the proximity of two regions like,
+[3629.000 --> 3633.000] oh, there's some deep link because this thing is next to that thing.
+[3633.000 --> 3636.000] You know, the body-selective region is right next to,
+[3636.000 --> 3640.000] in fact, slightly overlapping with, area MT that responds to motion.
+[3640.000 --> 3642.000] It's like, ooh, bodies move.
+[3642.000 --> 3645.000] And well, you know, faces move and cars move too.
+[3645.000 --> 3647.000] Like, I don't know. It's tantalizing.
+[3647.000 --> 3651.000] It feels like it ought to mean something and people often talk about, you know,
+[3651.000 --> 3658.000] talk as if it does, and maybe it does, but nobody's really put their finger on what exactly it would mean.
+[3658.000 --> 3660.000] But it's useful, right?
+[3660.000 --> 3664.000] So when Rosa Lafer-Sousa, who you met in the color demo,
+[3664.000 --> 3671.000] and I showed that in humans, you get face, color and place regions right next to each other in that order.
+[3671.000 --> 3675.000] That was really cool because Rosa had previously shown that in monkeys.
+[3675.000 --> 3679.000] In the monkey brain, it goes face, color, place in exactly the same order.
+[3679.000 --> 3682.000] And so we thought, okay, that's really interesting.
+[3682.000 --> 3685.000] That suggests common inheritance because that's so weird and arbitrary.
+[3685.000 --> 3686.000] Why would it be the same?
+[3686.000 --> 3690.000] So it can be useful in ways like that, at least.
+[3690.000 --> 3693.000] Okay. So we just went through all of this.
+[3693.000 --> 3698.000] So how does this go beyond what we knew from functional MRI?
+[3698.000 --> 3701.000] I'm insulting your intelligence. You know the answer to this.
+[3701.000 --> 3706.000] It goes beyond it because it implies that there's a causal role of that region in place perception,
+[3706.000 --> 3709.000] some aspect of seeing a place.
+[3709.000 --> 3715.000] Okay. Now, all of this was about the PPA. I just started in there because it's nice and concrete and easy to think about.
+[3715.000 --> 3719.000] But no complex mental process happens in just one brain region.
+[3719.000 --> 3721.000] Nothing is ever like that.
+[3721.000 --> 3727.000] And likewise, scene perception and navigation is part of a much broader set of regions.
+[3727.000 --> 3731.000] So if you do a contrast, scan people looking at scenes versus objects,
+[3731.000 --> 3734.000] you see not just the PPA in here.
+[3734.000 --> 3738.000] Again, this is a folded-up brain and this is the mathematically unfolded version,
+[3738.000 --> 3740.000] so you can see the whole cortex.
+[3740.000 --> 3744.000] Dark bits are the bits that used to be inside a sulcus until it was mathematically unfolded.
+[3744.000 --> 3747.000] So there's the PPA kind of hiding up in that sulcus.
+[3747.000 --> 3750.000] And when you unfold it, you see this nice big huge region.
+[3750.000 --> 3753.000] Okay. But you also see all these other regions.
+[3753.000 --> 3755.000] Okay. Now there's a bunch of terminology.
+[3755.000 --> 3758.000] Don't panic. I don't think you should memorize everything about each region.
+[3758.000 --> 3760.000] You should know that there's multiple scene regions.
+[3760.000 --> 3767.000] You should know some of the kinds of ways you tease apart the functions and some of the functions that have been tested and how they're tested.
+[3767.000 --> 3770.000] But you don't need to memorize every last detail.
+[3770.000 --> 3771.000] Okay.
+[3771.000 --> 3773.000] Because it's going to get a little hairy.
+[3773.000 --> 3780.000] Okay. So here's a second scene region right there called retrosplenial cortex, or RSC.
+[3780.000 --> 3787.000] And actually Russell Epstein and I saw that activation in the very very first experiments we did in the 1990s.
+[3787.000 --> 3790.000] But we really didn't know what we were doing back then.
+[3790.000 --> 3793.000] And we knew that this is right near the calcarine sulcus.
+[3793.000 --> 3796.000] Remind me. What happens in the calcarine sulcus?
+[3796.000 --> 3800.000] What functional region lives in the calcarine sulcus?
+[3800.000 --> 3806.000] It's just a weird little fact, but it's kind of an important one.
+[3806.000 --> 3809.000] That we mentioned weeks ago.
+[3809.000 --> 3812.000] V1, primary visual cortex.
+[3812.000 --> 3815.000] That's where primary visual cortex lives.
+[3815.000 --> 3824.000] And remember, primary visual cortex has a map of retinotopic space, with next-door bits of primary visual cortex responding to next-door bits of space.
+[3824.000 --> 3830.000] And in fact, that map has the center of gaze out here and the periphery out there.
+[3830.000 --> 3836.000] So when Russell and I first saw that activation, we had the same worry that Quilly mentioned a while back.
+[3836.000 --> 3838.000] And that is that the scenes are sticking out.
+[3838.000 --> 3841.000] There's stuff everywhere in a scene; with the objects, there isn't that much sticking out into the periphery.
+[3841.000 --> 3844.000] And we thought, oh, that's just peripheral retinotopic cortex.
+[3844.000 --> 3846.000] But it's not. It's right next to there.
+[3846.000 --> 3847.000] And it's a totally different thing.
+[3847.000 --> 3849.000] And it turns out to be extremely interesting.
+[3849.000 --> 3853.000] You don't need to know all that. It's just a little history.
+[3853.000 --> 3858.000] Okay. There's a third region up there that's on the outer surface out there.
+[3858.000 --> 3860.000] That used to be called TOS.
+[3860.000 --> 3862.000] And is now called OPA. I'm sorry about that.
+[3862.000 --> 3863.000] You don't need to remember this.
+[3863.000 --> 3866.000] Just know that there are at least three regions.
+[3866.000 --> 3874.000] But TOS slash OPA is interesting because there's a method we can apply to it that we can't apply to the others.
+[3874.000 --> 3876.000] What would that method be?
+[3876.000 --> 3882.000] Yeah, TMS. It's right out on the surface.
+[3882.000 --> 3884.000] You just stick the coil there and go zap.
+[3884.000 --> 3887.000] So of course, we've done a lot of that.
+[3887.000 --> 3891.000] Okay. Can't get the coil onto the PPA or RSC. It's too medial.
+[3891.000 --> 3896.000] Okay. And there's another region that we'll talk about more next time called the hippocampus.
+[3896.000 --> 3903.000] You saw the hippocampus when Ann Graybiel spent all that time digging in the temporal lobe to find that bumpy little dentate gyrus.
+[3903.000 --> 3905.000] Approximately right in there.
+[3905.000 --> 3908.000] And so, all of these, and probably other regions.
+[3908.000 --> 3914.000] But these are the core elements of the scene-selective regions that are implicated in different aspects of navigation.
+[3914.000 --> 3919.000] Okay. So when you have multiple regions that seem to be part of a system,
+[3919.000 --> 3921.000] that's an opportunity.
+[3921.000 --> 3926.000] Because now we have the possibility that maybe we could figure out different functions for different regions.
+[3926.000 --> 3930.000] And then maybe that would really tell us more than just, okay, scenes and navigation, end of story.
+[3930.000 --> 3932.000] It gives us a route of entry.
+[3932.000 --> 3938.000] Right? It would be nice if different aspects of the navigation story engage different parts of this system.
+[3938.000 --> 3944.000] Okay. So really what we want to know is how does each of these regions help us navigate and see scenes.
+[3944.000 --> 3949.000] And I'm not going to answer that fully. The field is still trying to understand all of this.
+[3949.000 --> 3953.000] But I'll give you a few tantalizing little snippets. Okay.
+[3953.000 --> 3957.000] So let's take retrosplenial cortex right here.
+[3957.000 --> 3966.000] So this is first the response of the PPA right there and retrosplenial cortex, which is just behind it.
+[3966.000 --> 3970.000] This is just its mean response to a bunch of different kinds of stimuli.
+[3970.000 --> 3976.000] Showing you that it likes landscapes and cityscapes, scenes, more than a bunch of other categories of objects.
+[3976.000 --> 3979.000] And that's true of both the PPA and RSC.
+[3979.000 --> 3983.000] Okay. No surprises here. They're both somewhat scene selective.
+[3983.000 --> 3984.000] Okay.
+[3984.000 --> 3991.000] But then in a whole bunch of other studies summarized in this graph here, Russell Epstein and his colleagues
+[3991.000 --> 3995.000] had subjects engage in different tasks while they were looking at scenes.
+[3995.000 --> 3998.000] In some tasks, they had to say where they were.
+[3998.000 --> 4002.000] He's at UPenn and he showed his subjects pictures of the UPenn campus.
+[4002.000 --> 4009.000] And they had to answer all kinds of questions about what part of campus they were in, where they were on campus,
+[4009.000 --> 4014.000] and also about which way they were facing given the view of the campus they were looking at.
+[4014.000 --> 4022.000] Okay. Then he also showed people familiar scenes and unfamiliar scenes, much like we did with our Tufts study.
+[4022.000 --> 4024.000] And he had object controls.
+[4024.000 --> 4027.000] And you can see the PPA doesn't care about any of that.
+[4027.000 --> 4032.000] Doesn't care really if they're familiar or unfamiliar. Doesn't care what task you're doing on the scene.
+[4032.000 --> 4035.000] You're looking at a scene. It's just going.
+[4035.000 --> 4038.000] Okay. So we didn't really tease apart functions there.
+[4038.000 --> 4043.000] But RSC responds differently in these conditions.
+[4043.000 --> 4051.000] It's more engaged in both the location task and the orientation task.
+[4051.000 --> 4059.000] It responds substantially more when you look at images of a familiar place than an unfamiliar place.
+[4059.000 --> 4062.000] So this is the first time we've seen that in the scene network.
+[4062.000 --> 4068.000] And so now, think about all the things you can do when you're looking at a picture of a scene and you know that place.
+[4068.000 --> 4071.000] You have memories of having been there.
+[4071.000 --> 4076.000] You can think about what you might do if you were there, how you would get from there to someplace.
+[4076.000 --> 4082.000] And all of those things are possible things that might be driving RSC.
+[4082.000 --> 4089.000] Another thing that might be driving RSC is that if you're looking at a picture of a familiar place,
+[4089.000 --> 4094.000] you orient yourself with respect to the broader environment that that view is part of.
+[4094.000 --> 4097.000] Right? So when I showed you that picture of the front of the Stata Center,
+[4097.000 --> 4103.000] you immediately imagine, oh, like I'm out on Vassar Street facing that way, roughly northwest, I think.
+[4104.000 --> 4107.000] If you look at a picture of a scene and you don't know that scene,
+[4107.000 --> 4111.000] it doesn't tell you anything about your broader heading in the broader world.
+[4111.000 --> 4118.000] So all of those are things that suggest the RSC's function seems to depend on knowing that place.
+[4118.000 --> 4120.000] Okay.
+[4120.000 --> 4127.000] Perhaps the most telling case comes from a patient who had damage in retrosplenial cortex.
+[4127.000 --> 4134.000] And the description in the paper of this says that this patient could recognize buildings and landmarks
+[4134.000 --> 4137.000] and therefore understand where he was.
+[4137.000 --> 4139.000] Okay. So lots is intact.
+[4139.000 --> 4143.000] Can recognize scenes and know where he is.
+[4143.000 --> 4151.000] But the landmarks he recognized did not provoke directional information about any other places with respect to those landmarks.
+[4151.000 --> 4153.000] Okay.
+[4153.000 --> 4157.000] So this person can look at a picture and say, yeah, I know that place.
+[4157.000 --> 4159.000] That's the front of my house.
+[4159.000 --> 4167.000] But then if you say, in which direction is the coffee shop two blocks away, he doesn't know which way it is from there.
+[4167.000 --> 4172.000] Okay. So this should sound familiar.
+[4172.000 --> 4179.000] This is my guess of the bit that my friend Bob got messed up.
+[4179.000 --> 4182.000] This is exactly his description. He could recognize places.
+[4182.000 --> 4186.000] But it wouldn't tell him how to get from there to somewhere else.
+[4186.000 --> 4187.000] Okay.
+[4187.000 --> 4194.000] And so the best current guess about retrosplenial cortex is that it's involved in anchoring where you are.
+[4194.000 --> 4199.000] You have this mental map of the world and you have a scene and you're trying to put them together.
+[4199.000 --> 4204.000] Given that I see this, where am I on the map and which way am I heading in that map?
+[4204.000 --> 4205.000] Okay.
+[4205.000 --> 4209.000] Again, think about the problem you faced when you emerged from the subway in Manhattan.
+[4209.000 --> 4213.000] Right? Like, you look around: where am I and which way am I heading?
+[4213.000 --> 4216.000] That's what you need retrosplenial cortex for.
+[4216.000 --> 4218.000] Okay.
+[4218.000 --> 4221.000] All right. How about this TOS thing?
+[4221.000 --> 4223.000] There's lots of studies of it.
+[4223.000 --> 4226.000] I'll give you just one little offering.
+[4226.000 --> 4233.000] Okay. So this is a causal investigation because, as we discussed, TOS is out on the lateral surface.
+[4233.000 --> 4236.000] So we can zap it. And so of course we do.
+[4236.000 --> 4246.000] And so in this study, we were asking whether TOS is involved in perceiving the structure of space around you.
+[4246.000 --> 4251.000] So we took scenes like this from CAD programs and we just varied them slightly.
+[4251.000 --> 4261.000] So for example, the position of this wall moves around, the aspect ratio, the height of the ceiling moves around, and we make this subtle morph space of different versions of this image.
+[4261.000 --> 4265.000] Okay. And then for a control condition, we do the same with faces.
+[4265.000 --> 4269.000] We morph between this guy and that guy to make a whole spectrum in between.
+[4269.000 --> 4273.000] And then in the task, what we do is, here's one trial.
+[4273.000 --> 4280.000] One of the scenes or faces comes on briefly and then shortly thereafter you get a choice of two.
+[4280.000 --> 4283.000] And you have to say which of these matches that one.
+[4283.000 --> 4289.000] Okay. And then what we do is we zap people right after we present the stimulus.
+[4290.000 --> 4295.000] Okay. And so the idea is this is as close as we can get to a pretty pure perceptual task.
+[4295.000 --> 4300.000] How well can you see the shape of that environment or the shape of that face?
+[4300.000 --> 4303.000] Okay.
You don't have to remember it for more than a few hundred milliseconds.
+[4303.000 --> 4306.000] So it's really more of a perception task than a memory task.
+[4306.000 --> 4318.000] Okay. And what we measure is, we actually muck with how different these two images are on each trial and measure how far apart they have to be in morph space
+[4318.000 --> 4320.000] for you to be about 75% correct.
+[4320.000 --> 4323.000] Okay. That's the kind of standard psychophysical measure.
+[4323.000 --> 4333.000] The details don't matter, but our dependent measure is how different the stimuli have to be for you to discriminate them, as a function of whether you're getting zapped in TOS or not.
+[4333.000 --> 4337.000] Okay. And so here are the data.
+[4337.000 --> 4341.000] So let's take the case where you're doing the scene task here.
+[4341.000 --> 4346.000] What this threshold is, again, is how different the stimuli need to be for you to discriminate them.
+[4346.000 --> 4349.000] So the higher the bar, the worse the performance.
+[4349.000 --> 4352.000] Okay. They have to be really different. You can't tell them apart.
+[4352.000 --> 4362.000] And so what you see is, when you zap OPA, that lateral scene-selective region, the discrimination threshold goes up a bit.
+[4362.000 --> 4364.000] That means you get worse at the discrimination.
+[4364.000 --> 4369.000] The stimuli need to be more different, compared to zapping the top of your head.
+[4369.000 --> 4375.000] Okay. You remember you always want a control condition, and there's no perfect control condition because it feels different to be zapped in different places.
+[4375.000 --> 4381.000] But getting zapped up here is a, you know, better-than-nothing control.
+[4381.000 --> 4383.000] And then here's the occipital face area.
+[4383.000 --> 4387.000] That's the lateral face region we talked about before, when I showed you another TMS study.
+[4387.000 --> 4392.000] Basically, whenever there's anything lateral, we zap it because we can.
+[4392.000 --> 4399.000] And you see, it's not affected here. Zapping the occipital face area does not mess up your ability to discriminate the scenes.
+[4399.000 --> 4404.000] However, in the face task, we see the opposite pattern.
+[4404.000 --> 4414.000] For the face task, zapping the occipital place area doesn't do anything compared to zapping the top of your head, but zapping the face area does.
+[4414.000 --> 4417.000] This is a double dissociation.
+[4417.000 --> 4424.000] If we just had the scene task, we'd be like, yeah, maybe, who knows?
+[4424.000 --> 4428.000] Maybe, who knows why? But, you know, it's not very strong.
+[4428.000 --> 4437.000] But when you have these opposite effects, then we really have much stronger evidence that these two regions have different functions from each other.
+[4437.000 --> 4449.000] And everybody get that this is a double dissociation in the same sense as when you have one patient with damage in one location and another patient with damage in another location and they have opposite patterns of deficit? Then we're really kind of in business.
+[4449.000 --> 4453.000] Then we can draw strong inferences.
+[4453.000 --> 4455.000] All right, so we just said all of that.
+[4455.000 --> 4458.000] Okay, so that's just a little snippet.
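[Editor's note: the threshold measure just described can be computed by fitting a psychometric function and reading off the 75%-correct point. Below is a hedged sketch of one standard way to do this, not necessarily the pipeline used in the actual study; the data values are invented.]

    # Estimate the 75%-correct discrimination threshold from
    # morph-distance data using a 2AFC Weibull psychometric function.
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_2afc(x, alpha, beta):
        # 50% at chance (guessing), saturating at 100% correct
        return 0.5 + 0.5 * (1.0 - np.exp(-(x / alpha) ** beta))

    def threshold_75(morph_dist, prop_correct):
        (alpha, beta), _ = curve_fit(weibull_2afc, morph_dist, prop_correct,
                                     p0=[np.median(morph_dist), 2.0],
                                     bounds=([1e-6, 0.1], [np.inf, 10.0]))
        # Solve weibull_2afc(t) = 0.75  ->  t = alpha * ln(2)^(1/beta)
        return alpha * np.log(2.0) ** (1.0 / beta)

    # Hypothetical data: zapping OPA raises the scene threshold
    # relative to the vertex (top-of-head) control.
    dists = np.array([5, 10, 20, 40, 80.0])          # % morph distance
    vertex = np.array([0.55, 0.65, 0.80, 0.93, 0.99])
    opa_tms = np.array([0.52, 0.58, 0.70, 0.85, 0.97])
    print(threshold_75(dists, vertex), threshold_75(dists, opa_tms))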
+[4458.000 --> 4472.000] And other data suggest that that region is strongly active when you look at scenes, and it seems to be involved in something like perceiving, you know, just directly, online, perceiving the structure of the space in front of you.
+[4472.000 --> 4475.000] Okay.
+[4475.000 --> 4476.000] All right.
+[4476.000 --> 4482.000] So, yeah, we already did retrosplenial cortex.
+[4482.000 --> 4489.000] And next time we'll talk about the hippocampus in there and its role in the whole navigation thing.
+[4489.000 --> 4502.000] Now, since I have ended early, a rare event, I actually put together a whole other piece of this lecture and I thought, no, there's always a part you don't get to.
+[4502.000 --> 4507.000] So it turns out, we do get to it.
+[4507.000 --> 4513.000] Okay, we're going to go over this more later, but we're going to start with this business right here.
+[4513.000 --> 4516.000] Anybody have questions about this stuff so far?
+[4516.000 --> 4517.000] Okay.
+[4517.000 --> 4527.000] So I've spent a lot of time talking about multiple voxel pattern analysis because it's the only method I've mentioned so far that enables us to go beyond the business of saying,
+[4527.000 --> 4534.000] how strongly do the neurons fire in this region, to the more interesting question of what information is contained in this region.
+[4534.000 --> 4535.000] Okay.
+[4535.000 --> 4543.000] But I also ended the last lecture with this kind of depressing note that you can't see much with MVPA applied to face patches.
+[4543.000 --> 4547.000] Even when we know there's information in there with electrophysiology data.
+[4547.000 --> 4554.000] Remember, I showed you that monkey study where they tried MVPA on the face patches in monkeys and they couldn't read out a damn thing.
+[4554.000 --> 4561.000] And then they tried MVPA on individual neural responses of the same region and they could read out all kinds of information.
+[4561.000 --> 4566.000] And that tells you that the information is there and we just can't always see it with MVPA.
+[4566.000 --> 4573.000] Now, today you've seen cases where you can see stuff with MVPA in the scene regions, so sometimes it works, sometimes it doesn't.
+[4573.000 --> 4579.000] And when it doesn't work, we're left in this unsatisfying situation that we don't know if the information isn't there,
+[4579.000 --> 4586.000] or if the neurons are just so scrambled together that we can't see the different patterns.
+[4586.000 --> 4592.000] Okay. So bottom line, we need another method. MVPA is a whole lot better than nothing.
+[4592.000 --> 4601.000] But we want to be able to ask, is there information present in this region, even when we think the relevant neurons are all spatially intermingled.
+[4601.000 --> 4605.000] Okay. So let me just do a little bit of this and then we'll continue later.
+[4606.000 --> 4612.000] So, the goal. This new method is called event-related functional MRI adaptation.
+[4612.000 --> 4621.000] And we use it when we want to know if neural populations in a particular region can discriminate between two stimulus classes.
+[4621.000 --> 4629.000] So for example, do neurons in the FFA distinguish between this image and that image?
+[4629.000 --> 4642.000] So if we want to know that, we could measure the functional MRI response in the FFA and find, this would be an event-related response, similar responses to the two.
+[4642.000 --> 4649.000] Okay.
And as I just mentioned, that wouldn't mean that there isn't information in the FFA that discriminates them.
+[4649.000 --> 4655.000] It just says they have the same mean response. Everybody get that? Okay.
+[4655.000 --> 4672.000] Now, if we zoom in and think about what might neurons be doing, it's still possible, even with the same mean response, that neurons could be organized like this, with some of them responding only to this image and some of them responding only to that image.
+[4672.000 --> 4680.000] But it's also possible that all of the neurons respond equally to both. And we kind of desperately need to know.
+[4680.000 --> 4689.000] So, which one are we in, in this case? This is a toy example, obviously. But often, when we're trying to understand a region of the brain, we need to know which situation we're in.
+[4689.000 --> 4698.000] Okay. So that neural population can discriminate these two and that one can't. Okay. How are we going to tell which is true?
+[4698.000 --> 4709.000] Well, we talked before about multiple voxel pattern analysis. But as I just said, it only works when the neurons are spatially clustered on the scale of voxels.
+[4709.000 --> 4717.000] So, imagine you have these situations here. This is getting more and more of a toy example, but just to give you the idea.
+[4717.000 --> 4730.000] Suppose where those neural populations land with respect to voxels is like this. So each of these is a voxel in the brain, a little, say, two by two by three millimeter chunk of brain that we're getting an MRI signal from.
+[4730.000 --> 4739.000] If you have the different neural populations spatially segregated enough that they mostly land in different voxels, then MVPA might work here.
+[4739.000 --> 4746.000] Is that intuitive? If you guys all see that, then we get a different pattern in these voxels for looking at those two different images.
+[4747.000 --> 4761.000] But even if we have this situation here, which is kind of informationally the same, if they're spatially scrambled so that they're in roughly equal proportion in each voxel, MVPA won't work.
+[4761.000 --> 4773.000] Does that make sense? And so that's when we need this other method called functional MRI adaptation. Make sense? Okay. I'm going to go one minute over, probably.
+[4773.000 --> 4783.000] Okay, so the point of functional MRI adaptation is it can work even when there's no spatial clustering of the relevant neural populations on the scale of voxels.
+[4783.000 --> 4798.000] So let me go through it quickly. We'll come back to it later. So here's how it goes. The basic idea is any measure that's sensitive to the sameness versus difference between two stimuli can reveal what that system takes to be same or different.
+[4798.000 --> 4811.000] So for example, if a brain region discriminates between two similar stimuli like these, then we can measure the functional MRI response in that region to same versus different trials.
+[4811.000 --> 4818.000] Okay, so this would be a different trial. You present Trump and then the chimp back to back. That's one trial.
+[4818.000 --> 4830.000] Compare to a same trial, chimp and then chimp. And of course we counterbalance everything. So we also do chimp and then Trump as another different case, and then Trump and then Trump as another same case.
+[4830.000 --> 4846.000] Right? If we find that the neural response is higher when the two stimuli are different than when they're the same, then we know that that region has neurons that respond differentially to the two.
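[Editor's note: the voxel-intermingling argument is easy to verify in a toy simulation. Everything below is invented for illustration and is not a model of real fMRI data: two populations with identical mean responses are detectable by a Haxby-style pattern test when spatially clustered, invisible to it when intermingled, while an adaptation-style same-versus-different comparison works either way.]

    import numpy as np
    rng = np.random.default_rng(1)

    n_neurons, n_voxels, noise_sd = 100_000, 20, 0.05
    prefers_A = rng.random(n_neurons) < 0.5
    resp_A = np.where(prefers_A, 1.0, 0.1)    # rate to stimulus A
    resp_B = np.where(prefers_A, 0.1, 1.0)    # rate to stimulus B
    # mean responses to A and B are identical, so magnitude alone fails

    scrambled = rng.integers(0, n_voxels, n_neurons)     # intermingled
    clustered = np.empty(n_neurons, dtype=int)           # segregated
    clustered[np.argsort(prefers_A)] = np.repeat(np.arange(n_voxels),
                                                 n_neurons // n_voxels)

    def voxel_pattern(resp, assign):
        # a voxel's signal ~ mean rate of its neurons, plus scanner noise
        sig = np.array([resp[assign == v].mean() for v in range(n_voxels)])
        return sig + rng.normal(0, noise_sd, n_voxels)

    for name, assign in (("clustered", clustered), ("scrambled", scrambled)):
        r1 = {s: voxel_pattern(r, assign) for s, r in (("A", resp_A), ("B", resp_B))}
        r2 = {s: voxel_pattern(r, assign) for s, r in (("A", resp_A), ("B", resp_B))}
        within = (np.corrcoef(r1["A"], r2["A"])[0, 1] +
                  np.corrcoef(r1["B"], r2["B"])[0, 1]) / 2
        between = (np.corrcoef(r1["A"], r2["B"])[0, 1] +
                   np.corrcoef(r1["B"], r2["A"])[0, 1]) / 2
        print(name, "within - between:", round(within - between, 3))
    # clustered: large positive difference (MVPA sees the information);
    # scrambled: near zero, even though the neurons discriminate A from B.

    # Adaptation doesn't care about spatial layout: repeating a stimulus
    # suppresses the neurons it just drove, so a same pair gives a lower
    # summed response than a different pair.
    suppression = 0.5
    second_same = resp_A * (1 - suppression * resp_A / resp_A.max())
    second_diff = resp_B * (1 - suppression * resp_A / resp_A.max())
    print("same pair:", (resp_A + second_same).sum(),
          "different pair:", (resp_A + second_diff).sum())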
+[4847.000 --> 4854.000] Okay, so remember we started with a case where the mean response is the same to this image and this image if you just measure them alone.
+[4854.000 --> 4867.000] But now we want to know, do we really have neurons that respond differentially? So we're using the fact that neurons are like people and muscles. If you keep doing the same thing to them, they get bored. Been there, done that.
+[4867.000 --> 4882.000] Okay, so you present this back to back, you get a lower response than if you present this and then this. Okay, that's called functional MRI adaptation. It's like that waterfall MT adaptation we talked about before, but just crammed into a fine timescale.
+[4882.000 --> 4893.000] Okay, and so then if you do that, you can ask what a region thinks is the same. Okay, so then we could ask, okay, what about these two images?
+[4893.000 --> 4910.000] Does it think those are the same? And if we find a response like that, what have we learned? So if these two respond like that, what have we learned about a region that shows this? This is all fake data, obviously, but if we saw that, what have we learned? And then I'll let you go.
+[4910.000 --> 4917.000] Someone's going to give a nice answer to this. Yeah.
+[4917.000 --> 4932.000] So if it's the same response between two pictures of the same stimulus, that means it's adapted. Like it can discriminate, but if the yellow one adapts to the same degree as the red thing, it would just be reacting to two different pictures.
+[4932.000 --> 4949.000] And what you said is probably right. But key point, just because I don't want to torture you guys and go way over: the same response is the lower response. We can tell that because in this case, we actually give it a same pair. So same is lower than different. That's just how this method works.
+[4949.000 --> 4963.000] Then we're basically asking, does that count as the same to this brain region? And we're finding, yes, it does. That tells us that those neurons are invariant to all kinds of things: viewpoint, facial expression.
+[4963.000 --> 4969.000] You know, when he last dyed his hair, you know, who the hell knows, all these other things, right?
+[4969.000 --> 4981.000] So we'll talk more about this, but the idea is now we have another method in addition to MVPA that can start to tell us what neurons are actually discriminating. Okay, sorry to go over.
diff --git a/transcript/allocentric_OOpVTlrTYXw.txt b/transcript/allocentric_OOpVTlrTYXw.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ae44a235464076e9fdb523510609dc41bc2f4b39
--- /dev/null
+++ b/transcript/allocentric_OOpVTlrTYXw.txt
@@ -0,0 +1,55 @@
+[0.000 --> 16.240] People on Earth use nonverbal ways to communicate every day, like facial expressions, hand signals,
+[16.240 --> 23.240] body language, and American Sign Language. Astronauts in space have their own nonverbal way
+[23.240 --> 25.480] to communicate too.
+[25.480 --> 30.280] During a spacewalk, and just generally during space operations, communication
+[30.280 --> 34.320] is hugely important all the time. Talking to the people who are outside, talking to people on the ground,
+[34.320 --> 37.360] and obviously we have radios to do that, but a lot of times we wind up having to do that
+[37.360 --> 38.860] in a nonverbal way.
+[38.860 --> 41.360] Hold on, stop.
+[41.360 --> 45.800] The hold signal.
So maybe sometimes your ears may not be clearing fast enough as the
+[45.800 --> 50.040] pressure is changing, maybe someone's helping rescue you, but you're still attached and
+[50.040 --> 54.560] you realize that. In any case you give them a hold signal, and that should tell everyone
+[54.560 --> 58.160] to stop everything, all the movement, and kind of look around for something
+[58.160 --> 60.440] that doesn't seem to be normal.
+[60.440 --> 64.080] You okay? I'm okay.
+[64.080 --> 67.560] We really want to check on each other, check on our buddies. So the way we usually do that
+[67.560 --> 74.840] is we use the okay hand symbol, and we'll use it as a question and as an answer. So if
+[74.840 --> 80.680] I'm pointing at Raja and then giving him the okay sign, I'm saying, are you okay? And
+[80.680 --> 83.520] if he is, he'll tell me, I am okay.
+[83.520 --> 86.240] I see what you're saying.
+[86.240 --> 89.920] There's a lot of nonverbal that just comes from knowing and working with people that makes
+[89.920 --> 93.620] a big difference when you're working day in and day out, especially on a high-stress
+[93.620 --> 97.720] thing like a spacewalk, where just a look at someone's face can tell you, like, either,
+[97.720 --> 101.960] yeah, I'm good with this plan, or I've got reservations, maybe we should stop and talk
+[101.960 --> 107.560] about this, and you can do all that with just a glance, even through the glass of the space
+[107.560 --> 108.560] helmets.
+[108.560 --> 111.760] A handful of numbers.
+[111.760 --> 116.000] If you're flying formation, which we practice in the T-38, we also use hand signals just
+[116.000 --> 120.160] to keep up with those skills. And so one of the most common things is transmitting numbers
+[120.160 --> 124.640] with your hands. And so one, two, three, four, and five are pretty easy. And then the way
+[124.640 --> 130.200] we do six, seven, eight, nine, and ten without taking your hand off the stick is to turn your hand
+[130.200 --> 135.080] horizontal. And so you can do the same thing with air pressure. So for example, if I had
+[135.080 --> 138.600] a problem with my suit and she was trying to ask me, you know, what is
+[138.600 --> 142.240] your oxygen pressure? And I couldn't talk because I had a communications problem. I could
+[142.240 --> 147.520] still tell Kayla: I could show her a one, and then this would tell her one and
+[147.520 --> 153.160] six. And then, you know, I could do a combination of those numbers to transmit to her nonverbally
+[153.160 --> 159.240] the state of any of my values, my suit, whether it's suit pressure, water pressure,
+[159.240 --> 163.320] temperature, all the different numerical values, we can use hand signals for that.
+[163.320 --> 167.520] Maybe we could demonstrate a few for each other and see if we can tell what the other
+[167.520 --> 173.120] person's hand signals mean. So I'll go first, Raja, and you can see if you know what I'm
+[173.120 --> 180.480] trying to tell you. What do you think Kayla is trying to communicate? Is she telling Raja
+[180.480 --> 187.280] she can't hear, that he needs to clean his helmet visor, or asking him what song he's listening
+[187.280 --> 193.560] to?
Alright, so what Kayla is telling me there is, she's pointing to herself, which is
+[193.560 --> 197.880] indicating the person who has the problem. You could also point at someone else, but
+[197.880 --> 201.440] in her case, she's pointing at herself, so she's telling me she has a problem, and then she
+[201.440 --> 206.400] waved across her ears, which is telling me she can't hear. Okay, so let's say we have
+[206.400 --> 210.880] that same scenario. So we've had some kind of loss of comm, and Kayla came to check on me
+[210.880 --> 216.760] while I was out on a spacewalk. When she got there, I might give her a signal like this.
+[216.760 --> 222.320] Can you figure out what Raja is trying to communicate? That they need to move to the other side
+[222.320 --> 228.880] of the space station, that they need to wrap up and finish what they're doing, or is he
+[228.880 --> 235.320] asking her to do a flip in microgravity? So there Raja would be trying to communicate to me
+[235.320 --> 239.360] that we need to speed things up. Maybe he has a problem that's accelerating or getting
+[239.360 --> 244.240] worse, so he's saying it's kind of an urgent situation here. Let's get a move on, more or
+[244.240 --> 250.480] less. Next time you see astronauts on a spacewalk, look out for some of the hand signals you
+[250.480 --> 255.800] learned today. You can even try them out with your friends to talk in your own nonverbal
+[255.800 --> 261.320] code. For more fun with STEM, visit stem.nasa.gov.
diff --git a/transcript/allocentric_OdFJuKhtBWU.txt b/transcript/allocentric_OdFJuKhtBWU.txt
new file mode 100644
index 0000000000000000000000000000000000000000..82f86c4f0f1908e96f47e0a45d7797e26214478a
--- /dev/null
+++ b/transcript/allocentric_OdFJuKhtBWU.txt
@@ -0,0 +1,531 @@
+[0.000 --> 12.480] The fourth talk is by Michael Hasselmo, who comes to us from Boston University.
+[12.480 --> 16.720] I discovered he actually doesn't have a PhD, which disturbed me, but then I found out
+[16.720 --> 19.760] that he has a D.Phil. from Oxford.
+[19.760 --> 22.560] So that's good news.
+[22.560 --> 27.680] So he is of course very well known for his really interesting work, especially on
+[27.680 --> 36.800] neuromodulators, and computational work on understanding the entorhinal-hippocampal
+[36.800 --> 39.760] circuits, how memory is formed.
+[39.760 --> 48.720] And he really is working hard at asking biophysical questions, how ion channels play a role
+[48.720 --> 54.480] in coding, how specific synaptic plasticity properties contribute to encoding,
+[54.480 --> 61.600] for example during rhythmic activities like the theta rhythm.
+[61.600 --> 70.720] And he is also, I think, the one who, perhaps most out of the speakers, really likes forming
+[70.720 --> 77.240] computational hypotheses that he then really tests with experiments.
+[77.240 --> 87.520] And some of these computational models are motivated heavily by biophysical facts and
+[87.520 --> 88.880] others are high level.
+[88.880 --> 94.520] And it is very interesting how he can move between these two levels.
+[94.520 --> 102.200] He has trained many people, and actually trained Lisa as well.
+[102.200 --> 104.480] And he is widely known for his work.
+[104.480 --> 107.400] And he also wrote a very interesting book if you want to buy it.
+[107.400 --> 109.400] I don't get a cut.
+[109.400 --> 112.240] So this is all just for your interest.
+[112.240 --> 113.840] It's really a true pleasure to have you here, Mike.
+[113.840 --> 115.840] I'm looking forward to your talk.
+[115.840 --> 119.680] Thanks very much.
+[119.680 --> 120.680] Thanks very much,
+[120.680 --> 126.160] Ivan, for inviting me and including me in this symposium, and also to Jay for including
+[126.160 --> 127.160] me.
+[127.160 --> 128.160] It's a lot of fun.
+[128.160 --> 132.280] I've really enjoyed the other talks and it's marvelous to see the interaction of the
+[132.280 --> 134.440] research in the different labs.
+[134.440 --> 139.960] So I really followed the theme of the conference in terms of using space and time in my talk.
+[139.960 --> 143.280] And I tried to predict what the other speakers were going to talk about.
+[143.280 --> 148.080] And neither Jill nor Edvard really talked as much about time as I expected, but I actually
+[148.080 --> 149.400] refer a little bit to it.
+[149.400 --> 151.880] Edvard had it there, but he didn't have time to get to it.
+[151.880 --> 158.160] So just right up front I wanted to thank the people that did the work that I'll be presenting.
+[158.160 --> 163.200] So I'll talk about work that was done by Jake Hinman in my laboratory as well as Andy
+[163.200 --> 168.920] Alexander, and work done by Jennifer Robinson in collaboration with Mark Brandon.
+[168.920 --> 172.800] And then Mark is an alumnus, but I'll be talking about some of his work as well as work
+[172.800 --> 176.240] that Ben Kraus and Caitlin Monahan did.
+[176.240 --> 181.000] And some modeling work done by Florian Raudies. And Lisa worked with me, but she was
+[181.000 --> 185.040] doing intracellular work and I won't talk about that work.
+[185.040 --> 186.040] So this is just an overview.
+[186.040 --> 191.720] I'll talk about some data on neurons that code both time and space.
+[191.720 --> 195.200] And then I'll talk about mechanisms, potential mechanisms for coding of time.
+[195.200 --> 200.120] We don't have the final answers for that, but some potential mechanisms, as well as mechanisms
+[200.120 --> 204.680] for coding space, and particularly new work that hasn't been published yet on the influence
+[204.680 --> 208.240] of environmental boundaries.
+[208.240 --> 214.920] So as Ivan nicely mentioned, I have a book about modeling of episodic memory, and it really
+[214.920 --> 221.160] talks about a particular framework for modeling how you could get the spatiotemporal trajectories
+[221.160 --> 222.160] of memory.
+[222.160 --> 228.400] So, Tulving defined episodic memory in terms of what did you do at time t in place p. And in the
+[228.400 --> 233.240] book I describe how you could potentially have a circuit through the hippocampus and entorhinal
+[233.240 --> 237.600] cortex that allows you to encode and retrieve particular trajectories.
+[237.600 --> 241.680] Just so you know, I'm using the cursor because they told me that the laser pointer doesn't
+[241.680 --> 243.360] show up well on the video camera.
+[243.360 --> 247.760] So I'm trying to follow the instructions here, but it's a little bit slow actually.
+[247.760 --> 253.560] But another important point about it is that the trajectories would not only be trajectories
+[253.560 --> 257.000] through space, but you can often have events where you're sitting in the same location, as
+[257.000 --> 260.720] you're doing today, and you hopefully will have a very clear distinct episodic memory
+[260.720 --> 262.640] of the different speakers today.
+[262.640 --> 265.280] And so you really need some way of discriminating different times.
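[Editor's note: to make the "what did you do at time t in place p" framework concrete, here is a minimal illustrative sketch, not Hasselmo's actual model; all names are hypothetical. An episode is stored as a spatiotemporal trajectory that can be queried by time or by place, which is why a time code is needed when the place does not change.]

    # Episodic memory as a trajectory of (time, place, event) samples.
    from dataclasses import dataclass

    @dataclass
    class Sample:
        t: float      # seconds from the start of the episode
        place: str    # location label (could equally be x, y coordinates)
        event: str    # what happened there

    episode = [
        Sample(0.0, "lecture hall", "first talk begins"),
        Sample(3600.0, "lecture hall", "second talk begins"),
        Sample(7200.0, "lobby", "coffee break"),
    ]

    def recall_by_time(episode, t):
        # "what was happening around time t?" - nearest stored sample
        return min(episode, key=lambda s: abs(s.t - t))

    def recall_by_place(episode, place):
        # the same place can hold many events at different times,
        # which is why a separate time code is needed to tell them apart
        return [s for s in episode if s.place == place]

    print(recall_by_time(episode, 3500.0).event)
    print([s.event for s in recall_by_place(episode, "lecture hall")])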
+[265.280 --> 269.360] If somebody asks you what happened at the beginning of the symposium versus what happened
+[269.360 --> 272.680] at the end of the symposium, you can do that even though you've been in the same location
+[272.680 --> 276.360] the entire time.
+[276.360 --> 280.320] So just to kind of give the same summary that other people gave, of course there's plenty
+[280.320 --> 286.240] of evidence that the hippocampus, entorhinal cortex and adjacent structures are involved
+[286.240 --> 291.040] in encoding of episodic memory, both from the lesion and patient data, but also from
+[291.040 --> 295.920] physiological studies of encoding activity in these structures.
+[295.920 --> 301.800] And the rodent is a nice system for studying these structures because the hippocampus
+[301.800 --> 308.120] and entorhinal cortex are disproportionately large in the rodent relative to other structures,
+[308.120 --> 313.040] and so it makes it nice for doing the in vivo unit recording of the sort that's been used
+[313.040 --> 318.240] to discover the various different functional cell types that Edvard and others already
+[318.240 --> 319.840] summarized in their talks.
+[319.840 --> 325.560] And I'll actually talk about a lot of these different functional subtypes in the various
+[325.560 --> 328.400] components of the talk.
+[328.400 --> 332.800] So first I want to kind of hone in on the main theme of the symposium, which
+[332.800 --> 337.520] is the coding of space and time, and the fact that there are actually individual neurons
+[337.520 --> 340.920] that will simultaneously code both space and time.
+[340.920 --> 346.240] So I'm going to show you a video of cells that were referred to as time cells, but they
+[346.240 --> 349.400] actually are also coding place.
+[349.400 --> 353.400] So this is a video of a rat running on a spatial alternation task.
+[353.400 --> 357.800] This is actually a project that I did in collaboration with Howard Eichenbaum's lab with Ben Kraus
+[357.800 --> 362.880] as a senior author, and the colors that you see and the tones you hear are indicating different
+[362.880 --> 363.880] neurons recorded.
+[363.880 --> 368.680] So three different neurons, coded by red, green and blue, and you can see the rat is
+[368.680 --> 371.560] not really leaving the treadmill.
+[371.560 --> 376.520] It's running on this treadmill in the center of the spatial alternation task, but you can
+[376.520 --> 381.840] see that at different times during running the different cells
+[381.840 --> 383.040] are firing.
+[383.040 --> 385.800] And then the rat goes and starts to do the spatial alternation.
+[385.880 --> 390.880] You can see the cell coded in red actually fires in a particular location on the maze
+[390.880 --> 392.160] as a place cell.
+[392.160 --> 396.560] Then it gets its reward and then you'll see another firing here where the cell coded
+[396.560 --> 399.840] in blue is actually firing as a place cell.
+[399.840 --> 405.720] But then when it gets back on the treadmill you can see the cell coded in red fires.
+[405.720 --> 411.160] Then the cell coded in green is firing during the middle of the period.
+[411.160 --> 414.360] And then the cell coded in blue is firing at the end of the period.
+[414.360 --> 420.360] And you can see the rat's location and direction is not changed during this period of running
+[420.360 --> 421.360] on the treadmill.
+[421.360 --> 425.200] And in relation to Jeff's talk I want to point out this treadmill doesn't have features
+[425.200 --> 426.680] on the treadmill.
+[426.680 --> 428.720] So it's a featureless treadmill.
+[428.720 --> 433.960] So the animal doesn't have any indication on the treadmill surface to cue this firing.
+[433.960 --> 438.640] And the firing is relative to the start of each of these 16-second trials,
+[438.640 --> 442.160] not the particular portion of the treadmill that the rat is on.
+[442.160 --> 447.520] And you can see there's actually tiling across time, similar to the place cell tiling that
+[447.520 --> 450.840] I think Jeff showed with the density around the reward locations.
+[450.840 --> 453.240] There's also tiling of different time intervals.
+[453.240 --> 458.440] So during this 16-second period of running, different neurons, if you sort them here according
+[458.440 --> 462.440] to what time interval they're coding, you can see they're coding a number of different
+[462.440 --> 463.440] time intervals.
+[463.440 --> 466.040] And this is actually a recurring theme now in the data.
+[466.040 --> 471.200] A large number of different groups have shown this type of coding across a range of
+[471.200 --> 472.520] intervals.
+[472.520 --> 477.720] But as you can see for any given cell, if you line up all of the different trials, these
+[477.720 --> 483.600] 16-second trials, there's cells that... this one cell here is reliably coding the beginning
+[483.600 --> 485.400] of the running period.
+[485.400 --> 489.400] This cell is reliably coding the middle of the running period and this cell is reliably
+[489.400 --> 491.280] coding the end of the running period.
+[491.280 --> 498.880] So the same cells that are coding particular spatial locations are also coding time.
+[498.880 --> 507.240] Now, using the miniature endoscope that was developed by Mark Schnitzer, from Inscopix, Will
+[507.240 --> 512.280] Mau in Howard Eichenbaum's laboratory did experiments looking at the time cells with
+[512.280 --> 518.360] imaging in hippocampal region CA1, and you can see here it's plotted for just individual
+[518.360 --> 523.520] trials, but you'll see a repeating motif where there's a time cell firing up here on this
+[523.520 --> 528.800] trial and then the time cell firing here and then a time cell firing down here.
+[528.800 --> 534.320] And then on the next trial, this is all from a rat running on a treadmill,
+[534.320 --> 538.400] here's again the time cell here and then the time cell over here and then the time cell
+[538.400 --> 540.480] down in the bottom.
+[540.480 --> 545.960] So similar to the unit recording, the electrophysiological recording, the calcium imaging shows
+[545.960 --> 551.760] consistent firing for this particular unit of firing near the beginning of each trial,
+[551.760 --> 553.960] for this one firing near the end.
+[553.960 --> 560.960] And Will could then sort these large numbers of units according to what period during the
+[560.960 --> 562.880] 10-second interval they were coding.
+[562.880 --> 567.640] You can see there's actually a greater number density of cells coding the beginning of the
+[567.640 --> 574.920] running, and it actually progressively decreases in a very consistent manner to get fewer firing
+[574.920 --> 579.520] fields later on in the interval, but actually slightly wider firing fields.
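[Editor's note: a quick sketch of how the tiling figure just described is typically constructed, on synthetic data; nothing here is from the actual recordings. Take each cell's trial-averaged firing rate over the run and sort the cells by the time of their peak.]

    import numpy as np
    rng = np.random.default_rng(2)

    n_cells, n_bins = 40, 160                    # 16 s in 100 ms bins
    t = np.linspace(0, 16, n_bins)
    centers = rng.uniform(0, 16, n_cells)        # hypothetical time fields
    rates = np.exp(-(t[None, :] - centers[:, None]) ** 2 / (2 * 1.0 ** 2))
    rates += rng.normal(0, 0.05, rates.shape)    # recording noise

    order = np.argsort(rates.argmax(axis=1))     # sort rows by peak-time bin
    sorted_rates = rates[order]
    # Plotting sorted_rates as an image shows the diagonal band of
    # sequentially active time cells, as in the slide.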
+[580.000 --> 584.080] And the advantage of the imaging is that these were then lined up with each other over
+[584.080 --> 589.120] days, and so the same population of neurons, or at least many of the same neurons, could be
+[589.120 --> 596.120] sampled over many days, and you can see a similar coding across the different days by these
+[596.120 --> 597.360] neurons.
+[597.360 --> 601.600] But this has allowed us then to do what Jill Leutgeb's laboratory did in
+[601.600 --> 607.400] the Mankin studies in 2012 and 2015, where they looked at correlations, in their
+[607.400 --> 614.120] case of neuronal activity of place cells in CA2 or CA3 or CA1, and we saw a similar, in this
+[614.120 --> 617.920] experiment, a similar decrease in correlation.
+[617.920 --> 622.280] In this case across different trials it was a shorter period than the many hours they
+[622.280 --> 629.360] study, but you see a gradual decrease in the correlation over time, which could then be
+[629.360 --> 634.880] the basis for forming an episodic representation that is distinct not only for times within
+[634.880 --> 641.280] a trial, which could be coded by different neurons, but also by the correlation across the
+[641.280 --> 646.480] whole population, which decreases, suggesting that you have a change in representation between
+[646.480 --> 652.440] trials near the beginning of each day and trials near the end. And then Will Mau also looked
+[652.440 --> 657.880] at the correlation, in this case the decoding error, across days, and so there's less decoding
+[657.880 --> 662.600] error within a day versus for a one-day interval versus a two-day interval.
+[662.680 --> 667.360] There's actually a progressive change in the population representation across days that
+[667.360 --> 673.960] could allow the animal to retrieve episodic memories for different days, and
+[673.960 --> 677.400] that would be analogous to what might be necessary if you were going to remember where you
+[677.400 --> 682.320] parked your car today versus where you parked your car yesterday.
+[682.320 --> 687.040] So this is intriguing for potential mechanisms for episodic memory.
+[687.040 --> 692.600] Now as Edvard showed, of course there's also grid cells in the medial entorhinal cortex,
+[692.600 --> 697.480] and this is just showing a video of a rat foraging in an open field environment.
+[697.480 --> 703.000] This is a recording done by Caitlin Monahan in my lab showing recording of a grid cell, but
+[703.000 --> 707.040] I'm only pointing this out because the question is, you know, they're coding space, but are
+[707.040 --> 712.960] they similar to place cells in that they would also code the time of the running?
+[712.960 --> 718.120] So here is the similar sort of paradigm where, in this case, Ben Kraus was taking
+[718.120 --> 722.120] neurons that Mark Brandon had found and identified as grid cells and then having the animal
+[722.120 --> 726.880] run on the same sort of treadmill task with the spatial alternation, and now this is
+[726.880 --> 731.840] a single grid cell that's being recorded, but you'll see that it actually fires at three
+[731.840 --> 736.480] different distinct times. So here's the start of a trial: it fired right at the beginning, and
+[736.480 --> 741.480] it stops firing for a period of time, then it fires for a period of time, then it stops.
+[743.520 --> 748.920] Then it starts firing again at the end of the trial.
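[Editor's note: the decreasing-correlation analysis mentioned here can be sketched in a few lines. The population below drifts by construction, purely for illustration; it is not a model of the published data.]

    import numpy as np
    rng = np.random.default_rng(3)

    n_trials, n_cells = 30, 100
    drift, noise = 0.05, 0.2
    base = rng.normal(size=n_cells)
    pop = np.empty((n_trials, n_cells))
    for i in range(n_trials):                 # representation drifts slowly
        base = base + drift * rng.normal(size=n_cells)
        pop[i] = base + noise * rng.normal(size=n_cells)

    corr = np.corrcoef(pop)                   # trial-by-trial correlations
    mean_by_lag = [np.mean(np.diag(corr, k)) for k in range(1, n_trials)]
    # mean_by_lag decreases with lag: nearby trials are more similar than
    # distant ones, the signature used to argue for a gradually changing
    # episodic representation across trials (and, over days, across days).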
+[748.920 --> 753.720] So here, this is similar to this cell. It's not the same cell, but it's similar to this
+[753.720 --> 758.920] cell in that it has multiple firing fields in a two-dimensional open field environment.
+[758.920 --> 763.720] But as you can see from the movie, this similar cell fires at the beginning, doesn't fire,
+[763.720 --> 768.520] then fires again, then doesn't fire, then fires at the end. So even in the case of a cell
+[768.520 --> 772.000] that's identified as a grid cell in a two-dimensional environment with different firing
+[772.080 --> 776.160] fields, when it's running on a treadmill in the same location with the same direction,
+[776.160 --> 781.560] it shows distinct coding of different time intervals during that period of running. So again, the
+[781.560 --> 786.680] same cells can code both spatial location and time.
+[786.680 --> 793.400] So, you know, we've seen quite robust coding of both dimensions. Now the question is, what
+[793.400 --> 798.600] are the potential mechanisms for coding these dimensions? And right up front, here is
+[798.680 --> 803.160] where I tried to predict what Edvard would end up speaking about. I actually put in a slide;
+[803.160 --> 806.720] he did show this slide, so I guess my prediction was correct, he just didn't have that much
+[806.720 --> 811.000] time to show it. But so this is this interesting experiment that they published where they
+[811.000 --> 818.000] saw, in a relatively long period of the animal foraging in environments with different colors,
+[818.160 --> 823.040] black environments or white environments, they would often observe neurons that would
+[823.040 --> 829.120] show an exponential decay of firing rate during the time of foraging within an individual
+[829.120 --> 834.040] environment. This is in lateral entorhinal cortex, in contrast to medial entorhinal cortex,
+[834.040 --> 839.040] which is where the grid cells have been described, as well as the time coding that I just showed
+[839.040 --> 842.560] you. But this is lateral entorhinal cortex where they show this interesting exponential
+[842.560 --> 848.760] decay. They also often saw cells that would reduce exponentially, that seemed to show
+[848.760 --> 854.240] a fit with an exponential decay over a very long period of time, and this was really exciting
+[854.240 --> 860.240] for myself and for Marc Howard at Boston University, because it fits very well with the framework
+[860.240 --> 869.440] that Marc has been using at Boston University for many years, which is to have an assumption
+[869.440 --> 875.880] of an exponentially decaying representation of time that then is combined and can be used
+[875.880 --> 882.120] to generate time cell responses. So we had actually already done this model, and this is
+[882.120 --> 888.720] a paper by Yue Liu, Zoran Tiganj, myself and Marc Howard. We'd already actually written
+[888.720 --> 894.720] up this paper and submitted it at the time that we saw Edvard's paper, so it was perfect
+[894.720 --> 903.720] for us, because in our paper we were actually motivated by slice physiology data by
+[903.720 --> 909.400] my former collaborator Angel Alonso, who sadly passed away in 2005. But Angel Alonso's group
+[909.400 --> 915.200] had observed neurons that had firing in slice preparations that would have firing rates that
+[915.200 --> 920.160] would in some cases decay exponentially for periods of time, and this work has also been done
+[920.160 --> 925.640] by Motoharu Yoshida, my former postdoctoral fellow. This is actually
+[925.640 --> 930.520] recording done by Schwindt in cortex; there are a number of different preparations that show,
+[930.520 --> 936.280] in an intracellular recording, this type of exponential decay. So we'd use this as a justification for
+[936.280 --> 943.080] a spiking neuronal model that could model how neurons could show an exponential decay of firing
+[943.080 --> 950.720] over time at different time constants, very similar to what the paper from
+[950.720 --> 957.400] the Moser laboratory shows. So this was the input: a set of spiking neurons with different time constants
+[957.800 --> 965.480] were the input of this network that we recently published, and then the output was generated by
+[965.480 --> 970.840] having these inputs go to neurons that were either excitatory or inhibitory, which then would connect
+[970.840 --> 977.400] to output neurons that would in a sense sum up the addition or subtraction of these different
+[977.400 --> 982.360] exponential functions, and would be able to generate the time cells. And that, I hope you can see,
+[982.440 --> 988.360] looks quite similar to the distribution of time cell responses in the data from the paper
+[988.360 --> 993.080] by William Mau, where you have the higher number density of time cells coding the beginning of the trial
+[993.080 --> 1000.840] with shorter periods of firing, and then a smaller number of cells coding longer periods at later
+[1000.840 --> 1004.760] portions during the trial. And this is an important part of the model that Marc Howard has been
+[1004.760 --> 1011.320] describing, which is the idea that you would have a scale invariant representation. And this is
+[1011.320 --> 1017.720] consistent with a lot of data, both physiology, as we're showing here, but also behavior: if you ask
+[1017.720 --> 1024.600] any animal or a human to discriminate different time intervals that are one second versus two
+[1024.600 --> 1029.160] seconds, their resolution will be more accurate than if you ask them to discriminate time intervals
+[1029.160 --> 1034.840] of 10 seconds versus 20 seconds. And so this scale invariance appears here in this model, where if you
+[1034.840 --> 1039.800] normalize the responses to the same interval and same magnitude you'll get the same shape. And this is
+[1039.800 --> 1043.160] important if you're trying to remember: you want to remember, oh, you know, what did he
+[1043.160 --> 1047.720] just do with the cursor three seconds ago, or you want to remember what did I just talk about five
+[1047.720 --> 1052.760] minutes ago. Your ability to discriminate that, your ability to encode those types of memories on
+[1052.760 --> 1057.880] different time scales, depends upon having some representation of time on multiple different time
+[1057.880 --> 1063.240] scales. But so, we were very excited when we saw the data from the Moser laboratory, because we
+[1063.240 --> 1068.120] could essentially use that as another justification for the input. You know, we could imagine this is
+[1068.120 --> 1073.400] lateral entorhinal cortex, and that we are giving inputs that have different time constants; in
+[1073.400 --> 1077.880] their case of course they have time constants on the order of minutes, and we were using time constants
+[1077.880 --> 1083.960] on the order of seconds, but that's perfect again for this problem of scaling multiple, or remembering
+[1083.960 --> 1090.360] multiple, different scales of time in episodic memory.
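(As an illustration of the framework described above, here is a minimal numerical sketch, not the published Liu/Tiganj/Hasselmo/Howard model itself: a bank of inputs that decay exponentially after a start cue with a spectrum of time constants, combined by fixed differences across the bank, yields sequentially peaked "time cell" responses; all parameter values are illustrative assumptions.)

import numpy as np

t = np.linspace(0.01, 16.0, 1600)        # seconds since the start cue
taus = np.geomspace(0.5, 8.0, 12)        # spectrum of decay time constants

# Input bank: each channel decays exponentially after the start cue.
F = np.exp(-t[None, :] / taus[:, None])  # shape: (channels, timepoints)

# Crude stand-in for the excitatory/inhibitory summation in the talk:
# differences of adjacent channels peak at successively later times.
time_cells = F[1:, :] - F[:-1, :]
peak_times = t[np.argmax(time_cells, axis=1)]
print("time cell peaks (s):", np.round(peak_times, 2))

# Scale invariance: multiplying every tau by the same factor k multiplies
# every peak time by k, so responses rescaled by their own peak time
# collapse onto one shape, matching the behavioral Weber-law pattern above.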
+[1090.360 --> 1096.920] And as I mentioned, in terms of the output, we were replicating it in terms of the data from Ben Kraus's experiment, where you have different
+[1096.920 --> 1102.600] cells, time cells, coding different intervals during time, and in particular getting at this change in
+[1102.600 --> 1109.240] the number density of the cells, with the higher number of cells at the shorter time intervals, and
+[1109.240 --> 1114.920] then the broader distribution of firing and the smaller number of cells at the longer time intervals.
+[1114.920 --> 1120.280] So that's something that's effectively generated by this model. Now there's other ways of generating
+[1120.280 --> 1125.880] these responses; you could have a chaining model, or we could use different types of models, and,
+[1125.880 --> 1130.680] you know, we can have more discussion about that if we want during the discussion period.
+[1132.520 --> 1136.280] Right, so I also wanted to talk about potential mechanisms for coding of space.
+[1138.520 --> 1144.600] There's a number of different ways that you can generate grid cell or place cell type responses;
+[1145.160 --> 1149.640] I'm just kind of broadly summarizing, there's a lot of different models in this domain,
+[1149.640 --> 1154.680] but I'm broadly summarizing two different types that I'll talk about. One type is doing integration
+[1154.680 --> 1159.880] of self-motion velocity, so that would be the speed and direction of the animal: if it can integrate
+[1159.880 --> 1165.640] its movements at each point in time, then it can estimate where it is; you know, just by
+[1165.640 --> 1169.960] integrating that velocity it can estimate where it is relative to its starting point. And
+[1169.960 --> 1175.400] the attractor dynamic model of grid cells uses this, and the oscillatory interference model of grid cells
+[1175.400 --> 1180.600] uses this mechanism. Later on I'll also talk about the alternate model using a transformation
+[1180.600 --> 1186.760] of sensory input. And actually both of these components have been described in this recent paper
+[1186.760 --> 1192.120] from Lisa Giocomo's lab by Malcolm Campbell, these different potential influences on firing.
+[1193.400 --> 1198.040] Now, in these models, the path integration models I'll talk about first,
+[1198.040 --> 1203.240] it's reasonable to assume that you have a self-motion signal available in entorhinal cortex, because
+[1203.240 --> 1209.480] there are cells coding both direction and running speed. So Edvard already summarized the
+[1209.560 --> 1217.080] head direction cells that were described by Jeff Taube and Jim Ranck, and also by Sargolini and the
+[1217.080 --> 1223.400] Moser lab. This is one recorded by Mark Brandon in my laboratory, showing tuning for a southwest
+[1223.400 --> 1229.480] direction and not firing for the north, east, or south in this polar plot. And then, as Edvard also
+[1229.480 --> 1235.240] mentioned, there are many cells in entorhinal cortex that will code speed by showing a linear
+[1235.240 --> 1242.280] change in firing rate based on running speed. These were actually in some early papers by O'Keefe,
+[1242.280 --> 1247.320] but the Kropff paper from the Moser lab was calling them speed cells, specifically for cells that
+[1247.320 --> 1252.200] weren't coding other factors, that were only coding speed; but there's a number of papers showing
+[1252.200 --> 1262.040] cells coding both speed and other factors.
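(A minimal sketch of the path integration idea just described, assuming only that a speed and a heading are available at each time step; the function name and parameter values are hypothetical.)

import numpy as np

def path_integrate(speeds, headings, dt=0.02, start=(0.0, 0.0)):
    """speeds in m/s, headings in radians; returns the (x, y) trajectory."""
    x, y = start
    trajectory = [(x, y)]
    for v, theta in zip(speeds, headings):
        # Each step adds the velocity vector integrated over one time bin.
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
        trajectory.append((x, y))
    return np.array(trajectory)

# Example: a quarter turn at constant speed.
steps = 500
traj = path_integrate(np.full(steps, 0.2),
                      np.linspace(0.0, np.pi / 2, steps))
print("estimated position relative to start:", traj[-1].round(3))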
+[1262.040 --> 1266.920] And, if I can get this to start, so this is just one of the types of models: this is the oscillatory interference model, which can use a measure of
+[1266.920 --> 1274.680] velocity to generate a code of location. And this is just showing how, in this model,
+[1274.680 --> 1280.520] the oscillations shown here are being driven by the velocity of the animal relative to
+[1280.520 --> 1284.200] different directions in the environment, and then you sum up the oscillations; when they cross
+[1284.200 --> 1290.200] threshold, you can generate a grid cell firing field that in a sense is based on the integration
+[1290.200 --> 1297.400] of the velocity over time.
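(A minimal sketch of the oscillatory interference idea: because integrating velocity along a preferred direction gives a phase proportional to displacement in that direction, the interference of three oscillators with preferred directions 60 degrees apart can be written directly as a function of position, and the thresholded product forms a hexagonal grid. Parameter values are illustrative.)

import numpy as np

beta = 2.0                               # spatial scale factor (cycles per meter)
dirs = [0.0, np.pi / 3, 2 * np.pi / 3]   # preferred directions, 60 degrees apart

xs, ys = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))

# Integrating velocity along a preferred direction yields a phase equal to
# displacement projected onto that direction, so the interference pattern
# can be evaluated directly at each position in a 1 m box.
rate = np.ones_like(xs)
for d in dirs:
    proj = xs * np.cos(d) + ys * np.sin(d)
    rate *= (1 + np.cos(2 * np.pi * beta * proj)) / 2

rate = np.where(rate > 0.2, rate, 0.0)   # threshold, as in the talk
print("fraction of the box inside firing fields:",
      round(float((rate > 0).mean()), 3))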
+[1297.400 --> 1306.200] Now this overall framework of using direction and running speed is consistent with some data. Jeff Taube's lab did inactivation of the anterior thalamus to block the
+[1306.200 --> 1311.720] head direction input to the entorhinal cortex, and showed that grid cells that they recorded in
+[1311.720 --> 1315.800] the entorhinal cortex in the baseline condition essentially lost their spatial
+[1315.800 --> 1320.680] specificity when they inactivated the head direction cells, and then recovered afterwards.
+[1323.320 --> 1329.640] Similarly, Mark Brandon in my laboratory did an experiment where he recorded grid cells in a
+[1329.640 --> 1335.640] baseline condition and did inactivation of the medial septum; a similar experiment was also
+[1335.640 --> 1342.680] done by Julie Koenig with Jill and Stefan Leutgeb, and in this case it also had the effect of wiping
+[1342.680 --> 1348.360] out the grid cell spatial specificity. So you can see, during medial septum inactivation you lose
+[1348.360 --> 1353.800] that spatial specificity, and this is associated with a change from theta rhythmic oscillatory dynamics
+[1353.800 --> 1359.160] in the entorhinal cortex in the baseline condition to the loss of theta oscillations
+[1359.160 --> 1364.360] during the medial septum inactivation, consistent with the role of these neuronal inputs in generating
+[1364.360 --> 1369.080] the theta rhythm; and this is something that Ivan and others have done a lot of work on, the
+[1369.080 --> 1374.120] role of the medial septum in driving the theta oscillations. Then, when the theta oscillations
+[1374.120 --> 1380.200] recover, you see a recovery of the spatial periodicity. Now this is in a sense the opposite of the
+[1380.200 --> 1386.920] head direction manipulation done by the Taube lab, because we showed in the same paper that the
+[1386.920 --> 1392.760] spatial periodicity of the grid cells, these conjunctive grid by head direction cells, the spatial
+[1392.760 --> 1397.960] periodicity of these cells is lost, but not the head direction coding. So it's the opposite of the
+[1397.960 --> 1402.600] case where they are blocking the head direction input and seeing a loss of grid cells: here we have
+[1402.600 --> 1407.800] a loss of the grid cell firing, but we have the maintenance of this head direction coding
+[1408.520 --> 1414.440] in the environment. So the logical, or our first, assumption actually was that we had wiped out
+[1414.440 --> 1420.520] the speed code coming into the entorhinal cortex, and one of the first things we did was to
+[1420.520 --> 1425.800] look at the speed coding by the different neurons. And, nothing's ever simple of course,
+[1425.800 --> 1430.760] we were disappointed to find that the speed coding wasn't lost. So I'm first going to show you
+[1430.760 --> 1436.280] just that there is speed coding in a number of different cell types. So the grid cells show linear
+[1436.280 --> 1440.920] changes in firing rate with running speed; the conjunctive grid by head direction cells show it; the
+[1440.920 --> 1446.840] head direction cells as well, and the pure speed cells. And here they all are, all showing the firing rate
+[1446.840 --> 1451.960] change with running speed. Just before I show you the results of the medial septum inactivation,
+[1451.960 --> 1457.480] I was going to show you that there's also a change in theta rhythmicity with running speed. So
+[1457.480 --> 1463.640] if you do an autocorrelogram on the firing, shifting the spiking relative to itself,
+[1463.640 --> 1469.640] it'll peak at zero, and then as you shift it'll peak again at 125 milliseconds, corresponding to
+[1469.640 --> 1474.520] a rhythmicity of about eight hertz, and then it'll peak again at 250; so that's what's shown here.
+[1475.480 --> 1480.680] And with running speed you actually get a narrower period between these peaks, indicating that
+[1480.680 --> 1486.440] the rhythmicity is shifting from about eight hertz to slightly higher frequencies as the running
+[1486.440 --> 1493.000] speed increases, and this is also seen in all the different cell types. But so, we looked at the
+[1493.000 --> 1497.400] effects during medial septum inactivation, and as you can see, here's a cell that's showing a
+[1497.400 --> 1502.920] general coding of firing rate with running speed, and then we did medial septum
+[1502.920 --> 1509.240] inactivation, so a loss of overall rhythmicity, and we actually see in this case, and in many cells, a
+[1509.240 --> 1516.520] better coding of firing rate with running speed. So it isn't that the signal of running speed was
+[1516.520 --> 1523.320] lost. But we did see, of course, in some cases a complete loss of the theta rhythmicity of
+[1523.320 --> 1529.160] the neurons, which of course would prevent any kind of coding of running speed by rhythmicity;
+[1529.960 --> 1535.400] or, in this case, we actually saw a maintenance of some rhythmicity, but the coding of running speed
+[1535.400 --> 1541.320] by rhythmicity is perturbed: here the rhythmicity is increasing in frequency with higher running speed,
+[1541.320 --> 1546.440] here it's decreasing in frequency with higher running speed. So the rhythmicity representation has
+[1546.440 --> 1552.600] been perturbed even though the firing rate code for speed is not perturbed.
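(A sketch of the autocorrelogram measure described above, run on a toy 8 Hz rhythmic spike train; bin size, jitter, and durations are illustrative assumptions.)

import numpy as np

def autocorrelogram(spike_times, bin_size=0.005, max_lag=0.5):
    """Histogram of positive lags between all spike pairs (sorted input)."""
    lags = []
    for i, t0 in enumerate(spike_times):
        for t1 in spike_times[i + 1:]:
            lag = t1 - t0
            if lag > max_lag:
                break
            lags.append(lag)
    edges = np.arange(bin_size, max_lag, bin_size)
    counts, edges = np.histogram(lags, bins=edges)
    return counts, edges[:-1]

# Toy spike train: one spike per 125 ms theta cycle, with small jitter.
rng = np.random.default_rng(0)
spikes = np.sort(np.arange(0, 30, 0.125) + rng.normal(0, 0.005, 240))
counts, lag_bins = autocorrelogram(spikes)
first_peak = lag_bins[np.argmax(counts)]
print(f"first autocorrelogram peak near {first_peak * 1000:.0f} ms, "
      f"i.e. about {1 / first_peak:.1f} Hz rhythmicity")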
+[1552.600 --> 1558.760] Now of course the question for many years since then was: well, what particular subpopulation of neurons in the
+[1558.760 --> 1564.120] medial septum is important for this influence on grid cells? And this is something that we've been
+[1564.120 --> 1568.520] working on for a number of years. Holger Dannenberg in my own laboratory was working on
+[1568.520 --> 1574.760] perturbing the cholinergic neurons; we haven't yet seen effects from that on the spatial coding.
+[1575.400 --> 1583.400] But Jennifer Robinson in Mark Brandon's laboratory did selective optogenetic inhibition of
+[1584.520 --> 1589.720] GABAergic neurons in the medial septum. So she did viral infusions of archaerhodopsin and then could
+[1589.720 --> 1594.760] selectively inactivate the GABAergic neurons, and consistent with what I told you before about
+[1594.760 --> 1603.400] the overall medial septum inactivation, when she did the inactivation of the GABAergic neurons she
+[1603.400 --> 1609.720] saw a loss of theta rhythmicity. So here's the power spectra of the field potential:
+[1609.720 --> 1615.800] during the laser-off periods there's very strong eight hertz rhythmicity, and then it's greatly reduced
+[1615.800 --> 1621.320] during the laser-on period. And she's had now a number of grid cell recordings in baseline
+[1621.320 --> 1626.040] conditions where she has spatial periodicity of grid cells, here's two different cells, shown here
+[1626.040 --> 1633.960] and here, and then in the laser-on condition she actually sees a loss of the spatial periodicity
+[1633.960 --> 1641.000] of the grid cells in both of these cases. The one thing is that this was 30 seconds laser on, 30
+[1641.000 --> 1647.160] seconds laser off, and it apparently wasn't a long enough period for the cells to regain their
+[1647.160 --> 1652.040] grid cell periodicity during the laser-off period; so somehow the network's getting perturbed strongly
+[1652.040 --> 1658.120] enough that the grid cells are not firing consistently throughout the period. But at least this
+[1658.120 --> 1664.360] implicates specifically the GABAergic input for the generation of the grid cell firing response.
+[1664.760 --> 1672.120] Now, I've just given you some data that's supportive of the idea of path integration being involved,
+[1672.120 --> 1677.000] but there's actually a number of potential problems with path integration, both for the attractor
+[1677.000 --> 1681.720] dynamic models that are doing path integration and the oscillatory interference model.
+[1682.440 --> 1687.560] One of these was in a number of the papers on the speed coding, which is that many of the neurons
+[1687.560 --> 1694.120] will actually show an exponentially saturating code of firing rate with running speed, where they'll
+[1694.120 --> 1699.080] code it for a period of time but then they'll saturate. This has been shown in a couple of our
+[1699.080 --> 1703.320] papers, and throughout these different classes there's actually a number of
+[1703.320 --> 1709.080] cells that show this saturating exponential distribution of firing, and that's problematic for
+[1709.080 --> 1713.720] doing path integration; you're better off with a linear code of running speed.
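(A sketch contrasting the linear speed code with the exponentially saturating form just described, using a hypothetical saturating function r(v) = baseline + r_max(1 - exp(-v/v0)); all numbers are illustrative.)

import numpy as np
from scipy.optimize import curve_fit

def saturating(v, baseline, r_max, v0):
    """Firing rate that rises with speed but saturates at high speeds."""
    return baseline + r_max * (1 - np.exp(-v / v0))

rng = np.random.default_rng(1)
speeds = rng.uniform(0, 50, 400)                       # cm/s
rates = saturating(speeds, 2.0, 10.0, 12.0) + rng.normal(0, 0.5, 400)

popt, _ = curve_fit(saturating, speeds, rates, p0=[1.0, 5.0, 10.0])
slope, intercept = np.polyfit(speeds, rates, 1)
print("fitted saturation constant v0 ~", round(popt[2], 1), "cm/s")
print("best linear slope:", round(slope, 3))
# A cell like this under-reports high speeds, so integrating its rate as
# if it were linear would systematically under-estimate distance traveled.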
+[1715.240 --> 1719.640] Another important thing that you might have noticed is that I kept referring to the
+[1719.640 --> 1724.440] fact that you need movement direction for the path integration model, and yet the citations
+[1724.440 --> 1729.960] are always to head direction cells; that's what all these models have cited in the past. So we
+[1729.960 --> 1736.520] decided in our laboratory to test whether movement direction equals head direction, and we found
+[1736.520 --> 1741.160] that it doesn't. And, you know, you can walk around and turn your head back and forth;
+[1742.520 --> 1748.120] your head is not correlated all the time with your movement, and neither is that the case in rodents.
+[1749.640 --> 1754.440] We actually looked at periods of time when the rodent head direction was more than 30 degrees
+[1754.440 --> 1759.800] away from the movement direction, to see what the neurons were actually coding, and we found,
+[1759.800 --> 1765.640] consistent with previous studies, many cells coding head direction during these periods of time,
+[1765.640 --> 1771.160] and no cells coding pure movement direction. So there were none that stayed focused on the movement
+[1771.160 --> 1777.560] direction of the animal independent of its head direction. So this indicates that we don't have
+[1778.520 --> 1783.640] a clear code for movement direction in the entorhinal cortex. And then we gave these different
+[1783.640 --> 1789.160] inputs to the attractor model, which is using path integration, as well as the oscillatory interference
+[1789.160 --> 1795.560] model: if we give movement direction input, we get nice spatial periodicity of grid cells; if we give
+[1795.560 --> 1802.040] the head direction input, we don't get this clear spatial signal. So the head direction signal from
+[1802.040 --> 1807.800] the behavioral data is not going to give you the necessary overall movement direction signal you
+[1807.800 --> 1812.680] need. And various people suggested, well, maybe head direction, if you average it over a
+[1812.680 --> 1817.800] one second period or a two second period or some period of time, on average head direction will add
+[1817.800 --> 1822.760] up to movement direction. We tried that, and we got distributions that looked similar to this in
+[1822.760 --> 1829.960] the model. So you can't use head direction as an input to these models.
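(A sketch of the dissociation analysis described above: derive movement direction from the trajectory itself, keep only samples where it differs from head direction by more than 30 degrees, and then ask which variable the cell's tuning follows within those samples. Function and variable names are hypothetical.)

import numpy as np

def angdiff(a, b):
    """Smallest signed angular difference between two angles, in radians."""
    return np.angle(np.exp(1j * (a - b)))

def dissociated_samples(x, y, head_dir, min_sep_deg=30.0):
    """x, y, head_dir are per-sample arrays from a foraging session."""
    move_dir = np.arctan2(np.diff(y), np.diff(x))   # direction of actual travel
    sep = np.abs(angdiff(move_dir, head_dir[:-1]))
    keep = sep > np.deg2rad(min_sep_deg)
    return move_dir[keep], head_dir[:-1][keep]

# With (x, y, head_dir) plus spike times from a session, one would compare
# the cell's tuning curve against move_dir versus head_dir within these
# samples; in the data described above, tuning followed head direction and
# never pure movement direction.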
+[1830.920 --> 1836.920] So this leads us then to the case that I'll talk about for the rest of the talk, which is that many models have proposed
+[1836.920 --> 1844.440] that the grid cell spatial code could be using some transformation of sensory input instead, where
+[1844.440 --> 1848.840] there's an egocentric view of the world, and then you can combine it with head direction coding to
+[1848.840 --> 1855.720] generate an allocentric spatial location. So this is where I'm going to talk about the influence
+[1855.720 --> 1861.320] of environmental boundaries, because in most of these experiments the most salient visual features
+[1861.320 --> 1868.760] have to do with the features on the boundaries. So there's plenty of previous studies showing that
+[1868.760 --> 1873.960] movement of the boundaries in the environment will influence the spatial coding by grid cells.
+[1875.080 --> 1879.880] This was initially done by Caswell Barry, who recorded grid cells in a one meter square environment
+[1879.880 --> 1885.640] and then compressed it in different directions, and saw that the spacing of the grid cell
+[1885.640 --> 1892.360] firing fields would compress in the direction of the boundary movements. The Moser laboratory showed
+[1892.360 --> 1897.400] a similar effect: here's a case where the grid fields are relatively
+[1897.400 --> 1901.400] widely spaced in the environment, and then the movement of one of the boundaries will compress
+[1901.400 --> 1907.080] them in that direction; though interestingly, in this study they saw that for narrow spacing between
+[1907.080 --> 1913.080] the firing fields you don't get that compression effect. So we've modeled this, in
+[1913.080 --> 1918.200] work done with Florian Raudies, where we modeled how you could take an input like this, where you have
+[1918.200 --> 1923.880] features that are either on the ground plane, giving you optic flow, or on the walls, giving you the
+[1923.880 --> 1931.640] angle of particular features, and then we use this to model grid cells. For the optic flow on the ground
+[1932.040 --> 1937.640] plane we used a template matching technique developed by Perrone; for the visual features on the walls
+[1937.640 --> 1943.800] we just took the feature angles on opposite walls to generate the particular distance for the
+[1943.800 --> 1949.560] grid cell models. And we were able to replicate the compression: if we had visual features on
+[1949.560 --> 1955.560] the walls, we could replicate the compression of the grid cell firing spacing in that dimension;
+[1955.560 --> 1960.440] but we could also replicate the Moser lab data showing that in some cases the narrower spacing
+[1960.440 --> 1966.520] would not be shifted by the walls, if we modeled the generation of these grid cells based on optic
+[1966.520 --> 1972.040] flow from the ground plane. So this is showing potential different visual influences on the
+[1972.040 --> 1980.440] grid cells.
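(Geometric intuition only, not the Raudies model itself: the visual angle subtended by wall features of known separation fixes the viewer's distance from that wall, which is why moving a wall rescales a visually driven spatial code. The function and numbers are illustrative assumptions.)

import numpy as np

def distance_from_feature_angle(separation_m, subtended_angle_rad):
    """Distance to a wall from the angle between two features on it,
    assuming the viewer faces the midpoint between the features."""
    return (separation_m / 2) / np.tan(subtended_angle_rad / 2)

for angle_deg in (10, 20, 40):
    d = distance_from_feature_angle(1.0, np.deg2rad(angle_deg))
    print(f"features 1 m apart subtending {angle_deg} deg -> ~{d:.2f} m away")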
+[1980.440 --> 1987.000] Now, in terms of the transformation to create the allocentric representation of space: Neil Burgess had published a paper in 2007 proposing that there might be a transformation from an
+[1987.000 --> 1992.280] egocentric view of the world that was combined with head direction cells in the retrosplenial cortex
+[1992.280 --> 1998.120] to generate what he called allocentric boundary cells, and that these could then drive place cells.
+[1999.320 --> 2004.520] And this is something that had arisen out of early work that the O'Keefe lab did, where they had
+[2004.520 --> 2009.960] place cell firing in a one meter square environment, and then they expanded the environment and saw
+[2009.960 --> 2014.600] that the place cell firing field would often get stretched out. Based on this, Neil Burgess
+[2014.600 --> 2020.360] proposed these allocentric boundary vector cells that would respond to boundaries at a particular
+[2020.360 --> 2027.320] orientation relative to the environment; so this would be, in a sense, coding the east boundary.
+[2028.200 --> 2032.120] And when I first saw this model I thought, oh, there's no way that neurons are actually coding
+[2032.120 --> 2039.320] boundaries in that way, and I was very surprised when the O'Keefe lab, Colin Lever and Caswell
+[2039.320 --> 2045.320] Barry, published these cells, as well as the Moser lab with what they call border cells. Both of these
+[2045.320 --> 2049.880] labs have shown these types of allocentric boundary cells: here's one that's responding to the
+[2049.880 --> 2054.840] west boundary of the environment, here's one that's responding to kind of the southeast boundary of
+[2054.840 --> 2059.720] the environment. And they have the characteristic that they'll respond to walls, they'll
+[2059.720 --> 2066.040] respond to inserted walls, so this is showing firing to an inserted wall; they'll also respond to
+[2066.040 --> 2071.240] the edge of a tabletop, and if you pull two tabletops apart they'll actually respond
+[2071.240 --> 2077.800] to the gap between the two tabletops, even though the animal can still cross it. So there's a very
+[2077.800 --> 2085.640] salient representation of boundaries in entorhinal cortex and other areas such as subiculum.
+[2086.760 --> 2091.400] So, as I mentioned, Neil Burgess had proposed that these boundary cells could be originally
+[2091.400 --> 2097.720] driven by egocentric view cells, and this is where we were very excited to find evidence for this
+[2097.720 --> 2105.160] type of response. Jake Hinman in my laboratory was recording in dorsomedial striatum; he
+[2105.160 --> 2109.880] was actually recording in the region getting input from retrosplenial and entorhinal cortex,
+[2111.080 --> 2116.280] and he found cells that essentially have the same sort of egocentric representation: if you look
+[2116.280 --> 2123.320] back at the Neil Burgess paper, he has plots that are very similar to this. So Jake was recording from
+[2123.320 --> 2130.360] a neuron as a rat was foraging in an open field environment, and he saw firing when the rat was near
+[2130.360 --> 2135.160] the south wall if it was going east, but if it was near the north wall you'd see firing if it was
+[2135.160 --> 2141.080] going to the west. And he correctly assumed that this meant that the firing was in response to the
+[2141.080 --> 2147.720] position of the wall relative to the animal, the egocentric coordinates of the wall. So he did
+[2147.720 --> 2155.480] egocentric plots where you have the animal facing forward; here forward is up, back is down,
+[2155.480 --> 2160.520] and then left and right, and for each spike he would plot the position of the boundary
+[2160.520 --> 2166.120] when that spike was generated. So here's the position of the boundary for three different spikes;
+[2166.840 --> 2172.120] here's the position of the boundary averaged over 222 different spikes, and you can see this
+[2172.120 --> 2179.160] cell is consistently firing when the boundary is to the front right of the animal. And then you could
+[2179.160 --> 2184.440] divide by the overall occupancy of the wall in the behavior to get occupancy-normalized
+[2184.440 --> 2189.240] firing. So here's a clear example of a cell that's responding to the egocentric position of a boundary,
+[2189.240 --> 2198.360] and he's found many cells of this type; hopefully this will be published soon.
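(A sketch of the plotting procedure just described: for each sample, accumulate the positions of boundary points in body-centered coordinates, and divide the spike-triggered histogram by the overall occupancy histogram. Array shapes and names are assumptions.)

import numpy as np

def egocentric_boundary_map(pos, head_dir, spike_samples, wall_pts,
                            bins=32, r_max=0.5):
    """pos: (T, 2) positions; head_dir: (T,) radians; spike_samples: set of
    sample indices at which the cell spiked; wall_pts: (N, 2) boundary points."""
    spike_hist = np.zeros((bins, bins))
    occ_hist = np.zeros((bins, bins))
    edges = np.linspace(-r_max, r_max, bins + 1)
    for i, (p, hd) in enumerate(zip(pos, head_dir)):
        rel = wall_pts - p                     # allocentric offsets to the walls
        c, s = np.cos(hd), np.sin(hd)
        rot = np.array([[c, s], [-s, c]])      # allocentric -> egocentric (+x ahead)
        ego = rel @ rot.T
        h, _, _ = np.histogram2d(ego[:, 0], ego[:, 1], bins=(edges, edges))
        occ_hist += h                          # time spent with walls in each bin
        if i in spike_samples:
            spike_hist += h                    # walls seen at spike times
    with np.errstate(divide="ignore", invalid="ignore"):
        return spike_hist / occ_hist           # occupancy-normalized firing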
+[2198.360 --> 2204.600] Here are multiple examples of neurons coding an egocentric position of the boundary just to the right; here are neurons
+[2204.600 --> 2209.320] coding position just to the left of the animal; here's neurons coding position at greater distance
+[2209.320 --> 2216.360] from the animal. And so this is exactly what Neil Burgess had originally proposed; in fact they
+[2216.360 --> 2221.800] even, as I mentioned, plotted it with the exact same format of egocentric coding that could be
+[2221.800 --> 2227.400] combined with head direction cells to generate the allocentric representation. But they had proposed
+[2227.400 --> 2231.720] that these cells would be appearing in retrosplenial cortex, or at least that the transformation would be
+[2231.720 --> 2238.920] coded in retrosplenial cortex. And so Andrew Alexander in my laboratory went and recorded in retrosplenial
+[2238.920 --> 2245.000] cortex and has seen these same types of egocentric boundary responses, coding left or right
+[2245.000 --> 2250.520] side boundaries, or even boundaries to the back of the animal, in the retrosplenial cortex,
+[2250.520 --> 2257.800] consistent with Neil Burgess's original model from over 10 years ago. So this is supportive of
+[2257.800 --> 2263.800] this notion that the allocentric spatial code could be generated by taking the egocentric input, combining
+[2263.800 --> 2269.880] it with head direction cells, and generating the allocentric representation.
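(The core of the proposed transformation, reduced to a one-line sketch: an egocentric boundary bearing plus the current head direction gives the allocentric bearing of the boundary, whichever way the animal happens to be facing. Angle conventions here are an assumption.)

import numpy as np

def allocentric_bearing(ego_bearing, head_dir):
    """Both angles in radians; an egocentric bearing of 0 means dead ahead."""
    return np.mod(ego_bearing + head_dir, 2 * np.pi)

# A wall dead ahead while facing east, and a wall 90 degrees to the left
# while facing south, are the same allocentric (east) boundary:
print(np.degrees(allocentric_bearing(0.0, 0.0)))               # 0 = east
print(np.degrees(allocentric_bearing(np.pi / 2, -np.pi / 2)))  # 0 = east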
+[2269.880 --> 2274.360] And finally, just to briefly bring it back to modeling of episodic memory: if you think about your episodic memory of
+[2274.360 --> 2278.920] walking in (I remember walking in here, coming from the cafeteria and going to the elevator and
+[2278.920 --> 2284.040] coming up the stairs and walking into the room), in a sense, Tulving described your episodic
+[2284.040 --> 2289.000] memory as a series of kind of movie frames, where you can imagine, oh yeah, what it
+[2289.000 --> 2294.280] looked like when I was walking into the room, and so on. And so somehow you want to combine your
+[2294.280 --> 2299.480] spatiotemporal trajectory with these egocentric views of the world. Now, this is a relatively
+[2299.480 --> 2306.440] abstract, high-level model, but I've modeled how you could store a spatiotemporal trajectory by
+[2306.440 --> 2312.680] having speed-modulated head direction cells driving grid cells that could drive place cells, and
+[2312.680 --> 2318.360] then you could form associations via Hebbian LTP of the place cells with the speed-modulated
+[2318.360 --> 2323.320] head direction cells. Of course, this stage here from grid cells to place cells could be using the
+[2323.320 --> 2328.920] mechanisms that Jeff Magee talked about. But then you could also form associations between the
+[2328.920 --> 2334.280] place cell representations and these egocentric views of the boundaries in, for instance, retrosplenial
+[2334.280 --> 2341.000] cortex, as the animal behaves. And then during retrieval, when there's no behavioral input, you
+[2341.000 --> 2346.840] could have this loop running to retrieve the spatiotemporal trajectory and thereby retrieve
+[2346.840 --> 2354.680] these kind of movie-frame views of the world, in a sense, as your recall of an episodic memory.
+[2355.320 --> 2359.800] All right, so I think I'm just about on time and I'll close there. Thanks very much.
+[2376.280 --> 2377.880] Oh, you're signing something.
+[2384.840 --> 2390.440] Mike, thanks for being here and thanks for the very nice talk. Um, simple question: there were a lot
+[2390.440 --> 2395.640] of people who studied, at the behavioral level, the coding of time, you know, just simple
+[2395.640 --> 2400.120] little things like pressing a bar and releasing it a certain number of seconds later, and you get a
+[2400.120 --> 2405.960] distribution of accuracy, or even licking responses, right? And I'm wondering, do we know, is it known,
+[2405.960 --> 2411.640] whether hippocampus inactivation affects judgments of time in simple time-specific tasks?
+[2412.520 --> 2418.840] Yeah, actually, I mean a lot of the work on that has focused on striatum; like, Warren Meck has focused
+[2418.840 --> 2425.640] on striatum coding this. So, I mean, rather than focusing on it being only hippocampus
+[2425.640 --> 2431.640] and entorhinal cortex for that type of timing behavior, instead I would argue that it seems
+[2431.640 --> 2437.640] like this temporal coding of intervals is a general, brain-wide sort of phenomenon. And
+[2437.640 --> 2442.920] Marc Howard has actually analyzed data from the striatum and seen the same sort of distribution
+[2442.920 --> 2448.440] of time cell responses, with the number density changing. He's analyzed it in data from
+[2448.440 --> 2454.760] prefrontal cortex in both rodents, and in monkey prefrontal cortex from Earl Miller. So
+[2455.640 --> 2462.520] it does seem to be that this type of model could be a relatively general model for mechanisms
+[2462.520 --> 2470.120] of timing, and so I would argue that the hippocampus and entorhinal cortex is really more for timing in the
+[2470.120 --> 2474.520] context of episodic memory, which isn't usually being tested in those types of experiments.
+[2482.520 --> 2489.320] So, very nice talk. I was wondering: our sense of time most of the time is absolute, but sometimes
+[2489.320 --> 2496.040] we lose track of time, right? So do you think the time-encoding cells, do you
+[2496.040 --> 2500.920] think that they're influenced by the state of the brain, or maybe the level of neuromodulators?
+[2500.920 --> 2508.120] Have you seen regulation of their activity, you know, in terms of sometimes they're silent, or how
+[2508.760 --> 2513.560] spaced out their firing is from one cell to another; can that be modulated by
+[2514.520 --> 2518.120] the actual state of the animal? Yeah, I'd love to do that experiment. I mean, we all have this
+[2518.120 --> 2523.320] subjective experience of, you know, exciting conversations going really quickly and
+[2524.120 --> 2531.400] boring talks going really slowly. So I agree that there probably is a very strong
+[2531.400 --> 2536.280] influence of neuromodulators on this, and we actually describe this in the paper, the
+[2536.280 --> 2544.440] Liu paper, the Yue Liu paper: how cholinergic modulation changing the slope of the f-I curve for neurons could
+[2544.440 --> 2550.440] essentially rescale the coding very effectively, you know, across a whole population of different neurons
+[2550.440 --> 2555.240] with different time constants. So we proposed that, but it hasn't been tested; in the experiments with
+[2555.240 --> 2560.600] Ben Kraus we didn't do specific manipulations of neuromodulation. Though I should point out,
+[2561.000 --> 2567.240] the Paton group in Lisbon actually did do experiments where they're doing dopaminergic
+[2567.240 --> 2571.800] modulation, and did see changes in this subjective coding of time.
+[2577.400 --> 2583.480] Thanks, Mike. I was wondering how you square this idea of head direction versus movement
+[2583.480 --> 2587.720] direction, and movement direction being what's needed, with the fact that lesions of ATN
+[2587.880 --> 2596.120] disrupt grid cells. Yes, so I guess I would argue that it shifts you over to the sensory
+[2596.120 --> 2601.000] processing model. I mean, it's possible that both mechanisms are working, and, you know,
+[2601.000 --> 2605.800] there's some suggestion that maybe you could have reset from visual inputs and then do path
+[2605.800 --> 2610.600] integration for periods of time and then have reset again. But I would say probably the
+[2610.600 --> 2617.000] Taube paper result is due not to the loss of path integration but to the loss of the ability to
+[2617.000 --> 2622.680] have an update of head direction, so that you can take your current egocentric input and code
+[2622.680 --> 2627.720] your location. So if I don't know what direction my head is oriented at, then I could get very disoriented
+[2627.720 --> 2631.880] in terms of the visual features being translated into the allocentric map.
+[2631.960 --> 2644.920] Going back to the earlier part of the talk: you talked about these cells that had time fields on the
+[2645.960 --> 2652.280] little treadmill and then place fields around the track, and their place fields around the track
+[2652.280 --> 2658.840] were in the same order as their time fields on the treadmill, and I just never thought of that.
+[2662.200 --> 2668.440] Is the animal replaying its future trajectory around the track? I mean, that was the thing that
+[2668.440 --> 2673.080] occurred to me when I saw that, and I just wondered. Yeah, that's a great idea. All these
+[2673.080 --> 2678.680] questions are all very interrelated, right, space and time. I guess in looking at your book
+[2678.680 --> 2684.680] not that long ago, I realized that they're very intimately interrelated in a certain way, and so
+[2684.680 --> 2690.600] maybe that's not surprising, but I just wondered if you thought about whether the coding of time and
+[2690.600 --> 2695.640] space was in fact related in those kinds of neurons. Yeah, and that would be kind of
+[2695.640 --> 2699.960] consistent with what Jill was saying; maybe, you know, the animal's doing some replay.
+[2700.840 --> 2707.240] We didn't see that specifically, but I should mention that I showed examples of
+[2707.240 --> 2711.800] three cells that had both time fields and place fields, but not all the cells had that: there's plenty of
+[2711.800 --> 2717.320] place cells that don't have time fields, and plenty of time cells that don't have place fields. So it
+[2717.320 --> 2722.920] may be that we just didn't have enough data to analyze that. One thing I should mention is, yeah,
+[2722.920 --> 2728.440] the time cells have this tendency to spread out near the end of the interval, and we've actually
+[2728.440 --> 2731.960] been interested in whether or not that would happen with place cells. But the thing about place
+[2731.960 --> 2737.880] cells is they have kind of ongoing sensory update, so that they could in a sense reset more accurately,
+[2737.880 --> 2742.520] whereas the time cells have this one salient stimulus at the start of the interval, and then
+[2742.520 --> 2748.040] they're essentially coding subsequent time relative to that salient event.
+[2756.280 --> 2761.160] Yeah, a question about the egocentric cells that you presented. So, as you know,
+[2761.720 --> 2767.800] Jim Knierim has seen similar cells in lateral entorhinal cortex, and they've been reported in CA1.
+[2767.800 --> 2775.320] So how similar are they? And do you see these cells as part of a wider network that actually is not
+[2775.320 --> 2780.600] localized to any particular region; you know, what kind of network could that be? Yeah, I mean,
+[2780.600 --> 2785.480] that's certainly, I should have mentioned that Jim's study actually took a somewhat different
+[2785.480 --> 2791.560] perspective: instead of coding it in terms of the position of the barriers, they were
+[2791.560 --> 2795.720] coding it in terms of whether the animal can keep track of the center of the environment. But it is a very
+[2795.720 --> 2800.200] similar characteristic, and so I think it's perfectly reasonable that they're in lateral
+[2800.200 --> 2807.240] entorhinal cortex and in retrosplenial; the dorsomedial striatum response is probably due
+[2807.240 --> 2813.960] to inputs from entorhinal and retrosplenial cortex. I wouldn't necessarily expect to see them
+[2813.960 --> 2818.520] everywhere; I don't think they'd be that likely to show up, for instance, in hippocampus,
+[2818.520 --> 2819.960] but it'll be interesting to see.
+[2826.360 --> 2834.680] So I guess I was wondering why we want such a strong distinction between time cells and space
+[2834.680 --> 2839.560] cells, because you gave an example of how you could be in the same space but at a different
+[2839.560 --> 2844.760] time, and it doesn't seem like we can ever be in a different place at the same time. And you might
+[2844.760 --> 2848.360] think that they're just coding something like context, and sometimes the context is primarily
+[2848.360 --> 2853.640] determined by spatial cues and sometimes by temporal cues. So is there really an
+[2853.640 --> 2858.120] in-principle distinction between the two? No, and in fact that's something I should say:
+[2858.120 --> 2864.760] I didn't say it, but all of these different functional subtypes are really probably just
+[2864.760 --> 2871.320] different categories placed on a continuum of responses. And Lisa has a paper from her lab,
+[2872.440 --> 2879.720] by Hardcastle, that specifically did analysis of the coding characteristics of neurons
+[2879.720 --> 2886.840] and saw all sorts of different combinations of responses. And in fact, we did a GLM analysis
+[2886.840 --> 2891.720] (I didn't show this slide) on the time cells: we looked at whether they were coding time
+[2891.720 --> 2896.840] or distance running on the treadmill, and we saw some cells clearly coding time and some clearly
+[2896.840 --> 2901.960] coding distance, but some were actually coding both time and distance. So there really is
+[2901.960 --> 2908.360] probably a continuum of coding all of these different dimensions.
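(A sketch of the kind of GLM comparison mentioned in this answer: regress spike counts on elapsed time and distance for treadmill runs at varying speeds, and see which regressor carries the weight. Uses statsmodels; the synthetic cell and all numbers are illustrative assumptions.)

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
time_s = np.tile(np.arange(0, 8, 0.1), 20)     # 20 treadmill runs, 8 s each
speed = rng.uniform(10, 30, 20).repeat(80)     # a different speed per run
distance = time_s * speed                      # cm travelled within each run

# Toy "time cell": log firing rate ramps with elapsed time, not distance.
rate = np.exp(-2.3 + 0.35 * time_s)
counts = rng.poisson(rate)

X = sm.add_constant(np.column_stack([time_s, distance]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.params)   # time coefficient near 0.35, distance near zero here

# Because speed varies across runs, time and distance decorrelate, which is
# what lets the GLM separate "time cells", "distance cells", and mixtures.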
diff --git a/transcript/allocentric_P7Q2fE4Qm2w.txt b/transcript/allocentric_P7Q2fE4Qm2w.txt
new file mode 100644
index 0000000000000000000000000000000000000000..1c1d1804105e9edb7f2c59a77e881efb133edc6b
--- /dev/null
+++ b/transcript/allocentric_P7Q2fE4Qm2w.txt
@@ -0,0 +1,35 @@
+[0.000 --> 4.640] Year 12's, the next issue on our agenda is spatial neglect.
+[4.640 --> 12.560] As we know from our video on the parietal lobe, one of the functions that it is largely responsible for is the perception of space.
+[12.560 --> 17.080] Damage to the parietal lobe, then, may result in spatial neglect.
+[17.080 --> 18.960] What is spatial neglect then?
+[18.960 --> 27.200] Well, it's possibly best described as a phenomenon whereby an individual consistently ignores stimuli presented on one side of the body.
+[27.200 --> 33.720] Now this is more than consciously deciding to block out somebody speaking or an annoying noise on either your left or right side.
+[33.720 --> 39.360] In spatial neglect, stimuli from one side of the body are systematically ignored.
+[39.360 --> 42.240] Some sufferers aren't even aware of their condition.
+[42.240 --> 46.840] Now most often, the individual will neglect stimuli from the left side of their body.
+[46.840 --> 51.960] Remember that the left hemisphere of the brain controls the right side of the body and vice versa.
+[51.960 --> 57.440] Ignoring stimuli from the left side of the body means that the right hemisphere is mostly affected.
+[57.440 --> 62.920] That is, spatial neglect is most often the result of damage to the right parietal lobe.
+[62.920 --> 66.520] The consequences of spatial neglect can be considerable.
+[66.520 --> 74.120] For example, asking an individual with spatial neglect to draw you a clock or a house may result in something like this.
+[74.120 --> 79.480] In these situations, the individual is only aware of the right half of the object at hand.
+[79.480 --> 85.000] Sufferers may also only eat the right side of their dinner or acknowledge people on their right side.
+[85.000 --> 89.720] And this is because they simply are unaware of stimuli presented to their left.
+[89.720 --> 95.400] More than that, individuals with spatial neglect may even experience reconstructed memories.
+[95.400 --> 102.960] That is, they may only be able to remember the right side of memories that were encoded before they damaged their right parietal lobe.
+[102.960 --> 105.640] Things that they saw fully at the time.
+[105.640 --> 112.360] So with that information at our disposal, this question from the 2013 VCAA exam seems fairly straightforward.
+[112.360 --> 113.120] It reads,
+[113.120 --> 117.720] Before suffering a stroke, Bettina was a healthy 36-year-old woman.
+[117.720 --> 122.200] Since her stroke, she applies makeup to the right side of her face only.
+[122.200 --> 125.920] Bettina's behavior since the stroke suggests that she has:
+[125.920 --> 128.000] A. spatial neglect,
+[128.000 --> 130.080] B. Broca's aphasia,
+[130.080 --> 132.240] C. Wernicke's aphasia,
+[132.240 --> 135.400] or D. had split brain surgery.
+[135.400 --> 140.080] As I'm sure you can guess, the correct answer here is A, spatial neglect.
+[140.080 --> 146.760] It is likely that Bettina's stroke affected her right parietal lobe, thereby resulting in spatial neglect.
+[146.760 --> 151.320] This means that she systematically ignores stimuli from the left side of her body,
+[151.320 --> 155.960] which explains why she only applies makeup to the right side of her face.
+[155.960 --> 158.760] In the next video, we will look at split brain.
+[158.760 --> 161.560] Keep working hard and have a psychedelic day.
diff --git a/transcript/allocentric_Q1Tczf8vxCM.txt b/transcript/allocentric_Q1Tczf8vxCM.txt
new file mode 100644
index 0000000000000000000000000000000000000000..8f9563721ba85ba31b7d6c174fd525966abd3776
--- /dev/null
+++ b/transcript/allocentric_Q1Tczf8vxCM.txt
@@ -0,0 +1,162 @@
+[0.000 --> 22.000] Hello, my name is Nick Aoy-Jane. I'm a game designer at thatgamecompany. And today, I'm going to talk to you all about cognitive maps and how to prevent your players from getting lost in your levels.
+[22.000 --> 28.000] So, as I said, I'm a game designer at thatgamecompany, but before I was a game designer, I was actually an architect.
+[28.000 --> 37.000] And so, a lot of the references and imagery and knowledge that I'll be using in this presentation actually come from the domains of architecture and urban planning.
+[37.000 --> 43.000] One of my big interests has always been wayfinding and navigation, which is how we got here.
+[44.000 --> 58.000] So, we're going to jump right in and let's first start by defining what maps are. So, firstly, a map is a tool. A map helps you achieve something. Most of the time, it's orienting yourself in relation to other things.
+[58.000 --> 68.000] A map is made. So, I can't go out into nature and find a map just sitting there on the ground. It's either made by me or by somebody else.
+[69.000 --> 81.000] It represents spaces or concepts. It's pretty self-explanatory. And it's relational.
So, if I show you a piece of paper, and it's got a dot on it, and that dot is labeled Paris, that isn't a map just yet.
+[81.000 --> 94.000] However, if I show you that same piece of paper with a dot that says Paris and another dot that says Cairo, now we're beginning to form a map, and those two dots can begin to anchor each other in space.
+[94.000 --> 106.000] Also, a map has edges or limits. Even if it's a globe, which is a map of the entire earth, you're going to be limited by how much information you can get at particular scales, etc.
+[106.000 --> 120.000] There are a couple things maps aren't. So, maps aren't, quote, the truth. They are not always orthographic, that kind of top-down view of something without any perspective that we associate with maps.
+[120.000 --> 140.000] Maps don't always have to be that. They're not always flat. That Polynesian stick chart that I'm showing on the top right is an example of a kind of three-dimensional, very tactile map that maps ocean swells and islands to help navigators navigate wide expanses of ocean, for example.
+[141.000 --> 151.000] They're also not always physical, which means that they can take place in our minds, which is, as you guessed, what a cognitive map is. And it's prescriptive.
+[151.000 --> 165.000] So, if I show you a map of a part of the world that you're unfamiliar with and I label something incorrectly, as far as you're concerned, that incorrect label is reality to you.
+[166.000 --> 175.000] So, let's talk about cognitive mapping. This term was originally coined by Edward Tolman in his lab in 1948.
+[175.000 --> 184.000] So, if you've seen imagery of rats in a maze running around trying to get a piece of cheese, that's where this comes from.
+[185.000 --> 198.000] So, the experiment was as follows. They would place a rat in the apparatus that you see on the top left, this kind of circular room, and they would hide a piece of cheese where that letter G is located in the yellow diamond.
+[198.000 --> 213.000] And the rat would eventually find that piece of cheese, and once the cheese was consumed, they would put the rat back in the circle and have the rat do this over and over until it was almost like muscle memory; you know, the rat would go in, know exactly what routes to take, and it would get the cheese.
+[214.000 --> 223.000] Then, they would take that same rat and put it in the apparatus that you see in the middle. So, that same kind of rounded room, but that original pathway was now blocked.
+[223.000 --> 237.000] The researchers wanted to know: would the rat, once it realizes that path is blocked, have some sort of intuition as to where this piece of cheese was?
+[237.000 --> 246.000] If it did, it would probably bias channel six, which is geographically where you would go if you just wanted to get the cheese.
+[246.000 --> 255.000] But if it didn't, if the rat didn't have a kind of understanding of what its world was in its own mind, it would just bias all of the paths equally.
+[255.000 --> 267.000] As Tolman and the researchers found out, rats did create some sort of mental map of their environment, as you can see from the bar on the right-hand side that's really tall.
+[267.000 --> 280.000] That really tall bar is how many times those rats picked avenue six. So that is what these cognitive maps are for rats, but of course, we're not rats.
We have our own people brains and we inhabit non-trillion spaces. We live in cities, suburbs, all kinds of different environments that demand us to use our own cognitive maps on a daily basis. +[295.000 --> 301.000] So, we can do a little exercise to help you understand what your own cognitive map is like. +[301.000 --> 313.000] So, take five minutes and draw your neighborhood from memory on a piece of paper. Don't look anything up. Just try to let your mind guide your hand and just spend five minutes to do that. +[313.000 --> 317.000] So if you want to do that exercise, go ahead and pause the video now. +[317.000 --> 324.000] If you've done that exercise, you might have a drawing similar to the illustrations we see here on the top. +[324.000 --> 337.000] These illustrations actually came from the image of the city, which is a book that urbanist Kevin Lynch published and Kevin Lynch went to a bunch of cities and asked people to do this very same thing. Hey, can you draw me your neighborhoods? +[337.000 --> 347.000] And after parsing through all of those different illustrations, he was able to discern five elements that people use to make sense of the spaces around them. +[347.000 --> 361.000] Paths, landmarks, districts, edges, and notes. The rest of this talk is going to explain each one of these elements in detail and how we can use those to make really strong cohesive cognitive maps. +[362.000 --> 370.000] But of course, this talk is also about not getting lost. So we need to explain what getting lost is now that we have a clear idea of a cognitive map. +[370.000 --> 385.000] Getting lost is simply a misalignment of your cognitive map with what the world around you is with your surroundings. Is that feeling of feeling like an area or a space is new, but knowing for a fact that it isn't. +[385.000 --> 393.000] And generally a bad time. This can result from changes in your environment or changes in your place within the environment. +[393.000 --> 404.000] Or it can result from insufficiently broad or insufficiently clear cognitive maps because you're unable to respond to those changes in an adequate way. +[404.000 --> 414.000] Similar to this image, which is a map of the world, you know, it's upside down. We might be able to tell it's a map of the world, but I'd be really hard pressed to identify any particular country with the map upside. +[415.000 --> 426.000] So let's talk about the first element. The first are paths. This is also the most self explanatory element. It's a linear space that directs movement and travel. +[426.000 --> 438.000] And it also tends to be dominant in cognitive maps. If you did the exercise earlier, one of the first things you might have done was started diagramming all of the paths that you are aware in your neighborhood. +[439.000 --> 448.000] And these are things like sidewalk streets trails, etc. Travel tends to be concentrated on them. And because of that, we tend to treat them differently. +[448.000 --> 455.000] We pay our roads. We cut channels to make sure that water can flow through correctly. That sort of thing. +[455.000 --> 459.000] And interestingly enough, paths are the most temporal element. +[459.000 --> 469.000] So a path isn't useful to you if you're not moving along it and moving along that is a moving along a path is inherently tied to time. +[469.000 --> 477.000] And so that process of incrementing your way along a path is what Lynch called scaling. +[477.000 --> 487.000] And there are a few limitations though when we're dealing with paths. 
We kind of digest them and use them in two ways. One is dead reckoning and the other is path integration.
+[487.000 --> 503.000] And they're technically different. But for the sake of this presentation, we can just think about both of these concepts as knowledge that where I am now is where I was previously, plus all of the steps that I took since that last reference point.
+[503.000 --> 514.000] This relies on continuity, proprioception (or, you know, a good understanding of your own body and its movement throughout spaces), calculation intuition, etc.
+[514.000 --> 526.000] However, it can be difficult or impossible to properly utilize a map using these techniques without knowledge of those things.
+[526.000 --> 542.000] So a lot of times in games in particular, you know, I might not be fully aware of the movement mechanics. I might not be fully aware of how the camera or the player or what have you that I'm controlling can experience that space.
+[542.000 --> 549.000] And so my sense of proprioception there is really limited. So we should keep that in mind when we're using paths.
+[549.000 --> 559.000] And then they help us, you know, prevent players from getting lost. Paths are really good at catching lost players. You can think of, you know, being lost in a desert.
+[559.000 --> 572.000] You're lost in a desert. You really just need to pick a direction and start walking, because you don't know where to go. But once you come across a path or a street, you've automatically eliminated one vector from your field of possibilities.
+[572.000 --> 582.000] You just have to choose: am I going to go left or am I going to go right? So having a path is a really good way to catch players who might be deviating away and starting to get lost.
+[582.000 --> 589.000] Obviously, paths also do great level design things like establishing player flow and connecting large areas together, etc.
+[589.000 --> 604.000] An inadequate path network in your space can make it difficult to connect areas of the cognitive map together. And we have those limitations from dead reckoning and path integration that manifest themselves even more in games.
+[604.000 --> 612.000] One of the things to look out for as well with paths is that knowing a path moving one way along it does not necessarily mean knowing it moving the other way.
+[612.000 --> 622.000] If you've had the experience of going on a hike, reaching the end destination, turning around and going back, and not being fully sure if you're going down the same trail, you've experienced this.
+[622.000 --> 633.000] So just because you've placed paths inside of your level doesn't automatically mean that players suddenly won't get lost. You need to make sure that you're addressing that concern as well.
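(A toy illustration of the limitation just described: under pure dead reckoning, small per-step errors in a player's sense of their own movement compound into drift, and a recognized reference point, like a path or landmark, is what lets the estimate snap back. All numbers are illustrative.)

import numpy as np

rng = np.random.default_rng(3)
true_pos = np.zeros(2)
est_pos = np.zeros(2)
drift = []
for step in range(2000):
    move = rng.normal(0, 1, 2)                          # actual movement
    true_pos = true_pos + move
    est_pos = est_pos + move + rng.normal(0, 0.05, 2)   # small per-step error
    if step % 500 == 499:
        est_pos = true_pos.copy()   # reference point sighted: estimate resets
    drift.append(np.linalg.norm(est_pos - true_pos))
print("max drift between reference sightings:", round(max(drift), 2))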
+[633.000 --> 640.000] The next element are landmarks, also super self-explanatory, and loved by level designers all over the place.
+[640.000 --> 648.000] They're singular, localized, and memorable features. You know, paths were these linear elements; now we've got point references with these landmarks.
+[648.000 --> 657.000] They tend to be things you want to take pictures of, and they're recognizable either visually, narratively, or experientially.
+[657.000 --> 666.000] You know, it can be this Randy's Donuts shop, or it could be the bench that the character had their first kiss in, for example.
+[666.000 --> 676.000] Landmarks can be useful in a number of different ways. One of those is orienting players from a distance. A lot of the time landmarks are tall.
+[676.000 --> 685.000] And so you can see them from far away, which is great. But they're also useful for orienting players when they're going down new paths and new journeys.
+[685.000 --> 699.000] If I'm going to an area that I haven't been to before, but I maintain a reference point in a previous landmark that I've already established, that'll help anchor the new information that I'm receiving in relation to that previous landmark.
+[699.000 --> 707.000] Also, they tend to situate elements of your spaces among themselves, which is super important.
+[707.000 --> 721.000] But also note: landmarks are essentially only useful if they're stationary. So if you've got a large creature, let's say, that is walking around a big map, that creature is not that useful as a landmark, because it's moving all the time.
+[721.000 --> 730.000] Also, they're much better when they're directional. So I have this Statue of Liberty here and the Eiffel Tower. The Eiffel Tower is radially symmetric.
+[730.000 --> 737.000] So if I'm north of the Eiffel Tower looking back, or south of the Eiffel Tower looking back, the Eiffel Tower is basically going to look the same to me.
+[737.000 --> 748.000] But if I'm north of the Statue of Liberty or south of the Statue of Liberty, the way that the Statue of Liberty is made makes it very easy for me to discern that, hey, I'm in a different spot now.
+[748.000 --> 761.000] So whenever you can, if making sure that people can use that landmark to situate themselves around it is important to you, try to make your landmarks directional.
+[761.000 --> 769.000] Lastly, photogrammetry is the process of taking multiple pictures of an object and then making a 3D model from that.
+[769.000 --> 771.000] You want to think of your landmarks in a similar way.
+[771.000 --> 778.000] If a landmark is only referenced one time, it's not really going to be useful for the player to create their own cognitive map.
+[778.000 --> 786.000] You want to make sure they're able to reference it as many times as possible, so that that reference point can be continually reinforced.
+[786.000 --> 791.000] Next up, we have districts. This is also self-explanatory.
+[791.000 --> 800.000] A district is a region identified by a characteristic or quality. We had point references with landmarks, linear references with paths,
+[800.000 --> 808.000] and now we've got these zonal references with districts. Here are a bunch of examples: industrial zones, downtowns, nature preserves, etc.
+[808.000 --> 813.000] A good way to identify districts is to do what I call a squint test.
+[813.000 --> 825.000] So if you look at your map and you squint your eyes and everything gets all blurry, if you can start to distinguish different parts of that map (you know, like this area is a little red, this area is, you know, looking a little different),
+[825.000 --> 829.000] you can most of the time assume that those are your districts.
+[829.000 --> 837.000] The districts have edges and you go through them. So you enter into these new kinds of spaces.
+[837.000 --> 843.000] They tend to be mid to large scale, and another good way of thinking about districts is a color-by-number image.
+[843.000 --> 858.000] So a color-by-number image is a kind of cohesive and consistent portrait as a whole, but it's comprised of these unique and recognizable colors
+[786.000 --> 791.000] Next up, we have districts. This is also self-explanatory.
+[791.000 --> 800.000] A district is a region identified by a characteristic or quality. We had point references with landmarks and linear references with paths,
+[800.000 --> 808.000] and now we've got these zonal references with districts. Here are a bunch of examples: industrial zones, downtowns, nature preserves, etc.
+[808.000 --> 813.000] A good way to identify districts is to do what I call a squint test.
+[813.000 --> 825.000] So if you look at your map and you squint your eyes and everything gets all blurry, if you can start to distinguish different parts of that map, you know, like this area is a little red, this area is, you know, looking a little different,
+[825.000 --> 829.000] you can most of the time assume that those are your districts.
+[829.000 --> 837.000] Districts have edges, and you go through them. So you enter into these new kinds of spaces.
+[837.000 --> 843.000] They tend to be mid to large scale, and another good way of thinking about districts is a color-by-number image.
+[843.000 --> 858.000] So a color-by-number image is a kind of cohesive and consistent portrait as a whole, but it's comprised of these unique and recognizable color regions
+[858.000 --> 868.000] that work together to make the whole image work. Likewise, in our cognitive maps, having clear districts really helps to differentiate areas from each other.
+[868.000 --> 878.000] An important thing with districts is the concept of clustering. So I have two sets of clusters here: a set of five on the left and a set of five on the right.
+[878.000 --> 889.000] And the set of clusters on the left is way easier to remember than the one on the right. And that's because they are grouped with like objects.
+[889.000 --> 900.000] This type of clustering can be semantic or mechanical, not just visual. So, you know, it doesn't just have to be three pyramids here and three box buildings over there.
+[900.000 --> 908.000] It can be, you know, this is an area where I can jump really high, this is an area where I get in cars and drive around, et cetera.
+[908.000 --> 917.000] Isolating these qualities in different areas like this is just generally good practice for world building and game design.
+[917.000 --> 927.000] But it really helps reinforce a cognitive map, because if I navigate away from this slide, the image on the left, because it's clustered in such a way, might still be something you remember.
+[927.000 --> 933.000] And that image on the right, you might forget the moment that I move away from it.
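The squint test can even be roughed out in code: shrink the map until only broad regions of color survive, then see whether the intended districts still read as distinct. A minimal sketch with Pillow, on a synthetic two-district image (all of it invented for illustration):

```python
from PIL import Image

def squint(map_img, factor=16):
    # Shrink the map so only broad color regions survive, like squinting.
    w, h = map_img.size
    return map_img.resize((max(1, w // factor), max(1, h // factor)))

# Synthetic map: reddish 'industrial' half on the left, greenish park on the right.
img = Image.new("RGB", (64, 64))
for x in range(64):
    for y in range(64):
        img.putpixel((x, y), (200, 60, 60) if x < 32 else (60, 160, 60))

small = squint(img)
# If the two halves still look clearly different at this scale, they pass the test.
print(small.size, small.getpixel((0, 0)), small.getpixel((3, 0)))
```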
+[933.000 --> 949.000] Second to last, we have edges. So an edge is a linear reference, but it's not a path. These tend to control continuity, or they separate things. Examples are gates, walls, cliffs, borderlines, et cetera.
+[949.000 --> 962.000] They tend to be elevational, simply because we tend to move around the world horizontally. So if we, you know, flew around a lot or climbed a lot of trees, edges could be barriers that are horizontal.
+[962.000 --> 970.000] But most of the time, they're vertical things. It's stuff that you tend to go around or go along, or things that you also go through.
+[970.000 --> 977.000] For example, being on the outside of a building, opening a door and entering inside of that building.
+[977.000 --> 994.000] You want to be deliberate when you're working with edges. Crossing that threshold of an edge, or being blocked by the edge, are really memorable experiences.
+[994.000 --> 1006.000] And that works best if the edge is really crisp and clean. I've noticed that sometimes we have this tendency to want to blur things, like we don't want these hard lines cutting through our landscapes.
+[1006.000 --> 1030.000] This isn't to say that you need to make hard lines, but you should be deliberate in the limits and boundaries of these districts in the form of these edges, because if the edges are blurred, the cognitive map of players is also going to reflect that blurred nature, and it's not going to be as good at anchoring them when they do start getting lost.
+[1030.000 --> 1039.000] These occur a lot in games. There are level transitions, level boundaries, mechanical boundaries, portals, ledges, walls, cliffs, etc.
+[1039.000 --> 1048.000] So we have plenty of opportunity to liberally use edges, and we want to take full advantage of those.
+[1048.000 --> 1062.000] And lastly, we have nodes. So a node is a convergence of paths. It's a point reference, but it's a point reference that is defined by paths, which is the other element.
+[1062.000 --> 1074.000] Most of the time, these are things like traffic intersections, transit hubs, or home spaces or hub spaces that allow you to access multiple different locations of the game from one spot.
+[1074.000 --> 1090.000] If there is a place that has many ins or many outs, that's usually a node. They tend to be denser than the adjacent areas, just because these are places that players and people flock to when they're navigating in general.
+[1090.000 --> 1094.000] So they're good to have.
+[1094.000 --> 1114.000] Excuse me. You know, there's that expression, all roads lead to Rome. And in this example, Rome is a node. Having a location that is repeatedly used by people on their way to get somewhere is really important and really valuable.
+[1114.000 --> 1126.000] So when you're making a node, or when you've recognized that you have created a node in a game that you're working on, you really want to start working on place-making there in order to make it as recognizable as possible.
+[1126.000 --> 1132.000] These nodes can be destinations. They don't just have to be things that you go in or go through.
+[1132.000 --> 1142.000] And a good way to facilitate that is to just make sure that people can spend some time there, as opposed to just, you know, being transient and moving through something.
+[1142.000 --> 1156.000] I've been on plenty of highway interchanges, having grown up my whole life in LA. But if you were to drop me in one of these big highway interchanges, I might be hard pressed to know where I am without all the signage.
+[1156.000 --> 1164.000] So if you're going to make a node, try to turn it into a place, and not just some, you know, arbitrary thing you pass through.
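Since a node is just a convergence of paths, you can find candidate nodes mechanically by treating your path network as a graph and looking for high-degree vertices. A small sketch; the place names and the degree threshold are illustrative assumptions, not anything from the talk:

```python
from collections import defaultdict

# Hypothetical path network: each pair is a walkable segment between places.
paths = [
    ("gate", "plaza"), ("plaza", "market"), ("plaza", "docks"),
    ("plaza", "temple"), ("market", "docks"),
]

degree = defaultdict(int)
for a, b in paths:
    degree[a] += 1
    degree[b] += 1

# A node is where paths converge: any place where three or more paths meet.
nodes = [place for place, d in degree.items() if d >= 3]
print(nodes)  # ['plaza']: many ins and outs, so invest in place-making there
```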
+[1164.000 --> 1171.000] So those are the five elements, you know, what they are, how you can identify them, how you can leverage them, and why they're useful.
+[1171.000 --> 1185.000] But now let's talk about more specifically implementing them in our design practice. So the first thing I like to do is an audit: try to look at levels and spaces that I've already made and identify existing paths, landmarks, districts, edges,
+[1185.000 --> 1200.000] and nodes in those areas. Then once I've done that, I want to assess the clarity and readability of those things. If I'm doing this audit and I'm saying, that could be a landmark, or that looks like a good enough district, etc.,
+[1200.000 --> 1216.000] it probably isn't a good enough landmark, and it probably isn't a district. So that's a cue for me to go back and, you know, clarify those things, and really not be shy about these design decisions that I'm making.
+[1217.000 --> 1230.000] Then I want to organize my space. Because of my architectural background, I like to work in plan first, and I like to think of plan as the organization or structure for everything.
+[1230.000 --> 1240.000] I think the plan should be clear and legible, and by looking at that plan, you should be able to easily recognize almost all of these elements there.
+[1240.000 --> 1247.000] Once the plan really pops with all of these elements and they're really distinct, then I like to move on to section.
+[1247.000 --> 1253.000] Section is where I think the emotion comes out, where the storytelling happens, where the experience really takes place.
+[1253.000 --> 1261.000] Just because I've got, you know, one of these elements in my plan, it doesn't mean it's actually going to come through in my section when I'm experiencing the game.
+[1261.000 --> 1268.000] So if I see on my plan there's this really strong landmark, but then when I start playing the game, that landmark isn't really reading,
+[1268.000 --> 1275.000] that's a cue to me to go back and edit that landmark to make sure that it's legible when I'm playing the game.
+[1275.000 --> 1284.000] Speaking of playing the game, it's super important to test, if you can, without a HUD or a mini map or UI, et cetera.
+[1284.000 --> 1288.000] Why is this? A couple of things.
+[1288.000 --> 1301.000] Using tools like a radar, GPS or a mini map can lead us into digesting spaces using an egocentric frame of reference as opposed to an allocentric frame of reference.
+[1301.000 --> 1303.000] So what do these mean?
+[1303.000 --> 1310.000] An egocentric frame of reference means I'm the center of the universe, and the universe kind of revolves around me as I go through it.
+[1310.000 --> 1316.000] If you've used a GPS navigation system on your phone, that's egocentric.
+[1316.000 --> 1325.000] This can be difficult because you're navigating in this piecemeal fashion where you're primarily focused on what your next maneuver is going to be.
+[1325.000 --> 1330.000] I need to turn right in 500 feet and then turn left, et cetera.
+[1330.000 --> 1337.000] And breaking down the journey in that way has been shown to result in a decrease in route memory.
+[1337.000 --> 1346.000] It can also disengage you from the environment, because you're exploring the map or the UI or the HUD instead of the space itself.
+[1346.000 --> 1357.000] What you're trying to do is actually take that triangle and move it to that circle, and your character moving through the world is just a byproduct of you trying to get those shapes to align.
+[1357.000 --> 1367.000] However, if you test without the aid of a heads-up display, et cetera, you tend to do more allocentric mapping.
+[1367.000 --> 1370.000] Allocentric mapping is: I am not the center of the world.
+[1370.000 --> 1373.000] The world exists, and I am simply moving through it.
+[1373.000 --> 1375.000] It's this kind of big-picture navigation.
+[1375.000 --> 1379.000] And when you do things this way, there tends to be an increase in route memory.
+[1379.000 --> 1386.000] You're engaged more with your environment, and you're exploring the space instead of just exploring the map itself.
+[1386.000 --> 1390.000] I have this image here of the fish in the lake and the bird in the trees.
+[1390.000 --> 1399.000] So the fish in the lake can see the bird and it can see the trees, but it can't see the lake, because that's where it is.
+[1399.000 --> 1407.000] Likewise, the bird can identify the fish and the lake, but not the forest, because it's understanding things egocentrically.
+[1407.000 --> 1417.000] What we want to do to get a clear picture of everything is try to get to what the image in the center is, which is essentially an allocentric frame of reference, which is, you know,
+[1417.000 --> 1423.000] this is the environment, and the player is simply moving through the environment.
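The two frames of reference are related by a simple transform, which is worth seeing once: an egocentric reading is just the allocentric (world) position of a thing, re-expressed relative to the player's position and heading. A sketch, with an assumed convention of heading 0 = north and 90 = east:

```python
import math

def to_egocentric(world_point, player_pos, heading_deg):
    """Re-express an allocentric (world) point in the player's egocentric frame.

    Returns (right, forward): how far the point sits to the player's right
    and ahead of them. Negative 'right' means left; negative 'forward', behind.
    """
    dx = world_point[0] - player_pos[0]
    dy = world_point[1] - player_pos[1]
    h = math.radians(heading_deg)
    right = dx * math.cos(h) - dy * math.sin(h)
    forward = dx * math.sin(h) + dy * math.cos(h)
    return right, forward

# A tower 10 units north of the player is dead ahead when facing north...
print(to_egocentric((0, 10), (0, 0), 0))   # (0.0, 10.0)
# ...and off to the left when facing east. The world never moved; the frame did.
print(to_egocentric((0, 10), (0, 0), 90))  # approximately (-10.0, 0.0)
```

A GPS arrow lives entirely in that (right, forward) output; a cognitive map lives in the world coordinates this function consumes.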
+[1423.000 --> 1431.000] So the TLDR of this talk is: one, there exist things called cognitive maps.
+[1431.000 --> 1438.000] A cognitive map is the digestion of the environments that you go through, and it exists in everybody's head.
+[1438.000 --> 1448.000] Getting lost is when that cognitive map is misaligned with what the environment is currently telling you.
+[1448.000 --> 1456.000] So how can we prevent players from getting lost? Well, we can try to foster clear cognitive maps. And how can we do that?
+[1456.000 --> 1468.000] We can use a toolkit of paths, landmarks, districts, edges and nodes to make sure that our spaces are robust and can actually foster these robust cognitive maps.
+[1468.000 --> 1485.000] And the good way to do that is just to be deliberate. I've included this lovely food tray here because I could take the same food tray, with all of those same ingredients, and just kind of mix it all up in there so everything's, you know, all up on top of each other,
+[1485.000 --> 1489.000] all blended together.
+[1489.000 --> 1493.000] But it wouldn't be as appetizing, first of all.
+[1493.000 --> 1497.000] And secondly, it wouldn't be as memorable to me.
+[1497.000 --> 1504.000] It would have the same nutritional value, but by virtue of the way it's organized, it's not going to read as clearly.
+[1504.000 --> 1509.000] And with this one, you know, I can do my squint test, and I can begin to start making out districts.
+[1509.000 --> 1518.000] And then we can start to make out landmarks, even with this food. So you can really apply these design strategies to nearly everything.
+[1518.000 --> 1529.000] Lastly, I wanted to just show you all of these different references that I have used to try to, you know, understand these things for myself.
+[1529.000 --> 1538.000] These are all people who are way more intelligent than I am. And if you're interested in pursuing this topic further, I would encourage you to look here.
+[1538.000 --> 1544.000] And I would encourage you to go ahead and pause the video.
+[1544.000 --> 1549.000] Any one of these is going to be an exciting paper for you to enjoy.
+[1549.000 --> 1552.000] So I will go back.
+[1552.000 --> 1567.000] Thank you very much. I hope that this helped enlighten you, helped you understand how to prevent your players from getting lost, and introduced a new paradigm for understanding how you can design your levels specifically for that use case.
+[1567.000 --> 1576.000] My name is Nick Aoyjin. I hope you have a great day. And that's it. Thank you so much. All right. Bye.
diff --git a/transcript/allocentric_Qpa0nrKPYgc.txt b/transcript/allocentric_Qpa0nrKPYgc.txt
new file mode 100644
index 0000000000000000000000000000000000000000..408d6cc5996996c80074300635433c8dbbe203ea
--- /dev/null
+++ b/transcript/allocentric_Qpa0nrKPYgc.txt
@@ -0,0 +1,711 @@
+[0.000 --> 9.160] My second presentation today is a little bit different. I'll talk to you now about some
+[9.160 --> 14.240] projects that we've been doing previously, before we got very, very heavy into the CVI aspect.
+[14.240 --> 19.680] This was a large-scale study, about five, six years, looking at video game use in blind
+[19.680 --> 25.440] children, blind individuals in general, the ocularly blind, to try to develop navigation skills
+[25.440 --> 27.480] and orientation and mobility skills.
+[27.480 --> 30.880] So very, very different in terms of what we talked about earlier this morning.
+[30.880 --> 37.600] But nonetheless, trying to come towards this using this evidence-based, neuroscience-driven
+[37.600 --> 40.600] approach, and hopefully you'll have questions for this as well.
+[40.600 --> 44.880] And I am available to stay and discuss the CVI talk as well.
+[44.880 --> 45.880] So let's get started.
+[45.880 --> 50.520] In the same way that I started off my first presentation, I kind of want to get sort
+[50.520 --> 54.760] of the lay of the land with you and try to give you a sense of how the thought process
+[54.760 --> 56.600] came about for this project.
+[56.600 --> 60.560] And understand that I'm going to show you in about four or five slides what I was thinking
+[60.560 --> 64.440] about for like three, four years, so I'm trying to compress all that.
+[64.440 --> 66.440] So here's the first thing to think about.
+[66.440 --> 71.240] Rehabilitation in the case of wayfinding and navigation is obviously a big, big challenge
+[71.240 --> 73.080] for all the people that we work with, right?
+[73.080 --> 77.200] Fortunately, we have a structured way to teach people with visual impairments
+[77.200 --> 78.200] how to find their way around.
+[78.200 --> 81.400] And we call that, of course, orientation and mobility instruction.
+[81.400 --> 87.680] From a cane to a guide dog, for example, all very structured, well-established techniques
+[87.680 --> 90.840] that are really part and parcel of promoting an individual's independence.
+[90.840 --> 93.280] There are limitations, of course, with this.
+[93.280 --> 96.400] And there are always people and O&M instructors reaching out to me and saying, you know,
+[96.400 --> 97.400] what do you think about this technology?
+[97.400 --> 99.240] What do you think about this approach, and so on?
+[99.240 --> 103.760] Is this a way that we can study this and incorporate it in a more structured fashion?
+[103.760 --> 106.760] And I became very, very interested in this idea.
+[106.760 --> 110.200] Some other individuals: Dan Kish, for example, you've probably heard about this guy who
+[110.200 --> 111.200] uses echolocation.
+[111.200 --> 113.640] They call him the human Batman.
+[113.640 --> 120.120] He walks around making click noises, and using the reflections off the surfaces of objects,
+[120.120 --> 121.960] he's able to identify various objects.
+[121.960 --> 128.200] And in this particular case, you see him riding his bicycle, even though he has prosthetic
+[128.200 --> 129.200] eyes.
+[129.200 --> 130.200] He has absolutely no light perception.
+[130.200 --> 133.000] I don't know if everybody can learn this skill.
+[133.000 --> 134.600] It's certainly really quite remarkable.
+[134.600 --> 139.200] And there have been some groups in Canada who have done fMRI on him and studied his brain
+[139.200 --> 140.720] and how he's able to do it.
+[140.720 --> 144.720] But it's quite a remarkable skill that he's developed.
+[144.720 --> 148.280] Some technology that I think is quite interesting as well.
+[148.280 --> 149.560] This is an interesting one.
+[149.560 --> 151.840] This is from the Sendero Group out of California.
+[151.840 --> 156.320] And the idea is that you walk around with a GPS monitor, which tracks you.
+[156.320 --> 159.120] And as you're walking through the city, if you connect this with, say, your BrailleNote,
+[159.120 --> 163.320] you get information about, for example, the name of the street, how far you are from
+[163.320 --> 165.240] a particular destination.
+[165.240 --> 170.000] You may use a Bluetooth connector as well to get some auditory input, some very, very
+[170.000 --> 175.640] nice technology that's coming together to help enhance these skills, if you will.
+[175.640 --> 176.640] There are certainly limitations.
+[176.640 --> 179.560] The big one with GPS, of course, is that it's only for outdoors.
+[179.560 --> 182.440] GPS doesn't work in an indoor environment.
+[182.440 --> 186.240] It's also quite limited when you're in a situation like being downtown, where there are
+[186.240 --> 188.840] a lot of reflections from buildings and so on.
+[188.840 --> 189.840] The satellite signal doesn't get captured.
+[189.840 --> 193.040] You have to be visible to the satellites in order for this to work.
+[193.040 --> 197.480] So we were thinking about what was out there, what could we change, and in particular,
+[197.480 --> 201.400] we were very, very motivated, or trying to get to this idea of motivation, I should say:
+[201.400 --> 205.480] how can we leverage motivation as a way to improve navigation skills?
+[205.480 --> 208.880] So let's talk about a few things as well.
+[208.880 --> 213.280] The first point I want to make, from a clinical rehabilitation standpoint, is a general comment
+[213.280 --> 214.880] that I'd like to make with you.
+[214.880 --> 219.560] In a traditional therapy session, the patient works one-on-one with a therapist to address
+[219.560 --> 223.640] specific goals, like psychological issues, or it could be movement, for example, or a particular
+[223.640 --> 228.160] skill, in the hopes of improving that particular deficit or that particular function.
+[228.160 --> 232.200] So for example, if a person has a phobia or a particular psychological issue, they work
+[232.200 --> 236.680] one-on-one with a therapist, addressing those concerns, walking through those issues, and
+[236.680 --> 241.320] trying to make that one-on-one face time, that exchange, work through that issue.
+[241.320 --> 246.200] If you are working on the motor side, very, very often what we see is a lot of repetition,
+[246.200 --> 247.200] working with various tasks.
+[247.200 --> 251.160] A particular skill or motion deficit is trying to get enhanced through repetition
+[251.160 --> 253.400] and repetitive exercises and so on.
+[253.400 --> 257.560] That's sort of the state of affairs right now.
+[257.560 --> 258.560] Here's my problem.
+[258.560 --> 260.240] A couple of things to think about.
+[260.240 --> 264.720] What is the ecological validity, and the effect of context, on therapy?
+[264.720 --> 269.400] If I'm sitting with a therapist talking about my problems and I'm not having the problem,
+[269.400 --> 272.760] how well am I conveying that issue?
+[272.760 --> 276.520] Similarly, if the therapist is providing me some strategies and I'm still not going
+[276.520 --> 280.880] through that problem, how good am I in terms of transferring that into that situation,
+[280.880 --> 282.200] into that scenario?
+[282.200 --> 286.480] So the context, the immersion of learning the skill, is extremely important.
+[286.480 --> 288.440] That's the first thing I want to say.
+[288.440 --> 293.200] The second aspect: if we look on the motor side of things, boredom kills us when it comes
+[293.200 --> 294.200] to rehabilitation.
+[294.200 --> 297.920] Everybody recognizes this toy, this little stacker thing.
+[297.920 --> 301.360] I don't know, I was maybe three when I had one.
+[301.360 --> 304.560] Here's a woman who just had a stroke in her 40s.
+[304.560 --> 308.680] Something to do, something that she knows was designed for a three-year-old.
+[308.680 --> 312.760] What does that do in terms of her motivation and struggles and so on?
+[312.760 --> 316.800] So I really think the ecological validity and the context of therapy are extremely important.
+[316.800 --> 318.320] We certainly can do better.
+[318.320 --> 321.040] So the immersion aspect, I think, is extremely important.
+[321.040 --> 326.520] And creating scenarios that are meaningful for that individual is also extremely important.
+[326.520 --> 330.560] So let's get to some other pieces of the puzzle, and I'm slowly going to edge into this
+[330.560 --> 332.760] idea of gaming and how we got into that.
+[332.760 --> 336.000] The times are changing, definitely.
+[336.000 --> 339.920] For example, here: daily emails, 12 billion emails being sent in 2000,
+[339.920 --> 342.920] and we're now at 247 billion in 2010.
+[342.920 --> 346.400] Text messages: 400,000 up to 4.5 billion.
+[346.400 --> 351.280] Time spent online: 2.7 hours a week to 18 hours a week.
+[351.280 --> 354.800] Moral of the story: we are a tech-driven society.
+[354.800 --> 359.080] A lot of what we do is intimately related to what we do with technology.
+[359.080 --> 363.920] As I said, despite my early screen saver problems, I believe that technology is an
+[363.920 --> 364.920] enabler.
+[364.920 --> 366.520] We should try to leverage that somehow.
+[366.520 --> 369.720] And we're getting very, very good at it, because costs are going down.
+[369.720 --> 371.000] Everybody has a cell phone.
+[371.000 --> 372.000] Everybody has email.
+[372.000 --> 374.080] Everybody has ways to stay connected.
+[374.080 --> 377.240] There's an opportunity here that I think we need to leverage.
+[377.240 --> 379.120] A couple of other things to talk about.
+[379.120 --> 381.920] The case for play: as I mentioned, my mother is a preschool teacher.
+[381.920 --> 385.800] And she used to always tell me, a child who plays is a healthy child, right?
+[385.800 --> 387.360] It's intimately related.
+[387.360 --> 390.880] And indeed, play is extremely important in the development of a child.
+[390.880 --> 395.440] Role playing, social interactions, what's fair, what's not, establishing a rapport with
+[395.440 --> 396.440] kids.
+[396.440 --> 398.960] All that is done at a very, very early age.
+[398.960 --> 402.200] And I think that's also what makes games later in life very, very exciting as well.
+[402.200 --> 406.320] It's a way to be somebody, in a sense, that you can't otherwise be.
+[406.320 --> 407.880] Animals know this, right?
+[407.880 --> 409.200] Young animals play fight.
+[409.200 --> 411.560] And they know when they can bite, when they can't, and so on.
+[411.560 --> 415.520] So there's something very important about play and brain development that I think is very,
+[415.520 --> 416.520] very interesting.
+[416.520 --> 419.120] And think of the counterexample: this is a child with autism.
+[419.120 --> 421.480] It's a child who doesn't play, right?
+[421.480 --> 424.440] And that's one of the hallmark signs of a child with autism as well.
+[424.440 --> 428.680] So I think there is somehow an association between playing and brain development and so
+[428.680 --> 429.680] on.
+[429.680 --> 432.720] And there have been many, many news stories out there trying to get at this point.
+[432.720 --> 433.720] Here's an interesting study.
+[433.720 --> 435.880] I don't know if you've heard about this one, called the HighScope study.
+[435.880 --> 439.640] This was done in the state of Michigan, by the HighScope Educational Research Foundation in
+[439.640 --> 442.160] Michigan, a longitudinal study by Stuart Brown.
+[442.160 --> 446.600] So what he found, by age 23: he compared individuals who went through a very, very
+[446.600 --> 452.760] structured, didactic school program versus schools that had a lot of hours of play
+[452.760 --> 454.280] time and interaction.
+[454.280 --> 459.200] And what he found, by age 23, is that more than a third of the kids who had attended an instruction-
+[459.200 --> 463.920] oriented preschool had been arrested for a felony, as compared to fewer than one tenth
+[463.920 --> 466.640] of the kids who had been in a play-oriented preschool.
+[466.640 --> 470.840] Now, that doesn't mean that if you don't play, you'll rob a bank, right?
+[470.840 --> 476.920] This isn't causality, but it's a very, very interesting association: having this early on in development
+[476.920 --> 479.920] certainly seems to have a benefit for brain development as well.
+[479.920 --> 484.880] So those questions earlier on about CVI, how do I wake up the visual brain?
+[484.880 --> 489.080] Consider play as one of the ways to do it, from an engagement standpoint.
+[489.080 --> 492.440] And I hope to convince you that there's a neuroplastic and a neuroscience reason behind
+[492.440 --> 494.880] this as well.
+[494.880 --> 498.400] Learning through simulation, another piece of the puzzle, very, very important.
+[498.400 --> 501.240] The best example to give you is flight simulators.
+[501.240 --> 506.000] If you are a pilot wanting to learn how to fly a new plane, or how to land at a new airport,
+[506.000 --> 511.960] or how to fly in very, very challenging conditions, much better that you do this in a simulator
+[511.960 --> 514.880] than with a complement of 350 people behind you, right?
+[514.880 --> 519.840] If you make a mistake, better you learn it there than in the real world, right?
+[519.840 --> 523.480] So pilots spend an enormous amount of time in flight simulators, and this has been extremely
+[523.480 --> 526.280] effective and has revolutionized the airline industry.
+[526.280 --> 530.440] They have something called the transfer effectiveness ratio, which is about 50%, which
+[530.440 --> 534.300] means every two hours that you spend on a flight simulator is the equivalent of one
+[534.300 --> 535.900] hour of real flight time.
+[535.900 --> 540.820] So what you learn in the simulator, going through the motions, preparing yourself mentally,
+[540.820 --> 542.420] transfers into the real world.
+[542.420 --> 547.720] And the closer that immersion is, the better it is in terms of the transference.
+[547.720 --> 551.640] Other people have learned this too. The medical field, for example, is spending a lot
+[551.640 --> 554.360] of money looking at surgical simulations.
+[554.360 --> 558.440] Better that I make the mistake resecting a tumor in a simulation than in the real
+[558.440 --> 559.440] world.
+[559.440 --> 561.120] The military is also spending a lot of money on this as well.
+[561.120 --> 565.280] So learning by simulation seems to be another thing that the brain likes.
+[565.280 --> 567.520] And again, I'll show you some evidence of that.
+[567.520 --> 572.240] Here are some great examples of how video games and virtual reality are being used in therapy
+[572.240 --> 573.720] in the real world.
+[573.720 --> 578.720] This is work by Elizabeth Strickland. What she does is she has children with cognitive development
+[578.720 --> 583.560] issues, and she's trying to teach them basic skills like crossing the street safely.
+[583.560 --> 587.680] So she has these kids wearing a virtual reality helmet, and they do associations.
+[587.680 --> 590.080] They have this game called Street Safety.
+[590.080 --> 594.680] They associate good behaviors with certain friends, bad behaviors with other individuals.
+[594.680 --> 597.680] And they go through these simulations, learning to cross safely.
+[597.680 --> 602.240] Better to learn this in the safe, controlled environment of a classroom than learning it
+[602.240 --> 603.520] in the real world, the hard way,
+[603.520 --> 606.200] so to speak. You learn these skills in a safe, controlled environment.
+[606.200 --> 609.720] You have reinforcement, you have repetition, again, things that the brain likes.
+[609.720 --> 611.400] And then you transfer that to the real world.
+[611.400 --> 613.120] So a very, very interesting approach.
+[613.120 --> 614.520] Here's another one that's quite nice.
+[614.520 --> 616.400] This is called IREX.
+[616.400 --> 617.400] This is gesture tech.
+[617.400 --> 620.520] This is a group out of Israel that has developed an interesting system.
+[620.520 --> 623.840] This is a child with cerebral palsy who doesn't want to go to rehab.
+[623.840 --> 627.360] Sorry, I'm sounding like Amy Winehouse there.
+[627.360 --> 628.960] How do you get him to go to rehab?
+[628.960 --> 630.440] No, no, no.
+[630.440 --> 634.600] But what he does like is soccer.
+[634.600 --> 637.560] So he has this system here where they use a small camera.
+[637.560 --> 638.560] They film him.
+[638.560 --> 639.880] They project it on a blue screen.
+[639.880 --> 642.400] And he's the goalie while people take shots.
+[642.400 --> 645.700] And the idea is that he reaches over to one side, blocks the ball, reaches to the other side,
+[645.700 --> 646.700] blocks the ball.
+[646.700 --> 650.720] Then they can go systematically, crossing one hemifield, then the other hemispace
+[650.720 --> 651.720] as well.
+[651.720 --> 654.680] And all this is quantified, as you can see on the bottom row there.
+[654.680 --> 657.600] They've got his favorite team playing, his favorite players are playing.
+[657.600 --> 658.600] He's engaged.
+[658.600 --> 659.600] Now he wants to go.
+[659.600 --> 660.600] He's engaged.
+[660.600 --> 663.720] So again, you can do good work with play and in simulation.
+[663.720 --> 667.600] This is a chance to try to awaken the brain and motivate individuals.
+[668.440 --> 672.240] Let's talk more specifically about video games, and why, or whether or not, I think this
+[672.240 --> 673.960] is a good idea.
+[673.960 --> 678.960] You probably all remember when Pong came out. We thought, oh my god, I've got to get
+[678.960 --> 679.960] Pong, right?
+[679.960 --> 682.440] This is revolutionary, right?
+[682.440 --> 684.720] Now think about how games have evolved, right?
+[684.720 --> 685.720] It's really interesting.
+[685.720 --> 689.640] We've moved outside of the arcade and have now moved to our own personal devices.
+[689.640 --> 693.440] So it's really interesting that the goals remain the same, but the space that we work in
+[693.440 --> 695.120] has changed dramatically.
+[695.120 --> 696.680] Here are some interesting statistics.
+[696.680 --> 701.160] World of Warcraft, which is a role-playing game, is a very, very interesting one, because
+[701.160 --> 703.400] they actually log the time that people spend.
+[703.400 --> 705.040] And here are some interesting stats.
+[705.040 --> 711.040] Since 2004, collectively, gamers have spent close to six million years playing this game.
+[711.040 --> 712.960] That's a geological time scale, right?
+[712.960 --> 714.960] The Grand Canyon was carved
+[714.960 --> 717.280] in about that many years.
+[717.280 --> 718.280] One game, right?
+[718.280 --> 720.440] And the average gamer spends 22 hours a week.
+[720.440 --> 722.560] That's a part-time job, right?
+[722.560 --> 726.320] These people spend a lot of time playing video games.
+[726.320 --> 728.920] Another interesting book, called Reality is Broken, by Jane McGonigal.
+[728.920 --> 733.400] She's a game designer and also a sociologist, very, very interested in this.
+[733.400 --> 739.000] And she says in countries with a strong gaming culture, by the age of 21, the average gamer
+[739.000 --> 744.040] will have spent close to 10,000 hours playing video games, which is the equivalent of the time you
+[744.040 --> 749.080] spend from fifth grade to high school graduation if you have perfect attendance.
+[749.080 --> 751.920] That's a lot of time spent in front of a monitor.
+[751.920 --> 754.680] So they're doing it, is what I'm trying to say.
+[754.680 --> 758.360] Can we leverage this somehow, right?
+[758.360 --> 759.360] Other things.
+[759.360 --> 761.000] Are video games useful from a rehabilitation standpoint?
+[761.000 --> 763.120] I'm going to give you a couple of other recent examples.
+[763.120 --> 765.840] Wii-habilitation. You're all familiar with the Wiimote, right?
+[765.840 --> 770.120] It basically has a gyroscope in it, and it can sense direction in three
+[770.120 --> 774.520] axes of motion and translate that onto a monitor as you interact.
+[774.520 --> 777.040] Some interesting work with stroke recovery.
+[777.040 --> 781.360] Again, I don't think the evidence is very, very clear on how positive it is, and there could
+[781.360 --> 785.560] be a huge placebo effect of just simply getting engaged with a group and so on, which may
+[785.560 --> 787.560] account for some of the benefits that are there.
+[787.560 --> 791.480] But indeed, people are studying this and looking at how to engage individuals.
+[791.480 --> 796.680] The social interaction as well: Wii bowling, for example, in a lot of situations,
+[796.680 --> 799.320] promoting social interaction and so on.
+[799.320 --> 803.480] A lot easier to do this in a virtual living room than it is to actually take them all to a
+[803.480 --> 804.480] particular site.
+[804.480 --> 808.160] So there's some benefit in that as well, that I think is quite interesting.
+[808.160 --> 809.560] Another interesting study here.
+[809.560 --> 813.120] This one's called "The impact of video games on training surgeons in the 21st century."
+[813.120 --> 819.640] There was a link, and I quote here, a link between skill at video gaming and skill at laparoscopic
+[819.640 --> 820.640] surgery.
+[820.640 --> 827.760] Current video game players made 31% fewer errors, were 24% faster, and scored 26% better overall
+[827.760 --> 829.080] than their non-player colleagues.
+[829.080 --> 830.080] Again, not causal.
+[830.080 --> 833.400] It doesn't mean you should be playing video games and then go to medical school.
+[833.400 --> 836.600] The point is that there was an association between the two, right?
+[836.600 --> 838.040] It's an observational study.
+[838.040 --> 840.520] It was something, again, kind of connecting from a skill standpoint.
+[840.520 --> 843.720] And the last one I'll share with you, which I was really, really struck with. This was a study
+[843.720 --> 846.040] that was published in Nature a couple of years ago.
+[846.040 --> 848.280] And to give you sort of a background, I'm not a biochemist.
+[848.280 --> 852.200] But apparently, when it comes to figuring out the three-dimensional shape of a protein or
+[852.200 --> 855.160] a molecule, it's really, really difficult.
+[855.160 --> 856.160] It's really complicated.
+[856.160 --> 858.640] It's like a mental teaser or a puzzle and so on.
+[858.640 --> 863.080] And even with some of the fastest computers, it takes months and months and years to figure
+[863.080 --> 865.080] out this three-dimensional shape.
+[865.080 --> 869.000] So this group of investigators decided to come up with a game called Foldit.
+[869.000 --> 873.240] And the idea was to go online; there were various rules of how you could fold this particular
+[873.240 --> 874.400] shape.
+[874.400 --> 878.160] And they just kind of left it out in the world to see what would happen.
+[878.160 --> 882.720] And they said that using Foldit, the three-dimensional structure of a protein was solved in roughly
+[882.720 --> 884.760] one week, right?
+[884.760 --> 887.400] By individuals with no specific training in biochemistry.
+[887.400 --> 891.520] The best scientists were trying to figure this out, and they couldn't do it with the fastest
+[891.520 --> 892.520] computers.
+[892.520 --> 895.440] And all these guys went online, with no background in biochemistry whatsoever, and they
+[895.440 --> 897.320] solved it in a week.
+[897.320 --> 901.960] So gaming somehow brings the best out of us.
+[901.960 --> 906.360] We think in a way that we don't typically think in more sort of didactic fashions.
+[906.360 --> 909.760] And I think, again, that's another aspect that I want to submit to you.
+[909.760 --> 914.720] Another interesting example in our field: this was work by Dennis Levi at the University of
+[914.720 --> 920.200] California, Berkeley, using video games to try to improve amblyopia, visual acuity.
+[920.200 --> 923.840] It was a very preliminary study, but what they found was that a lot of the kids who were going
+[923.840 --> 927.720] through this and interacting with video games showed improvement in their visual acuity,
+[927.720 --> 929.040] a couple of lines.
+[929.040 --> 932.800] Again, largely an observational study; there were a lot of randomization and control issues,
+[932.800 --> 934.800] and I think it needs to be replicated.
+[934.800 --> 938.880] But it shows you that we can take this also directly from the visual acuity standpoint, or
+[938.880 --> 942.520] a visual performance standpoint, as well.
+[942.520 --> 945.080] Now, why do I think games work?
+[945.080 --> 948.160] I'm going to give you what I call my neuroscience rationale.
+[948.160 --> 951.600] So all video games have three really important aspects.
+[951.600 --> 954.240] It doesn't matter whether it's Pac-Man or World of Warcraft or so on.
+[954.240 --> 956.640] They all kind of have these three basic features.
+[956.640 --> 961.840] The first one is that there are always attainable rewards: jewels, points, munitions, portals,
+[961.840 --> 966.320] the epic win, the epic win feeling, and so on, for my World of Warcraft colleagues.
+[966.320 --> 969.760] There's also task novelty and graded difficulty.
+[969.760 --> 973.320] All games start off really, really easy, and they get a little bit harder, and they seem
+[973.320 --> 975.520] to be almost perfectly paced with you.
+[975.520 --> 977.840] And they're just at the point where you don't give up, right?
+[977.840 --> 979.360] You never just say, I don't want to play this anymore.
+[979.360 --> 981.640] You just go, okay, one more try, one more try, one more try.
+[981.640 --> 986.240] And figuring out that gradation is obviously a big, big key, along with this idea of having attainable
+[986.240 --> 987.240] goals.
+[987.240 --> 991.680] And last but not least, they always have high attention demands: survival pressure, death,
+[991.680 --> 996.160] time constraints, monsters. All this sort of thing keeps you engaged, obviously, in the
+[996.160 --> 997.160] game, right?
+[997.160 --> 998.160] Well, what does this mean?
+[998.160 --> 1004.440] Well, first of all, reward is intimately related to dopamine, right?
+[1004.440 --> 1008.640] Novelty is intimately related to serotonin and noradrenaline.
+[1008.640 --> 1013.840] And finally, attention is intimately related to acetylcholine, or the cholinergic system.
+[1013.840 --> 1017.280] The point here is that we are wired for this, right?
+[1017.280 --> 1022.800] We like this. Good video game designers know and understand our brain chemistry, and in a
+[1022.800 --> 1026.040] sense they are tapping into this to get us engaged.
+[1026.040 --> 1030.080] And my argument is that there's an opportunity here that we don't typically have.
+[1030.080 --> 1032.280] And how do we jump-start the brain?
+[1032.280 --> 1034.760] This may be one way to do it.
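That "perfectly paced, one more try" gradation is often implemented with something like a staircase rule: a bit harder after a success, easier (and faster) after a failure. This is a generic sketch of that pacing idea, not something the speaker describes using:

```python
def next_difficulty(level, succeeded, step_up=1, step_down=2):
    # Nudge difficulty up after a win; drop it faster after a loss,
    # so the player hovers near the edge of their ability.
    return level + step_up if succeeded else max(1, level - step_down)

level = 5
for won in [True, True, False, True, False, False]:
    level = next_difficulty(level, won)
    print(level)  # prints 6, 7, 5, 6, 4, 2 in turn
```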
+[1034.760 --> 1037.600] So here is the study that I want to share with you.
+[1037.600 --> 1039.880] You probably remember this video game, Doom, right?
+[1039.880 --> 1042.560] Came out in the early 90s, yeah?
+[1042.560 --> 1047.200] I wasted years of my life playing this game.
+[1047.200 --> 1051.080] It is really, really addictive, for lack of a better term.
+[1051.080 --> 1052.080] It is amazing.
+[1052.080 --> 1055.440] It was one of the first games of its type, what's called a 3D first-person shooter, where
+[1055.440 --> 1059.560] you walk through a virtual labyrinth, and I'm just going to show you a video if you're
+[1059.560 --> 1060.560] not familiar with this.
+[1060.560 --> 1061.560] Here we go.
+[1061.560 --> 1066.360] There's loud music going on, it's very, very high-paced, you're walking through this three-
+[1066.360 --> 1070.080] dimensional environment, you've got to kill the bad guys, you've got to find your way
+[1070.080 --> 1071.080] through the doors.
+[1071.080 --> 1073.080] Very, very high-paced, very engaging.
+[1073.080 --> 1074.080] You get a sense right away
+[1074.080 --> 1076.080] of what's going on here.
+[1076.080 --> 1081.920] Oh, and I'll spare you the violence.
+[1081.920 --> 1086.680] The point here is that to play this game, to succeed in this game, you have to build
+[1086.680 --> 1089.680] a mental map in your mind of the world that you're walking through.
+[1089.680 --> 1093.400] You have to get a sense that, I walked through this corridor, I've been in this room before,
+[1093.400 --> 1095.280] I came in from another perspective.
+[1095.280 --> 1100.800] As you play the game, you develop a cognitive map in your mind.
+[1100.800 --> 1106.800] That being said, I have a colleague from the University of Chile who is a computer scientist
+[1106.800 --> 1109.040] and develops video games for blind children.
+[1109.040 --> 1113.320] The game that he developed was AudioDoom, which is exactly the same thing except it's
+[1113.320 --> 1115.200] based purely on audio cues.
+[1115.200 --> 1117.800] I'll explain to you more about how that works.
+[1117.800 --> 1123.240] As you see here, just like sighted kids, their peers, these blind kids play
+[1123.240 --> 1124.600] the game forever.
+[1124.600 --> 1125.600] They love it.
+[1125.600 --> 1126.600] They're completely engaged.
+[1126.600 --> 1129.000] Hours and hours playing the game.
+[1129.000 --> 1133.360] The other thing he noticed, from an observational standpoint, was that the kids who played the game
+[1133.360 --> 1135.120] were doing better in school.
+[1135.120 --> 1136.480] They seemed to be better at math.
+[1136.480 --> 1138.200] They seemed to be better at spatial reasoning.
+[1138.200 --> 1141.240] They were much more engaged socially with their peers than the kids who didn't seem to like
+[1141.240 --> 1142.240] video games.
+[1142.320 --> 1146.160] Not causal, but an interesting association nonetheless.
+[1146.160 --> 1148.160] The thought was, is there an opportunity here?
+[1148.160 --> 1152.480] There was another interesting piece that really sparked my interest when I first saw this.
+[1152.480 --> 1157.240] For example, here, if I give you a target environment like this, this is the layout: the child comes
+[1157.240 --> 1161.840] in here, this is another door, a dead end, they go through another dead end, a series of monsters
+[1161.840 --> 1165.880] and so on, and they've got to find their way to a portal that takes them to the next level.
+[1165.880 --> 1170.040] If you give the child Lego pieces and ask them to build the map that they walked through,
+[1170.040 --> 1175.080] they can build a perfect one-to-one representation of the virtual world they walked through.
+[1175.080 --> 1180.320] They have the map in their mind, even though they've never seen the map, all based on auditory
+[1180.320 --> 1181.320] cues.
+[1181.320 --> 1184.080] In fact, these are congenitally blind children, so they've never seen the world, period,
+[1184.080 --> 1187.000] but nonetheless, they can build the map in their mind.
+[1187.000 --> 1189.600] They can generate this through non-visual cues.
+[1189.600 --> 1191.400] The question is this.
+[1191.400 --> 1197.000] Why not play the game in a world that actually exists, and use that as a way to teach orientation
+[1197.000 --> 1198.000] and mobility and navigation?
+[1198.000 --> 1199.240] That's exactly what we did.
+[1199.240 --> 1203.440] We invented a game using the same sort of strategy, and this is the layout of an actual
+[1203.440 --> 1206.040] physical building at the Carroll Center for the Blind.
+[1206.040 --> 1209.080] We have the kids play the game, and the goal here is kind of like Pac-Man: you've got to
+[1209.080 --> 1212.880] roam through this building, you've got to find these little jewels that you see in blue squares,
+[1212.880 --> 1215.080] and I'll show you a video of how the game is played.
+[1215.080 --> 1218.480] You also have to be careful: these red guys, these are the monsters, right?
+[1218.480 --> 1221.440] If they catch you with the jewel, they hide the jewels somewhere else.
+[1221.440 --> 1225.480] It forces you to keep exploring the building and so on.
+[1225.480 --> 1228.400] You have to catch as many jewels as you can and not get caught by the monsters.
+[1228.400 --> 1229.840] We engage them to do this.
+[1229.840 --> 1233.960] They then play the game, then we physically take them to the building and say, okay, now
+[1233.960 --> 1236.800] that you have this map in your mind, can you find your way?
+[1236.800 --> 1240.960] The important thing to keep in mind is that at no time do we tell them this is the goal
+[1240.960 --> 1241.960] of the study.
+[1241.960 --> 1246.120] We just simply say, this is a game, this is how you play it, and then we see what happens.
+[1246.120 --> 1247.120] All right?
+[1247.120 --> 1248.120] So a little bit more detail about this.
+[1248.120 --> 1250.840] As I said, this was done at the Carroll Center for the Blind in Newton, Massachusetts,
+[1250.840 --> 1252.240] a little bit outside of Boston.
+[1252.240 --> 1255.160] We chose this building here, which is the St. Paul building.
+[1255.160 --> 1258.780] The reason why is because this is an administrative building, or at least it was at the time, and
+[1258.780 --> 1261.720] the kids had no prior knowledge of the layout of the building there.
+[1261.720 --> 1265.000] It's a two-story building with about 20 rooms.
+[1265.000 --> 1269.440] It allows us to do sort of a real-world scenario with two floors, looking at, for example, interactions
+[1269.440 --> 1271.160] between floors and so on.
+[1271.160 --> 1275.960] As I said, they don't have any prior experience with this building when they come to the campus.
+[1275.960 --> 1277.720] They play the game, as I mentioned.
+[1277.720 --> 1280.840] We never say, you know, memorize the layout, or anything along those lines.
+[1280.840 --> 1283.680] We just say, play the game, and then we take them physically there, and we have a series
+[1283.680 --> 1287.280] of outcomes to see how well they're able to learn the routes.
+[1287.280 --> 1288.280] All right.
+[1288.280 --> 1289.840] So, more details about how the game works.
+[1289.840 --> 1293.360] We call this AbES, for audio-based environment simulator.
+[1293.360 --> 1297.240] We don't have as clever a name as AudioDoom, but this is how it works.
+[1297.240 --> 1299.120] So you're all familiar with icons, right?
+[1299.120 --> 1302.920] The wastepaper basket, for example, on your computers, where you put documents you
+[1302.920 --> 1303.920] don't like.
+[1303.920 --> 1305.840] So we use earcons: exactly the same thing.
+[1305.840 --> 1308.800] So, to give you an example, here is a knocking sound.
+[1308.800 --> 1312.880] So I think I can play this one here.
+[1312.880 --> 1316.600] If you hear that sound, you know that that's the presence of a door.
+[1316.600 --> 1321.120] If I hear that knocking sound in my right ear, that means the door is on my right side.
+[1321.120 --> 1325.040] If I hear the knocking sound in my left ear, I know the door is on my left side.
+[1325.040 --> 1328.280] If I hear it in front of me, the door is in front of me.
+[1328.280 --> 1331.520] Keep in mind also that when I'm walking through the environment and I hear that knocking
+[1331.520 --> 1336.320] sound in my right ear, if I turn around 180 degrees and come back, I now need to hear
+[1336.320 --> 1338.240] the knocking sound in my left ear, right?
+[1338.240 --> 1342.840] So what the software is doing is keeping track of your egocentric heading and presenting
+[1342.840 --> 1347.600] the sounds in a spatialized manner so that you can build the spatial map in your mind
+[1347.600 --> 1349.840] as you interact with it.
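A sketch of what that heading-aware spatialization amounts to. This is not the actual AbES code, just an illustration: compute the earcon's bearing relative to the player's current heading, pan it between the ears, and scale loudness with distance (the exact pan and gain laws here are assumptions):

```python
import math

def earcon_mix(player_pos, heading_deg, source_pos):
    """Return (left_gain, right_gain) for an earcon at source_pos."""
    dx = source_pos[0] - player_pos[0]
    dy = source_pos[1] - player_pos[1]
    # Bearing of the source relative to where the player is facing.
    bearing = math.degrees(math.atan2(dx, dy)) - heading_deg
    pan = math.sin(math.radians(bearing))        # -1 hard left, +1 hard right
    loudness = 1.0 / (1.0 + math.hypot(dx, dy))  # closer = louder, like the jewels
    return loudness * (1 - pan) / 2, loudness * (1 + pan) / 2

# A door due east of the player, who is facing north: the knock lands in the right ear.
print(earcon_mix((0, 0), 0, (5, 0)))
# After turning around (heading 180), the same door now knocks in the left ear.
print(earcon_mix((0, 0), 180, (5, 0)))
```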
+[1349.840 --> 1354.520] So we use cardinal coordinates, north, south, west, east, so they can always kind of work
+[1354.520 --> 1357.360] in that rigid cardinal coordinate system.
+[1357.360 --> 1360.840] As I said, left ear, right ear, either with speakers or with headphones.
+[1360.840 --> 1365.480] And every step they take is measured, or scaled, to an actual physical step in that building.
+[1365.480 --> 1368.200] So here's a video of a child playing the game.
+[1368.200 --> 1371.000] And remember, they don't see anything on the screen, right?
+[1371.000 --> 1374.520] I'm just simply showing you this so we can track what the actual movement
+[1374.520 --> 1375.520] is.
+[1375.520 --> 1376.520] So here they go.
+[1376.520 --> 1378.520] They come up on a door.
+[1378.520 --> 1379.520] Open it.
+[1379.520 --> 1380.520] Knocking.
+[1380.520 --> 1383.360] That's the ringing sound; that's a jewel.
+[1383.360 --> 1386.840] As they get closer and closer to the jewel, the loudness of the sound increases.
+[1386.840 --> 1387.840] That allows them to get organized.
+[1387.840 --> 1390.040] They're getting oriented to where that sound is.
+[1390.040 --> 1391.040] They get the jewel.
+[1391.040 --> 1393.400] They've got to go outside now.
+[1393.400 --> 1394.400] Take it outside.
+[1394.400 --> 1399.640] The red dots, as I mentioned, are the monsters moving around trying to catch you.
+[1399.640 --> 1400.640] Yes.
+[1400.640 --> 1401.640] An obstacle.
+[1401.640 --> 1402.640] North.
+[1402.640 --> 1406.640] Outside. [inaudible]
+[1406.640 --> 1407.640] Back.
+[1407.640 --> 1409.640] There's stairwell one.
+[1409.640 --> 1410.640] West.
+[1410.640 --> 1411.640] This is the stairwell.
+[1411.640 --> 1420.640] As they climb the stairs, the pitch increases.
+[1420.640 --> 1421.640] Second floor.
+[1421.640 --> 1424.640] And they get to the top there.
+[1424.640 --> 1428.640] And they keep exploring and exploring and exploring.
+[1428.640 --> 1432.640] They play for a total of about an hour and 30 minutes, an hour and a half.
+[1432.640 --> 1435.320] Completely engaged. As I said, they see nothing on the screen.
+[1435.320 --> 1438.040] That's just for us to track them, to see where they're heading.
+[1438.040 --> 1440.640] And believe me, it's tough to get this out of their hands.
+[1440.640 --> 1443.200] They're really, really engaged in playing this game.
+[1443.200 --> 1444.200] Here's the study design.
+[1444.200 --> 1446.720] As I said, we did this as a randomized clinical trial.
+[1446.720 --> 1448.720] So we took all comers into the study.
+[1448.720 --> 1450.000] I'll give you more details.
+[1450.000 --> 1452.720] And we randomized them into three separate groups.
+[1452.720 --> 1457.120] And before I show you the three groups, let me show you the breakdown of the various aspects.
+[1457.120 --> 1461.880] You can play AbES, the video game, if you will, in directed navigation mode.
+[1461.880 --> 1466.840] This means that I give you a start place and an end place, and you learn the layout of
+[1466.840 --> 1467.840] the building.
+[1467.840 --> 1471.840] And what we did is we paired each child, or each individual in the study, with an orientation
+[1471.840 --> 1477.440] and mobility instructor who sits next to them and teaches them, step by step, the layout of
+[1477.440 --> 1478.440] the building.
+[1478.440 --> 1482.200] It's the same thing, a virtual replication, if you will, of what they would do in actual
+[1482.200 --> 1485.360] O&M instruction in that building.
+[1485.360 --> 1488.320] So they work one-on-one with an orientation and mobility instructor.
+[1488.320 --> 1492.880] So that's called structured learning, or the directed navigator arm of the study.
+[1492.880 --> 1496.200] The other arm, really the intervention of interest, is the gaming arm.
+[1496.200 --> 1499.160] Exactly as I said: there are monsters, there are jewels, and so on.
+[1499.160 --> 1502.840] We simply explain to the child, or the individual, this is how the game is played.
+[1502.840 --> 1503.840] This is the goal.
+[1503.840 --> 1505.600] You've got to find these jewels that are hidden throughout the building.
+[1505.600 --> 1506.600] You've got to avoid the monsters.
+[1506.600 --> 1509.200] If they catch you, they hide the jewels somewhere else.
+[1509.200 --> 1512.800] And the more jewels you can find, the better it is in terms of your score.
+[1512.800 --> 1516.320] We never tell them you have to explicitly learn the layout of the building.
+[1516.320 --> 1519.680] We just simply say, this is how you play the game.
+[1519.680 --> 1522.520] So now, as I said, this was a three-arm randomized clinical trial.
+[1522.520 --> 1524.480] We had three arms in the study.
+[1524.480 --> 1529.280] Some were enrolled, or randomized, to the directed navigator arm, again, working with an orientation
+[1529.280 --> 1530.720] and mobility instructor.
+[1530.720 --> 1534.440] Some were in the game-playing arm, and some were in the control group.
+[1534.440 --> 1538.200] The control group also played a game, but the building had nothing to do with the target building
+[1538.200 --> 1539.200] that we were trying to teach.
+[1539.200 --> 1542.200] We wanted to see the potential benefit of actually playing the game itself, even though
+[1542.200 --> 1545.400] the overall target wasn't matching.
+[1545.400 --> 1548.640] They go through, they play, as I said, for about an hour and a half.
+[1548.640 --> 1552.720] In each arm, we look at their proficiency at virtual navigation, so going from target
+[1552.720 --> 1556.880] A to target B, or target C to target D, and so on. We look at whether or not they can do
+[1556.880 --> 1558.520] it and how long they take.
+[1558.520 --> 1560.360] We also then transfer them to the real world.
+[1560.360 --> 1563.920] We take them to the physical building and see whether or not they can transfer those
+[1563.920 --> 1566.040] skills, what they learned in terms of their map.
+[1566.040 --> 1569.280] And then the last thing we look at is what are called drop-off tasks, which you're probably all
+[1569.280 --> 1570.760] familiar with in the O&M world.
+[1570.760 --> 1575.920] In other words, instead of asking them to just go A to B, C to D, E to F, we bring them
+[1575.920 --> 1580.280] to various positions in the building and we say, from where you're standing now, what's the
+[1580.280 --> 1582.640] shortest way out of the building?
+[1582.640 --> 1586.800] So we give them sort of a task to force them to manipulate the information in their mind.
+[1586.800 --> 1588.640] So that's the drop-off task in this.
+[1588.640 --> 1593.360] So just to remind you: the comparison of directed navigation versus the game tells us something
+[1593.360 --> 1595.720] about the method of instruction.
+[1595.720 --> 1599.240] The comparison of the control group versus the game tells us something about the gaming
+[1599.240 --> 1600.240] context.
+[1600.240 --> 1601.640] And that's why we have the three arms.
+[1601.640 --> 1602.640] Okay?
+[1602.640 --> 1604.040] So let's take a look at some of the data.
+[1604.040 --> 1606.720] Before I do, here's some more information.
+[1606.720 --> 1611.200] The inclusion criteria: we took adults anywhere aged between 18 and 45.
+[1611.200 --> 1614.280] I'll show you a youth study that we did specifically right after that.
+[1614.280 --> 1615.280] Male and female.
+[1615.280 --> 1618.400] We documented legal blindness before the age of three.
+[1618.400 --> 1623.200] And blindness of ocular cause, regardless of the level of visual acuity or residual
+[1623.200 --> 1628.200] function; they were all blindfolded throughout the study as they played the game.
+[1628.200 --> 1632.760] Outcome measures were things like the number of paths that they got correct, the time to target,
+[1632.760 --> 1634.240] and also what we call creativity points.
+[1634.240 --> 1638.280] In other words, how well were they able to find their way out, the quickest way possible?
+[1638.280 --> 1640.600] I'll explain this to you more specifically.
+[1640.600 --> 1644.840] Qualitative things, like the types of errors they made, or, for example, the strategies
+[1644.840 --> 1649.480] employed, all that was documented as well, to get a sense of what was happening.
+[1649.480 --> 1674.480] [inaudible]
+[1674.480 --> 1677.480] We had these stopping rules in there to try to get away from any concerns.
+[1677.480 --> 1679.480] I can tell you that it actually never happened.
+[1679.480 --> 1681.480] It was actually quite straightforward.
+[1681.480 --> 1686.480] The analysis, again, just some details here in terms of how we were able to do that.
+[1686.480 --> 1691.480] Just to give you some details: the software, the nice thing about it, is that it allows us to quantify a lot of things.
+[1691.480 --> 1693.480] Here's the path that the individual took, right?
+[1693.480 --> 1698.480] How much time they took in the various parts. All this can be quantified and entered into a spreadsheet,
+[1698.480 --> 1703.480] so we can break down the path and see areas that they struggled with, and which paths were more challenging than others.
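To give a flavor of what that kind of breakdown looks like, here is a toy aggregation over a hypothetical walk log. The room names, the times and the log format are all invented for illustration; the talk doesn't describe the real software's internals:

```python
from collections import defaultdict

# Hypothetical (room, seconds spent) log for one virtual walk.
walk_log = [
    ("lobby", 12.4), ("hall_1", 8.2), ("bedroom_3", 21.7),
    ("hall_1", 5.1), ("stairwell", 14.0), ("bedroom_6", 9.3),
]

time_per_room = defaultdict(float)
for room, seconds in walk_log:
    time_per_room[room] += seconds

# Rooms where time piles up are the parts of the path people struggled with.
for room, total in sorted(time_per_room.items(), key=lambda kv: -kv[1]):
    print(room, round(total, 1))
```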
+[1740.480 --> 1745.480] The thing that's noticeable is that the gamers did it equally well, in about the same amount of time as well. +[1745.480 --> 1750.480] Whether you learned this as a directed navigator or you learned this as a gamer, you are able to do this. +[1750.480 --> 1755.480] Those individuals in the third arm, the control group, weren't able to transfer at all, as you might imagine. +[1755.480 --> 1759.480] They got there and we say get to bedroom 6 and they're like, what's bedroom 6? +[1759.480 --> 1765.480] So the context aspect, obviously, was crucial. None of the individuals in that control group, that third arm, were able to do the task. +[1765.480 --> 1773.480] Here's what's interesting. Once they're at bedroom 6, for example, this particular location, we asked them, what's the quickest way out of the building? +[1773.480 --> 1784.480] The gamers always find the quickest way out. The directed navigators just retrace their path, the way that they came in, which is probably not surprising to you. +[1784.480 --> 1793.480] So this tells us that the way that they manipulate the information, learning it through gaming versus how they do it through directed navigation, is probably different. +[1793.480 --> 1799.480] Even though they're very similar on the first task, getting from A to B, C to D, how they manipulate that information seems to be very different. +[1799.480 --> 1805.480] This is how we quantify it. If you can get the quickest way out: there were always at least three ways to get out of the building. +[1805.480 --> 1810.480] If you find the shortest route, we give you three points. If you find the second shortest route, we give you two points. +[1810.480 --> 1815.480] If you find the longest route, that's one. If you get lost and can't find your way out in the six minutes, you get zero. +[1815.480 --> 1818.480] Pretty simple. We call those creativity points. +[1818.480 --> 1824.480] So, the other thing, just very, very quickly to note. Notice how they're actually shorelining, very, very similar to what they actually do in the real world as well. +[1824.480 --> 1830.480] They use very, very much the same strategies in the virtual world that they actually do in the real world as well. +[1830.480 --> 1837.480] Okay, here's the hard data. There were 31 subjects who participated in this study. +[1837.480 --> 1846.480] Again, I'm not showing you the control arm group, because none of those people were able to do the task. I'm doing a head-to-head comparison of those in the directed navigator arm versus those in the gaming arm. +[1846.480 --> 1851.480] And I separated them into early blind and late blind as well, into two groups, +[1851.480 --> 1857.480] because we wanted to see whether prior visual imagery had somehow an effect on the possible performance as well. +[1857.480 --> 1866.480] And here's the data. So, in the early blind group, whether you learned through gaming, in red, or directed navigating, in blue, you had almost 90% correct. +[1866.480 --> 1871.480] You could find your route very, very easily in this regard. There was no statistical difference between the two groups. +[1871.480 --> 1878.480] In the case of late blind, very, very similar performance as well. So, take home message one: they are able to do this task quite well. +[1878.480 --> 1883.480] Almost 90%, anywhere between 80% to 90% performance or correctness, trying to find that route.
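To make the scoring concrete, here is a minimal sketch of the creativity-points rule just described: three points for the shortest exit route, two for the second shortest, one for the longest, zero if lost or over the six-minute limit. The function name and data layout are illustrative assumptions, not the study's actual software.

```python
# Hypothetical sketch of the creativity-points rule described above.
def creativity_points(route_taken, exit_routes, time_taken_s, time_limit_s=360):
    """Score one drop-off trial.

    exit_routes: the building's known exit routes, ordered shortest first
                 (the talk says there were always at least three).
    Returns 3 for the shortest route, 2 for the second shortest,
    1 for the longest, and 0 if the subject got lost or ran out of time.
    """
    if time_taken_s > time_limit_s:
        return 0  # couldn't find the way out within the six minutes
    if route_taken not in exit_routes:
        return 0  # wandered out some unscored way
    rank = exit_routes.index(route_taken)  # 0 = shortest route
    return max(3 - rank, 0)
```

Under this sketch, a gamer who takes the shortest of three exit routes scores 3, while a directed navigator retracing the long inbound path scores 1.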
+[1883.480 --> 1888.480] And it's the same whether you were a directed navigator or a gamer, and it was the same whether you were early blind versus late blind. +[1888.480 --> 1893.480] The drop-off task, the creativity task, if you will, is where we saw the biggest difference. +[1893.480 --> 1902.480] The gamers, in general, were always able to find the shorter routes, whereas the directed navigators always chose the longer paths. +[1902.480 --> 1907.480] And this was true whether you were early blind or late blind, and this was statistically significant as well. +[1908.480 --> 1920.480] So, we also did a follow-up study in terms of adolescents, because one of the concerns that we had is that, well, if you do the virtual navigation first, you're basically consolidating the path in your mind, and then when I take you to the physical place, you're just executing on that path. +[1920.480 --> 1925.480] So, we re-did the study design in a way to try to get specifically at that question. +[1925.480 --> 1931.480] And we did this specifically also in teens, between 14 and 18 years old. And this is how we designed it in this particular case. +[1931.480 --> 1943.480] We enrolled them, they played AbES, the video game, and there were two randomized arms. In the first arm, you did the direct route, then you did the drop-off task; in the second arm, you did the drop-off task, then you did the route. +[1943.480 --> 1948.480] So, there was no carryover effect of what you did on the first task versus the other. It was a wash in this sense. +[1948.480 --> 1955.480] And what we found in this case was very, very similar performance. The direct navigation route, task one, mean performance was about 70%. +[1955.480 --> 1964.480] Task two, the drop-off task, mean performance was 97%. So, the gamers all did very, very well in this. And the other thing, too, is the mean shortest path was about 71%. +[1964.480 --> 1977.480] So, 71% of the time, they would typically take the shortest path. So, the gaming itself, whatever the task order was, seemed to indeed allow this potential benefit of transfer in the navigation task. +[1978.480 --> 1990.480] A couple other things to think about, which was interesting. In terms of their performance, we noticed that the more jewels they found, in other words, the better they played the game, the better their overall performance was. This is true for task one and task two. +[1990.480 --> 1996.480] So, the better you played the game, the better you actually learned, and the better you actually transferred into the real world. +[1996.480 --> 2008.480] In contrast, if you look at performance as a function of the number of years of O&M skill, there was no association. So, it wasn't biased by the fact that these kids may have had more independence or were better at orientation and mobility. +[2008.480 --> 2015.480] We found that there was no significant association between the two. Performance was actually directly correlated to how well you played the game. +[2016.480 --> 2024.480] So, various results to think about. So, first of all, as I said, it's about an 85% success rate when it comes to just going from one target to another, A to B. +[2024.480 --> 2035.480] There was a correlation between navigation success and gameplay. Other things that we noticed: alternate routes were typically found when you learned it through gaming as opposed to directed didactic instruction. +[2035.480 --> 2046.480] And recall also, the gamers were never told to learn the layout.
They basically learned the layout for free. We simply said, this is a game, this is how you play it, and they got the map for free just by interacting with the game. +[2046.480 --> 2057.480] And the argument that I would make to you is that those maps were more flexible. The way that they manipulated the information in their mind as a gamer was very, very different than in the case of a directed navigator, which is what I'm summarizing here. +[2058.480 --> 2069.480] So, in both cases, they're able to form the map in their mind. They're able to transfer that to a real world setting. But what I would submit to you is that in the case of directed navigators, they were somewhat constrained, if you will, because of the didactic learning. +[2069.480 --> 2086.480] They could only use the information that they were taught. So, structured, basically what you would call route knowledge in terms of O&M, whereas the gamers' exploratory learning, self-discovery and pace allowed a certain cognitive flexibility that they didn't have in the case of directed navigation. +[2086.480 --> 2100.480] So, in the case of the gamers, you might want to call this survey knowledge, for example, in terms of O&M. So, big difference in terms of how they were able to perform: even though similar performances in some aspects, very different performances in other aspects as well. +[2100.480 --> 2110.480] So, how does the brain do this? At the end of the day, we stick everybody into the scanner. That's what we did. So now we're getting to the scanner. +[2110.480 --> 2120.480] So, let's talk about the neuroscience behind this. That was the behavioral aspect. How do they do this? So let me tell you a little background behind navigation and so on, and how the brain does this. +[2120.480 --> 2133.480] A lot of the initial work about navigation and finding your way, interestingly enough, was done studying London taxi drivers. If you've been to London, you know that it's just a terrible place to drive. Imagine being a taxi driver. +[2133.480 --> 2146.480] And if you want to be licensed in the city of London to drive a taxi, you have to do something called the Knowledge, where you spend two years of intensive teaching, I should say, intensive learning, memorizing the map of London. +[2146.480 --> 2157.480] And they go through very, very interesting exercises where they have to close their eyes and mentally imagine the route that they would take. So, they close their eyes and the instructor will say, okay, you pick up a fare at Piccadilly, how do you bring them to Buckingham Palace? +[2157.480 --> 2168.480] And it's like, I turn right here, go down these streets, turn right, left, and so on. And they memorize that map through this mental exercise all the time. They do two years of this before they actually get the license to drive. +[2168.480 --> 2182.480] Very clever. It was Eleanor Maguire and her students who studied this in London. And they had a very, very clever idea. They decided to take these taxi drivers and look at their hippocampus, which you know is the part of the brain responsible for memory and spatial learning. +[2182.480 --> 2201.480] And what they found is that not only was it larger in these London taxi drivers, it was actually correlated with the number of years that they drove the taxi. So, there's structural evidence that this part of the brain physically changed.
They then compared that to Londoners who drive in London but weren't taxi drivers, and there was absolutely no change over time. +[2201.480 --> 2212.480] So, their hippocampus was bigger than that of an age-matched Londoner who didn't drive a taxi, and it got bigger the longer you drove a taxi as well. So, interesting associative evidence between the two. +[2212.480 --> 2227.480] The other thing that they figured out was the network of brain areas that was responsible for it. So, interesting pieces: the parietal cortex, which you probably know is responsible for spatial processing; the hippocampus, as I mentioned, in terms of memory and route learning. +[2227.480 --> 2239.480] The frontal cortex, involved with executive decisions, right? And of course the visual cortex, because you have to use visual information around you and integrate that. And how did they figure all that out? Using video games. +[2239.480 --> 2251.480] So, this is Crazy Taxi in London. They took London taxi drivers and asked them to play the video game in the scanner. And they identified all these areas that I mentioned, you know, parietal cortex, visual cortex, frontal areas and so on. +[2251.480 --> 2263.480] So, here's video games showing up again, allowing us to figure out what a taxi driver's brain looks like in terms of the map. So, with that in mind, we went back to our gamers and this is how we did it. +[2263.480 --> 2273.480] So, this is a sighted control in the fMRI scanner. He's looking through a mirror here, going through the video game like this. He's using headphones, looking through the mirror. He can see the screen projected behind him. +[2273.480 --> 2281.480] He's using a series of keys to move left and right and so on, exactly the same way that you would with the keyboard of a laptop. And he's doing this visually. +[2281.480 --> 2290.480] We then bring in our blind participants, doing exactly the same thing. Obviously the monitor is off, and we compare the two in terms of how they're able to do that. +[2290.480 --> 2301.480] What we found is, sure enough, a network of activation very, very similar to what we saw in the taxi drivers as well. So, visual areas; auditory cortex is active, right? Because they're hearing the sounds. +[2301.480 --> 2315.480] Frontal areas, very, very important for executive decisions. We also saw activations in motor areas. This is because they're using the keys, moving around. And sure enough, activation of visual cortex, as well as the parahippocampus, which you see right now. +[2315.480 --> 2322.480] So, all the areas that were identified in the London taxi study, we found the same network in our sighted participants as well. +[2322.480 --> 2335.480] What do you think the brain looks like in our ocular blind participants? The same. Yeah, exactly the same. Same areas: auditory, motor, frontal, visual cortex, parahippocampus. +[2335.480 --> 2343.480] Again, the connectivity is all there. They're using the same network, even though they're not necessarily using that visual information the same way. +[2343.480 --> 2350.480] So, they're driving the same system through another portal, if you will. Let's look at this a little bit more systematically, all right? +[2350.480 --> 2357.480] So, for example, is all this brain activation and so on indeed related to the actual video game? Here's the blind individual. +[2357.480 --> 2364.480] We asked them to just simply listen to the instructions. Don't play the game, don't move, don't do anything.
Just listen to the instructions. +[2364.480 --> 2372.480] And we have activation in auditory cortex. And we also have activation in sensory motor areas, because they're mentally imagining doing that motion. +[2372.480 --> 2380.480] Here's the same individual just randomly walking. So, they're listening to the cues and we just say, you know, walk in a circle. Don't go anywhere. +[2380.480 --> 2390.480] And again, activation in auditory cortex and sensory motor areas as well. Now we ask the individual, we want you to walk from A to B, C to D, in a goal-directed fashion. +[2390.480 --> 2399.480] And that's where you see everything light up, right? So, it's the engagement that does this, right? Again, going back to that earlier question, how do you turn the brain on? +[2399.480 --> 2405.480] Give it something really hard to do. It really likes that. That's how you create all that engagement. +[2405.480 --> 2413.480] Other things that were kind of interesting. We started looking at all our participants one by one and we saw a really, really wide variability in terms of activation. +[2413.480 --> 2421.480] We saw some people really, really locked in, all sorts of activation everywhere. Another individual, less so, right? +[2421.480 --> 2429.480] Now, we saw other individuals that had really, really strong visual cortex activation. Other individuals? Basically nothing, or other areas of the brain were active. +[2429.480 --> 2446.480] So, we wanted to make sense of this. Why was everybody who was playing this game using different parts of their brain? Right? What was behind this? Is there some way we can take these individuals and associate that with their performance in terms of the game and how they were actually using information? +[2446.480 --> 2456.480] So, the way we started to do this, trying to associate brain activity with behavior, is we used a rating scale. And this gets back to the earlier question about ROP that I promised I would not forget about. +[2456.480 --> 2474.480] So, we used something called the self-report measure of environmental spatial ability. This was developed by Mary Hegarty. And she developed a scale that was first developed for sighted individuals, translated for the blind, in terms of trying to figure out how independent an individual was in terms of their orientation, mobility, and navigation skills. +[2474.480 --> 2490.480] So, the questions are like: I am very good at giving directions, on a scale of say 1 to 8, 8 being very good, 1 being very poor. I have trouble understanding directions. Right? So, notice the negativity on this one. We ask it in both ways to make sure that we don't get biased by one polarity versus the other. +[2490.480 --> 2504.480] I usually remember a new route after I have traveled it only once, scale of 1 to 8. And then it goes through analysis and it gives you an independence score. The higher the number, the more theoretically independent you are, and the more confident you are in terms of your travel. +[2504.480 --> 2513.480] All right? Here are our nine participants in the fMRI study, rank-ordered by their independence score from 1 to 9. And what do you notice? +[2513.480 --> 2525.480] Right now, the ROP individuals are in the lower half. So, that's just a first piece about this aspect of whether or not ROP is somehow related with spatial abilities. But I'll get back to that during the question period.
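As a rough illustration of how a self-report scale like this is typically scored, here is a sketch with reverse-coding of the negatively worded items. The item texts, the 1-to-8 range, and the averaging are assumptions based on the talk, not Hegarty's published scoring procedure.

```python
# Hypothetical sketch of scoring a scale with mixed-polarity items.
POSITIVE_ITEMS = [
    "I am very good at giving directions.",
    "I usually remember a new route after I have traveled it only once.",
]
NEGATIVE_ITEMS = [
    "I have trouble understanding directions.",  # reverse-scored
]

def independence_score(responses):
    """responses: dict mapping item text to a rating from 1 to 8.
    Negatively worded items are reverse-coded (1 <-> 8) so that a higher
    score always means more self-reported independence."""
    total = sum(responses[item] for item in POSITIVE_ITEMS)
    total += sum(9 - responses[item] for item in NEGATIVE_ITEMS)
    return total / (len(POSITIVE_ITEMS) + len(NEGATIVE_ITEMS))
```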
+[2525.480 --> 2543.480] The main thing that I thought was quite interesting is that their independence score was really interestingly related to their primary mobility aid. So, the top independence scores were long cane users. The middle scores were using guide dogs. And the lower scores were all people, for example, using the RIDE program or having a driver taking them around and so on. +[2543.480 --> 2559.480] We didn't ask this specifically; this actually came out as a function of the questionnaire that we asked. So, they were rank-ordered and it seemed to parallel very, very much their primary mobility aids as well. So, we had a good sense that we were able to rank-order these individuals in a real world setting. +[2559.480 --> 2576.480] When we do that, we take the scale, we put that into an equation based on brain activity, and we ask the software to tell us what part of the brain correlates the best with their independence. And the part of the brain was this area here called the temporo-parietal junction, or TPJ. +[2576.480 --> 2592.480] So, there's the correlation analysis: brain activation as a function of their independence score. And of all the parts of the brain, this is the one that was intimately related to their independence level. And the reason why I think this is interesting is TPJ is the part of the brain that's normally active when we tell ourselves stories. +[2592.480 --> 2618.480] So, it's kind of interesting that those individuals who are the most independent in terms of navigation are probably the people who can somehow rehearse that story in their mind of where they're going. It wasn't necessarily the visual cortex, it wasn't necessarily frontal cortex. It's the part of the brain that kind of sits in the middle of everything: parietal, visual, temporal, and frontal. The nexus, if you will, of all of these brain areas. That was the part of the brain most correlated with it. +[2618.480 --> 2632.480] Okay, where are we heading now? So, we started very, very simply with this idea. We took one building, tried to map it and learn it, and tried to get a sense of the neuroscience and all these aspects. Our goal now is to map out the entire campus, as you might imagine. +[2632.480 --> 2645.480] You can imagine going from one building to another to another to another, and we call this sort of audio Zelda, where you find the key in one building, which gives you the map to another building, which forces you to find the other building, to, again, sort of engage them and map out the whole entire campus. +[2645.480 --> 2652.480] You can also think this might be something that you can put on a CD, or maybe make downloadable from the cloud, I should say. +[2652.480 --> 2661.480] And if you have a client coming to the Carroll Center, they can go and play the game on their own time, and once they arrive at Carroll, they already have a good idea of what the layout is of the campus. +[2661.480 --> 2665.480] That's our goal right now, in a particular study, and we are working towards that. +[2665.480 --> 2672.480] We've changed the platform; we're using something called Unity, which is a very, very simple way to program virtual environments. +[2672.480 --> 2683.480] The nice thing about it is that you can use virtually any platform you want: Android, Mac, Playstation; however they want to interact with the game, they can use whatever interface they want. +[2683.480 --> 2688.480] So you build it once, play it everywhere, sort of thing. So very, very fast.
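The brain-behavior analysis described a few paragraphs above, correlating each region's activation with the participants' independence scores and picking the best-correlated region (the TPJ in the study), can be sketched as follows. The region names and values here are invented; the actual analysis was a voxelwise fMRI regression, not this toy loop.

```python
# Hypothetical sketch of relating regional activation to independence scores.
from scipy.stats import pearsonr

def best_correlated_region(activation_by_region, independence_scores):
    """activation_by_region: dict of region name -> per-subject activations.
    independence_scores: per-subject scale scores, in the same subject order.
    Returns the region whose activation tracks independence most strongly."""
    correlations = {
        region: pearsonr(values, independence_scores)  # (r, p-value)
        for region, values in activation_by_region.items()
    }
    best = max(correlations, key=lambda region: abs(correlations[region][0]))
    return best, correlations[best]
```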
It allows us to create these environments. +[2688.480 --> 2694.480] Just showing you here, we call it HAGA now, for haptic audio game application, because we're adding tactile components to it as well. +[2694.480 --> 2699.480] Here's the indoor environment, there's the outdoor environment, and here's a blind individual, you can see, interacting with it. +[2699.480 --> 2704.480] So using the audio, as I mentioned before, and also using the rumble feature of the Xbox controller as well. +[2704.480 --> 2710.480] So when they hit an obstacle, they get the feedback, and as they strafe, the frequency changes as well. +[2710.480 --> 2716.480] So there's tactile feedback as well as audio. You also notice this device here; it's called the Falcon, the Novint Falcon. +[2716.480 --> 2722.480] This is a force feedback device, so they can also use that to knock on the door, or to use it as a virtual cane. +[2722.480 --> 2730.480] So all sorts of immersion between the audio as well as the haptic and tactile, to give them that sense of the indoor and outdoor environment. +[2730.480 --> 2739.480] And this is in the works right now. Other things we've played with: the Wiimote, as I mentioned, is an interesting way to use as a virtual cane. +[2739.480 --> 2746.480] So getting that rumble feature; the problem with the Wiimote is that if you hit an obstacle, it starts vibrating, but you can still put your arm through it. +[2746.480 --> 2751.480] So the nice thing about the Falcon is that it actually gives you that force feedback. +[2751.480 --> 2755.480] The Wiimote is just an alarm; it doesn't really give you that same sort of tactile immersion. +[2755.480 --> 2760.480] But nonetheless, interesting. This was work that we did in Chile. A child learned the layout of a park. +[2760.480 --> 2768.480] We take the child there and track them, and using that sort of sense, they build the map of the layout of the park, using the Wiimote as one way to do that. +[2769.480 --> 2777.480] Audiopolis, another interesting one. This is a fictional environment. Same sort of strategy. We have the child play again, +[2777.480 --> 2783.480] audio as well as the Wiimote. The goal here is to chase a thief that's stealing in this virtual city. +[2783.480 --> 2787.480] You have to find the thief; every building you find leaves cues for the next building you have to find. +[2787.480 --> 2794.480] And you kind of go through in a structured fashion. At the end, we give you just the blocks, and you have to rebuild the environment that you worked with. +[2794.480 --> 2798.480] And we again kind of get a sense of the child's spatial skills, how they put the environment together. +[2798.480 --> 2807.480] We've seen, for example, a lot of kids flip it. They know the short linear relationships between buildings, but globally, they have distortion. +[2807.480 --> 2813.480] So it allows us to kind of diagnose what aspect of their spatial representation seems to be impaired, if at all. +[2813.480 --> 2822.480] Other things that we've done: we've now embarked on a project with the Massachusetts Bay Transportation Authority, the MBTA. This runs the subway and the bus system in Boston. +[2822.480 --> 2830.480] If you're from Boston, you might recognize this. This is Park Street Station. And we have a situation now where we've modeled Park Street Station in our virtual environment, +[2830.480 --> 2836.480] hoping that this could be a similar strategy outside of the Carroll Center, now using this in public spaces as well.
+[2836.480 --> 2845.480] We can use this as an offline survey. You can learn how to explore the station before you go. You can maybe use this as an online system as well. +[2845.480 --> 2853.480] You can get information when you're in the station. It's also a way of tracking: you can use this as a way of seeing what are the most common exits that people use, for example. +[2853.480 --> 2865.480] You put that into a pool of data as well. Antonio Grimache is a student in my lab. He has Leber's and is a very proficient traveler on the bus and in the Boston subway system. +[2865.480 --> 2877.480] He had this very intriguing idea of developing a strip map. You're all familiar with what a strip map is. If you take the subway, it's basically a linear representation of all the stations in sequence, and where the connections are. +[2877.480 --> 2885.480] He's developing an app called StripMap for exactly that purpose, for the Boston metro system, the subway system. It looks something like this. +[2885.480 --> 2899.480] The first thing you do, taking advantage of the tactile interface, the gestures, and the audio that you get from your iPhone: you can ask, for example, what direction you want to head in, how to get from one station to another, or to find the best route between two stations. +[2899.480 --> 2906.480] You can choose the line that you want, again, just scrolling through in a strip map fashion, and it will calculate the optimal route for you. +[2906.480 --> 2915.480] Then you get all sorts of feedback, for example, crowdsourced tips that people post on the board: this is a good place to get a coffee, or this is a particularly complicated place. +[2915.480 --> 2924.480] Live schedules: we get an immediate feed from the MBTA, because of our association, of when the trains are coming, when the subway is coming; if there are any delays, they have it immediately on their phone. +[2924.480 --> 2930.480] Other accessibility services, like calling a taxi, and so on. This is in the works as well. +[2930.480 --> 2939.480] So, again, an original idea, neuroscience, now trying to translate that into real world applications that I think people can use. +[2939.480 --> 2945.480] The last example I'll give you is, I think, a very, very interesting one. This isn't a project that I was involved with; this is my collaborators back in Chile. +[2945.480 --> 2955.480] This is a project called IDAC, a Spanish acronym relating to inclusion in the sciences. This was a project that used video games to teach basic anatomy and biology to blind students. +[2955.480 --> 2962.480] In Chile, much like here in the United States, a lot of the kids are mainstreamed. They spend a lot of time in the public school system. +[2962.480 --> 2967.480] And what they do is they've invented a game where a blind individual has to play with two or three sighted classmates. +[2967.480 --> 2974.480] And the idea is to explore the world through these various rooms. And as they go through the world, they learn the anatomy together. +[2974.480 --> 2981.480] The sighted kids see each other as they move through, and the blind child is using the audio cues to navigate with them. So, they're playing the game together. +[2981.480 --> 2987.480] So, it's a combination of teamwork, as well as concrete tools and various models and things which they use together. +[2987.480 --> 2995.480] Notice that they're labeled visually as well as in Braille. Also, all the material is exactly the same.
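The route search at the heart of an app like the StripMap described above can be sketched as a breadth-first search over a station graph. The stations and connections below are invented for illustration and are not the MBTA's real topology.

```python
# Hypothetical sketch of fewest-stop route finding between subway stations.
from collections import deque

NETWORK = {  # adjacency list: station -> directly connected stations
    "Park Street": ["Downtown Crossing", "Boylston", "Charles/MGH"],
    "Downtown Crossing": ["Park Street", "South Station"],
    "Boylston": ["Park Street"],
    "Charles/MGH": ["Park Street"],
    "South Station": ["Downtown Crossing"],
}

def best_route(start, goal, network=NETWORK):
    """Breadth-first search returning the fewest-stop route as a list."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in network.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # no route exists

# best_route("Boylston", "South Station")
# -> ['Boylston', 'Park Street', 'Downtown Crossing', 'South Station']
```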
The reading material presented visually and the tactile Braille material are identical. +[2995.480 --> 3002.480] The idea is to integrate and force the teamwork, to have the kids working together. Very, very interesting results. This is a long-term study with the Ministry of Education that they're doing. +[3002.480 --> 3007.480] So, I think a very clever idea, a very interesting approach to engage them. +[3007.480 --> 3015.480] So, final slides. A great, great, great quote that I love: you can discover more about a person in an hour of play than in a year of conversation. +[3015.480 --> 3025.480] Plato said that. I think that's a really, really interesting idea. The fact that play somehow brings out our better nature, if you will, I think is a very interesting and intriguing one. +[3025.480 --> 3035.480] We solve problems in ways that we normally don't in the real world. A great picture that I love. This is another one that's hanging in my office, from the National Sports Center for the Disabled. +[3035.480 --> 3047.480] You see this guide dog looking up at his master, who is presumably climbing this wall. I love this picture, because we have the sense that individuals are only as good as their technology. +[3047.480 --> 3057.480] I think the idea is to go beyond that. I think the idea is to create independence, create confidence, create a level of functioning beyond the tools that are available. +[3057.480 --> 3067.480] That's what I think this picture symbolizes. And the last one that I want to share with you, just to close out, is straight out of Texas folklore, which I think is very, very appropriate. +[3067.480 --> 3080.480] Let me thank a couple of individuals for this. As I said, my collaborator, a professor at the University of Chile; Aaron Connors, who was a research assistant working on this project; Mark Halko, who did the fMRI work; and of course the Carroll Center for the Blind, where we did a lot of the work. +[3080.480 --> 3090.480] And now my final slide that I want to share with you, again, I think very, very appropriate and right out of Texas folklore. I love this. Clear eyes, full hearts, can't lose. +[3090.480 --> 3099.480] Every time I work with a blind child, this is what I think about. It's exactly that. There's so much more to seeing than the health of the eyes, and also the health of the brain. +[3099.480 --> 3107.480] I think you can experience the world in so many different ways. And I think that's our goal. And if we do that, keep our hearts full, we can't lose. +[3107.480 --> 3109.480] So again, thank you all very much. diff --git a/transcript/allocentric_RSlc9IxdBw8.txt b/transcript/allocentric_RSlc9IxdBw8.txt new file mode 100644 index 0000000000000000000000000000000000000000..5d05e8df9a149041f84fcab2fcaf72b0898e0fe8 --- /dev/null +++ b/transcript/allocentric_RSlc9IxdBw8.txt @@ -0,0 +1,268 @@ +[0.000 --> 15.420] Is it possible to understand everyone at a deep and meaningful level, to get what really +[15.420 --> 19.440] matters to people, no matter how different they are from you? +[19.440 --> 23.840] That proposition sounds a little absurd, after all. +[23.840 --> 26.760] Human psychology is really complex. +[26.760 --> 32.200] Some people are abused as children; others are loved and supported. +[32.200 --> 37.840] The brain of an 18-year-old girl who sleeps with her cell phone is different from that of an 80-year-old +[37.840 --> 42.880] man who can't remember the names of his children. +[42.880 --> 48.320] There's no one way to understand everyone, no broad operating principle.
+[48.320 --> 49.520] That's the conventional wisdom. +[49.520 --> 51.480] It makes perfect sense. +[51.480 --> 54.560] And yet, it's a myth. +[54.560 --> 58.400] A few years ago, I was watching TV, scenes from Afghanistan. +[58.400 --> 63.600] A group of teenage boys was standing in the back of a dusty pickup waving rifles. +[63.600 --> 70.320] And one boy, wrapped in a white cloth, with dazzling blue-green eyes, was staring directly +[70.320 --> 72.120] into the camera. +[72.120 --> 77.000] He looked intent, menacing, and that was the point of the piece. +[77.000 --> 83.280] We should be afraid, because young men were passionate about killing Americans. +[83.280 --> 87.640] Let me tell you about another boy, my nephew, Rory. +[87.640 --> 93.920] At the time I saw this piece, Rory was a freshman in college at Harvard, but Rory's not full +[93.920 --> 94.920] of himself. +[94.920 --> 97.000] In a word, he's sweet. +[97.000 --> 102.200] He's not a hugger, but he always hugs me, because he knows that I am. +[102.200 --> 105.320] He bakes brownies with his young cousins. +[105.320 --> 108.320] He wants to be a doctor one day. +[108.320 --> 113.240] I'm proud of Rory, and I can't imagine a kid more different than that one from Afghanistan. +[113.240 --> 120.160] Except, at a fundamental level, these two boys are exactly the same. +[120.160 --> 123.160] They've chosen their respective paths, +[123.160 --> 125.240] join the Taliban, +[125.240 --> 128.320] go to Harvard, for the same internal reasons. +[128.320 --> 131.000] They both would like respect. +[131.000 --> 134.720] Everyone knows that when you go to Harvard, people look up to you for the rest of your +[134.720 --> 136.040] life. +[136.040 --> 141.800] And when you join the Taliban, little kids look on in awe as you drive by in that dusty +[141.800 --> 143.360] vehicle. +[143.360 --> 147.640] They also want community, belonging. +[147.640 --> 153.160] Rory's got close friends, the men of Harvard, but no closer, I bet, than the men of the +[153.160 --> 155.080] Taliban. +[155.080 --> 160.680] And lastly, and probably most important to both, they want to make a difference in their +[160.680 --> 162.320] worlds. +[162.320 --> 165.720] They want to help those they love. +[165.720 --> 171.760] What's amazing and horrifying is that one will learn to be a doctor and the other +[172.440 --> 174.760] will learn to kill. +[174.760 --> 181.960] It's true that human behavior is amazingly varied and complex, but at the level of motivation, +[181.960 --> 188.840] at the level of what drives us to do all those different things, we're actually identical. +[188.840 --> 194.720] There's a formula for understanding why we do what we do, and once you get it, you get +[194.720 --> 195.720] it. +[195.720 --> 199.080] There are 30 basic human motivations. +[199.080 --> 201.440] Let me give you a quick primer. +[201.440 --> 203.280] Here's the obvious, the physical. +[203.280 --> 204.280] We want to survive. +[204.280 --> 206.120] We need air, food, and water. +[206.120 --> 212.800] There's a second category of relational needs that help us understand how to balance +[212.800 --> 215.400] our self-interest and that of the community. +[215.400 --> 219.360] We all want to receive care, understanding, love. +[219.360 --> 224.440] But at the same time, we want to give our love, to help others in our lives. +[224.440 --> 228.320] And then there's a third category of needs, +[228.320 --> 231.400] what we might call aspirational or spiritual. +[231.400 --> 232.680] We want to grow.
+[232.680 --> 236.280] We all crave adventure and beauty. +[236.280 --> 240.280] I'm not going to go through the whole list, because everything on the list you're already +[240.280 --> 242.080] familiar with. +[242.080 --> 247.560] But don't then mistake this for that old high school sociology lesson where the teacher +[247.560 --> 253.560] says: human beings have needs; if they're not fulfilled, unhappiness and war. +[253.560 --> 254.560] That's all true. +[254.560 --> 258.480] But I'm not here to make that macro sociological point. +[258.480 --> 265.520] I'm here to help you understand the micro, the human individual in any given moment. +[265.520 --> 272.680] What drives your mother, your spouse, your boss? Human behavior, no matter how seemingly +[272.680 --> 281.120] bizarre or mundane, is designed internally to fulfill one or some of the common needs. +[281.120 --> 286.760] If you want to understand what really matters to a person at the level of deep motivation, +[286.760 --> 292.080] ask which of the common needs they have been pursuing. +[292.080 --> 294.520] Here's a story from my personal life. +[294.520 --> 298.640] My wife Shelley sometimes gets upset with me for not cleaning the dishes to her exacting +[298.640 --> 300.360] standard. +[300.360 --> 306.840] I can see her there, as I'm cleaning, over my left shoulder, pretending to read the mail, +[306.840 --> 308.400] watching me. +[308.400 --> 312.680] Now I could easily conclude, that's a little weird. +[312.680 --> 315.880] She might be OCD. +[315.880 --> 321.280] But these brilliant observations don't get me very far. +[321.280 --> 327.880] If I want to understand my wife, and I do, I ask a basic question: what needs are driving +[327.880 --> 328.880] her? +[328.880 --> 330.520] Shelley's a busy woman. +[330.520 --> 332.480] She teaches high school full-time. +[332.480 --> 334.320] She drives our kids everywhere. +[334.320 --> 336.920] She calls my mom to say hi. +[336.920 --> 338.480] And I love you. +[338.480 --> 340.480] Excuse me. +[340.480 --> 344.520] I got a little emotional. +[344.520 --> 346.360] She calls my mom to say hi. +[346.360 --> 348.600] And I love you. +[348.600 --> 352.280] Clean dishes, neatly stacked and put away, +[352.280 --> 356.640] fulfill in her the common needs for order and rest. +[356.640 --> 359.760] Finally, some peace of mind. +[359.760 --> 364.440] And there's one more huge need motivating her dishwasher spying. +[364.440 --> 371.040] When I leave stuff on the dishes, like that big piece of vermicelli hanging off the +[371.040 --> 377.840] back that's so super obvious to her, after she said, Larry, do a good job this time? +[377.840 --> 380.320] This time, please do a good job. +[380.320 --> 383.240] She concludes, I don't care about her. +[383.240 --> 389.080] If you want to understand everyone, including Shelley: the outside world matters to us only +[389.080 --> 393.880] because we're trying to fulfill needs internally. +[393.880 --> 396.760] She doesn't really care about clean dishes. +[396.760 --> 402.360] At depth, she, like everyone else, wants respect, to be loved. +[402.360 --> 407.800] Human behavior is complex, but human motivation is actually simple. +[407.800 --> 411.360] We seek these common needs and nothing else. +[411.360 --> 415.960] I didn't myself discover that common needs drive human behavior.
+[415.960 --> 420.660] The idea was proposed around 50 years ago by the psychologist Carl Rogers, and then +[420.660 --> 426.080] further developed by the extraordinary peacemaker, Marshall Rosenberg. +[426.080 --> 431.960] I came across their concepts around 15 years ago and they made good sense to me. +[431.960 --> 436.480] So I began to implement them in my personal life to decode family and friends. +[436.480 --> 438.200] And I was understanding people. +[438.200 --> 439.960] I was intrigued. +[439.960 --> 441.840] But I was also skeptical. +[441.840 --> 449.760] I asked Marshall Rosenberg, why 30 needs and not 755? +[449.760 --> 453.440] And he said, oh, it could be 30 or 755. +[453.440 --> 457.280] The need to survive, for example, could be further broken down into the needs to not +[457.280 --> 461.680] walk off a cliff or to not be eaten by predators. +[461.680 --> 464.200] 30 is just a useful level of aggregation. +[464.200 --> 465.480] I thought, okay, that's a good answer. +[465.480 --> 467.360] But what about this, Marshall? +[467.360 --> 470.880] What are needs from a neurological perspective? +[470.880 --> 471.880] What's happening in the brain? +[471.880 --> 474.360] How do they actually motivate us? +[474.360 --> 478.040] And here, Marshall said, oh, that's simple. +[478.040 --> 483.120] Needs are life force, human life force. +[483.120 --> 488.160] And I thought, whoa, that's not science at all. +[488.160 --> 493.320] And so I spent the next two years meeting with neuropsychologists and speaking with evolutionary +[493.320 --> 497.640] biologists and reading cognitive journals with footnotes. +[497.640 --> 503.640] And I eventually concluded, this needs stuff is grounded in solid science. +[503.640 --> 512.600] And because research shows that if you mention the word neuroscience or brain in a big talk, +[512.600 --> 516.280] it's a thousand times more likely to go viral, +[516.280 --> 520.440] let me say, this is neuroscience. +[520.440 --> 521.920] Brain science. +[521.920 --> 523.720] Neuro and brain. +[523.720 --> 524.720] Neuro-brain. +[525.720 --> 528.560] Now, I'm not a scientist. +[528.560 --> 532.520] I'm a lawyer, a mediator, and a writer. +[532.520 --> 538.760] But being a layperson has allowed me to unravel the science, to translate it away from chemicals +[538.760 --> 544.040] like oxytocin and dopamine, and into what I believe is a useful narrative. +[544.040 --> 550.040] And so here's what I believe is going on in the human brain with needs. +[550.040 --> 555.920] The human unconscious evaluates the world, telling us whether it's dangerous or friendly. +[555.920 --> 557.640] That's its job. +[557.640 --> 562.040] Once it reaches its conclusion, it's got to motivate the whole system, including the conscious +[562.040 --> 564.840] mind, to do something about it. +[564.840 --> 566.120] How? +[566.120 --> 571.720] If it concludes that the world's dangerous, we naturally feel fear or anxiety. +[571.720 --> 573.840] We try to get less of what caused it. +[573.840 --> 579.720] If it concludes the world is friendly, we naturally feel happy or excited, and we try +[579.720 --> 581.240] to get more. +[581.240 --> 590.040] But, and this is the key, how does the unconscious determine what's dangerous and what's friendly? +[590.040 --> 593.120] It's not just left up to each of us individually. +[593.120 --> 599.920] Rather, the criteria upon which we evaluate the world are born into you and born into me +[599.920 --> 602.040] and born into all of us.
+[602.040 --> 604.400] Those are the human needs. +[604.400 --> 611.920] Those specific criteria were honed through evolution because they allow us to survive, +[611.920 --> 616.480] to relate to other people and ultimately to make more people. +[616.480 --> 618.520] Am I being respected? +[618.520 --> 622.080] Am I making a contribution in the world? +[622.080 --> 625.320] Does she think I'm cute? +[625.320 --> 632.560] If so, pleasure, get more of that, if not pain, change the world. +[632.560 --> 638.880] It took me several years to unravel the science in a way that made narrative sense to me. +[638.880 --> 643.680] And yet, in that time, I actually stopped caring so much about what was happening in the +[643.680 --> 644.840] brain. +[644.840 --> 650.480] I was using this and understanding people in a way that I didn't think was possible. +[650.480 --> 652.600] I was seeing their hearts. +[652.600 --> 653.600] It worked. +[653.600 --> 657.600] And really, that's what counts. +[657.600 --> 660.080] I'd like to tie this together with a, with a story. +[660.080 --> 662.080] As I said, I'm a mediator. +[662.080 --> 666.520] When people are at war, they come to me and I help them work it out. +[666.520 --> 671.720] Not too long ago, I was visited by a couple that had already been divorced. +[671.720 --> 675.720] The ex-wife Sophia said a precious object had gone missing. +[675.720 --> 677.800] What was it? +[677.800 --> 682.440] Sophia had never met her father and her mother died when she was a little girl. +[682.440 --> 685.120] She was raised by her grandmother. +[685.120 --> 690.640] And in her grandmother's house hung this large painting, painted by Sophia's grandmother +[690.640 --> 693.320] of Sophia's mother. +[693.320 --> 697.680] Sophia used to look at this painting when she was a little girl and imagine herself holding +[697.680 --> 703.600] her mom's hand and kissing her mom's cheek. +[703.600 --> 708.720] Sophia's grandmother, the painter, died a few weeks before the mediation. +[708.720 --> 712.640] And in her final hours, she signed the picture. +[712.640 --> 719.400] Sophia described this with tears and finally looked to her ex-husband and she said, Frank +[719.400 --> 722.000] took the picture. +[722.000 --> 727.360] Frank, when are you going to stop trying to punish me for the affair? +[727.360 --> 732.760] I looked at the guy and his face was cold as stone and I thought, whoa. +[732.760 --> 735.880] People come to see me because I can help solve their problems. +[735.880 --> 738.240] But I'm kind of a one trick pony. +[738.240 --> 740.320] The thing is I have this excellent trick. +[740.320 --> 744.000] I can help them understand each other's hidden motivations. +[744.000 --> 747.720] And I knew something that Sophia didn't. +[747.720 --> 750.800] Frank wasn't trying to punish her. +[750.800 --> 755.160] People often think revenge is a human motive. +[755.160 --> 758.120] But hurting another person is not a human need. +[758.120 --> 760.040] Now, how do I know? +[760.040 --> 765.000] Well, here's a trick I developed a few years ago that I find very useful. +[765.000 --> 769.960] If you ever think that somebody is motivated by something that doesn't personally give +[769.960 --> 773.600] you pleasure, you actually haven't found their motivation. +[773.600 --> 775.480] Go deeper. +[775.480 --> 778.400] I don't get pleasure from hurting other people. +[778.400 --> 781.160] If it's not in me, it's not a common need. 
+[781.160 --> 784.720] And if it's not a common need, it's not a human motivation. +[784.720 --> 786.040] Go deeper. +[786.040 --> 789.400] Revenge is pursued to fulfill another need. +[789.400 --> 790.800] But what? +[790.800 --> 793.880] It varies, but very often it's a need for understanding. +[793.880 --> 800.440] If I hurt you, you will understand at the level of personal pain, at the level of intense +[800.440 --> 804.560] personal suffering, what you did to me. +[804.560 --> 807.040] You'll finally get it. +[807.040 --> 809.560] This wasn't the case for Frank. +[809.560 --> 816.120] My theory that he had taken the picture in order to be understood for the pain of the +[816.120 --> 817.720] affair was wrong. +[817.720 --> 819.680] I often guess wrong. +[819.680 --> 824.200] But the fact that I was guessing, and without blame, convinced him to share something else. +[824.200 --> 826.480] His eyes welled with tears. +[826.480 --> 831.200] And he looked over at his ex-wife, Sophia, and he said, Soph. +[831.200 --> 834.040] She had become my grandmother too. +[834.040 --> 837.200] She was all that I had. +[837.200 --> 841.080] You were all that I had. +[841.080 --> 844.480] Frank was an orphan too, just like Sophia. +[844.480 --> 851.880] He took the painting to fulfill a common human need of connection. +[851.880 --> 854.480] Hurting Sophia was never the point. +[854.480 --> 859.120] Sophia moved next to Frank on the couch, and she wrapped her arms around him, and they +[859.120 --> 862.280] sobbed together for ten minutes. +[862.280 --> 863.280] And I cried too. +[863.400 --> 864.560] I had ten minutes. +[864.560 --> 866.720] What was I going to do? +[866.720 --> 875.720] Frank ultimately returned the painting to Sophia, and she dug up a trove of old photos of +[875.720 --> 880.320] Frank with her grandmother, so that he could remember his family. +[880.320 --> 881.320] Understand what happened here. +[881.320 --> 887.120] We didn't make the common and easy mistake of thinking that revenge is a motive. +[887.120 --> 892.640] Instead we went to the source of all human motivation, to the common needs. +[892.640 --> 897.920] And Sophia understood that Frank had simply needed connection, human connection, and in +[897.920 --> 900.680] particular to her grandmother. She got it. +[900.680 --> 905.880] She could feel it, and then the magic, and then solutions. +[905.880 --> 911.760] Now many people, including some in this audience, are wary of understanding others, and especially +[911.760 --> 913.480] during conflict. +[913.480 --> 919.760] The thought goes like this: if I understand the reasons you did what you did, I'm basically +[919.760 --> 922.560] saying you were justified. +[922.560 --> 925.520] Understanding seems like condoning. +[925.520 --> 930.000] And for this reason, people often say don't go inside the mind of a terrorist, don't +[930.000 --> 931.760] get them. +[931.760 --> 936.920] To get a terrorist is to legitimate terrorism. +[936.920 --> 938.760] It's to be an apologist. +[938.760 --> 942.960] And for this reason, it was suggested to me that I drop from my talk the piece about +[942.960 --> 950.720] the Taliban teenager, because then people might think I condone terrorism. +[951.720 --> 955.600] Let me make something perfectly clear. +[955.600 --> 960.680] Understanding reasons is different than condoning. +[960.680 --> 964.560] I've learned this through thousands of mediations. +[964.560 --> 971.440] Understanding is a power to shape the world far greater than any sword or gun.
+[971.440 --> 975.240] Understanding is exactly how you create the world that you want. +[975.240 --> 980.720] I began this talk asking, is it possible to understand everyone at a deep and meaningful +[980.720 --> 983.720] level, even those that are different from you? +[983.720 --> 995.520] And the answer is yes. When your teenage daughter asks you for that hair straightener, +[995.520 --> 1002.280] just one week after you bought her that hair crimper, and she's standing at the top of +[1002.280 --> 1008.600] the stairs with this crazy crimped hair, +[1008.600 --> 1014.640] screaming, you just don't understand, this is how you understand. +[1014.640 --> 1016.440] What is she needing? +[1016.440 --> 1026.680] She wants to be accepted, liked. The desire to be accepted, to be liked, is in you, is +[1027.680 --> 1035.000] in everyone in this audience, and so you can understand exactly what she feels. +[1035.000 --> 1040.680] And that alone will transform your relationship, and then come the solutions, even if it's +[1040.680 --> 1048.360] only, I see you, my beautiful little girl, I get you. +[1048.360 --> 1052.960] There's a formula for understanding why we do what we do, and once you get it, you get +[1052.960 --> 1055.280] it. +[1055.280 --> 1058.800] Human behavior is complex, but human motivation is simple. +[1058.800 --> 1061.880] We seek the common needs, and nothing else. +[1061.880 --> 1064.480] We seek the common needs, and nothing else. +[1064.480 --> 1069.000] The common needs are human motivation. +[1069.000 --> 1075.400] Learn this language of the unconscious, this language of the heart, and you'll improve +[1075.400 --> 1080.080] every relationship in your life. +[1080.080 --> 1081.080] Thank you. diff --git a/transcript/allocentric_T6INaET_Lnw.txt b/transcript/allocentric_T6INaET_Lnw.txt new file mode 100644 index 0000000000000000000000000000000000000000..8eba78752a57e448815010c6aeb9e383084a67fc --- /dev/null +++ b/transcript/allocentric_T6INaET_Lnw.txt @@ -0,0 +1,6 @@ +[0.000 --> 5.520] EgoScanning is a video fast-forwarding interface to quickly find events of interest from first-person videos. +[5.760 --> 10.880] The interface features an elastic timeline that emphasizes egocentric cues based on users' inputs. +[11.120 --> 15.520] Playback speeds are adaptively changed in emphasized scenes to show corresponding events. +[15.760 --> 21.280] Computer vision techniques automatically extract egocentric cues such as movements, hands, and people. +[21.840 --> 25.840] Our user study compared EgoScanning with a simple fast-forwarding interface. +[25.840 --> 29.840] As a result, we confirmed 38% faster average scanning speeds. diff --git a/transcript/allocentric_UTiFshG_xuk.txt b/transcript/allocentric_UTiFshG_xuk.txt new file mode 100644 index 0000000000000000000000000000000000000000..5549aab5d6d57a2769145c73e8a37a6de7759766 --- /dev/null +++ b/transcript/allocentric_UTiFshG_xuk.txt @@ -0,0 +1,3 @@ +[0.000 --> 8.000] Here's a challenge. Tell me the opposite of these five words in order. Always, staying, take, me, down. +[9.200 --> 12.000] Always, staying, take, me, down. +[12.800 --> 28.000] Never, going, like, give, you down, up. Never going, give you up. diff --git a/transcript/allocentric_UpupNS6aF7o.txt b/transcript/allocentric_UpupNS6aF7o.txt new file mode 100644 index 0000000000000000000000000000000000000000..d9cef61e436b01fd858dc11923fa3bf63cb6c86f --- /dev/null +++ b/transcript/allocentric_UpupNS6aF7o.txt @@ -0,0 +1,864 @@ +[0.000 --> 3.400] Okay, thanks for joining us.
+[3.400 --> 7.840] This is a Thousand Brains hangout with Jeff Hawkins and Subutai Ahmad. +[7.840 --> 9.040] We're all from Numenta. +[9.040 --> 13.400] I'm Matt Taylor, and we're going to talk about our most recent paper, the framework for +[13.400 --> 16.080] intelligence, and do some Q&A. +[16.080 --> 20.160] So if you're just watching this, there's a discussion on our forum that's linked in +[20.160 --> 24.440] the show description down there, and there's also a link to the paper down there if you +[24.440 --> 25.440] want to read it. +[25.440 --> 29.360] So that should provide all the context that we need for this discussion. +[29.360 --> 33.920] So that being said, we have questions that are already on the forum that we could go +[33.920 --> 37.760] through, or we could just kind of roll through people that have joined right now, since they've +[37.760 --> 38.760] been waiting. +[38.760 --> 41.360] So, unless you have anything you want to start off with? +[41.360 --> 43.360] I don't know. +[43.360 --> 47.360] You didn't know that before. +[47.360 --> 49.040] That was the last question. +[49.040 --> 50.040] It's a Q&A. +[50.040 --> 52.920] So let's go straight to Q&A then. +[52.920 --> 58.400] I think Paul was here first. Paul, do you actually have anything? I'm gonna unmute you. +[58.400 --> 59.400] I don't know if I can. +[59.400 --> 60.400] I'm not sure if I can. +[60.400 --> 61.400] You might have to unmute yourself. +[61.400 --> 62.400] Hi, hi, hi, guys. +[62.400 --> 63.400] Yeah. +[63.400 --> 65.400] So yeah, I didn't have any specific questions. +[65.400 --> 67.400] I mean, I just came to listen to the questions. +[67.400 --> 68.400] All right. +[68.400 --> 69.400] All right. +[69.400 --> 70.400] That was an easy one. +[70.400 --> 71.400] Thanks, Paul. +[82.400 --> 83.400] Thank you.
+[153.400 --> 154.400] Hello. +[154.400 --> 155.400] Hello. +[155.400 --> 156.400] Hello. +[156.400 --> 157.400] Are you guys still there? +[157.400 --> 158.400] Is anybody here? +[158.400 --> 159.400] Yeah. +[159.400 --> 160.400] Thank you. +[160.400 --> 161.400] Thank you. +[161.400 --> 162.400] That was weird. +[162.400 --> 163.400] I don't know what happened. +[163.400 --> 164.400] It completely kicked us out. +[164.400 --> 165.400] I had to. +[165.400 --> 167.400] It signed me out of my Google account. +[167.400 --> 168.400] Okay. +[168.400 --> 169.400] We're back. +[169.400 --> 170.400] Okay. +[170.400 --> 172.400] Let's jump right back in with, uh, +[172.400 --> 174.400] hey, Marty, you joined pretty soon after that. +[174.400 --> 176.400] You have anything you want to ask? +[176.400 --> 177.400] No, not particularly. +[177.400 --> 179.400] I'm not that deep into the series yet. +[179.400 --> 180.400] Okay. +[180.400 --> 182.400] Constantine, you're up. +[182.400 --> 184.400] You want to talk about anything? +[184.400 --> 185.400] Hello. +[185.400 --> 186.400] Uh, yes. +[186.400 --> 187.400] Can you hear me? +[187.400 --> 188.400] Yes. +[188.400 --> 189.400] Yes. +[189.400 --> 190.400] Okay. +[190.400 --> 191.400] Thank you very much. +[191.400 --> 192.400] So, uh, +[192.400 --> 194.400] so I'm now +[194.400 --> 199.400] actually working on my master thesis on an idea with HTM. +[199.400 --> 200.400] And, uh, +[200.400 --> 203.400] basically, the core idea is to try and do anomaly detection, +[203.400 --> 206.400] but in the cross-correlation space of multiple metrics. +[206.400 --> 212.400] And there, I think that it could be a very useful analogy with the object, +[212.400 --> 215.400] with the object recognition work. +[215.400 --> 217.400] And, uh, in that context, +[217.400 --> 222.400] I wonder if you've thought about how this theory can help us to learn new objects. +[222.400 --> 224.400] Because especially in the 2017 paper, +[224.400 --> 230.400] the process of learning a new object was kind of facilitated with extrinsic information. +[230.400 --> 233.400] So, you know, +[233.400 --> 236.400] basically, they tell the model that now we're looking at a new +[236.400 --> 238.400] object, learn this. +[238.400 --> 245.400] But the model on its own could not understand that it was transitioning from an already known object to a new object now. +[245.400 --> 246.400] Yeah. +[246.400 --> 250.400] So, is that a question about how do we handle continuous learning? +[250.400 --> 251.400] Yeah. +[251.400 --> 253.400] So I think in the Columns paper, +[253.400 --> 256.400] we explicitly told the system whenever we're learning a new object.
+[256.400 --> 259.400] We didn't, uh, smoothly transition like we did in the temporal memory.
+[259.400 --> 260.400] Yeah.
+[260.400 --> 261.400] Um, you want to take that?
+[261.400 --> 265.400] I think we've explored a few different, um, ideas there,
+[265.400 --> 267.400] but I don't think we've really settled on anything.
+[267.400 --> 269.400] I think the whole theory has been moving quite fast,
+[269.400 --> 273.400] and, um, so we haven't really focused on the continuous learning aspect so much.
+[273.400 --> 275.400] But there are some general ideas.
+[275.400 --> 278.400] And even on the forum, there are some ideas about, um,
+[278.400 --> 280.400] you know, when you're learning an object,
+[280.400 --> 282.400] um, if you're learning fairly slowly,
+[282.400 --> 284.400] and then you detect, uh,
+[284.400 --> 287.400] that you're getting a lot of unpredictable behavior,
+[287.400 --> 292.400] um, you can use that as a way of, um, kind of triggering
+[292.400 --> 295.400] the system, or, uh, notifying the system
+[295.400 --> 297.400] somehow that, uh, there's a new object.
+[297.400 --> 299.400] The same way that in the temporal memory,
+[299.400 --> 301.400] a lot of bursting kind of triggers, uh,
+[301.400 --> 302.400] learning of new sequences,
+[302.400 --> 304.400] you could potentially do something like that with, uh,
+[304.400 --> 306.400] with the Columns paper as well.
+[306.400 --> 308.400] But I wouldn't say we really explored it or simulated this
+[309.400 --> 310.400] too much.
+[310.400 --> 313.400] But surprise is a good signal always for, for learning.
+[313.400 --> 316.400] Uh, there's another idea we've explored too,
+[316.400 --> 319.400] uh, that we haven't really taken very far.
+[319.400 --> 325.400] This is that, um, any individual network can be doing
+[325.400 --> 329.400] inference, meaning trying to recognize existing, uh,
+[329.400 --> 332.400] objects, and learning, uh, simultaneously,
+[332.400 --> 335.400] sort of on alternate phases of a cycle.
+[335.400 --> 337.400] And there's a lot of evidence for this in some,
+[337.400 --> 340.400] some parts of the brain, where literally on every, uh,
+[340.400 --> 343.400] phase of a cycle,
+[343.400 --> 345.400] the neurons switch between, um,
+[345.400 --> 348.400] assuming that you're learning something new and then assuming
+[348.400 --> 350.400] that you're trying to infer something, uh,
+[350.400 --> 351.400] that was already learned.
+[351.400 --> 354.400] Um, that sounds crazy,
+[354.400 --> 356.400] but there really is actually a lot of evidence for that.
+[356.400 --> 358.400] So, um, we've
+[358.400 --> 360.400] taken this problem,
+[360.400 --> 363.400] the concept that you asked about, and we sort of put it on the
+[363.400 --> 364.400] back burner for now.
+[364.400 --> 367.400] Um, that may not make you happy,
+[367.400 --> 370.400] but, uh, because we feel that there's a solution there,
+[370.400 --> 372.400] we don't know it, it may be Subutai's suggestion,
+[372.400 --> 374.400] but, um,
+[374.400 --> 377.400] but we're trying to get sort of the basic mechanisms
+[377.400 --> 379.400] down first before we decide exactly,
+[379.400 --> 382.400] okay, is this continuous learning happening, exactly
+[382.400 --> 384.400] how is it happening, and under what conditions,
+[384.400 --> 386.400] maybe it's different in different parts of the brain?
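A minimal sketch in Python of the continuous-learning idea discussed above: treat a sustained run of prediction failures (the analogue of heavy minicolumn bursting) as the signal that a new object should be learned. This is not Numenta code; the NoveltyDetector class, the window size, and the threshold are illustrative assumptions.

```python
from collections import deque

class NoveltyDetector:
    """Sketch: sustained surprise as a 'new object' signal (illustrative only)."""

    def __init__(self, window=20, threshold=0.6):
        self.errors = deque(maxlen=window)  # recent anomaly scores in [0, 1]
        self.threshold = threshold

    def update(self, predicted_cells, active_cells):
        # Anomaly score: fraction of active cells that were not predicted,
        # loosely analogous to the bursting ratio in temporal memory.
        if active_cells:
            overlap = len(predicted_cells & active_cells) / len(active_cells)
        else:
            overlap = 1.0  # nothing sensed, nothing surprising
        self.errors.append(1.0 - overlap)

    def new_object_suspected(self):
        # A full window of mostly-failed predictions suggests an unknown object.
        return (len(self.errors) == self.errors.maxlen and
                sum(self.errors) / len(self.errors) > self.threshold)
```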
+[386.400 --> 389.400] Um, I guess that's punting on that question in some sense.
+[389.400 --> 390.400] We have some ideas,
+[390.400 --> 392.400] but we just haven't, uh,
+[392.400 --> 394.400] um, it hasn't been our main focus.
+[394.400 --> 395.400] Yeah.
+[395.400 --> 397.400] But those are two main ideas.
+[397.400 --> 399.400] Yeah, one, one area that, um,
+[399.400 --> 401.400] that I think also applies here is when you're talking about
+[401.400 --> 404.400] multiple modalities, or things in context of other things, right?
+[404.400 --> 408.400] So I can know that it's something new when I'm seeing a lot of
+[408.400 --> 410.400] anomalies across all of the,
+[410.400 --> 412.400] all of the different inputs,
+[413.400 --> 415.400] you know, versus an anomaly just in one area.
+[415.400 --> 417.400] Yeah, I think that was Subutai's comment.
+[417.400 --> 418.400] Yeah, that's a good idea.
+[418.400 --> 419.400] Yeah.
+[419.400 --> 420.400] I think it's a, you know,
+[420.400 --> 422.400] people on the forum often ask for ideas of projects to try.
+[422.400 --> 425.400] This would be a great one to try.
+[425.400 --> 427.400] There's a lot there to be,
+[427.400 --> 430.400] yeah, worked out.
+[430.400 --> 431.400] Yeah.
+[431.400 --> 435.400] I actually think that you allude a little bit to this,
+[435.400 --> 439.400] uh, in the framework paper, the thousand brains paper,
+[439.400 --> 443.400] when you talk about the re-anchoring of grid cells.
+[443.400 --> 445.400] Maybe we could, uh,
+[445.400 --> 448.400] rephrase the relevant question as: what exactly triggers the re-anchoring
+[448.400 --> 449.400] of grid cells?
+[449.400 --> 453.400] What makes an environment actually new?
+[453.400 --> 454.400] Yeah.
+[454.400 --> 457.400] That is, that's another perfect example,
+[457.400 --> 460.400] although the mechanisms there may be different than, for example,
+[460.400 --> 463.400] the mechanisms in sequence memory.
+[463.400 --> 466.400] Um,
+[466.400 --> 468.400] I have some thoughts about that.
+[471.400 --> 474.400] I mean, one of the things that's generally accepted...
+[476.400 --> 477.400] What was that?
+[477.400 --> 478.400] There's a problem.
+[478.400 --> 479.400] Yeah, you're good.
+[479.400 --> 481.400] Uh, one of the things that,
+[481.400 --> 482.400] uh, there's,
+[482.400 --> 484.400] there's literature about this in,
+[484.400 --> 486.400] uh, grid cells in the entorhinal cortex,
+[486.400 --> 488.400] uh, related to,
+[488.400 --> 490.400] um, you know,
+[490.400 --> 494.400] the grid cells are driven by several different factors.
+[494.400 --> 496.400] Uh, one factor is, of course,
+[496.400 --> 498.400] they're updated by, um,
+[498.400 --> 499.400] motor commands.
+[499.400 --> 501.400] So that's how they're known to do it.
+[501.400 --> 503.400] But they're also, um,
+[503.400 --> 504.400] they're anchored.
+[504.400 --> 506.400] It's believed they're anchored by sensory input.
+[506.400 --> 507.400] And so one of the theories,
+[507.400 --> 510.400] and one of the things that we sort of subscribe to, is that place cells,
+[510.400 --> 512.400] uh, once you've learned,
+[512.400 --> 514.400] once you've learned a connection between,
+[514.400 --> 518.400] uh, place cells and, uh, grid cells,
+[518.400 --> 521.400] the place cells are constantly re-anchoring the grid cells.
+[521.400 --> 523.400] It's, it's constantly happening.
+[523.400 --> 526.400] Input is getting to the place cells and it's constantly trying to re-anchor the grid cells.
+[526.400 --> 528.400] Um, but then there's a question of,
+[528.400 --> 529.400] well, what if it can't?
+[529.400 --> 532.400] And then how does it decide to pick a random anchor?
+[532.400 --> 535.400] Um, that, that question is still unknown,
+[535.400 --> 538.400] uh, still, uh, in that literature.
+[538.400 --> 539.400] So it's a good question.
+[539.400 --> 541.400] I don't think we have a good answer for it yet.
+[541.400 --> 542.400] Right.
+[542.400 --> 546.400] I'm going to roll down the line, because I know we'll talk a bit more about that when we go down the forum questions.
+[546.400 --> 550.400] So, Constantine, do you have a follow-up to that?
+[550.400 --> 551.400] Uh, well,
+[551.400 --> 552.400] a slight follow-up in the same, uh,
+[552.400 --> 553.400] area is that, uh,
+[553.400 --> 554.400] in the latest paper,
+[554.400 --> 555.400] you mentioned that during learning,
+[555.400 --> 558.400] the location layer doesn't update in response to sensory input.
+[558.400 --> 560.400] So I was simply wondering there if, uh,
+[560.400 --> 562.400] this separation of learning and inference,
+[562.400 --> 564.400] is that at all neurologically plausible,
+[564.400 --> 566.400] or simply an artifact of the model?
+[566.400 --> 567.400] What was it?
+[567.400 --> 572.400] What was the premise? That the location layer doesn't update with input?
+[572.400 --> 573.400] In the paper,
+[573.400 --> 574.400] you say during learning,
+[574.400 --> 578.400] the location layer doesn't update in response to sensory input,
+[578.400 --> 580.400] whereas during inference,
+[580.400 --> 581.400] it does.
+[581.400 --> 582.400] Yeah.
+[583.400 --> 584.400] Yeah.
+[585.400 --> 589.400] Because it's learning the connections between the sensory layer and the location layer.
+[589.400 --> 590.400] You're basically doing, uh,
+[590.400 --> 591.400] you're relying on the,
+[591.400 --> 592.400] the location layer being,
+[594.400 --> 596.400] um, updated by motor commands.
+[596.400 --> 597.400] And then you're constantly learning,
+[597.400 --> 599.400] well, what's the new sensory input for that location?
+[601.400 --> 602.400] Um,
+[602.400 --> 603.400] is, however, this separation
+[603.400 --> 605.400] between learning and inference at all levels...
+[607.400 --> 608.400] Oh, sorry.
+[608.400 --> 610.400] Sorry to interrupt you.
+[610.400 --> 612.400] The separation between learning and inference,
+[612.400 --> 613.400] is that logical in the brain?
+[613.400 --> 614.400] Is that biologically plausible?
+[614.400 --> 615.400] Uh, totally.
+[615.400 --> 616.400] Yeah.
+[623.400 --> 624.400] Well, as I said,
+[624.400 --> 627.400] that idea that this is occurring on different oscillatory cycles
+[627.400 --> 628.400] comes from empirical observation,
+[632.400 --> 633.400] that there's evidence,
+[633.400 --> 634.400] empirical evidence,
+[634.400 --> 636.400] that that is actually what's going on.
+[636.400 --> 637.400] We didn't make that up.
+[637.400 --> 639.400] I wouldn't have ever thought of that.
+[640.400 --> 642.400] Um, so, uh, there is
+[642.400 --> 645.400] evidence that cells go through these two different phases,
+[645.400 --> 647.400] on different, uh, activation cycles,
+[649.400 --> 651.400] like in the entorhinal
+[651.400 --> 657.760] cortex. Now we don't know if that's happening in the neocortex, but there's strong evidence
+[657.760 --> 663.920] for that. That comes from empirical evidence. That's not something we made up.
+[663.920 --> 671.240] Thanks, Constantine. I know Chris had a question too. Are you ready, Chris?
+[671.240 --> 678.240] I'm really here to listen. I'm just getting started on this whole thing. I tripped across
+[678.240 --> 689.280] a... [inaudible]. So I tripped across
+[689.280 --> 695.760] your November hangout, and that led me to your forums, and that got me... it looks like a good
+[695.760 --> 700.600] place to ask the lot of questions I have. I have a question today. Is that right? Well,
+[700.600 --> 709.840] I'm not sure it's relevant to this discussion. We talked before the hangout about the fovea,
+[709.840 --> 716.040] Matt and I. I think there was an interesting mix with something you were doing.
+[716.040 --> 719.960] And I can ask my question, but it's not actually relevant to these grid cells and I don't
+[719.960 --> 723.560] want to distract from it. Well, [inaudible], we got talking
+[723.560 --> 728.120] a minute earlier. I think everybody who goes on this journey of understanding, I think,
+[728.120 --> 732.280] of the brain, at some point realizes, holy cow, it's all a simulation of reality and
+[732.280 --> 736.040] we live in a simulation. And you can't help but become a philosopher about that. We were talking about
+[736.040 --> 743.000] that earlier. But I mean, the thing was, you know, the eye, the fovea has such a small field
+[743.000 --> 748.920] of view. And as far as the whole thousand brains idea, how does that relate to when you've
+[748.920 --> 752.920] got such a small field of view, and the things around it get only such abstract detail about
+[752.920 --> 758.840] what's going on? How does the thousand brains model, you know, make that work?
+[758.840 --> 764.680] Well, first of all, you know, it's a small field of view, but that of course gets expanded
+[764.680 --> 771.240] to represent a huge part of V1 in the brain. Oh, yeah. So, so you can say it's a small field
+[771.240 --> 775.320] of view. It's like your thumb, right? Yeah, something like that. And it takes up a huge
+[775.320 --> 781.240] amount. But then it expands and occupies the majority of V1, which is just one of the
+[781.240 --> 786.920] largest regions in the cortex of the human.
So it's not like it's a small thing. There's a huge
+[786.920 --> 794.280] amount of processing going on with the fovea. And I don't see, there's no inherent reason to
+[796.040 --> 800.680] change the thousand brains theory or the frameworks theory at all related to that.
+[801.720 --> 806.200] The framework does not rely on that. It doesn't rely on a fovea. There are other animals that don't
+[806.200 --> 811.080] have a fovea. Rats have vision without a fovea. They have opposing eyes, largely opposing eyes.
+[811.560 --> 816.760] The theory doesn't really care about those differences. All it says is you have a
+[816.760 --> 822.600] sensory array, and the sensors are observing different parts of an object. The fovea would
+[822.600 --> 827.480] be doing that. I'm looking at the camera in front of me right now, which is actually occupying
+[827.480 --> 833.800] a small part of my visual field. But I see its details. And so the different parts of my
+[833.800 --> 838.040] fovea would be attending to different parts of that camera. And they would, as a group,
+[838.040 --> 843.800] the different columns would be voting on what that object is. So it just says that I wouldn't be very
+[843.800 --> 850.840] good... if this object, the camera, occupied, you know, a huge part of my visual field,
+[850.840 --> 854.120] that might make it difficult to see what it is. But the fact that it's small and it occupies a
+[854.120 --> 859.400] small part of my visual field allows me to do object recognition at a long distance.
+[860.520 --> 864.600] So I can see things that are actually, you know, just certain details and things that are very far
+[865.560 --> 868.360] away. I don't know who wrote that. Oh, Falco says Braille readers can read, you know,
+[868.360 --> 872.200] a whole book with one finger. Yeah, yeah. That's right. The finger is sort of like the
+[872.200 --> 876.040] equivalent of a fovea. You know, we have a very high acuity on the tip of our finger. You develop
+[876.040 --> 881.640] that. Yeah. Well, actually, you know, physiologically, your body is built with the high acuity on
+[881.640 --> 889.160] the tip of your finger. And Braille readers actually develop, you know, apparently they learn how
+[889.160 --> 892.360] to discern those patterns. If you're not a Braille reader and you try to discern those patterns, you know,
+[892.360 --> 896.520] it's really hard. Yeah, I know. But that's sort of like saying, if I'm not a Russian speaker and
+[896.520 --> 899.880] I hear Russian, I don't understand it. And it would be like, if I never really learned to see
+[899.880 --> 903.640] and then acquired vision, I really can't see either. So you have to train the system,
+[903.640 --> 907.960] the system has to train to recognize these patterns. But my point is that the finger is like a fovea,
+[907.960 --> 911.480] because it's an area of high acuity. You can discern very small differences, and the
+[911.480 --> 915.640] area represented by your finger in the cortex is very large compared to, say, the back of
+[915.640 --> 921.640] your hand or something like that. Since I don't often get the chance to
+[921.640 --> 927.560] ask people as knowledgeable as you: is it fair to say I've never actually seen through my eyes,
+[927.560 --> 934.360] I've only ever seen the updated model in my head? Well, it depends what you mean by see.
+[935.080 --> 939.880] Obviously, the vision comes through your eyes and those patterns are going to the brain.
+[939.880 --> 949.480] I think the general consensus of brain researchers is that your perception of the world is really the
+[949.480 --> 953.400] model that you have of the world. So you've built an internal model of the world; that's what the
+[953.400 --> 960.280] framework is all about. How do you build models of the world? And what you perceive, what you sense,
+[960.280 --> 965.960] is really based on that model. And that doesn't mean it's wrong, it doesn't mean it's fake,
+[965.960 --> 970.600] it just means it's a model of what you've experienced, and under certain conditions we can see
+[970.600 --> 975.480] different things because we have different models of the world. Right. Thank you for that. Yeah.
+[976.520 --> 984.040] Thanks, Chris, for your question. Okay. Hey Falco, you ready? You got something to say? He's been
+[984.040 --> 992.360] on our forum a lot lately. Yeah, I have a thousand questions. Okay, prioritize.
+[992.840 --> 998.440] Yeah, sure. Sure. Okay, there's one I posted on the forum, and maybe it's been answered; on all the
+[998.440 --> 1004.440] posts I'm not always up to date. One of the things I don't really understand is that
+[1005.880 --> 1011.160] when you look at an object, you obviously have to make a model. And when you touch an object,
+[1011.160 --> 1018.440] you have to make a model. You need this model to navigate the world. But the neocortex does a lot
+[1018.440 --> 1026.040] of other things, very abstract things. And somehow I don't understand how this hardware that you
+[1026.040 --> 1034.920] have there to make these models, how it is also useful, for instance, for language. And I suppose
+[1034.920 --> 1041.320] you can put words together and they go next to each other, or they represent something abstract.
+[1041.800 --> 1048.120] But it seems to me like you have a tremendous amount of hardware in every cortical column.
+[1048.920 --> 1057.320] And it's very useful for looking, and perhaps even hearing things; you orientate yourself based on
+[1057.320 --> 1065.160] what you hear. But for a lot of other things, I don't really understand how this is useful, or,
+[1066.120 --> 1071.240] well, I guess you understand what I mean. I don't say it's not right. I don't say it's wrong.
+[1072.200 --> 1076.520] But it doesn't make much sense to me and I don't get it. Yeah.
+[1078.840 --> 1082.680] I can take that. You want to take that? All right. We tried to address this a little bit in the
+[1082.680 --> 1090.920] frameworks paper, but it is confusing. So let's just review a few facts. One of the facts is:
+[1091.000 --> 1096.280] everywhere you look in the neocortex, the architecture is extremely similar. There are differences,
+[1096.600 --> 1103.480] but the similarities are remarkable. And the differences are more tweaks, apparently, than
+[1104.200 --> 1110.520] fundamental differences. And in many parts of the neocortex, if you look at them in great detail,
+[1110.520 --> 1116.120] you cannot discern what they're doing or how they're different than another part. And so one
+[1116.200 --> 1120.440] of the basic themes of neuroscience is that there is this common circuitry that does everything.
+[1121.880 --> 1126.280] That's true of language areas. Even language areas. They look remarkably similar to the touch
+[1126.280 --> 1131.960] and the vision areas and so on. It's incredible.
And so for a long time, it's been
+[1131.960 --> 1138.200] believed that there's some underlying computation that's done everywhere that somehow applies to
+[1138.200 --> 1143.800] all the different things your cortex does. There's little evidence that that's not true.
+[1144.680 --> 1150.280] And so now we've attacked it from two different parts. We've been looking at it from how it is
+[1150.280 --> 1156.120] that we build models of the world through touch and vision and hearing at a low level. And the
+[1156.120 --> 1162.920] basic thing we deduce there is that the cortex does this by building models using spaces.
+[1163.560 --> 1168.360] And they can be three-dimensional spaces or two-dimensional spaces. And we assign features
+[1168.360 --> 1171.160] to locations in those spaces. And we move through those spaces: we move our fingers,
+[1171.160 --> 1178.520] or move our eyes, or move our bodies. That's a very powerful idea:
+[1178.520 --> 1185.320] modeling through movement, and building structures, almost like CAD models of things,
+[1185.320 --> 1192.680] with these locations and features and spaces. That's a very powerful idea. And even beyond what
+[1192.680 --> 1196.440] we've written so far, it looks like you can explain the vast majority of the circuitry in the
+[1196.440 --> 1200.040] neocortex. We didn't really get into that in the frameworks paper, but the next paper is going
+[1200.040 --> 1205.560] to get into that. So now we say to ourselves, okay, well, if that's true, how would it apply
+[1205.560 --> 1209.800] to language? And how would it apply to other things? At the same time, there are people,
+[1209.800 --> 1213.800] and we referenced some of these in the frameworks paper, there are people who've been looking at
+[1214.920 --> 1219.480] fMRI data, which suggests that there are grid cells in the neocortex while people do,
+[1219.480 --> 1225.880] quote, high-level tasks, thinking about things. So not like how do I sense what something
+[1225.960 --> 1230.920] is or see what something is, but when I'm thinking about birds, or doing sort of mental
+[1230.920 --> 1236.200] cognitive tasks, they find evidence that grid cells underlie that. So there's some empirical
+[1236.200 --> 1240.200] evidence saying, yes, some high-level thought processes are also somehow built on grid cells,
+[1240.200 --> 1245.640] and that we mentally map out things in the world in a space. The classic example I think
+[1245.640 --> 1250.440] was the Doeller paper, where they had people thinking about birds and the different attributes
+[1250.440 --> 1253.160] of birds. And when you're thinking about the different attributes of birds, they have evidence
+[1253.160 --> 1257.560] that you're assigning them to locations in a space. You're not aware
+[1257.560 --> 1261.960] you're doing this, but that's how you categorize data about something. Birds are
+[1261.960 --> 1265.960] taller or smaller or have different attributes; you put them on these dimensional axes, and it looks
+[1265.960 --> 1271.800] like grid cells are underlying them. So there's a lot of evidence, which is sort of triangulating on this.
+[1271.800 --> 1274.680] And then you can say, well, how does that really apply to something like language? Well, we don't
+[1274.680 --> 1280.840] really know, but the evidence suggests it does.
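As a toy illustration of the idea that attributes get laid out on dimensional axes, here is a hedged sketch: each bird is assigned a location in a two-dimensional attribute space, and conceptual similarity becomes distance in that space. The attribute values below are invented for illustration, not taken from any paper.

```python
# Toy sketch: concepts as locations in a 2D attribute space, so that
# "thinking about birds" becomes movement through that space.
birds = {
    "stork":   (0.9, 0.9),  # (neck length, leg length), normalized, invented
    "duck":    (0.4, 0.2),
    "sparrow": (0.1, 0.1),
}

def similarity(a, b):
    """Closer in attribute space means more similar as concepts."""
    (x1, y1), (x2, y2) = birds[a], birds[b]
    return 1.0 - ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

print(similarity("duck", "sparrow"))   # nearby points: high similarity
print(similarity("stork", "sparrow"))  # distant points: lower similarity
```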
And one way you might think about it: you
+[1280.840 --> 1284.920] can think about words as objects. You know, there are visual objects, there are auditory objects,
+[1286.360 --> 1291.400] and now you have a series of objects and you're going through them in sequence, and they have a
+[1291.400 --> 1296.520] relationship to one another. Those objects have, literally, a written word has a spatial
+[1297.560 --> 1303.640] structure to it, an auditory word has an auditory structure to it as well. And now you're
+[1303.640 --> 1309.160] sticking together these objects, and then you're putting them on a timeline,
+[1309.160 --> 1313.160] or you're putting them in a reference frame. And so there are some ideas there where
+[1315.640 --> 1321.560] we can start hinting at how language might be done like this. We don't know, but I'm very
+[1321.560 --> 1327.960] confident in saying that everything we've learned so far says that all high-level concepts,
+[1327.960 --> 1336.360] all thought processes, are built on a framework of spaces and reference frames. And,
+[1336.760 --> 1339.640] I would say as in the frameworks paper, those reference frames don't have to correspond
+[1339.640 --> 1345.640] to physical things in the world. You know, just like the birds: we put these
+[1345.640 --> 1349.160] birds on these sort of reference frames that don't really correspond to locations in the world,
+[1349.160 --> 1353.480] but the brain seems to build this reference frame for how to
+[1353.480 --> 1360.440] place knowledge, and we move through it. So we took an attempt at describing this the best we
+[1360.440 --> 1367.640] could in a discussion section in the frameworks paper. And we're not the only people starting to
+[1367.640 --> 1374.680] talk about this. So it is an interesting question. Your questions are correct: no one really knows
+[1374.680 --> 1380.760] exactly how this works yet, but the evidence is very strong that everything we do is built on
+[1380.760 --> 1385.240] reference frames. And if you haven't read it, go back; the best I can explain it is what we
+[1385.320 --> 1389.640] wrote in the frameworks paper about it. We gave these references, we talked about this,
+[1390.840 --> 1395.000] but we also admit we don't really understand it. But it seems like that's going to be part of the
+[1395.000 --> 1398.440] answer. And there doesn't seem to be something else. There are no other magic things going on
+[1398.440 --> 1402.760] elsewhere in the cortex that would say, oh yeah, language works differently. It doesn't seem to be that way.
+[1406.280 --> 1410.760] We spend a lot of time talking about that here too. It's like, how the hell does it work?
+[1410.840 --> 1415.080] But it seems to be doing it. So, David, do you have any questions? I'm going to unmute.
+[1418.680 --> 1423.240] I have to tell people how to find the mute button. Yeah, oh my god.
+[1423.240 --> 1430.600] You may just be out. I'm going to skip you, David; if you figure out how to unmute, let me know.
+[1430.600 --> 1437.800] Ryan, are you there? You have a question. Yeah. Yeah, hey guys, so I'm pretty new to this. So
+[1437.880 --> 1444.920] sorry if this question has been answered elsewhere. But in regards to grid cells and cortical columns,
+[1444.920 --> 1453.960] do we have any idea,
kind of like, the similarity between different columns, of, how would you say,
+[1453.960 --> 1459.960] like, I guess like granularity between, or like similarity between different cortical columns,
+[1459.960 --> 1467.080] and how the grid cells are arranged? Is this question sort of like: what is the
+[1467.080 --> 1471.880] arrangement of grid cells in the cortical column? What is the physical structure? Right, and kind of
+[1471.880 --> 1477.080] like similarity between different cortical columns. I mean, whether they're comparable or not
+[1477.080 --> 1483.800] between the cortical columns? Yeah, exactly. Yeah. Well, everything we do assumes that cortical columns
+[1483.800 --> 1492.040] are very similar. We don't make any assumptions beyond that standard neuroscience dogma. So we
+[1492.360 --> 1498.280] don't have any evidence that columns are different. But there is a very interesting question
+[1498.280 --> 1506.440] as to where are the grid cells, and exactly what is their structure in a cortical column? And I could
+[1506.440 --> 1512.200] talk about this for hours, because I'm spending a lot of time on it.
+[1512.840 --> 1517.000] So I don't know how much we want to go into this, since you say you're relatively new to it.
+[1517.000 --> 1526.760] We probably want to wrap up in 30 minutes. Okay, okay. So let me give you sort of a big picture
+[1526.760 --> 1532.440] of this. Okay, grid cells, of course, were discovered not in the neocortex, but in the
+[1532.440 --> 1537.960] hippocampal complex, in the entorhinal cortex. And grid cells in the entorhinal cortex represent,
+[1537.960 --> 1544.600] you know, they've been studied mostly in rats running around in mazes or rooms. And in that situation,
+[1544.600 --> 1550.280] the grid cells represent a 2D space, a two-dimensional space that the rat is on. You know, rats don't
+[1550.280 --> 1555.880] fly through the space. They kind of stay on the ground and they move around in 2D. And so everything that's
+[1555.880 --> 1562.840] been written about grid cells is about 2D representations of space. And that system evolved
+[1564.360 --> 1571.160] to represent the location of an animal in a 2D environment. Now we have hypothesized that the same
+[1571.160 --> 1576.680] basic mechanism exists in the neocortex. But the neocortex isn't necessarily dealing with 2D
+[1576.680 --> 1584.040] spaces. We move in 3D spaces; objects are three-dimensional. We might even be modeling
+[1584.040 --> 1589.400] higher-dimensional spaces, but at minimum, we know they're modeling three-dimensional spaces. So how does a
+[1589.400 --> 1595.720] two-dimensional grid cell represent 3D spaces? We have a paper that's being written by a couple of
+[1595.720 --> 1602.280] our researchers right now, which is very close to being submitted, about this very topic:
+[1602.280 --> 1607.240] about how you could represent higher-dimensional spaces using two-dimensional grid cell modules.
+[1607.240 --> 1612.280] And this is getting to your question in a moment. So what this tells us is that
+[1612.280 --> 1617.000] to represent a three-dimensional space, you need to at least have multiple 2D grid cell modules that
+[1617.000 --> 1621.560] in some sense slice up the 3D space differently. You can think about a 2D grid cell module as
+[1621.560 --> 1629.320] representing a projection of the 3D space onto a 2D grid cell space.
And so you need more
+[1629.320 --> 1635.800] than one slice through the three-dimensional space to represent it. You can represent it with
+[1635.800 --> 1640.120] multiple 2D modules that are basically intersecting the 3D space at different projections.
+[1642.440 --> 1646.120] So that's one cool thing. We know that it's definitely going to be different in the neocortex
+[1646.760 --> 1655.080] than in the entorhinal cortex. I'm currently working on the idea that it's possible that in the
+[1655.080 --> 1660.200] neocortex, the grid cell modules are one-dimensional. We know they already have to be different.
+[1661.560 --> 1668.040] And there's some evidence to suggest this might be true. And so you can say, what does that mean?
+[1668.040 --> 1672.360] Basically, if I want to represent a 3D space or a 2D space, I have to have a whole bunch of
+[1672.360 --> 1679.320] 1D modules that are basically projections of the 3D space onto a 1D line. And hopefully
+[1679.320 --> 1686.360] you can imagine in your head what that means. Much of the movement through
+[1686.360 --> 1691.080] 3D space would not be reflected on all these 1D modules, because they don't all move, depending
+[1691.080 --> 1695.320] on the projections. If I'm moving perpendicular to a 1D module, it's not going to reflect that
+[1695.320 --> 1700.520] change, but some other 1D modules would. So this is a long way of saying that in a cortical column,
+[1700.520 --> 1705.240] we believe there have to be multiple grid cell modules. So in one square millimeter, for example,
+[1705.240 --> 1710.360] we deduce logically that there has to be more than one grid cell module. There have to be
+[1710.360 --> 1714.520] multiple ones, especially if they're 1D, but even if they're 2D, there have to be multiple ones.
+[1714.840 --> 1721.400] They have to represent different projections of 3D space. And then we know something about how
+[1721.400 --> 1727.000] these physically look in the entorhinal cortex. There's a nice paper that came out recently by David Tank,
+[1727.640 --> 1732.680] at Princeton, I think, where he talks about the structure of what these actually look like in the entorhinal
+[1732.680 --> 1738.120] cortex. I'm working on the idea right now, which I would consider very speculative, but just to throw
+[1738.120 --> 1745.560] it out, that the minicolumns in the neocortex, each minicolumn, could correspond to
+[1747.320 --> 1756.040] a unique grid cell module, and actually a unique orientation module, a head direction cell module.
+[1756.040 --> 1761.080] So in a cortical column of a square millimeter, you have several hundred minicolumns.
+[1761.080 --> 1767.800] Each one could be a unique grid cell module, and together they represent that entire space.
+[1767.800 --> 1774.600] And they might be 1D grid cell modules. There is some evidence for this,
+[1774.600 --> 1780.040] but it's very speculative still; it's elegant in some ways. If it's not in the minicolumns,
+[1780.040 --> 1784.600] it still has to be divided up somehow. A cortical column has to have multiple grid cell modules
+[1784.600 --> 1790.600] that are acting independently, slicing up space in different ways. So that's a very long answer
+[1790.600 --> 1796.360] to your question. But right now, the simplest explanation I can come up with is that each minicolumn
+[1796.360 --> 1801.560] is doing this.
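Jeff's one-dimensional-module idea can be made concrete with a small sketch: each module keeps only a phase, the position along its own projection axis modulo the module's period, and path integration updates every phase from the same 3D movement. Movement perpendicular to a module's axis leaves that module unchanged, exactly as described above. The number of modules, the axes, and the periods are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_MODULES = 3, 12  # 3D space, twelve 1D grid cell modules (illustrative)

# Each module projects 3D movement onto its own random 1D axis...
axes = rng.normal(size=(N_MODULES, DIM))
axes /= np.linalg.norm(axes, axis=1, keepdims=True)
# ...and keeps a phase modulo its own period (the repeating firing pattern).
periods = rng.uniform(0.3, 1.0, size=N_MODULES)
phases = np.zeros(N_MODULES)

def move(displacement):
    """Path integration: update every module's 1D phase from a 3D movement."""
    global phases
    phases = (phases + axes @ displacement) % periods

move(np.array([0.1, 0.0, 0.0]))
print(phases)  # modules whose axis is near-perpendicular to x barely change
```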
I think another part of it, maybe: if you think about the structure of the cortical
+[1801.560 --> 1806.440] column, and where grid cells might be within the layers, there are some anatomical constraints that
+[1806.440 --> 1813.000] have to be met as well. So we know that grid cells update their representation based on motor
+[1813.000 --> 1818.040] commands. So wherever the grid cells are, they should be receiving some sort of a motor copy
+[1818.040 --> 1822.280] or a motor command coming in. And there are only a couple of layers in the neocortex where that
+[1822.280 --> 1828.440] happens. And the other anatomical constraint is that we think there's this sort of back and forth
+[1828.440 --> 1833.320] between the location representation and the sensory, or the place cell analog, representation. So
+[1833.320 --> 1840.440] there has to be sort of strong recurrent connectivity between the sensory layers and the grid cell
+[1840.520 --> 1845.080] layers. So we talked about that in the frameworks paper, or a little bit in the Columns paper, and it
+[1845.080 --> 1851.800] kind of suggests that the grid cell modules could be in the subgranular layers, the lower layers
+[1851.800 --> 1857.480] of the cortex, because they kind of match these anatomical constraints. There we have to say
+[1857.480 --> 1861.000] should or could, but I actually feel really confident about that. I think they're really in
+[1861.000 --> 1867.640] layer six, and we know which cell types they are. But it is obviously theory. So, but we can still
+[1867.640 --> 1873.080] put different levels of confidence on these things. So I'm very confident about those layer six cells;
+[1873.640 --> 1878.120] I could be wrong, but I'm very confident there. But other things, you know, this
+[1878.120 --> 1882.360] thing I just mentioned about minicolumns, well, that's much more speculative and we don't know yet.
+[1883.480 --> 1887.720] All right, let me give David a chance to unmute himself for a few seconds,
+[1887.720 --> 1894.920] in case he has something he wants to say, and if not, then we'll go to Walter. He's been to a
+[1894.920 --> 1901.560] few of our hackathons in the past. I remember you, Walter, or Hacker I think, from the old
+[1901.560 --> 1906.920] hackathon days. Oh, okay. Oh, huh. Looks like David's not figured out how to unmute. So Walter, you want to say
+[1906.920 --> 1917.080] anything? If not, we'll go to chat questions. He said no. Okay. So there were a couple interesting
+[1917.080 --> 1922.200] things here. Oh, someone wants to know what we're working on for research. Well, Jeff sort of
+[1922.200 --> 1926.680] just talked a bit about that. Well, I think we should be a little clearer. If you've
+[1926.680 --> 1930.600] been following Numenta very closely, you might know this. But
+[1931.560 --> 1935.480] Subutai and I, Subutai and I are sort of going through a divorce right now.
+[1937.720 --> 1944.840] Just a joke. You have to make it awkward. Yeah, let me just explain that. We've been focusing
+[1944.840 --> 1950.040] purely on the neuroscience side lately, and I am continuing to focus on the neuroscience side,
+[1950.040 --> 1955.480] so I can talk about what we're doing on the neuroscience side. And with Subutai, this is all,
+[1955.480 --> 1959.560] you know, we're doing this together. There's no acrimony here. I don't know who I'm going to live with. Yeah,
+[1959.560 --> 1965.960] with the plant.
Subutai and Lewis, one of our other researchers, are starting to focus on how to apply
+[1967.000 --> 1972.360] some of what we've learned to machine learning techniques. So going back in that direction.
+[1972.360 --> 1978.520] Subutai, do you want to talk about that more? Yeah, I can talk a little bit more. I
+[1978.520 --> 1984.760] wouldn't really call it a divorce at all. I'm trying to liven it up here. I'm sorry. Yeah, but
+[1985.720 --> 1991.080] of course I'm continuing to be extremely interested in the neuroscience. But you know, Numenta's
+[1991.080 --> 1995.560] always had this kind of two-pronged mission of understanding the neuroscience side of it, and then
+[1995.560 --> 1999.960] trying to see if the principles that we learned from the neuroscience can be applied to practical
+[1999.960 --> 2004.200] problems and to machine intelligence. And we've done a little bit of that in the past, but the
+[2004.200 --> 2009.160] last few years we've been really focused primarily on the neuroscience. And I got pretty excited,
+[2009.160 --> 2015.480] you know, with the frameworks paper. I felt that we had an almost kind of complete structure
+[2015.480 --> 2020.680] for how a cortical column works. And there are a number of principles that are embodied in there,
+[2020.680 --> 2024.520] some of which we talked about. And if you look at the world of deep learning and machine
+[2024.520 --> 2029.160] learning, there are kind of fundamental problems there. And I could almost see that applying these
+[2029.160 --> 2033.560] principles from this framework could actually help solve some of these really big problems in deep
+[2033.560 --> 2040.280] learning. So the kind of research direction that I'm pursuing a little bit now is to take some
+[2040.280 --> 2046.360] of the concepts that we've found from the neuroscience and apply them more directly to machine learning.
+[2046.360 --> 2053.160] And I think in that research, which is still very speculative and exploratory at this point,
+[2053.160 --> 2059.160] I think there are basically two components. If I look at everything we've done, I think there are
+[2059.240 --> 2064.760] like two fundamental pieces. One is kind of a representational component. And a lot of you on the
+[2064.760 --> 2069.560] forum know about how much we rely on sparse distributed representations and the properties of
+[2069.560 --> 2075.320] SDRs. And deep learning systems don't really embody SDRs today. They're primarily dense
+[2075.960 --> 2081.320] representations. So the question is, can we embody SDRs into deep learning systems,
+[2081.320 --> 2086.280] or machine learning systems, and take advantage of some of their properties? And the second part of
+[2086.280 --> 2091.880] it is just looking at the cortical column as a structure. If you look at a deep learning system
+[2091.880 --> 2096.920] or neural network today, it's an extremely simplistic feedforward structure, whereas the cortical column
+[2096.920 --> 2103.640] structure is a lot more complex. So can we take that structure, along with SDRs, and improve
+[2103.640 --> 2108.200] machine learning and deep learning to embody everything that's in this kind of common algorithm,
+[2108.200 --> 2113.400] the common cortical microcircuit? So that's a very quick description of the kind of research
+[2113.400 --> 2118.600] I'm just really starting on. He's getting really interesting results already.
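As a concrete, heavily simplified illustration of the first component Subutai describes, here is a sketch of imposing an SDR-like constraint on a dense layer with a k-winners-take-all step. This is not Numenta's implementation; the layer size and sparsity level are illustrative.

```python
import numpy as np

def k_winners(x, sparsity=0.02):
    """Keep only the top-k activations and zero the rest (SDR-like output)."""
    k = max(1, int(len(x) * sparsity))
    out = np.zeros_like(x)
    top = np.argpartition(x, -k)[-k:]  # indices of the k largest activations
    out[top] = x[top]
    return out

# A dense layer output of 2048 units becomes ~2% active, like an SDR.
dense_output = np.random.randn(2048)
sparse_output = k_winners(dense_output)
print(int((sparse_output != 0).sum()))  # about 40 active units
```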
So,
+[2120.520 --> 2125.720] if you don't mind, I can just say what I'm going to focus on. My work for this year is still on
+[2125.720 --> 2130.440] the biology side. And I'm trying to fill in all these missing pieces of a cortical column,
+[2131.560 --> 2136.280] and specifically the role of orientation, which is like head direction cells, and the equivalent
+[2136.280 --> 2142.760] of the place cells. And so I'm working on the idea that I actually mentioned a year ago,
+[2142.760 --> 2148.120] and talked about at MIT, but I'm back to it with a vengeance now: that in a cortical column,
+[2148.120 --> 2155.640] there are actually two different sensory motor inference mechanisms being done. One is movement
+[2155.640 --> 2160.840] through space, which is what the frameworks paper talks about a lot. And that's the idea of grid cells
+[2160.840 --> 2166.440] and moving through space. And the other is a sensory motor mechanism which has to do with orientation,
+[2166.440 --> 2173.000] or changing orientation to an environment. And that produces the equivalent of place
+[2173.000 --> 2178.200] cells. So I think, to fill out the framework and many of the details, we can understand
+[2178.200 --> 2183.640] a cortical column as doing two types of inference at the same time. One is angular movement,
+[2183.640 --> 2189.240] which is your orientation to the world, and that's figuring out, like place cells, where am I,
+[2189.240 --> 2193.640] based on my sensory input? And then there is the movement through space, which is more
+[2193.640 --> 2199.160] of a linear sensory motor inference. And I believe you can map these two inference mechanisms
+[2199.160 --> 2204.840] precisely onto different cortical layers, and adding orientation really fills out the
+[2204.840 --> 2210.040] complete picture of what a cortical column does. So that's a paper I hope to get
+[2210.040 --> 2214.040] written by the end of the year. There's a related question on chat from Eric Collins: how are
+[2214.040 --> 2221.880] features selected to generate place cell representations? Oh boy. First of all, place cells
+[2222.680 --> 2227.640] are in the hippocampus, right? That's where the term comes from; these are cells in the hippocampus.
+[2227.640 --> 2231.640] We think there are equivalent cells in the neocortex, although we have not really
+[2231.640 --> 2235.160] talked about them as much. We didn't mention that in the frameworks paper.
+[2236.600 --> 2242.360] So how are they selected? Here's one way to think about it.
+[2244.360 --> 2252.360] First of all, what do place cells do? Place cells represent some sensory input that encodes
+[2252.360 --> 2257.880] your location. So it's like when an animal is in a particular location, based on the
+[2257.880 --> 2263.640] sensory inputs around the animal, these place cells represent that. But they represent it
+[2263.640 --> 2268.360] independent of the orientation of the animal. So it's not like I see something in front of me.
+[2268.360 --> 2274.280] It's like there's something relative to the room in front of me. So the place cells don't change
+[2274.280 --> 2278.520] when the animal changes its orientation to the room. It's not pure sensory, because your
+[2278.520 --> 2283.160] sensory input changes when you rotate your position relative to the room, but the place cells are not changing.
There's a, we believe what's going on is there's +[2287.720 --> 2292.360] a sensory motor inference which, which says, given these features that are relative to me, and as I move +[2292.360 --> 2297.960] around, I'm going to form a representation which is oriented to the environment. And it's stable +[2297.960 --> 2303.560] relative to the environment independent of my movement. And what features you use to select that +[2303.560 --> 2309.400] can vary in, and it, there's all kinds of literature about what actually goes on in a rat's brain +[2309.400 --> 2314.840] in this regard. But it could be whiskers, it could be vision, it could be hearing. It doesn't really +[2314.840 --> 2320.520] matter. It's, as long as I sense something that I can then turn it into a representation of +[2320.520 --> 2326.200] the location in the room based on that one thing. So there's, it's, it's not really critical to +[2326.200 --> 2331.560] what senses you sense. It's more critical to how you do the sensory motor inference. And that's +[2331.560 --> 2336.920] the long topic. So I don't think the actual features are really that important. It could work +[2336.920 --> 2343.160] of any kind of sensory modality. Okay. I want to, I promise we would answer the forum questions. +[2343.160 --> 2346.840] So let me go through these and because we only got about 15 more minutes because that might generate +[2346.840 --> 2352.760] more topics and I'll get to rest of the chat stuff. So, so someone was asking about, is there any +[2352.760 --> 2358.040] relationship between grid cells and the orientation stripes or bands that we observed and who +[2358.040 --> 2363.240] won't be cell papers? Yeah. So I remember earlier I was saying that I'm working on the cypods, +[2363.240 --> 2369.880] is the grid cells are, there's a grid cell module per minicolumn. And each minicolumn in the, +[2369.880 --> 2376.360] in the human visual model of V1 has a specific orientation. So the next one, the lines of +[2376.360 --> 2379.480] one orientation, the next minicolumn might be a larger, different orientation. It responds to +[2379.480 --> 2384.680] stimulus. Yes, visual stimulus at the overall orientation. But also very importantly, those +[2384.680 --> 2389.160] cells, many of those cells respond to motion. So they're actually not just orientation, but they're +[2389.160 --> 2395.240] actually, that line is moving this way or this way, that's what they prefer. I won't have time +[2395.240 --> 2403.560] to explain all of this, but that is exactly the signal you would need to, to update and create a +[2403.560 --> 2409.160] one-dimensional grid cell module. That movement command, it would tell you which way the bump +[2409.160 --> 2413.880] should move on a one-dimensional grid cell module. It already, it already represents a one-dimensional +[2413.880 --> 2421.160] slice through a three-dimensional visual space. And so that's the hard concept to get across. +[2421.160 --> 2427.080] I'm still struggling with the words for it, but it is possible that those orient, it's possible +[2427.080 --> 2431.240] that they need interpretation of those orientation columns that human visual gave is completely wrong, +[2431.240 --> 2438.840] or mostly wrong. It's possible that they actually represent in some sense, like they represent +[2438.840 --> 2444.920] essentially an orientation conjunctive type of cell, where they're defining the grid cell modules +[2444.920 --> 2451.240] and they're defining orientation less than visual features. 
They are visual features, but
+[2451.240 --> 2455.960] actually, the movement defines the metrics we need to create grid cell modules and orientation
+[2455.960 --> 2461.560] modules, the head direction cell equivalent. So that's an interesting idea that I don't know
+[2461.560 --> 2467.320] of anyone else ever having thought of before. As I said earlier, it's very speculative, but I'm working
+[2467.320 --> 2475.240] on it. Okay. The next question is about invariance with respect to object representation. Does this
+[2475.240 --> 2479.880] thousand brains model help? How does it help with invariance? Why don't you take that one?
+[2480.680 --> 2484.040] Yeah, I think we were talking about this a little bit earlier. There are many different aspects to
+[2484.040 --> 2490.840] invariance, but I would say this whole idea of having a location signal within a cortical column
+[2490.840 --> 2498.040] came in part from thinking about invariance and the idea of reference
+[2498.040 --> 2503.640] spaces. So if you think about what invariance is: you want to have some sort of a signal that's
+[2503.640 --> 2508.440] stable while you are sensing different aspects of the same thing. That's sort of one way you can think
+[2508.440 --> 2516.040] about invariance. And in order to do that for an object, as I'm sensing an object, I have to
+[2516.040 --> 2521.480] have a representation of the object that's in the reference frame of the object itself. That way
+[2522.200 --> 2528.120] the output of our system can be invariant regardless of the pose of this object
+[2528.680 --> 2536.440] relative to me. So grid cells and a location signal, by encoding relative positions of features
+[2536.440 --> 2542.200] within the reference frame of the object, allow you to have a very invariant kind of predictive model
+[2542.200 --> 2548.200] of the object itself. So that's at least sort of one relationship between those
+[2548.200 --> 2552.440] concepts, and that's probably the biggest one, right? I mean, essentially you're going from
+[2552.440 --> 2557.080] some representation on a two-dimensional sensory array, whether it's your fingers or eyes or
+[2557.080 --> 2561.720] something like that, and you're turning it into an internal representation which is completely
+[2561.720 --> 2567.640] independent of your pose relative to that object. It's a 3D model of the object. It doesn't matter;
+[2567.640 --> 2572.280] you know, once you have a 3D model of the object, that 3D model is invariant to any position
+[2572.280 --> 2579.320] and orientation and anything else. I think one other aspect is: for the thousand brains theory to work,
+[2579.320 --> 2583.560] every cortical column has to have some sort of invariant representations of objects. In order for
+[2584.040 --> 2588.840] the voting to occur, you have to have stable representations of the same object in
+[2588.840 --> 2594.120] multiple cortical columns, even though they're actually sensing completely different inputs. It's
+[2594.120 --> 2598.840] that stability that allows the kind of voting mechanism to work. I think one of the interesting
+[2598.840 --> 2603.240] things is that the internal representations of each of these cortical columns are entirely different,
+[2603.240 --> 2608.040] because they've got different sensory input coming in. Yeah, they can't have the same
+[2608.040 --> 2613.320] representations at a low level.
But if they're stable, then you can form associations
+[2613.320 --> 2620.120] between them. That's what lets the voting work. Yeah, so a tactile coffee cup model and a visual
+[2620.120 --> 2625.480] coffee cup model, the actual details are completely different. But if they both agree that it's a coffee
+[2625.480 --> 2631.800] cup, then they can vote, independent of how that was derived. That's the basic idea of the voting,
+[2631.800 --> 2634.600] even if they're all modeling this object in different ways.
+[2634.600 --> 2639.320] Different modalities. Yeah, different reference frames. Different modalities.
+[2640.840 --> 2646.760] And as Subutai pointed out, the key thing about invariance is you have a stable representation
+[2646.760 --> 2651.240] while inputs are changing. That is, in some sense, the definition of invariance.
+[2652.200 --> 2656.920] And we propose there's this very specific mechanism for that, which I think is pretty good. This
+[2656.920 --> 2663.720] is the temporal pooler, which is in the Columns paper and the Columns Plus paper. And I'm very
+[2663.720 --> 2669.640] confident that that's basically happening. Okay, we have about 10 more minutes. Another question about
+[2669.640 --> 2674.040] lateral connections: if there are long-range lateral connections, is there any problem with temporal
+[2674.040 --> 2680.280] variation for syncing up the activity across columns? Yeah, we were talking about this question
+[2680.280 --> 2684.920] before the hangout started. And it's a great question. And I don't think we've actually talked
+[2684.920 --> 2691.560] about it or really thought about it. Yeah, the idea is that,
+[2692.440 --> 2697.720] ideally, for the axons onto a particular neuron, you want the action
+[2697.720 --> 2701.960] potentials arriving sort of at the same time. No latency. Well, no, latency is not the problem.
+[2701.960 --> 2705.640] It's not the latency that's the problem. You want them to arrive at the same time. They can
+[2705.640 --> 2713.400] all be delayed. Oh, right. If you go back to the neuron paper, the sequence memory paper,
+[2713.400 --> 2719.880] we laid out a very detailed model of the neuron and how the dendrites work and what they're
+[2719.880 --> 2724.360] computing. And part of that was that they have to detect these coincidence patterns on a
+[2724.360 --> 2730.040] dendritic branch. And the biology tells us that those synapses have to be active within a few
+[2730.040 --> 2734.920] milliseconds of each other. So you'd like to have some sort of synchronizing
+[2734.920 --> 2739.720] abilities to get the action potentials to arrive at the same time, as opposed to scattered over time.
+[2743.720 --> 2748.840] That seems to be a biological requirement. And this question is asking, how is
+[2749.320 --> 2755.240] that guaranteed? And we don't know. There are lots of ways it
+[2755.240 --> 2763.080] could occur. The basic belief is that there are cycles in the brain, and the
+[2763.080 --> 2767.400] cells will tend to fire on the peaks of these cycles and not on the troughs. And
+[2767.400 --> 2770.520] therefore, if they're going to spike, they tend to do it at the same
+[2770.520 --> 2776.360] time. But this question is saying: if they're traveling along distances and there are delays,
+[2776.360 --> 2779.080] the delays would be different.
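A hedged sketch of the timing constraint Jeff describes: a dendritic segment only generates a dendritic spike if enough of its synapses receive input within a few milliseconds of each other, so scattered arrival times fail where synchronized ones succeed. The threshold and window values are illustrative numbers, not measurements.

```python
def segment_fires(spike_times_ms, threshold=8, window_ms=4.0):
    """Coincidence detection on one dendritic segment: enough synaptic
    inputs must arrive within a short window to trigger a dendritic spike."""
    times = sorted(spike_times_ms)
    lo = 0
    for hi in range(len(times)):
        while times[hi] - times[lo] > window_ms:
            lo += 1                      # shrink window from the left
        if hi - lo + 1 >= threshold:
            return True                  # enough coincident inputs
    return False

print(segment_fires([10.0 + 0.3 * i for i in range(10)]))  # synchronized: True
print(segment_fires([10.0 + 3.0 * i for i in range(10)]))  # scattered: False
```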
So now they're not going to arrive at the same time.
+[2779.720 --> 2783.720] It's a good question. I don't have any answers to it. There you go. But there will be an answer to it.
+[2784.760 --> 2790.520] But yeah, it's not hard to imagine what the answer could be; you know, there's so much that's
+[2790.520 --> 2795.080] not known about some of this stuff. Maybe the dendrites aren't as critical as people think they are.
+[2796.200 --> 2802.280] Maybe there are local dynamics which make these things happen. Many synapses have a
+[2802.760 --> 2808.120] metabotropic response, meaning that they lead to a long-term depolarization that would
+[2808.120 --> 2813.560] bridge these time gaps. So there are lots of possibilities, but it's not an area that we've focused on.
+[2813.560 --> 2819.400] Okay, the last forum question is about displacement cells in L5. And they're asking, are these
+[2819.400 --> 2825.320] like multiplexed representations for movement vectors and object compositions? Asking for more detail
+[2826.040 --> 2831.640] about that displacement cell layer. Yeah, okay.
+[2833.720 --> 2838.280] As I say in the video that I made, there are two types of displacement, sort of:
+[2838.280 --> 2841.960] when you're moving within an object reference frame, that's one displacement. We make this
+[2841.960 --> 2847.560] really clear in the paper too. I don't know if you want to take this one, or... I've thought about it;
+[2847.960 --> 2854.680] I'll go for it. Okay, so some of this is speculative, a few parts more than others.
+[2855.800 --> 2861.160] The idea for displacement cells originated with Marcus, and maybe Scott, I'm not sure
+[2863.000 --> 2866.760] of the origin of the idea there, but we were trying to come up with a mechanism for object composition,
+[2867.320 --> 2872.360] how to align objects. And the mechanism that was outlined in the
+[2873.160 --> 2881.240] frameworks paper addresses some of that. But we also realized that that mechanism would
+[2882.600 --> 2889.000] allow the system to figure out the distance, or to navigate, from a point to another point.
+[2889.720 --> 2895.720] And in fact, some of the research which Marcus and Scott used to come up with the displacement cells
+[2895.720 --> 2900.520] literally, and we referenced this in the paper, came about from people trying to
+[2900.520 --> 2904.280] figure out how we navigate, how you know how to get from point A to point B in the same space.
+[2905.000 --> 2908.920] Now we have this mechanism which we were trying to use for object compositionality,
+[2908.920 --> 2915.400] but which clearly could also do navigation within the same space. So now we have these two dual ideas.
+[2915.400 --> 2920.680] And this is very clearly written in the frameworks paper: this concept
+[2920.680 --> 2924.920] of displacement cells could do both of these things. It could say, hey, here's how I get from point A to
+[2924.920 --> 2930.440] point B in one object, in a space. And here's how I relate two different points in two different
+[2931.160 --> 2938.120] reference frames. Now, as we go forward in time, it's clear that one of those still works really
+[2938.120 --> 2943.320] well. That's the how to get from point A to point B, how to generate behavior. The compositionality
+[2943.320 --> 2948.200] one is starting to have some problems. We're struggling with trying to get the details working.
+[2948.200 --> 2952.840] So there are issues of orientation and scale that we haven't quite figured out how to get +[2952.840 --> 2959.320] working with displacements for the object compositionality problem. So I'm now far more comfortable +[2959.320 --> 2966.840] that the displacement cells exist and they're doing movement. I'm confused now exactly how they're +[2966.840 --> 2971.400] doing object compositionality. And maybe we might move to slightly different mechanisms for that. +[2971.400 --> 2976.680] Maybe we'll separate them out. They're two different things. So we wrote that the displacement +[2976.680 --> 2982.520] cells could do both; that may still be true, maybe not. But I do know that they could do movement. +[2983.160 --> 2987.720] So this is an area where, it's very difficult to think about, but we're trying to +[2987.720 --> 2993.080] really get at the core of how we do object compositionality exactly, how to deal with these problems of +[2993.080 --> 2997.240] orientation. Meaning, imagine we used the coffee cup example and we said, oh, there's a logo +[2997.240 --> 3000.600] on the coffee cup. Well, we didn't really address what happens if the logo is rotated, +[3000.600 --> 3004.120] changes its orientation relative to the coffee cup. We didn't address that. We didn't address how the +[3004.120 --> 3008.600] logo wraps around in three dimensions on the coffee cup. We didn't really address the issue of +[3008.600 --> 3012.360] how the scale of the logo can change on the coffee cup. So there are a lot of things where +[3012.360 --> 3016.280] the displacement cells really didn't address those issues. We pointed those out in the paper. +[3016.280 --> 3020.680] We made it clear, like, hey, we don't understand this stuff. But as we get into it, it's getting +[3020.680 --> 3024.280] more complicated. So I'm sticking with the idea that the displacement cells exist. They're +[3024.280 --> 3031.080] definitely doing the motor behavior. But the compositionality part is in flux right now. +[3032.760 --> 3036.440] Okay. Going through some of these questions, I'll skip some of them. +[3037.400 --> 3041.240] How do you envision the transformation of reference frames to allow the invariance of objects? +[3041.240 --> 3045.560] I think we're talking about the displacement cells representing that transformation. +[3046.840 --> 3051.240] Displacement cells, you know, you think of them as a movement between two points, not the two +[3051.240 --> 3057.640] points, but the movement between the two points, right? Yeah. And then, so Mark Brown asks, +[3058.920 --> 3065.160] how does the local grid in a mini column square with the known repeating grid patterns across +[3065.160 --> 3073.800] the entorhinal cortex? Yeah. So the idea here is a grid cell responds at multiple locations, +[3074.840 --> 3082.120] right? And those are spaced out. In the entorhinal cortex, the cells are on a 2D sheet, and the firing fields are at +[3082.120 --> 3089.560] these, you know, sort of 60 degree hexagonal patterns. But if you had a linear, +[3090.520 --> 3096.280] a one dimensional grid cell module, that means as you go in one dimension, you have a series of +[3096.280 --> 3101.320] cells that become active, and they become active at various repeating points along the line. It's the +[3101.320 --> 3108.600] same basic idea. You're just repeating along a line versus repeating on a hexagonal grid on +[3109.160 --> 3115.720] a 2D sheet. So it's the same basic idea. And I don't know if I answered the question.
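As a toy illustration of the 1D-module idea just described (all numbers are hypothetical, and a real module would have smooth tuning rather than this winner-take-all caricature):

import numpy as np

# A toy 1D grid cell module: n_cells cells tile one period of the module.
# As you move along a line, each cell fires at regularly repeating positions.
n_cells, period = 5, 20.0                # hypothetical numbers

def active_cell(x):
    """Index of the cell active at position x along the line."""
    phase = (x % period) / period        # position within the period, [0, 1)
    return int(phase * n_cells)

positions = np.arange(0.0, 60.0, 2.0)
print([active_cell(x) for x in positions])
# Each cell's activity repeats every 20 cm along the line, the 1D analogue
# of a grid cell's hexagonally repeating fields on a 2D sheet.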
+[3116.680 --> 3120.440] Oh, yeah: how does it square with the +[3121.400 --> 3125.480] repeating grid pattern across the entorhinal cortex? Yeah. It could be exactly the same thing +[3125.480 --> 3130.440] in the neocortex. So you might have a 2D grid cell module. We haven't eliminated that possibility. +[3130.440 --> 3136.360] That's the simplest sort of assumption. In which case you'd have cells that repeat: +[3136.360 --> 3143.560] you know, as I move over objects, they would repeat, in the same sort of way, but now in a 2D +[3143.560 --> 3150.360] projection of a three dimensional space, which is a little bit odd to think about. But imagine if +[3150.360 --> 3156.920] I could just move through some space continuously relative to some object; the cell would repeat. +[3156.920 --> 3162.840] And if I move through a 2D projection of that space, well, then the cell would repeat over that 2D projection +[3162.840 --> 3168.040] in a hexagonal way. I think one thing that came out of the work that Marco and Marcus are +[3168.040 --> 3173.080] working on is that the dimensionality of the grid cell modules is kind of independent of the +[3173.080 --> 3178.120] dimensionality of the location space itself. You can take any dimensional location space and +[3178.120 --> 3183.080] represent it with almost any dimensional grid cell modules. As long as you have enough of them, +[3183.080 --> 3187.480] any of these random projections will do it. So you can kind of divorce the two of them to some +[3187.480 --> 3192.120] extent. There are capacity issues and stuff like that. But generally speaking, any +[3192.120 --> 3197.720] n dimensional space can be represented by a set of 1D modules or 2D modules or 3D modules or whatever. +[3197.720 --> 3201.480] The same thing happens with orientation, by the way. We think there's an orientation of your +[3201.480 --> 3206.280] finger to the cup just like the rat has an orientation in the room. You can think of the rat in the +[3206.280 --> 3210.600] room: the orientation is a 1D variable. The head direction cells, you know, +[3210.600 --> 3218.120] represent angular position, and it's 1D. And if you go all the +[3218.120 --> 3221.960] way around, then you're back to the same cells again. So it's a repeating pattern, but it's a closed +[3222.040 --> 3227.240] space because you're doing angular movement. But how would I represent the orientation of my +[3227.240 --> 3231.880] finger to this cup? That's not a one-dimensional orientation. There are all kinds of movements +[3231.880 --> 3236.760] I can do where I'm at the same location on the cup, but at different orientations. And so even +[3236.760 --> 3242.120] there, if I represented orientation with 1D orientation modules, I would need multiple +[3242.120 --> 3247.160] of them to represent the orientation of my finger to this cup. So it's the same basic problem. +[3247.720 --> 3255.480] And so, in effect, you can think of it as having multiple slices of orientation space or +[3255.480 --> 3262.440] multiple slices of location space in each cortical column. So Mark's really interested in long distance +[3262.440 --> 3267.480] coordination, especially between cortical columns, and representations at a level above that. So +[3267.480 --> 3273.560] he's continued to ask, what is the long distance coordination mechanism? Or are these cortical +[3273.560 --> 3279.480] columns local? We already addressed one aspect of it, right?
With the voting thing? I'm not sure +[3281.000 --> 3285.240] Mark understands that. We can go over that again. We've also talked about this as a long distance +[3285.240 --> 3291.080] coordination. That is a long distance coordination. Yes. It's voting on what? There are only two cellular +[3291.080 --> 3295.720] layers in the cortex which send long distance connections to other parts of the cortex. +[3296.600 --> 3301.560] There are cells in basically layer 2/3, and there are certain cells in layer 5, +[3302.280 --> 3308.600] and it's a subset of layer 5. And those are the only two cell types that +[3308.600 --> 3313.640] project long distances. And the current theory, which goes a little bit beyond +[3313.640 --> 3318.360] what was in the frameworks paper, is that one +[3318.360 --> 3322.920] of those layers represents the object. We actually talked about it initially in the columns paper; we modeled it +[3322.920 --> 3328.200] in the columns paper, last year's paper. We modeled one of those layers as representing the object. And so as +[3328.200 --> 3332.040] we talked about earlier, everybody can be looking at different parts of the world. But if they +[3332.040 --> 3336.520] are modeling the same object, then all you have to do is have an associative memory that links +[3336.520 --> 3341.480] a pattern in this column to the pattern in that column. And they vote. And they learn: when we're both +[3341.480 --> 3344.840] looking at the same thing, we can make those connections. And so they vote to decide what they're seeing. +[3344.840 --> 3352.120] He's asking, what's the neural substrate? The neural substrate is long range axons in layer 2/3 +[3352.120 --> 3357.800] to other cells in layer 2/3, anywhere in the cortex that might be modeling the same object. +[3358.200 --> 3362.520] And all you have to do is take a population of cells and another population, two sparse +[3362.520 --> 3368.360] populations, and you say, okay, we're both learning the coffee cup now. Let's form these long range +[3368.360 --> 3373.240] connections, basically just to associate this pattern with that pattern. And they can do that for +[3373.240 --> 3377.480] hundreds of different patterns. And now when you see this pattern, it's going to invoke that pattern +[3377.480 --> 3381.640] over here. So when I touch the coffee cup, it's going to bias the visual +[3382.440 --> 3387.480] cortical columns to say, you're probably going to be seeing a coffee cup. That kind of idea. +[3387.480 --> 3391.560] Yeah, maybe, I know we only have a minute or two, but I see Seth asked a question about Hinton's +[3391.560 --> 3396.360] capsules. So let me just address that. That's good. So yeah, there are +[3396.360 --> 3404.200] links between the frameworks and thousand brains ideas and capsules. I actually wrote a whole blog post +[3404.200 --> 3409.240] about it about a year and a half ago. So you can search for that on our website. But I think +[3409.240 --> 3416.280] Hinton's capsules include the idea of representing objects based on their relative +[3416.280 --> 3421.000] locations and doing kind of a voting mechanism to come up with a consistent interpretation of +[3421.000 --> 3425.400] everything. So to that extent, there are analogies.
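Returning for a moment to the cross-column association just described, here is a tiny Python caricature under assumptions of my own (random sparse binary patterns, a plain Hebbian outer-product memory, invented sizes): once two columns' object codes have been linked, activating one biases the other toward its associated pattern.

import numpy as np

rng = np.random.default_rng(0)
N = 200                          # cells per column (hypothetical size)

def sparse_pattern(k=10):
    """A random sparse pattern, standing in for a layer 2/3 object code."""
    p = np.zeros(N)
    p[rng.choice(N, k, replace=False)] = 1.0
    return p

touch_cup, see_cup = sparse_pattern(), sparse_pattern()

# Hebbian association: strengthen long-range "synapses" between the two
# patterns while both columns are experiencing the same object.
W = np.outer(see_cup, touch_cup)

# Later, activity in the tactile column biases the visual column toward
# the associated pattern: W @ touch_cup reproduces the visual cup code.
recalled = (W @ touch_cup) > 0
print(np.array_equal(recalled, see_cup.astype(bool)))   # True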
I think the frameworks idea and the +[3425.400 --> 3429.560] cortical columns idea go quite a bit beyond that, because we're dealing with sensorimotor +[3430.920 --> 3435.960] information, reference frames, and a whole bunch of other things in there as well. And of +[3435.960 --> 3439.880] course, we're trying to model the actual biology and the neuroscience. But there are some really +[3439.880 --> 3444.920] interesting relationships with Hinton's work, and you can look up my blog post if you want to know more. +[3446.680 --> 3454.200] Okay, so I think we need to wrap it up because we have a hard stop. A link to the blog post? +[3455.640 --> 3459.720] Yeah, I'll put it on the forum, maybe, to make it easy to find. It's on numenta.com slash blog. +[3461.960 --> 3465.080] All right, that's it. Closing thoughts at all? Thanks to everybody. +[3465.080 --> 3469.400] I have a couple of closing thoughts, as always. I want to thank Matt for organizing and running the +[3469.400 --> 3474.200] community. And I really want to say I appreciate everyone out there who's actually following this +[3474.200 --> 3480.280] work and trying to understand it and contributing to it. I think the quality of the questions was +[3480.280 --> 3484.840] great. The fact that there are so many questions that are just at the edge of what we're researching +[3484.840 --> 3488.920] right now means, I think, that people are really understanding what we're doing and following it. +[3488.920 --> 3495.320] Yeah, we appreciate that. These questions push our own knowledge and make us think about, like, +[3495.320 --> 3499.880] hey, what do we understand? And sometimes we get good suggestions from the community. So +[3500.840 --> 3505.560] anyway, I just want to make sure everyone knows we appreciate that. Thanks, community. +[3507.480 --> 3510.920] All right, take care everybody. We'll see you on the forums. Join the HTM Forum. +[3512.280 --> 3514.280] Bye. diff --git a/transcript/allocentric_VGSDUFAtf1E.txt b/transcript/allocentric_VGSDUFAtf1E.txt new file mode 100644 index 0000000000000000000000000000000000000000..ebe63c0e1316a36e26e0bf4f9a8df8705794324e --- /dev/null +++ b/transcript/allocentric_VGSDUFAtf1E.txt @@ -0,0 +1,216 @@ +[0.000 --> 8.160] Hello everybody. Thank you very much for inviting me to talk here today. So I'm going to switch +[8.160 --> 15.920] here a little bit from brain activity to just pure behavior. And I'm going to tackle a relatively +[15.920 --> 21.760] simple question, but it'll get a little more complicated as we go. So when we make saccades, +[21.760 --> 26.800] looking around in the world, we can actually move our eyes in all directions. We move our eyes +[26.800 --> 33.440] horizontally, vertically. We can make oblique saccades. But it turns out that, at least for humans, +[34.400 --> 40.880] all these directions of saccades are not equally likely. We make a lot more saccades, as shown here +[40.880 --> 47.760] in this cartoon depiction by arrows: we make more horizontal saccades. And then we make quite a +[47.760 --> 53.600] few vertical saccades, but we make very few oblique saccades. So what we're going to try to +[53.600 --> 63.040] start to tackle today is kind of why this may be. And here is an actual data recording and summary +[63.040 --> 68.160] of this effect. And we're going to see quite a bit of this plot today. So I'm going to try to +[68.160 --> 74.640] describe it well right now. So here we have a plot that shows the frequencies of saccades +[74.640 --> 81.200] for different directions.
So the blue line shows, for each direction, how many saccades, or what was +[81.200 --> 87.680] the probability of a saccade in a given direction. And for this example here, which was recorded +[87.680 --> 94.560] during free viewing of static images, we see that there are many saccades towards the left and the +[94.560 --> 100.560] right. There are quite a few going up, a little less going down, and even less in oblique, +[101.440 --> 108.080] approximately 45 degree, directions. So this is a very robust effect. It's seen in humans in +[108.080 --> 117.200] multiple tasks. And I wonder why. So if we start to think of possible hypotheses, I think the +[117.200 --> 122.400] first one that usually comes to mind is just thinking about the symmetry of the system. So if we +[122.400 --> 127.920] think of the eye muscles or the brain areas that control saccades, they have this horizontal symmetry +[127.920 --> 133.040] in general. Pulling one eye to one side is controlled by one side of the brain and one +[133.120 --> 139.520] muscle on one side, and pulling the eye towards the other side is symmetrically controlled on the other +[139.520 --> 144.800] side. On the other hand, for vertical eye movements, things get a little more complicated. We have a +[144.800 --> 151.840] pair of muscles that also produce torsion, and it takes more computation and more control to move the +[151.840 --> 158.640] eye perfectly vertically or obliquely. So maybe horizontal saccades are just easier to program for the +[158.640 --> 163.360] brain, or somehow more energy efficient, so that's why we make more of them. +[165.760 --> 172.720] But this very soon gets kind of disproved, or at least challenged, by a fact that has been shown +[172.720 --> 178.560] by previous studies already: if you simply tilt the visual scene that a subject is looking at, +[179.680 --> 185.680] then the direction of saccades rotates with the image. So when the image is upright, most of the +[185.680 --> 190.960] saccades the subjects make are horizontal, but when the image is tilted and the head is still upright, +[192.000 --> 197.760] now oblique saccades become the most likely. So if it was really that much more costly, that shouldn't +[197.760 --> 206.880] happen; you should still make saccades horizontally most of the time. But then we're going to see that +[206.880 --> 213.200] it's not really just the image itself. So for example, if we look at saccades that we make when we +[213.200 --> 219.600] just try to fixate at a spot, as Jake showed, when you fixate at a spot, your eyes +[219.600 --> 225.440] are still moving and they'll produce these small saccades, usually called microsaccades. So if we look at +[225.440 --> 231.440] the direction of those microsaccades made during fixation, they show a very strong directional bias as +[231.440 --> 239.120] well. So most of the microsaccades in humans tend to be horizontal. And this happens when people are +[239.120 --> 245.360] looking at a simple fixation spot in the middle of a blank scene. So that would suggest that it +[245.360 --> 249.600] is not really just about the visual scene, at least not the visual scene that you're looking at, +[250.240 --> 255.440] but there may be something about the regularities in the visual images that we look at, or the behaviors +[255.440 --> 263.760] we usually have, that produce this bias.
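A minimal Python sketch of how a direction-probability plot like the one described above could be computed, using simulated saccade vectors (the data and the bin width are made up for illustration):

import numpy as np

# Simulated saccade endpoints (dx, dy, in degrees); the horizontal spread
# is exaggerated on purpose to mimic the bias under discussion.
rng = np.random.default_rng(1)
dx = rng.normal(0.0, 3.0, 500)
dy = rng.normal(0.0, 1.0, 500)

angles = np.degrees(np.arctan2(dy, dx)) % 360
counts, edges = np.histogram(angles, bins=np.arange(0, 361, 10))
p = counts / counts.sum()                 # P(saccade in each direction bin)

print("most likely bin starts at", edges[np.argmax(p)], "deg")  # near 0 or 180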
Okay, so what we are going to show today is a few experiments +[263.840 --> 270.480] and analyses that try to at least tackle some questions, to start to understand where this bias +[270.480 --> 276.640] comes from. First, we are going to show experiments where we are trying to really understand better +[276.640 --> 281.440] what the reference frame of this bias is. So we are going to try to decouple the head orientation +[281.440 --> 287.360] in space, the orientation of the image in the world, and a little bit the orientation of the eye, +[288.320 --> 296.080] to see where this bias really stays anchored. Then we look at how the bias is different, maybe, for +[296.080 --> 302.560] different saccade sizes. And finally, we are going to try to analyze the images to try to predict +[302.560 --> 307.200] what the features are in an image that may influence this saccade bias. +[310.480 --> 314.880] And this is all work that has been done by a graduate student in the lab, Stephanie Rives, +[315.680 --> 320.560] who is a vision science graduate student at Berkeley. So in the first experiment, +[321.440 --> 327.600] we are using virtual reality. So we have this headset, the FOVE, that is equipped with eye tracking +[327.600 --> 334.560] internally. And we are going to show subjects either these fractal scenes, whose main +[334.560 --> 339.600] characteristic is that they are rotationally symmetric, so they don't have any cue for where upright is, +[340.320 --> 345.120] and then they are also going to look at natural scenes, which of course do tell you where +[345.120 --> 352.640] upright is. And then we are going to keep their head either upright or tilted. Here is the complete +[352.640 --> 359.440] set of conditions in this experiment. As we see, there are three different head tilts. And then +[359.440 --> 364.400] within each head tilt, we have three different image tilts. So we always have either the image +[364.400 --> 369.280] aligned with the head, or tilted 30 degrees more than the head, or 30 degrees less than the head. +[369.840 --> 374.480] And as I said before, the images could be the fractal images or the natural scenes. +[376.240 --> 383.920] So first we are going to focus on those fractal images. Now, this is a simple case where we have just a +[383.920 --> 389.760] head tilt and an image that really doesn't have any tilt information; it is always the same. +[389.920 --> 395.440] So in this scenario, we could think of two possible hypotheses. So +[398.480 --> 404.320] when the head is upright, the two reference frames, the world reference frame and the head +[404.320 --> 409.760] reference frame, are really the same. So under the two hypotheses, we get the same saccade +[409.760 --> 416.080] distribution, pictured here by these ellipses. But then when the head is tilted, we could get +[416.160 --> 420.480] two different results: the saccades could go with the head, or they could stay with the world. +[421.680 --> 427.120] So let's see what happens. So here we have the typical distribution for upright. +[427.840 --> 433.440] And here subjects were just really freely viewing these fractal scenes. But then when the head is tilted, +[434.400 --> 440.000] we see that the saccades rotate with the head. So they stay in a head reference frame. When the +[440.000 --> 446.400] head is tilted to the right, the distribution, shown here, shows a tilt to the right. And when +[446.400 --> 450.240] the head is tilted to the left, the saccades tilt to the left. They go with the head.
But if we start to look into the data, it may seem that it's not exactly with the head. So we're +[457.440 --> 461.760] going to do a further analysis. And this is how the data is going to be shown most often today, +[462.560 --> 468.400] where we actually just look at how much the distribution deviates from a head reference frame. +[468.640 --> 476.160] So now we've rotated the distribution, so the horizontal line represents the head orientation. +[476.160 --> 481.920] And we are comparing, in black, the distribution when the head is upright and, in blue or red, +[481.920 --> 487.280] the distribution when the head is tilted. And when you do these plots, you just start to get a +[487.280 --> 493.680] hint that there is a small rotation. But we can further quantify that. We use a cross correlation +[493.680 --> 499.040] analysis to measure how much you need to rotate one distribution to best match the other +[499.040 --> 507.840] distribution. And we see that there is actually a small shift of these distributions, where when the +[507.840 --> 513.680] head is tilted to the left, the saccade directions rotate a little bit to the right, and when the +[514.400 --> 522.160] head is tilted to the right, the saccades rotate a little bit to the left. And initially we tried to +[522.160 --> 529.920] quantify, with just a summary index for each subject, how much the saccades remain in a head +[529.920 --> 536.800] orientation reference frame or in a world orientation reference frame. And we see, as shown before, it's very close +[536.800 --> 541.600] to the head, but not exactly right there. Now, +[552.880 --> 570.960] okay, where does this come from? So yeah, I was mentioning that there is this small deviation from a pure +[571.680 --> 577.200] head reference frame. And this amount is actually consistent with the amount of eye movement that +[577.360 --> 583.360] is usually produced with a head tilt. So when the head tilts, we get a torsional rotation of the eye, +[583.920 --> 589.760] which is a rotation around the line of sight. And typically for a head tilt like here, of 30 degrees, +[589.760 --> 593.520] the eye is going to rotate around four or five degrees in the opposite direction. +[594.560 --> 599.280] So this starts to suggest that maybe this bias is not in a pure head reference frame, but actually +[599.360 --> 609.360] maybe in an eye reference frame. Okay, so next we are going to focus on another set of conditions, +[610.080 --> 615.680] where the image is always horizontal with respect to the world, but now the head may be tilted. +[616.720 --> 621.760] Remember, this is still done in virtual reality; it's not a completely natural condition, but it's +[621.760 --> 626.640] the set of conditions that best mimics, maybe, the natural behavior where you may be looking at the world +[626.640 --> 632.400] while your head is tilted. In this case, again, we find that the saccade directions +[632.400 --> 637.680] deviate even more. So we don't get just this eye reference frame; the saccade directions +[638.480 --> 644.800] rotate to align themselves closer to where the horizon in the world is. +[645.440 --> 650.880] Remember, these graphs here are showing head orientation, so horizontal means aligned +[650.880 --> 656.960] with the head; so when the head is tilted, this moves in the direction that will align it closer to +[656.960 --> 662.960] the world.
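To make the cross-correlation step concrete, here is a small Python sketch under simplifying assumptions (binned angular histograms, and a unimodal toy distribution to sidestep the 180-degree ambiguity a real bimodal saccade distribution would introduce):

import numpy as np

def best_rotation(p, q, bin_deg=5):
    # Circularly shift q and find the shift that maximizes its correlation
    # with p; returns the signed rotation (degrees) that best aligns q to p.
    scores = [np.dot(p, np.roll(q, s)) for s in range(len(p))]
    shift = int(np.argmax(scores))
    return (shift * bin_deg + 180) % 360 - 180

# Toy unimodal direction distribution peaked at 0 degrees, in 5-degree bins.
angles = np.arange(0, 360, 5)
base = np.exp(3 * np.cos(np.radians(angles)))
rotated = np.roll(base, 3)              # the same shape rotated by +15 degrees
print(best_rotation(base, rotated))     # -15: rotate it back by 15 to re-align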
Okay, and finally we have sort of the opposite condition, where the head stays just +[662.960 --> 668.800] upright and now the image may be tilted. And this is just replicating previous results, but it shows +[669.680 --> 677.600] also that the saccades rotate so they align with the image, but only partially. If we measure the +[678.480 --> 683.120] angle that the saccades rotate, we see that even though the image was tilted 30 degrees, +[684.800 --> 693.120] the saccade distribution rotates about 10 to 15 degrees. And we can represent this again here with +[693.120 --> 699.840] this reference frame index, where one would be perfect alignment with the image and zero perfect +[699.840 --> 704.320] alignment with the head, and for most subjects we end up getting somewhere in between. +[705.280 --> 711.840] So that's what the next experiment is going to start to tackle. Here we're purely going to +[711.840 --> 717.840] use image tilt; the head is always going to be upright, and we're going to have two different +[718.480 --> 723.040] behavioral conditions: one where they are freely viewing the image, and one where they are +[723.040 --> 729.760] fixating at the center of the screen, where there is a small dot, and now the image can be tilted +[729.760 --> 736.320] 30 degrees to the left or 30 degrees to the right. Now, overall this is the result we get, and we've +[736.320 --> 741.600] seen a bit of this already: when the image tilts and subjects are free viewing, the saccade directions +[741.600 --> 750.240] rotate. Now, perhaps surprisingly at first, when subjects are fixating on a dot and the +[750.240 --> 758.800] image in the background is tilted, these microsaccades that are made during fixation don't change at +[758.800 --> 763.920] all. So the distributions of these microsaccades remain exactly the same no matter whether the +[763.920 --> 771.760] image in the background is tilted or upright. Okay, and we can quantify this in the same way as +[771.760 --> 778.560] shown before. So with free viewing we get a big rotation, about 10 degrees, with a reference frame +[778.560 --> 785.040] that we would think is closer to an egocentric one, so a reference frame in the head, but it's still +[785.120 --> 791.440] very affected by the image. On the other hand, for fixation we get no effect, and the saccades are +[791.440 --> 799.040] made in a purely egocentric head reference frame. Now, of course, this could be about the task: in one +[799.040 --> 803.600] case they're fixating, they may be ignoring the background, so it may make sense that they are not +[803.600 --> 807.600] affected by the background because they're just looking at the dot, and in the other case they're +[807.600 --> 813.120] actually free viewing and engaging with the image. So then we did a further analysis where we just +[813.120 --> 820.160] look at the free viewing data, but we group the saccades depending on their size. So we did four +[820.160 --> 828.480] quartiles, where we get the smallest saccades, less than one degree more or less, then we +[828.480 --> 834.720] have other groups from one to two, more or less, two to four, and bigger than four degrees.
What we clearly see +[834.720 --> 841.440] is that there is a pattern that changes: for the big saccades we get a very strong effect of the +[841.440 --> 847.840] tilt of the image, but for the small saccades we get almost no rotation. And again we can quantify this +[848.640 --> 855.840] with the reference frame index, where zero means aligned with the head and one means aligned with the +[855.840 --> 862.560] image; the small saccades remain aligned with the head, but the big saccades align more and more with the +[862.800 --> 872.160] image. Okay, so after doing this, when we look more closely at the data, we find that not all images +[873.200 --> 879.200] are the same. If we were to show what these effects are for different images, we'll find images that +[879.200 --> 883.920] have a very big effect, meaning they pull the saccades to be oriented with that image when it's +[883.920 --> 889.760] tilted, and other images that don't seem to have the same effect. So what we are trying to do is to +[889.760 --> 894.640] find what the features are, the characteristics of those images, that would predict which images +[894.640 --> 900.480] affect the saccades and which ones don't. So the first option that we thought of: we are +[900.480 --> 906.240] studying the saliency of an image. So this is something that has been done a lot in the field +[906.240 --> 912.640] of eye movements, where you extract the most salient features of an image by contrast, orientation, +[913.280 --> 920.880] etc., and we can build a saliency map, as shown here, that essentially predicts the positions +[920.880 --> 926.640] in the image where one is more likely to fixate. So now you could end up with some images +[927.520 --> 934.160] that have some structure in this saliency, and that structure would potentially induce a bias. +[934.160 --> 939.120] So if you have, like here, only two very salient targets, you could predict that the subject is +[939.120 --> 944.000] going to be looking between the targets a lot, so you're going to get more saccades in particular +[944.000 --> 948.960] directions. On the other hand, you may have other images where the saliency map is more uniform, +[950.320 --> 955.120] so it would not predict a lot of bias in the directions purely caused by this +[955.120 --> 962.080] structure in the saliency map. As a second option, we are going to look at the spatial frequency +[962.080 --> 966.880] distribution, similar again to what Jake was doing, but in this case we're going to focus on +[966.880 --> 973.920] the power of the spectrum at different orientations. So here I have two examples: one where there is +[973.920 --> 979.920] a very strong orientation signal, in this case probably more in the low frequencies but also in the high, +[980.960 --> 987.600] where you have a very distinctively biased distribution of power across orientations; +[988.640 --> 992.560] in other images you may have more uniform power in all directions. +[992.720 --> 1001.600] And finally, the third option of feature we are looking at is maybe the hardest to analyze, but +[1001.600 --> 1009.120] it is the more cognitive one: the cues about world gravity, or where the floor is. So this cannot be +[1009.120 --> 1014.960] directly studied with low level features, so we are using a deep learning network trained with +[1014.960 --> 1022.160] actual images of known orientation, and that network can tell us what the orientation of +[1022.160 --> 1028.640] any image is and how certain the network is of that orientation.
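Stepping back to the second feature option for a moment, here is a rough Python sketch of how spectral power can be binned by orientation; the toy image, bin count, and anisotropy measure are all stand-ins I chose, not the actual analysis pipeline:

import numpy as np

def orientation_power(img, n_bins=36):
    # 2D Fourier power, binned by the orientation of each frequency component.
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    power[h // 2, w // 2] = 0.0              # drop the DC term
    y, x = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    theta = np.degrees(np.arctan2(y, x)) % 180
    bins = np.linspace(0, 180, n_bins + 1)
    idx = np.clip(np.digitize(theta.ravel(), bins) - 1, 0, n_bins - 1)
    dist = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    return dist / dist.sum()

# Toy image of horizontal stripes: power piles up in one orientation bin.
img = np.sin(np.linspace(0, 20 * np.pi, 128))[:, None] * np.ones((1, 128))
d = orientation_power(img)
print("anisotropy (max/mean):", d.max() / d.mean())   # >> 1 when strongly oriented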
So then we can end up with +[1028.640 --> 1035.360] images that clearly tell us where upright is, and other images that may not have such clear cues. +[1038.320 --> 1042.480] So with these three options, we are going to do essentially the same analysis with all of them: we get +[1042.480 --> 1049.120] a metric that tells us how strongly the saliency is biased, how strongly the frequencies are +[1049.120 --> 1054.080] biased, or how strongly the deep neural network can tell us where upright is. +[1054.960 --> 1062.160] And we can correlate that with the strength of the effect on rotating the saccades that I showed before. +[1063.680 --> 1068.320] And this is the summary of the result. And what we can see, at least preliminarily, because this is +[1068.320 --> 1074.320] still a small set of images, is that the spatial frequency, either low or high, seems to be +[1074.880 --> 1084.800] the strongest contributor to this effect. So to summarize, today we've shown that saccade generation is +[1084.800 --> 1090.640] not uniform in all directions. We humans make especially horizontal saccades, and this may not be +[1090.640 --> 1097.040] true for other animal species, which is another interesting line of approach to this problem. +[1097.840 --> 1104.640] We have this big bias towards horizontal directions, and less so towards vertical, and this bias is not +[1104.640 --> 1111.920] really fixed either to the head or to the world. It's probably initially biased towards the head, but it +[1111.920 --> 1118.320] can be affected by the image content. And this is especially true for larger saccades. +[1119.120 --> 1125.760] Larger saccades seem to take more information about the tilt of the image and reorient themselves +[1125.760 --> 1130.080] with the image, while the small saccades are more tightly tied to the head. +[1131.920 --> 1136.560] And then the spatial frequency content, and how it is directionally biased, seems to be the best +[1136.560 --> 1145.040] predictor for now of the effect of different images on this saccade tilt. So thank you everybody, +[1145.040 --> 1149.360] and I want to thank also the people in my lab, especially Stephanie Rives and Raul Rodriguez, +[1149.360 --> 1155.280] who contributed to this work, and the funding agencies. And that's my eye doing a little bit of +[1155.280 --> 1165.280] torsion. Okay, thank you. +[1173.280 --> 1180.480] So that was really interesting, basically telling us that saccade directions are modulated by the +[1180.480 --> 1185.680] information in the image; the saccades are seeking the information. Do you guys have any questions? +[1187.200 --> 1188.160] Laura, is that a hand? +[1198.480 --> 1203.840] Yeah, thank you very much, nice talk and very nice project. Do you think that this is related somehow +[1203.920 --> 1212.080] to Listing's plane? So for the initial data that I showed, with the small effect, that could be; +[1212.720 --> 1217.120] but the effect of the image is so much bigger than anything that you could really predict +[1217.120 --> 1222.640] with Listing's plane. Okay, I was thinking rather of the microsaccades, which are less sensitive +[1224.240 --> 1231.760] to the image. Yeah, no, certainly; but still, I don't think that all saccades are generated by the +[1231.760 --> 1238.960] same circuits, as far as we know right now. So I don't think enforcing the restrictions of Listing's +[1238.960 --> 1243.760] plane would necessarily predict why the small ones versus the big ones would be differently affected +[1243.760 --> 1260.480] by the image. Thank you.
Thank you, Jorge, for your lovely talk. Being somebody who studies development, +[1260.480 --> 1271.360] you know where I'm going to go. And so I can imagine the spatial frequency piece being quite reflexive, +[1272.000 --> 1278.080] and then I can imagine the world and the task and the free viewing being more learned. +[1279.440 --> 1281.760] Are you willing to speculate about any of that? +[1282.720 --> 1292.560] Not too much, but I think there are some studies that have shown, so people have looked at the +[1292.560 --> 1299.920] horizontal bias across ages, there may be one study that I know of, and it seems to become more +[1299.920 --> 1306.240] and more biased with age. So it starts with saccades made more in all directions, and with age +[1306.320 --> 1313.120] the distribution becomes tighter and tighter, for some time. But I don't know if the effects of +[1313.120 --> 1320.480] tilting the image would independently change with age or not. No idea. Thank you. +[1324.480 --> 1330.720] Hi, you said that spatial frequency influences this. I want to know, what spatial frequencies do what? +[1331.200 --> 1339.920] Yeah, so right now we essentially grouped, we tried to look at different spatial frequency bands, +[1340.640 --> 1345.760] but we didn't find a different effect. So we found the same result for low or high right now. +[1346.640 --> 1353.520] So if we just look at how each band of frequencies is biased across directions, we see the same effect. +[1361.680 --> 1367.600] Hello. Hello. Very cool talk. So when you say it goes with the head, I mean, there are two ways +[1367.600 --> 1372.240] you can think of that, right: it can go with the eye line, or it can go with the vestibular signal of +[1372.240 --> 1378.240] the head, right. So if you were to have a situation where the tilt does not go with +[1378.240 --> 1383.200] gravity, let's say a person's lying down, right, and they're doing the same head tilt, how would you +[1383.200 --> 1388.560] predict the behavior would change? Yes, good question. But +[1388.880 --> 1396.160] since we see that it still stays with the head, so it's not aligned with gravity, I would expect it to +[1396.160 --> 1405.040] still go with the head mostly, or even with the eye line, yeah, or the roll direction of the head. +[1405.040 --> 1413.440] Yeah, thank you. My question is, how do people scan an entire scene if they're primarily only +[1413.520 --> 1419.360] using horizontal saccades? Are these coming back to the same place and then going up, or are they +[1419.360 --> 1423.840] actually going to a unique target? It's a good point, I forgot to clarify: in those directions we ignore +[1423.840 --> 1429.840] size, so when we say there are more saccades in one direction, there could be more small ones one way while +[1429.840 --> 1439.280] you have more big ones in the other direction. So I think in general you can cover +[1439.280 --> 1444.160] the entire field, but you're going to switch more between directions; and still, those saccades are not +[1444.160 --> 1450.320] perfectly horizontal, they still have an oblique component, so you can zigzag around the image. +[1451.520 --> 1456.480] Yep, it's interesting; it doesn't seem the most efficient thing to do to extract information. +[1457.280 --> 1462.720] Okay, so we have a longer discussion session at the end of every session, so if you have more +[1462.720 --> 1466.640] questions, let's reserve them for then. Thank you, Jorge.
diff --git a/transcript/allocentric_WmtANkx6Bok.txt b/transcript/allocentric_WmtANkx6Bok.txt new file mode 100644 index 0000000000000000000000000000000000000000..b64b0927aa5d50f9b0806ea0fbe80c3d02b1065e --- /dev/null +++ b/transcript/allocentric_WmtANkx6Bok.txt @@ -0,0 +1,24 @@ +[0.000 --> 2.880] Why is Arvind Kejriwal not wearing a suit and a tie? +[2.880 --> 4.560] The way that Ambedkar was wearing. +[4.560 --> 5.880] Is it because he cannot afford it? +[5.880 --> 8.840] Why is Kejriwal-ji carefully wearing a shirt? +[8.840 --> 11.440] Or, if you think about it, it's reflective of the middle class. +[11.440 --> 14.800] Why the hell is there a Reynolds pen tucked into one of his pockets? +[14.800 --> 15.880] And the camera capturing it. +[15.880 --> 18.520] Why is Modiji not wearing a T-shirt or sherwani? +[18.520 --> 19.440] He should have been bleeding blue. +[19.440 --> 22.160] He's rather wearing a blue colored jacket, a blue colored stole. +[22.160 --> 26.240] At a time when the temperature was 32 degrees and humidity was 36 percent in Ahmedabad. +[26.240 --> 28.000] Why is Gandhiji so scantily dressed? +[28.000 --> 31.920] Why is he just wearing sandals to attend the Second Round Table Conference, +[31.920 --> 33.480] at a time when London is freezing? +[33.480 --> 36.960] Each of these individuals, they choose their attire as a form of communication. +[36.960 --> 39.120] The timings may change, the eras may change. +[39.120 --> 41.760] But each of those leaders wants to convey a message. +[41.760 --> 44.480] A message about their values, their beliefs, their affiliations. +[44.480 --> 48.320] Many people have this wrong perception that polity, governance, society, +[48.320 --> 52.160] it is all about understanding the constitution or judiciary, fundamental rights, +[52.160 --> 53.640] DPSPs, parliament. +[53.640 --> 53.880] No. +[53.880 --> 57.080] These are the people who merely stick to books or mechanically revise those subjects. +[57.080 --> 59.080] And they rarely go beyond the content of the book. diff --git a/transcript/allocentric_WwYDMpD7j4Q.txt b/transcript/allocentric_WwYDMpD7j4Q.txt new file mode 100644 index 0000000000000000000000000000000000000000..2dddc0467528aea04f7d87a741eab428a0ffce3e --- /dev/null +++ b/transcript/allocentric_WwYDMpD7j4Q.txt @@ -0,0 +1,178 @@ +[0.000 --> 7.000] All right, excellent. +[7.000 --> 8.000] Excellent. +[8.000 --> 9.000] Okay. Sorry about that. +[9.000 --> 10.000] Thanks for the introduction. +[10.000 --> 12.000] I'm sorry for wasting a minute or two there. +[12.000 --> 14.000] Also thanks to the Neuromatch organizers +[14.000 --> 16.000] for putting on this greatly needed meeting. +[16.000 --> 18.000] So yeah, I'm a computational neuroscientist. +[18.000 --> 21.000] And much of my work is focused on theories of the neural +[21.000 --> 23.000] computations underlying spatial cognition, +[23.000 --> 26.000] including interactions between place cells and grid cells. +[26.000 --> 29.000] But in this talk, I'd like to describe some data and some modeling +[29.000 --> 33.000] that potentially supports a theoretical role for septal neurons +[33.000 --> 37.000] in maintaining the allocentric reference frame +[37.000 --> 41.000] of the spatial cognitive system, at least in rats.
+[41.000 --> 46.000] Particularly, I'm going to focus on the question of path integration +[46.000 --> 57.000] and how you might be able to maintain an allocentric reference frame +[57.000 --> 61.000] for path integration, which basically comes down to the problem +[61.000 --> 66.000] of how to reset the accumulation of encoding errors. +[66.000 --> 72.000] So path integration, as has been described, is the critical component of navigation +[72.000 --> 76.000] that contributes spatial information by integrating self-motion signals. +[76.000 --> 77.000] So resetting is necessary, +[77.000 --> 83.000] because integrating self-motion accumulates errors in position estimates over time. +[83.000 --> 87.000] And that's illustrated by this green trajectory here, +[87.000 --> 90.000] starting at the beginning of this journey. +[90.000 --> 92.000] And so there are a lot of different ways of doing that. +[92.000 --> 97.000] There have been two main camps, theoretically speaking, for how to model +[97.000 --> 102.000] path integration and how it might interact with landmarks and cues in the environment. +[102.000 --> 106.000] One of my much older papers explored the oscillatory interference mechanism, +[106.000 --> 110.000] which is where, if you have theta-rhythmic oscillators that are also modulated +[110.000 --> 115.000] by direction and speed, then you can combine them in different ways to form grid cells. +[115.000 --> 120.000] For instance, those were models from John O'Keefe, Neil Burgess, and others. +[120.000 --> 126.000] But I showed that you could generalize this into +[126.000 --> 131.000] a generic spatial mapping model, where if you have +[131.000 --> 134.000] random preferred directions with these VCOs, +[134.000 --> 139.000] these velocity-controlled oscillators, then you can build place fields +[139.000 --> 143.000] that are stable and based basically on path integration. +[143.000 --> 148.000] And this was work with Jim Knierim's lab, modeling his data on circular tracks. +[148.000 --> 153.000] And so this is just kind of showing that the pattern of synchrony can be mapped into space +[153.000 --> 156.000] through path integration with oscillators. +[156.000 --> 160.000] But one of the things that I was really concerned about in this paper +[160.000 --> 165.000] was how a sensory cue actually feeds back into the phase code in order to stabilize the phase code +[165.000 --> 167.000] from drifting away. +[168.000 --> 172.000] And so in the bottom panels here, you can see that if you have +[172.000 --> 174.000] some amount of phase error, +[174.000 --> 177.000] and you want that to be corrected at some point in time, +[177.000 --> 180.000] which represents the interaction with an external cue, +[180.000 --> 184.000] then I simply posited a very abstract, +[184.000 --> 190.000] an abstract feedback process, represented in this diagram on the bottom right, +[190.000 --> 195.000] which is what I needed to be able to study this in terms of remapping and partial remapping. +[195.000 --> 198.000] But this is definitely a black box. +[198.000 --> 201.000] So this kind of really raised the big question: +[201.000 --> 207.000] is there an actual neurobiological basis for having this kind of spatial feedback, +[207.000 --> 209.000] particularly in the phase domain?
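For readers who want the oscillatory-interference idea in concrete form, here is a minimal Python sketch under my own simplifying assumptions (one VCO, constant velocity, arbitrary parameters); it is a toy, not the model from the papers mentioned:

import numpy as np

# A baseline theta oscillator plus one velocity-controlled oscillator (VCO)
# whose frequency is shifted by the speed along its preferred direction.
dt, f_theta, beta = 0.001, 8.0, 0.05         # s, Hz, Hz per (cm/s)
pref = np.array([1.0, 0.0])                   # the VCO's preferred direction

t = np.arange(0, 5, dt)
vel = np.tile([10.0, 0.0], (len(t), 1))       # constant 10 cm/s run along x

f_vco = f_theta + beta * (vel @ pref)         # frequency shifts with velocity
phi_theta = 2 * np.pi * f_theta * t
phi_vco = 2 * np.pi * np.cumsum(f_vco) * dt

interference = np.cos(phi_theta) + np.cos(phi_vco)
# The two oscillators re-align at regular spatial intervals, so the envelope
# of `interference` is periodic in space: speed / (beta * speed) = 1 / beta.
print("spatial period of the envelope (cm):", 1.0 / beta)   # 20 cm here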
+[209.000 --> 217.000] So the question is, is there a phase code, an oscillatory code, potentially outside of the hippocampus, +[217.000 --> 221.000] that could serve a resetting function for path integration? +[222.000 --> 226.000] And so this data is a result of a collaboration with +[226.000 --> 229.000] Tad Blair's lab at UCLA, +[229.000 --> 234.000] where he performed these very long duration recordings in an 80 centimeter cylindrical arena, +[234.000 --> 239.000] basically standard random foraging tasks. +[239.000 --> 242.000] While recording a reference hippocampal LFP signal, +[242.000 --> 248.000] he recorded essentially from all of the subcortical brain areas highlighted here +[249.000 --> 256.000] that are in one way or another interconnected with the hippocampal and entorhinal formation. +[256.000 --> 259.000] And so he recorded from all of these areas, +[259.000 --> 268.000] and I basically took this into an information-theoretic analysis paradigm to look at the amount of spatial information carried in phase, +[268.000 --> 275.000] in the theta phase, as well as correlating that to velocity and position and other characteristics. +[276.000 --> 280.000] And just to give a hint of where things are going: +[280.000 --> 286.000] I found this kind of spatial phase code, which I think might serve the theoretical role that I described, +[286.000 --> 289.000] in only one place, and that was in lateral septum. +[289.000 --> 294.000] And lateral septum happens to be the primary subcortical output target of the hippocampus. +[294.000 --> 299.000] So it's possibly the entrance point to an interesting feedback loop, +[299.000 --> 304.000] if we consider all the interconnectivity within these networks. +[304.000 --> 308.000] And so I'm going to take you through what some of this lateral septal data looks like. +[308.000 --> 311.000] And just as a brief overview of how this is recorded and +[311.000 --> 314.000] what the theta signal is, though probably this crowd doesn't need it: +[314.000 --> 318.000] you can record the LFP, you can do a band pass filter, find where the peaks are, +[318.000 --> 325.000] and then we can take the spike timing relative to the peak-to-peak interval within each theta cycle. +[325.000 --> 329.000] And we map that to the phase domain that goes from zero to two pi. +[330.000 --> 335.000] And that explains the y axis of a lot of the plots that I will show. +[335.000 --> 340.000] And so here's just a standard spike trajectory plot of one of these cells. +[340.000 --> 344.000] And you can clearly see, this is a very long, I think a two hour, recording. +[344.000 --> 349.000] So you've got the gray trajectory there with the random foraging and the red spikes. +[349.000 --> 351.000] And you can see clear spatial modulation. +[351.000 --> 357.000] So if we look at the top map here, you can see that the firing rate clearly illustrates +[357.000 --> 362.000] that there is a broad, place-like field on the west side of the arena, +[362.000 --> 363.000] which, you know, that's great. +[363.000 --> 368.000] We've got spatial modulation in lateral septum; that's been shown sparsely in the literature +[368.000 --> 369.000] before. +[369.000 --> 376.000] What hasn't really been shown is the relationship to the ongoing theta oscillation in the hippocampal system. +[376.000 --> 383.000] So if you look at the bottom map here, this is a phase map of the average phase at every location.
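A minimal sketch of that peak-to-peak phase assignment, assuming SciPy and a simulated LFP (the sampling rate, theta band, and filter order are arbitrary choices of mine):

import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def spike_theta_phases(lfp, spike_times, fs=1000.0, band=(6.0, 10.0)):
    """Assign each spike a theta phase in [0, 2*pi), peak to peak."""
    # Band-pass the LFP in the theta band, as described above.
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    theta = filtfilt(b, a, lfp)
    # Peaks define phase 0 of each cycle; interpolate linearly in between.
    peaks, _ = find_peaks(theta)
    peak_t = peaks / fs
    cycle = np.searchsorted(peak_t, spike_times) - 1    # cycle of each spike
    ok = (cycle >= 0) & (cycle < len(peak_t) - 1)
    frac = (spike_times[ok] - peak_t[cycle[ok]]) / (
        peak_t[cycle[ok] + 1] - peak_t[cycle[ok]])
    return 2 * np.pi * frac

# Toy data: an 8 Hz "LFP" plus noise, and random spike times.
fs, dur = 1000.0, 10.0
t = np.arange(0, dur, 1 / fs)
lfp = np.sin(2 * np.pi * 8.0 * t) + 0.3 * np.random.randn(len(t))
phases = spike_theta_phases(lfp, np.sort(np.random.uniform(0, dur, 200)))
print(phases.min() >= 0, phases.max() < 2 * np.pi)      # True True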
+[384.000 --> 393.000] And you can see that there is a correspondence between the pattern of modulation in the rate map on the top and the phase map on the bottom. +[393.000 --> 395.000] And so this is, this is interesting. +[395.000 --> 397.000] This is over a very long period of time, a very long recording. +[397.000 --> 401.000] And there appears to be a very strong relationship between rate and phase. +[401.000 --> 405.000] So that's kind of the basis of the idea going forward. +[405.000 --> 409.000] And I quantified that by looking at the phase-rate correlation, +[409.000 --> 414.000] a circular-linear correlation across the pixels in these maps. +[414.000 --> 416.000] And you can see what that looks like. +[416.000 --> 420.000] So you get kind of the expected negative phase-rate relationship. +[420.000 --> 424.000] And I termed these cells, I called them phaser cells. +[424.000 --> 430.000] So these are lateral septal phaser cells, for want of a better word. +[430.000 --> 432.000] We'll see if it catches on. +[432.000 --> 435.000] But basically we're going to analyze that correlation. +[435.000 --> 442.000] And the correlation kind of immediately brings to mind that you can explain this with a fairly simple phase coding mechanism. +[442.000 --> 450.000] So if you posit that there's some cell that receives an inhibitory theta-rhythmic input, such as the magenta sinusoid that you see here, +[450.000 --> 462.000] and then you also posit that maybe it also receives a slowly changing or slowly ramping depolarizing input, like the green triangle wave here, +[462.000 --> 470.000] that's all you need to get the phase coding relationship that we just saw, where as the input increases, +[470.000 --> 476.000] you start to get activity in the cell, and then you get more and more activity within each theta cycle as the input goes up. +[476.000 --> 482.000] But the activity within each theta cycle also initiates at an earlier time. +[482.000 --> 487.000] So you have this joint modulation of phase and rate. +[487.000 --> 494.000] And critically, once the input stops slowly ramping up and slowly ramps back down, symmetrically you see the same exact thing: +[494.000 --> 502.000] the phase will deflect all the way back up to the baseline phase once that input has gone away. +[502.000 --> 505.000] So there's no kind of history dependence or learning going on here. +[505.000 --> 516.000] So this is a symmetric, bidirectional phase coding mechanism, and something like this has been posited for place cells in hippocampus before learning or any kind of +[516.000 --> 519.000] network effects come into play. +[519.000 --> 523.000] But the idea is that this gives you a scalar code. +[523.000 --> 533.000] So the co-modulation of phase and rate means that the phase is basically just a conversion of rate. +[533.000 --> 540.000] It's analogous to taking the spatial information in a rate code and putting it into the phase domain. +[540.000 --> 544.000] And so we can think about, functionally, why you would want to do that. +[544.000 --> 553.000] And it is in particular in high contrast to what you see in typical place cells in hippocampus.
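Here is a toy Python simulation of that two-input mechanism, with invented parameters and a crude threshold unit standing in for the cell; it only reproduces the qualitative signature: as the ramp rises, spikes per cycle increase and the earliest spike of each cycle advances to an earlier phase.

import numpy as np

# Sinusoidal theta "inhibition" plus a slow triangle-wave depolarization.
dt, f = 0.001, 8.0                           # time step (s), theta freq (Hz)
t = np.arange(0, 4, dt)
ramp = 1.0 - np.abs(t - 2.0) / 2.0           # up for 2 s, back down for 2 s
drive = ramp - 0.8 * np.cos(2 * np.pi * f * t)

spikes = drive > 1.0                          # "spike" when drive crosses threshold
phase = (2 * np.pi * f * t) % (2 * np.pi)

# Compare a low-drive window with the ramp's peak: more spikes per window,
# and the earliest spike phase is advanced; the downslope reverses this.
for label, lo, hi in [("low drive", 0.8, 1.2), ("peak drive", 1.8, 2.2)]:
    p = phase[spikes & (t > lo) & (t < hi)]
    print(label, "n spikes:", p.size, "earliest phase:", round(float(p.min()), 2))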
+[553.000 --> 562.000] So on the right I've taken a figure from the Souza and Tort 2017 paper, where they analyzed a large place cell dataset and showed this kind of canonical, +[562.000 --> 567.000] unidirectional, asymmetric relationship between phase and rate. +[567.000 --> 573.000] So as the animal goes through a field, the phase will continually go down and does not return. +[573.000 --> 587.000] However, if you had a phaser cell field, as the rate goes up the phase would advance, and then as the rate goes down when the animal leaves the field, the phase would delay back up to the previous level. +[587.000 --> 594.000] And so, looking for this type of phase code, I set up a number of different criteria, four criteria, three important ones: +[594.000 --> 604.000] looking at the spatial phase information, looking at the total phase shift, how much it changes within that correlation, and then the strength of that phase-rate correlation, essentially. +[604.000 --> 615.000] And then using those criteria I was able to filter Tad's entire data set, this subcortical data set, all those single unit recordings, into +[615.000 --> 624.000] cells that meet these criteria and those that don't. And in particular, it's important to look at whether this is actually a stable code or not. +[625.000 --> 634.000] So just looking within session, I compared up to the first hour of a session, the early part, to up to the last hour, the late part. +[634.000 --> 649.000] And so if you look, on the left, at the spatial correlation and, on the right, at the change in that total phase shift, this kind of illustrates that you do maintain spatial correlations across these long duration recordings. +[649.000 --> 667.000] And the phase shift that constitutes that phaser cell code does not significantly change; most cells remain within about pi over four, or about 45 degrees, across these multi-hour recordings. +[667.000 --> 689.000] And then you also want that to exist across days, and that is basically what we found, with the curves looking very similar between days; with the identified units we can track, looking at what each individual cell looks like across days, the vast majority of them do not have significant changes or flips in the direction of their phase shifts for this phase code. +[690.000 --> 696.000] A couple of them do, but most of them are pretty stable, which is pretty good. That's what you want to see. +[696.000 --> 706.000] But then the last thing, besides stability, is that you want to make sure these spatial responses really are spatial and not just a confound of spatial correlations with other aspects of the trajectory. +[706.000 --> 716.000] And to kind of deconfound that, I trained a GLM, a generalized linear model, with both spatial predictors and trajectory-based predictors. +[716.000 --> 722.000] And so these variables called L and Q are just linear and quadratic terms, so up to second-order spatial variation. +[722.000 --> 727.000] And then the trajectory predictors are wall distance, speed, and direction, basically. +[727.000 --> 740.000] So this top grid is showing you that the responses are utterly dominated by the spatial factors, and almost not at all by the trajectory-based factors and other confounds.
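A hedged Python illustration of that deconfounding logic (my own toy, not the actual analysis: simulated trajectory, invented predictors, and scikit-learn's Poisson GLM standing in for the real model):

import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n = 5000
# Fake random-foraging positions in an 80 cm arena, plus trajectory variables.
x, y = rng.uniform(0, 80, n), rng.uniform(0, 80, n)
speed = rng.gamma(2.0, 5.0, n)
wall_dist = np.minimum.reduce([x, y, 80 - x, 80 - y])

def zscore(M):
    return (M - M.mean(axis=0)) / M.std(axis=0)

# Spatial predictors: linear and quadratic terms (second-order variation).
X_space = zscore(np.column_stack([x, y, x**2, y**2, x * y]))
X_traj = zscore(np.column_stack([speed, wall_dist]))

# Simulated spike counts driven purely by position (an off-center bump).
rate = np.exp(1.0 - ((x - 20.0) ** 2 + (y - 60.0) ** 2) / 800.0)
counts = rng.poisson(rate)

space_fit = PoissonRegressor(alpha=1e-4, max_iter=2000).fit(X_space, counts)
traj_fit = PoissonRegressor(alpha=1e-4, max_iter=2000).fit(X_traj, counts)
# Fraction of deviance explained: the spatial predictors should dominate,
# mirroring the result described above for a genuinely spatial cell.
print(space_fit.score(X_space, counts), traj_fit.score(X_traj, counts))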
+[740.000 --> 749.000] And even if we look at the maximum possible contribution that each of these predictors made, which is the bottom plot here, you still see that dominance of the spatial relationships. +[750.000 --> 759.000] Though that does also reveal that there is this hint of a trade-off between how spatial the cells are and how tuned to speed they are, +[759.000 --> 763.000] if you look at the sorting along the fourth column of S here. +[763.000 --> 768.000] I'm not sure if people can see my maps, actually. I'm waving my mouse over them. +[768.000 --> 772.000] So once that's kind of established, that these are spatial cells and they are pretty stable, +[772.000 --> 777.000] we have those criteria, and we can see where these cells fall. +[777.000 --> 784.000] So on this plot, I'm showing all cells that have significant spatial phase information. +[784.000 --> 791.000] And so that spatial phase information is on the x axis, and then the total phase shift, that phase modulation, is on the y axis. +[791.000 --> 797.000] And you can see we have these cells with a negative phase-rate correlation here in the bottom part of the plot. +[797.000 --> 803.000] And the size of each circle correlates with how strong the correlation is. +[803.000 --> 813.000] So these are nice, strong phase-rate correlations, showing strong negative phase modulation, giving us lots of information about space in phase, which is great. +[813.000 --> 817.000] But then the kind of surprising thing was that, if we look at the top, we also saw cells, +[817.000 --> 825.000] cells that appear with positive phase shifts. This was surprising because of that simple mechanism that I described earlier: +[825.000 --> 831.000] you wouldn't expect a higher firing rate to correspond to later firing based on that model. +[831.000 --> 837.000] So there's probably something else going on here, and this potentially has interesting implications. +[837.000 --> 846.000] So this shows some examples. These are five different examples of those negative phaser cells from different animals. +[846.000 --> 855.000] The top row shows you the rate maps, the middle row shows the phase maps, and the bottom shows the phase-rate correlations, just like the example cells we showed earlier. +[855.000 --> 865.000] And you can see there are wall responses, place-like responses, place-boundary conjunctive responses, and broader but still spatial responses. +[865.000 --> 867.000] So there's a strong diversity. +[867.000 --> 869.000] Joe, you have one minute left. +[870.000 --> 876.000] Darn, okay. And so here's an example of the positive phaser cells, which are not as strongly spatial. +[876.000 --> 883.000] So if you look at the phase-rate trajectories of these cells, you can see, across phase, +[883.000 --> 891.000] they interleave very nicely. You can see it in this spot here. And then if you look at the typical firing phase of these populations, you can see just how interleaved they are. +[891.000 --> 897.000] So at any moment in time, you've got information coming in the phase domain about space from one population or the other. +[897.000 --> 911.000] And so very quickly, the modeling of this: I had a dynamical circuit model with a very simple structure here, using feedforward suppression of the positive cells by the negative cells.
+[911.000 --> 923.000] And you can get this complementary relationship pretty much exactly, which is very nice. And then I used that GLM as a generative model to generate spatial tuning curves for both a thousand negative —
+[924.000 --> 929.000] and so these are just random theta-bursting —
+[929.000 --> 935.000] these are random theta-bursting neurons that are not path-integrating or spatial.
+[935.000 --> 940.000] The question is: can we make them reset to a path integration signal?
+[940.000 --> 946.000] And so with different types of supervised phase codes, which I won't get into in detail because I'm running out of time,
+[947.000 --> 957.000] you can actually see how well these codes were learned by these downstream target cells, based on a very simple competitive learning mechanism.
+[957.000 --> 963.000] And once you have that, with a very small number of these cells, you can actually do population decoding of just the phase.
+[963.000 --> 972.000] And you can see that, well, this top structure didn't work very well, but the bottom one did. And that is going to lead to a very rapid phase-resetting
+[973.000 --> 978.000] mechanism, basically a sub-second reset mechanism for path integration.
+[978.000 --> 984.000] So that's basically the idea: we've got fairly simple network structures and circuits.
+[984.000 --> 996.000] And using this kind of single-location-based synchrony idea, there might be a subcortical pathway for phase feedback in the spatial system that might support path integration and other elements of navigation.
+[997.000 --> 999.000] And I would just end there.
+[999.000 --> 1009.000] Perfect. Okay. So we have to move swiftly on to make sure that Balazs has enough time for his talk. You do have a few questions in the Q&A from Eleanor,
+[1009.000 --> 1014.000] so I would encourage you to go take a look at those. And thank you again for this great talk.
+[1014.000 --> 1017.000] I'm going to move swiftly on.
diff --git a/transcript/allocentric_XhhkhpK-3L4.txt b/transcript/allocentric_XhhkhpK-3L4.txt
new file mode 100644
index 0000000000000000000000000000000000000000..6d1b397f5d2dee0f0d7f4c3aea047b520733f75c
--- /dev/null
+++ b/transcript/allocentric_XhhkhpK-3L4.txt
@@ -0,0 +1,209 @@
+[0.000 --> 9.880] Say you're at a cookout when you notice that there's a giant spider hanging out on your
+[9.880 --> 10.880] friend's shoulder.
+[10.880 --> 15.360] You want to avoid total pandemonium, so you casually wave to get their attention, then
+[15.360 --> 17.400] make a brushing motion on your left shoulder.
+[17.400 --> 21.360] But instead of realizing that they're inches away from certain death, your friend thinks
+[21.360 --> 25.160] that you're busting out a new dance move, and the whole cookout starts breaking it down.
+[25.160 --> 29.040] Waving to say hello, yelping when you get hurt, or brushing at your shoulder to try to
+[29.040 --> 34.000] save your friend from mortal danger are all examples of non-verbal communication.
+[34.000 --> 38.200] Non-verbal communication is the process of sharing thoughts and ideas using behavior other
+[38.200 --> 39.200] than words.
+[39.200 --> 43.800] The gestures, movements, and facial expressions we use to share information with one another
+[43.800 --> 46.160] are all forms of this type of communication.
+[46.160 --> 50.040] It also includes things like smiling to show you're happy, or giving a thumbs up to say
+[50.040 --> 51.040] okay.
+[51.040 --> 54.640] In other words, non-verbal communication is kind of like a game of charades.
+[54.640 --> 57.920] Only you're playing it all the time, even if you don't realize it.
+[57.920 --> 63.720] In fact, around 65% of the meaning we get from communication comes from non-verbal signals.
+[63.720 --> 68.000] So understanding how non-verbal communication works can help you better express yourself
+[68.000 --> 69.640] and avoid being misunderstood.
+[69.640 --> 74.480] I'm Cisandra Ryder, and this is Study Hall: Intro to Human Communication.
+[74.480 --> 82.840] But non-verbal communication isn't a solo act.
+[82.840 --> 84.240] It's more like a duet.
+[84.240 --> 87.560] This is because our non-verbal and verbal communication work together as part of the
+[87.560 --> 88.560] same system.
+[88.560 --> 93.040] Verbal communication uses words to share ideas, and non-verbal communication uses gestures
+[93.040 --> 94.040] and sounds.
+[94.040 --> 98.120] It's like verbal communication is the melody, and non-verbal communication is the harmony.
+[98.120 --> 101.880] And when their powers combine, our messages become even more meaningful.
+[101.880 --> 106.120] For instance, we tend to rely on verbal communication to share complex ideas and express ourselves
+[106.120 --> 107.120] clearly.
+[107.120 --> 110.920] Like when someone asks us for directions, we use spoken or written words to explain which
+[110.920 --> 111.920] route they should take.
+[111.920 --> 116.120] You know, like turn left at the library, or it's the second door on your right.
+[116.120 --> 120.320] Because to help someone get from point A to point B, they need as much specific information
+[120.320 --> 121.320] as possible.
+[121.320 --> 123.720] And that's where verbal communication really shines.
+[123.720 --> 128.560] Non-verbal communication, on the other hand, adds extra context to the words that we use.
+[128.560 --> 132.680] So along with using words to give directions, we can also use our hands to point out which
+[132.680 --> 134.080] way someone should go.
+[134.080 --> 137.840] Non-verbal cues can also clear things up when our words might be misinterpreted.
+[137.840 --> 140.280] Like telling someone, go that way,
+[140.280 --> 143.440] would be confusing unless you also pointed to where you wanted them to go.
+[143.440 --> 147.080] We also use non-verbal communication to convey emotions and connect with others.
+[147.080 --> 150.680] For instance, you'd probably smile while giving directions so the other person knows
+[150.680 --> 152.440] that you're friendly and willing to help.
+[152.440 --> 157.280] And finally, non-verbal communication also helps us make judgments about a person's credibility
+[157.280 --> 158.560] or trustworthiness.
+[158.560 --> 162.000] Like someone who's lost might not ask you for help if you're looking around and have
+[162.000 --> 163.000] your arms crossed.
+[163.000 --> 166.600] In this case, you're broadcasting that you're probably waiting for someone and don't have
+[166.600 --> 168.560] time to answer a stranger's questions.
+[168.560 --> 172.800] So if non-verbal communication can do all of these things, does that make it more important
+[172.800 --> 174.120] than verbal communication?
+[174.120 --> 176.400] Well, it depends on the context.
+[176.400 --> 180.200] Like verbal communication is probably more important when you're making a big business
+[180.200 --> 182.920] deal and want to make sure everyone's on the same page.
+[182.920 --> 186.240] But if you're disagreeing with a friend, paying attention to their tone of voice and body
+[186.240 --> 189.240] posture can clue you into how they're really feeling.
+[189.240 --> 190.520] And that's normal.
+[190.520 --> 194.400] Because non-verbal and verbal messages play different roles in how we communicate.
+[194.400 --> 196.480] But they also have a few things in common.
+[196.480 --> 201.640] Like, both verbal and non-verbal communication include non-vocal and vocal elements.
+[201.640 --> 207.040] For instance, writing and American Sign Language are non-vocal elements of verbal communication,
+[207.040 --> 209.200] because they both use symbols to make meaning,
+[209.200 --> 211.360] and you don't actually speak them with your voice.
+[211.360 --> 214.760] We also use non-vocal elements during non-verbal communication.
+[214.760 --> 218.600] According to the field of kinesics, which is the study of movement, there are three main
+[218.600 --> 224.120] types of non-vocal, non-verbal cues: gestures, facial expressions, and postures.
+[224.120 --> 228.680] These are non-vocal and non-verbal because most gestures don't refer to a specific word
+[228.680 --> 230.920] like a written or signed symbol does.
+[230.920 --> 235.360] Like when you waved to your friend at the cookout, you could have been saying hello, goodbye,
+[235.360 --> 236.840] or trying to get their attention.
+[236.840 --> 241.120] Because there isn't one single word that we associate with waving, we have to use context
+[241.120 --> 246.360] clues, like facial expressions or spoken words, to understand what the wave really means.
+[246.360 --> 250.680] And while many gestures have more than one meaning, kinesics lets us sort them into different
+[250.680 --> 253.680] categories based on the type of information they're sharing.
+[253.680 --> 257.560] For instance, gestures that describe something are called illustrators.
+[257.560 --> 260.840] Illustrators are used to clarify or reinforce a verbal message.
+[260.840 --> 264.720] Like if you'd pointed at your friend's shoulder during the cookout and said, there's a huge
+[264.720 --> 265.720] spider,
+[265.720 --> 269.720] they would know exactly what you're communicating, in this case, that they need to brush
+[269.720 --> 270.800] the spider off.
+[270.800 --> 275.560] And by using an illustrator to clarify your verbal message, you can save your friend and
+[275.560 --> 276.560] the cookout.
+[276.560 --> 280.000] Then there are emblems, or gestures that have a meaning that people in a community or
+[280.000 --> 281.400] culture have agreed upon.
+[281.400 --> 284.760] Some common emblems include shaking your head to say no, or shrugging to show that
+[284.760 --> 285.920] you don't know something.
+[285.920 --> 289.720] In the cookout scenario, if your friend went to brush the spider off and asked if it was
+[289.720 --> 293.680] gone, you might use the emblem of nodding your head instead of saying yes.
+[293.680 --> 298.000] Or if they asked how many spiders were on their shoulder, you could hold up one finger,
+[298.000 --> 299.680] which would also be an emblem.
+[299.680 --> 303.760] Basically, emblems are super helpful because they give us a way to communicate clearly without
+[303.760 --> 305.440] using words at all.
+[305.440 --> 309.840] We can also use gestures called regulators to manage our conversations with others.
+[309.840 --> 313.520] They keep the conversation flowing, like when we lean forward to show that we want someone
+[313.520 --> 314.520] to keep talking.
+[314.520 --> 317.200] But we can also use regulators to pause a conversation.
+[317.200 --> 320.880] Like if your friend is telling a wild story, but you really need to tell them about the
+[320.880 --> 324.600] spider on their shoulder, you might hold your hand out with your palm open to get them
+[324.600 --> 325.600] to pause.
+[325.600 --> 329.360] And in any scenario, regulators help us keep the conversation flowing and ensure everyone's
+[329.360 --> 330.360] voice is heard.
+[330.360 --> 333.840] Then there are adapters, which are gestures that help our bodies release tension during
+[333.840 --> 338.280] stressful situations, like twirling our hair or clicking a pen during a job interview.
+[338.280 --> 341.680] These are different from the other types of gestures because we usually aren't aware
+[341.680 --> 342.680] that we're doing them.
+[342.680 --> 346.800] And while they make us feel better in a tough situation, adapters can actually distract
+[346.800 --> 348.360] the people we're communicating with.
+[348.360 --> 352.280] Like, hair twirling during an interview totally steals the spotlight from your awesome story
+[352.280 --> 354.680] about how you saved your friend from a deadly spider bite.
+[354.680 --> 358.800] Because even when we don't realize it, our non-verbal cues still send messages to other
+[358.800 --> 359.800] people.
+[359.800 --> 361.760] Even our subconscious hair twirling and pen clicking.
+[361.760 --> 366.000] But with a little self-awareness, we can recognize and monitor our adapters and project confidence
+[366.000 --> 367.760] in any situation.
+[367.760 --> 371.520] Illustrators, emblems, regulators, and adapters are important because they add meaning to
+[371.520 --> 375.480] what we say and even replace verbal communication when the moment is right.
+[375.480 --> 379.240] But gestures aren't the only non-vocal elements of non-verbal communication.
+[379.240 --> 383.600] We also use things like eye contact to create connections, share information, establish
+[383.600 --> 387.000] our credibility, and even make a good impression when meeting someone new.
+[387.000 --> 390.480] But eye contact can also be used to intimidate others.
+[390.480 --> 394.880] Like, we probably all remember disobeying the rules as a kid and getting the look from our
+[394.880 --> 395.880] parents.
+[395.880 --> 400.680] When they made eye contact, oh man, you knew you were in big trouble and needed to clean
+[400.680 --> 402.200] your room right away.
+[402.200 --> 407.040] Eye contact also interacts with other non-verbal cues, like facial expressions, so we can better
+[407.040 --> 409.360] understand what people are thinking and feeling.
+[409.360 --> 413.880] For example, if you smile at a baby, they'll know you're friendly and might even smile back.
+[413.880 --> 417.880] Facial expressions, like smiles, are often viewed as innate, emotional reactions to the
+[417.880 --> 418.880] world around us.
+[418.880 --> 422.440] Like, smiling at strangers in public might feel totally involuntary to you.
+[422.440 --> 428.280] But the truth is that all of our facial expressions, including smiles, are also social behaviors.
+[428.280 --> 431.600] In many cultures, we smile to make other people feel at ease.
+[431.600 --> 435.440] And because we wear those social smiles for the benefit of others, we view them differently
+[435.440 --> 440.200] than the genuine smiles we put on when we're feeling strong emotions, like joy or excitement.
+[440.200 --> 444.080] So like waving or giving the thumbs up, most facial expressions have different meanings
+[444.080 --> 446.400] depending on how we use them in different contexts.
+[446.400 --> 450.400] And the better we are at pairing facial expressions with our verbal communication, the more
+[450.400 --> 452.080] effective our messages can be.
+[452.080 --> 455.320] But there are also vocal elements of non-verbal communication.
+[455.320 --> 457.560] Yep, you heard that right.
+[457.560 --> 460.960] Some of the sounds we make count as non-verbal communication.
+[460.960 --> 462.600] I know, I know.
+[462.600 --> 463.760] That's pretty confusing.
+[463.760 --> 467.440] But we often use sounds to add meaning to the words we speak, like when you raise your
+[467.440 --> 470.360] voice when you're angry or speak quickly when you're excited.
+[470.360 --> 474.760] Because these sounds aren't included in our grammar system, we call them paralanguage,
+[474.760 --> 477.400] which literally means alongside language.
+[477.400 --> 482.560] Paralanguage refers to the vocalized but non-verbal parts of a message, like pitch, volume,
+[482.560 --> 484.360] rate of speech, and verbal fillers.
+[484.360 --> 488.440] Like if I start talking loudly and really fast, you might think something exciting is about
+[488.440 --> 489.440] to happen.
+[489.440 --> 493.000] Once we learn how paralanguage works, we can use it to convey meaning and emotion in our
+[493.000 --> 494.480] conversations with others.
+[494.480 --> 498.720] For instance, in English, we use a rising pitch to indicate that we're asking a question,
+[498.720 --> 499.720] like this:
+[499.720 --> 501.320] Is there a spider on my shoulder?
+[501.320 --> 505.200] And if we want to emphasize the intensity of a verbal message, we might increase the volume
+[505.200 --> 507.080] of our voice, like this:
+[507.080 --> 509.240] There's a giant spider on your shoulder!
+[509.240 --> 513.880] Vocal elements of non-verbal communication make our words more expressive, and they can
+[513.880 --> 519.200] even stand in for words when we need to express sudden feelings, like surprise or fright.
+[519.200 --> 523.040] Without these vocal cues, our verbal communication just wouldn't be as exciting.
+[523.040 --> 526.800] So if non-verbal communication is so important, how do we learn to do it?
+[526.800 --> 530.480] It's not like you take classes on when to use an illustrator versus an emblem in school.
+[530.480 --> 534.680] Instead, we learn how to use non-verbal communication by participating in our culture.
+[534.680 --> 538.360] When it comes to non-verbal communication, cultures have unique norms or guidelines for how to use non-verbal
+[538.360 --> 539.360] cues.
+[539.360 --> 543.520] For example, pointing is fine if you're from the United States, but in China and Indonesia,
+[543.520 --> 545.320] it's considered really rude.
+[545.320 --> 549.200] Artifacts, or the objects and possessions we use, are another form of non-verbal communication
+[549.200 --> 551.160] that's shaped by the culture we live in.
+[551.160 --> 555.920] Most cultures have rules about how we use artifacts, which include our clothes, jewelry, and
+[555.920 --> 557.760] the decorations we put up in our spaces.
+[557.760 --> 562.600] For example, on some college campuses, it's the norm for students to wear pajamas to class.
+[562.600 --> 566.960] There's a good chance no one told students that wearing fuzzy slippers to class is cool.
+[566.960 --> 569.920] They just saw older classmates doing it and assumed it was okay.
+[569.920 --> 574.040] But some cultures have explicit rules about how artifacts should be used, like wearing
+[574.040 --> 576.560] a wedding ring on the third finger of your left hand.
+[576.560 --> 579.880] And using artifacts to express ourselves can also be fun.
+[579.880 --> 583.480] Like if you're a huge Lord of the Rings fan, you might have a bumper sticker of the
+[583.480 --> 585.360] Ring of Power on the back of your car.
+[585.360 --> 588.800] But someone who hasn't seen Lord of the Rings might think your bumper sticker represents
+[588.800 --> 593.400] your passion for ancient jewelry, instead of your undying devotion to the fellowship.
+[593.400 --> 596.640] Navigating non-verbal communication can be a little confusing if you're not familiar
+[596.640 --> 598.480] with cultural rules and norms.
+[598.480 --> 603.280] But it's impossible to know all the non-verbal norms from every culture in the entire world.
+[603.280 --> 606.840] So it's inevitable that non-verbal messages are going to get mixed up sometimes.
+[606.840 --> 611.080] It's just a normal part of living in a world with so many amazing cultures and traditions.
+[611.080 --> 615.600] But just like we use context clues to figure out what unfamiliar words mean, we can also
+[615.600 --> 618.800] look for context clues to understand non-verbal communication.
+[618.800 --> 622.880] For instance, if you notice young people bowing to older people, you can infer that bowing
+[622.880 --> 624.360] is a sign of respect,
+[624.360 --> 626.800] and add that to your non-verbal vocabulary too.
+[626.800 --> 631.040] At the end of the day, we can't not communicate when it comes to non-verbal communication.
+[631.040 --> 635.040] Our non-verbal cues are a window into our feelings and emotions, and they're constantly
+[635.040 --> 636.760] seeping out of us,
+[636.760 --> 637.880] even if we don't realize it.
+[637.880 --> 642.120] So to make sure our non-verbal communication reflects what we truly want to say, we have
+[642.120 --> 643.640] to be extra thoughtful.
+[643.640 --> 648.120] Because a single hand gesture can be the difference between squashing a giant spider and accidentally
+[648.120 --> 649.120] starting a dance party.
+[649.120 --> 652.840] Thanks for watching Study Hall: Intro to Human Communication, which is part of the Study
+[652.840 --> 655.920] Hall project, a partnership between ASU and Crash Course.
+[655.920 --> 658.800] If you liked this video and want to keep learning with us, be sure to subscribe.
+[658.800 --> 662.920] You can learn more about Study Hall and the videos produced by Crash Course and ASU in the
+[662.920 --> 664.440] links in the description.
+[664.440 --> 665.040] See you next time!
diff --git a/transcript/allocentric_YSd6nSYr2ZA.txt b/transcript/allocentric_YSd6nSYr2ZA.txt
new file mode 100644
index 0000000000000000000000000000000000000000..12df460ba535bd1bfe90bc6fdad1414cd215caee
--- /dev/null
+++ b/transcript/allocentric_YSd6nSYr2ZA.txt
@@ -0,0 +1,6 @@
+[0.000 --> 9.940] Oh, little, uh patient.
+[10.140 --> 13.100] I'm ag sympathized.
+[20.120 --> 25.680] And jib darauf que ain et en participate.
+[25.680 --> 27.680] Do you need anything?
+[29.680 --> 33.680] You may have gone to the law school for you to have breakfast.
+[34.680 --> 37.680] I brought this ice cream.
diff --git a/transcript/allocentric_YrMiKxPV_Ig.txt b/transcript/allocentric_YrMiKxPV_Ig.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f803d66648cc0316b87cf013d28b36c5952c50e6
--- /dev/null
+++ b/transcript/allocentric_YrMiKxPV_Ig.txt
@@ -0,0 +1,1096 @@
+[0.000 --> 3.840] This is a problem with relationships. Everybody has a different version of what a relationship is.
+[3.840 --> 8.720] And a lot of conflict comes from when two people have different ideas of what a relationship should look like.
+[8.720 --> 13.360] My uncle gave me this test, and without fail, it has predicted every divorce.
+[13.360 --> 13.840] What is it?
+[13.840 --> 19.040] The test was so simple. It was like, that is a huge red flag.
+[19.040 --> 23.200] The best paradigm for dates is storytelling, never questions.
+[23.200 --> 27.200] So as soon as you get to the table, it's not just like, hey, how are you? It's like, the craziest thing just happened.
+[27.200 --> 33.440] You were on a double date with your wife and you met somebody for the first time, and at the end
+[33.440 --> 37.120] you were like, they're cheating on their partner. How did you come to that conclusion?
+[37.120 --> 38.240] There were two things. One...
+[46.320 --> 51.520] I want to start with how I can improve my ability to read non-verbal cues.
+[52.000 --> 54.000] Yeah. Okay. That's a process.
+[54.000 --> 59.680] All right. So it starts with evaluating how you process behavior as it is.
+[59.680 --> 62.960] So it starts with your default approach. So the problem is this.
+[62.960 --> 67.120] We go through life, and we have this like neural net in our head,
+[67.120 --> 70.880] and it's based on a model of all the past interactions that we've had.
+[70.880 --> 76.560] And the truth is you have to understand how much you're wrong before you can improve the accuracy of what you're right on.
+[76.640 --> 84.320] So there's a lot of pop-science stuff, or cultural biases, or just experiences in your life that shifted the way that you view things.
+[84.320 --> 86.720] And the first step is acknowledging all of that.
+[86.720 --> 90.000] That's the first step, which is a pretty robust process, which I'll talk about.
+[90.640 --> 94.880] It's really looking at your perspective for life and how you view social interactions.
+[94.880 --> 98.800] And then after that, it's increasing your behavioral awareness.
+[98.800 --> 105.280] So this is just the ability to notice and pay attention to the shifts and the variation in somebody's behavior,
+[105.360 --> 109.280] and understand in real time the meaning you're deriving from it.
+[109.280 --> 114.400] So the truth is a lot of people, including body language experts, will look at someone's behavior and be like,
+[114.400 --> 118.160] oh, this means this and that means that. That's not really what you're doing.
+[118.160 --> 121.360] I think it's more about noticing than making meaning out of it.
+[121.360 --> 125.840] So, just noticing. For example, noticing that the way that you're nodding your head at me right now,
+[125.840 --> 128.320] that is a facet of social coordination.
+[128.320 --> 131.120] You're showing me that you're listening to what I'm saying.
+[131.120 --> 134.000] It doesn't mean that you're quote unquote interested.
+[134.000 --> 138.400] But if we really looked at your head nods over the course of like a six-month period,
+[138.400 --> 143.840] we could find out when you're genuinely, or probably really, interested in something,
+[143.840 --> 147.920] and when you're just socially coordinating, because you're going to nod your head because this is interesting.
+[147.920 --> 152.320] So it's a multi-step process, but it really starts with confronting all your biases.
+[152.320 --> 157.120] That's fascinating. How does it work in terms of cues I might misread, then?
+[157.440 --> 164.240] Because if I don't have knowledge of myself, or a baseline of how I'm approaching things,
+[164.240 --> 168.720] and I'm just meeting you for the first time, how can I deal with that situation?
+[168.720 --> 173.040] Are there obvious things I can do? What are the things that I'm more likely to misinterpret?
+[173.760 --> 179.200] I think... So first you have to understand where you are on this threshold of,
+[179.200 --> 184.080] I call it, literal and contextual. So there are certain people that...
+[184.160 --> 188.080] I can say these things three ways. I can say, Shane, how are you? I can say, Shane,
+[188.720 --> 194.160] how are you? I can say, Shane, how are you? There are people out there that see no difference
+[194.160 --> 198.640] between those three things. They're all just Shane, how are you? And there's other people that
+[198.640 --> 203.760] over-contextualize, or try to extract more value from the tonal shifts in what I'm saying.
+[203.760 --> 209.360] And the truth is most people over-contextualize certain things. So for example, I'm someone who,
+[209.360 --> 213.120] like on my team, there's a rule that no one can give me a one-word response.
+[213.200 --> 217.920] So no one can ever say yes or no. It has to be like yes plus an emoji, or yes plus a GIF, or something like that,
+[217.920 --> 222.240] because I think someone's mad at me when they just say yes. That's my own weird cultural type of
+[222.240 --> 227.760] thing, where you get that passive-aggressive, like, sure. Yeah, I'm like, sure what? Like I can't stand
+[227.760 --> 232.560] that stuff. And I don't know why, but I have to know that first. So there are behavioral signs of this
+[232.560 --> 237.360] as well. So you'll have people watch videos of like a 20-second interaction, and they'll say,
+[237.360 --> 241.360] oh, this person doesn't like them. And I'm like, well, why do you say that? I'm not really sure. Then
+[241.360 --> 245.840] you break it down second by second by second, and they go, oh, I think it's the way that they're
+[245.840 --> 251.760] smiling. I think most people don't know how they're coming up with these perceptions. They don't
+[251.760 --> 256.000] really understand the origins of it. And when you break it down systematically and they start to
+[256.000 --> 261.200] see it, they get to understand it in a little bit more depth. So the true work is video work, looking at
+[261.200 --> 266.400] videos and understanding what's going on in them. How does trust form between people?
+[267.360 --> 273.440] Because it has a lot to do with the nonverbal and also the verbal cues that we get from
+[273.440 --> 279.040] other people. And I'm thinking specifically of meeting somebody and forming trust with them in person,
+[279.040 --> 284.320] where you're getting a three-dimensional view of them. But now Zoom, right? It's very common to
+[284.320 --> 288.720] meet new colleagues over Zoom and not in person.
And you have to form a trust relationship with
+[288.800 --> 296.960] each other. And maybe there's less detail in those interactions. How do we look for signs that
+[296.960 --> 303.200] somebody might be untrustworthy? And conversely, how do we convey trustworthiness to other people
+[303.200 --> 310.080] through these interactions? That's a great question. Okay. So trust from a nonverbal perspective is just —
+[310.080 --> 316.560] you could think of behaviors on a bell curve distribution, right? So certain people are going to
+[316.560 --> 322.240] act in certain ways that are not in alignment with how society perceives that behavior to be trustworthy.
+[322.240 --> 327.040] An example of this is eye contact, right? If all of a sudden your eye contact is constantly
+[327.040 --> 330.560] darting all around the space, people have a perception of like, why are they doing that? Like,
+[330.560 --> 334.480] what's going on there? But on the other hand, if you asked me, Blake, what is the most
+[334.480 --> 338.400] important moment that happened in your life? And I go, well, the most important moment that happened
+[338.400 --> 343.120] in my life — and I look at you dead in the eyes when I'm saying that — it looks better when I'm like,
+[343.360 --> 349.280] I mean... the most important moment that ever happened in my life... It looks more genuine, because it
+[349.280 --> 353.760] makes sense that I'm looking away to recall an emotional event and then looking back at you. So it's
+[353.760 --> 359.440] in alignment with how society perceives things. And that really, in my opinion, is what it's about.
+[359.440 --> 366.160] So everybody has their own sort of perceptual lens for what trustworthy behaviors are and aren't,
+[366.240 --> 373.120] right? So for me, I always look to have conversations or do things that are like three standard deviations
+[373.120 --> 377.280] to the right of the bell curve. So if me and you had a conversation for an hour, we're
+[377.280 --> 381.840] bringing up topics that you normally don't have with other people, which is going to create a higher
+[381.840 --> 387.360] level of trust between me and you. So I'm looking for more nuanced topics and nuanced areas to draw
+[387.360 --> 393.520] that conclusion, and then to have the mimicking of my behavior be associated with the excitement for
+[393.520 --> 399.680] those things. So for example, if me and you had a very long conversation about your
+[399.680 --> 404.320] children — your children are obviously important to you. If my behavior is just
+[404.320 --> 408.720] asking you standard questions, and I'm doing all that, but it doesn't actually look like I'm interested,
+[408.720 --> 412.640] it's like, uh, something's off here. Like what's this person trying to do? What's this person trying
+[412.640 --> 419.440] to get? But the truth is it's different for each person, which is why the puzzle is so fascinating.
+[419.520 --> 423.920] Because, I mean, I've met with executives that have these weird things, like, oh, I never
+[423.920 --> 429.680] trust somebody who walks in the room and doesn't shake my hand first. I'm like, well, okay, so why?
+[430.320 --> 433.360] And we look at it, and it's like, when they were 12 years old, their dad taught them that.
+[433.360 --> 439.360] Right. So you've got to understand that — people say build trust, build trust — there's not this step one,
+[439.360 --> 445.920] two, three, four, five; it's way more complex than that.
But the first thing is: don't be so outside
+[445.920 --> 449.600] the bell curve distribution of how someone acts that you're just not trusted from the get-go.
+[450.160 --> 456.000] And how much do we update our information once we form an opinion of someone? Like how much
+[456.000 --> 460.400] information would it take? Because I'm not looking for signs anymore, right? Like my brain
+[460.400 --> 464.720] shuts down. I'm like, oh, this person's trustworthy, therefore I stop looking for it. Or I say,
+[464.720 --> 470.240] this person's not trustworthy, therefore that's all I see. And how do we go about changing other
+[470.240 --> 475.520] people's perceptions of us, and also changing our own, or being open to changing our own perception
+[475.600 --> 480.720] of other people? So we have this Bayesian brain, and even the people that
+[481.520 --> 487.520] think they take the most Bayesian approach when it comes to human behavior — we get lazy, and we build
+[487.520 --> 492.240] these things, and we don't change. And I feel like the only time we change is if we're shown
+[492.240 --> 497.760] we're really wrong. And that's one of the things that I do a lot. So in working with executives,
+[497.760 --> 502.480] in programs and training, I get somebody to make a read or a prediction about someone else's
+[502.480 --> 506.640] behavior. And then I show them, like, oh no, you were completely wrong. And they're like, really?
+[506.640 --> 510.720] And I'm like, oh yeah, this was the reason why — boom, boom, boom, boom, boom. And enough
+[510.720 --> 516.080] of those wrongs gets people to sort of challenge how they're viewing the world. And then you
+[516.080 --> 521.600] can start to have the self-growth. But it's not an easy path. Some people will hold on to their
+[521.600 --> 526.880] perceptions. They're like, oh no, I've been doing this thing for 25 years and it's never
+[526.880 --> 532.080] steered me wrong. All the cognitive and decision-making bias stuff that you're famous for gets applied
+[532.080 --> 538.000] to behavior every single second of every single day. You did a lot of work with prisoners.
+[538.000 --> 543.840] Did you feel safe? Did you feel like you could trust them? Yeah. You know,
+[544.400 --> 549.760] my forensics background is nowhere near as robust as some people's, who have been doing this for like 30 or
+[549.760 --> 557.200] 40 years. But I never felt unsafe in a forensic setting. I just didn't. I view people as — most
+[557.200 --> 561.120] people are still reasonable. And if they're sitting down and talking to me, there's a level of
+[561.520 --> 568.800] reasonableness there. And yeah, I mean, I think it's just through conversation, through my lens
+[568.800 --> 574.320] for the world. I used to be scared of people when I was a kid — a lot of social anxiety, all
+[574.320 --> 580.560] that. And there have been so many instances in my life where, you know, even on the street with
+[580.560 --> 584.480] somebody that's looking really tough, or on the way here, something happened where I was
+[584.480 --> 588.400] like, oh, just a person. And I believe that through discourse and through conversation, anything can
+[588.480 --> 594.640] be settled. I wasn't really — yeah, it's a great question. I can't remember feeling fear. But did you feel
+[594.640 --> 599.360] like you could trust these people to give you honest answers?
Well, in a forensic setting —
+[600.400 --> 605.760] no. So that's one of the coolest things in all of forensic psychology. There are a lot of personality
+[605.760 --> 612.720] tests, which I'm very anti. But forensics has one that's built by a process — I think this is
+[612.720 --> 617.520] correct — called the empirical keying procedure. So it's really cool. What they basically do is they'll
+[617.520 --> 622.800] ask people that have been diagnosed with, say, schizophrenia, like 15 or 20 questions. And they find
+[622.800 --> 628.560] the patterns — that like 99% of people hear voices in one ear or two ears, or whatever it is.
+[628.560 --> 632.720] And then they ask people those same questions who are malingering, right? And that's how they
+[632.720 --> 636.400] determine whether or not they're lying. And that's why one of the biggest
+[636.400 --> 643.600] tests, I think, still used is the MMPI-2, the Minnesota Multiphasic Personality Inventory. It's like 530 questions.
+[643.600 --> 649.120] And a lot of the inventory is asking questions in different ways, so you don't remember what you
+[649.120 --> 653.280] said on question 26. And that's the point of it. That's the whole point, right? So there's,
+[653.280 --> 658.800] embedded in these forensic inventories, the concept that the person might be lying, and we're trying
+[658.800 --> 664.480] to figure that out. So I think it's just different. But yeah, people lie for
+[665.520 --> 670.000] different reasons: for shame, for embarrassment. I remember I had to ask, on this inventory, about
+[670.560 --> 676.400] sex — like whether they'd had sex in the past 12 months, nine months, six months, three months.
+[676.400 --> 680.640] And they're in prison, in a male population. And when you ask that question, people get very,
+[680.640 --> 684.640] like, what'd you say? People react in a different way. And it makes total
+[684.640 --> 690.400] contextual sense that they would. I'm going to use a very subjective term: in terms of your toughest,
+[690.400 --> 696.240] your baddest, your most dangerous person — if you had to rank them and categorize them, sort of by
+[696.240 --> 701.040] percentile, into less or more dangerous, without knowing their background and only having
+[701.040 --> 705.920] the information from their answers and their nonverbal cues, how would you go about doing
+[705.920 --> 713.200] that? That's a great question. So I would always focus on massively erratic behavior. That worries me
+[713.200 --> 718.320] more than anything else. So seeing somebody — seeing, like, a schizophrenic in a full violent rage
+[718.320 --> 723.440] or an episode like that, which society thinks is way more common than it actually is. It's not;
+[723.440 --> 728.080] it's a stigma. Most people walking down the street that you see talking to
+[728.080 --> 731.840] themselves just continue to talk to themselves. They don't really hurt anyone. But when somebody has
+[731.840 --> 737.120] truly lost their touch with reality and they're violent, I think that's the scariest thing
+[737.120 --> 743.040] by far. And that's always like, all right, just watch out. Do you think the biggest
+[743.040 --> 748.160] talkers are more dangerous than the silent type? In my experience, I've always found
+[748.240 --> 756.000] the silent types to not be. The biggest talkers are the ones that just talk, talk, talk, talk.
+[756.560 --> 763.360] And they don't really do much. But the problem with them is that if that threshold of ego or
+[763.360 --> 769.440] disrespect gets violated, I feel like they feel the need to actually assert themselves. But some of
+[769.440 --> 774.480] the big tough people are just big and tough. And are they more like a light switch? Yeah. I think
+[774.480 --> 777.360] they're more erratic in that regard. Like all of a sudden you say something like what you said,
+[777.360 --> 782.480] and it's just — it's quick. Whereas some of the bigger ones, if they want, they'll just
+[783.200 --> 786.800] lay you out right there. They're big. They've got that aggression. They've got
+[786.800 --> 796.160] that power. But yeah, I would definitely say that. And also the younger population — like people in
+[796.160 --> 804.560] their 20s, just 25, 26, in prison — it's different. It's a different time for someone versus someone
+[804.560 --> 811.680] 40 or 50 that has had that experience lifelong. It's just completely different
+[811.680 --> 815.760] when you've been institutionalized and grown up in that area and understand how to handle yourself
+[815.760 --> 820.400] and how to play the game and how to navigate all these things. But I've met some really, really,
+[820.400 --> 826.320] really smart people — just really ingenious ideas and concepts that came from prison.
+[826.320 --> 830.880] We tend to judge people based on their worst choice in some cases, right? Oh, I mean,
+[830.960 --> 838.800] that is probably my advocacy. My foundation is all for prison reform,
+[838.800 --> 847.120] all for giving people second chances. It's kind of crazy to me — if you
+[847.120 --> 853.280] were put in that situation, you might... Like, people don't understand. You see somebody
+[853.280 --> 858.560] commit a horrible crime, and they do something horrible. But that person has a story. And if you were
+[858.560 --> 863.520] to watch that person and see every step of that story, most people would be like, I'd probably
+[863.520 --> 868.240] commit that crime too. And there's just this massive fallacy where people are like, well, no, not me.
+[868.240 --> 873.680] No — you grew up in a completely different world and a completely different context. So be careful
+[873.680 --> 879.360] with that. Whenever I see somebody who does something that I think is not something I would do,
+[879.360 --> 884.720] I always ask myself, what would the world have to look like for me for that to be my default
+[884.800 --> 890.000] behavior? And it can often be an illuminating process to get out of your perspective and shift into
+[890.000 --> 895.600] a different perspective. And you're right: their behavior makes sense to them. Yeah. And when
+[895.600 --> 900.160] you hear these people's stories — their real stories — it's like, you're doing quite well
+[900.160 --> 906.000] considering... just the most horrible things, things that people can't even fathom. And
+[907.040 --> 911.920] if you don't view it through their lens and through their perspective, it's impossible
+[911.920 --> 919.920] to truly relate. I want to come back to the trust thing again, because you were on a
+[919.920 --> 928.880] double date with your wife, and you met somebody for the first time, I believe, and you were like,
+[928.880 --> 934.080] they're cheating on their partner. Yeah. Within a few minutes. Yeah.
And I'm wondering, what
+[934.080 --> 938.640] went into that? How did you come to that conclusion? How confident were you in that conclusion?
+[939.600 --> 945.840] Yeah. I was like 100% confident. I would have placed a lot of money on it. There were two things. One
+[945.840 --> 951.840] was the gaze direction in the person's eyes. So for example, you know,
+[952.960 --> 957.840] an attractive woman walks by. It's not uncommon for men to go like that and glance away.
+[957.840 --> 963.760] The way that he glanced was like — there was just a certain amount of desire in his eyes,
+[963.760 --> 969.520] where he just stared, and what he was doing was all these — it was more like a predator.
+[969.520 --> 974.080] More like a predator? That's a really good term. Predatory desire.
+[974.080 --> 978.240] And he kept saying things to me, little things that were like, are you one of these people
+[978.240 --> 982.880] that cheat on your wife? Like — I could just tell. He didn't say that. No, but he knew about —
+[983.840 --> 988.720] he knew about this poker game. And there were a couple of things that he stated where I was just
+[988.720 --> 993.360] kind of like, this is a little sketchy. So he was testing the water. Yeah, he was, like,
+[993.360 --> 1000.640] testing the water. Yeah: are you one of us? And I think what I did is I walked the line of, like,
+[1000.640 --> 1005.360] oh, I know that game, or I know that person, but I didn't actually cross it.
+[1005.360 --> 1008.720] And then that gave him a little bit of trust with me to kind of go a little bit more. And I'm like,
+[1008.720 --> 1014.800] I'm pretty sure, Gene, I know your wife. How much of that is ego coming out too? Like why would he —
+[1015.440 --> 1019.600] I'm also just trying to understand him — but why would I put myself in a situation,
+[1019.600 --> 1026.960] on this double date, where I'm exposing a part of myself I'm trying to hide? I assume he was trying
+[1026.960 --> 1031.200] to hide the fact that he was having an affair. I don't think he was. I think also, I mean,
+[1032.480 --> 1038.240] I definitely have a disarming quality to me when I meet people, because I'm not really
+[1038.320 --> 1045.040] judgmental. I'm like a low-judgment person. So I have friends on every spectrum of everything,
+[1045.040 --> 1050.720] right? So I think I give permission to people to just sort of be themselves. And then they keep pushing,
+[1050.720 --> 1055.440] and I'm still fine with it. And then they keep pushing, and I'm like, okay, it is what it is.
+[1055.440 --> 1059.200] You go on there. But there are just these cultural narratives. Like, I was at a dinner once,
+[1059.200 --> 1063.360] and I was with like six of my friends. And one of my friends started complaining about his
+[1064.320 --> 1068.320] wife. And everybody else started to — it was such a cool dynamic. Everybody else was like,
+[1068.320 --> 1072.160] yeah, you know, my wife does that too. And it bothers me. And I looked at my friend and I was like,
+[1072.160 --> 1077.120] why don't you just get a divorce? And he was like, what? I was like, just, just get a divorce.
+[1077.120 --> 1080.640] And he was like, obviously I'm not going to get a divorce. I'm like, well, why are you sitting here
+[1082.000 --> 1086.080] badmouthing your wife? Like, I don't have anything bad to say. I love my wife. I love spending time
+[1086.080 --> 1090.880] with her.
Like, why are we doing this? And immediately the conversation switches to, like, yeah,
+[1090.880 --> 1095.440] you're right. Like, I love my wife for this reason. I love my wife for that reason. And it's just like —
+[1096.160 --> 1100.560] I hate saying this, but I do believe people are sheep in that
+[1100.560 --> 1106.240] regard, where there's a narrative, and if a powerful person just comes along and shifts the narrative,
+[1106.240 --> 1110.720] you see how quickly everybody else falls in line. And we're all on both sides of this.
+[1110.720 --> 1115.680] Sometimes we're the wolf and sometimes we're the sheep. I know you sort of specialize in nonverbal,
+[1115.680 --> 1121.360] but is there a way that we communicate about our partner that would be indicative of, like,
+[1121.360 --> 1125.760] common complaining about our partner versus, oh, there's something seriously wrong here?
+[1125.760 --> 1132.480] Oh, totally. I mean, this is all the work that Gottman did —
+[1133.040 --> 1136.800] relationship labs, predicting signs of contempt and all these things.
+[1137.680 --> 1141.760] This is a problem with relationships. Everybody has a different version of what a relationship is.
+[1141.760 --> 1146.960] So one of the reasons why me and my wife have been together for 12 years — we are together like 99%
+[1146.960 --> 1153.600] of the time — is that it works because both of us have the same definition for what a relationship should,
+[1153.600 --> 1158.960] quote unquote, be. So we view things through the same sort of lens. And a lot of conflict comes
+[1158.960 --> 1163.280] from when two people have different ideas of what a relationship should look like. And there has
+[1163.280 --> 1167.760] to be that negotiation between that. And I think that's hard for a lot of couples. And it's
+[1167.760 --> 1171.440] a good question before you get married: what are the characteristics of
+[1172.000 --> 1176.160] an ideal relationship? What does it look like? Some people are like, no,
+[1176.160 --> 1179.840] I hang out with my friends, you hang out with your friends, and we get together on the weekends.
+[1179.840 --> 1184.640] I know couples that have been together for 45 years that have that paradigm. To me and my wife,
+[1184.640 --> 1189.440] we're like, oh, they're not that close. But it doesn't matter. It's what their relationship
+[1190.080 --> 1196.320] should look like. My uncle gave me this test — I think it was like a decade ago now. And without fail,
+[1196.320 --> 1202.560] it has predicted every divorce. And this test was so simple. It was: if you hang out with people
+[1203.440 --> 1208.400] and they talk to each other, but only in transactions — transactions being like, you get groceries,
+[1208.400 --> 1212.640] you change the diaper, errands, you know, sort of day-to-day life stuff — and they don't
+[1212.640 --> 1220.880] ask each other questions, that is a huge red flag for predicting a problem. And it's actually
+[1220.880 --> 1225.360] enabled me to intervene in some friends' lives and be like, hey, we just had dinner last night, but
+[1225.360 --> 1230.080] I noticed this thing. Is everything okay? Is there anything you want to talk about? And they're like,
+[1230.080 --> 1235.600] how did you know? And I'm like, oh, because I'm sitting there.
You're both talking to me,
+[1235.600 --> 1240.480] but you're not talking to each other. And you know, on a one-off basis, that's fine. But
+[1240.480 --> 1245.600] repeatedly? Okay, well, now I'm detecting a pattern, and something's up.
+[1245.600 --> 1251.440] There are so many interesting, nuanced things. I remember we used to do a bunch of studies in
+[1251.440 --> 1256.320] New York City. And I had this one couple come in and they're like, oh, can you study us?
+[1256.320 --> 1260.720] And I said, sure. So they come into the office, and they sat down on a couch, and I recorded them
+[1260.720 --> 1264.800] from like three different angles. And they're like, what do you see? I was like, I'm not going to
+[1264.800 --> 1269.920] say anything. I'm going to leave the room. I'm going to go get lunch. You sit here and watch your
+[1269.920 --> 1276.000] interaction and write down what problems you think you have. And they did it. And
+[1276.000 --> 1280.720] they saw so many things. He's like, I don't really listen to her. I was looking
+[1280.720 --> 1284.640] at the video, and she was speaking, and I was just kind of nodding my head, distracted. And
+[1284.640 --> 1288.800] they both had that — I was really impressed to see that level of awareness. But I think that's what
+[1289.600 --> 1294.080] video does. In a world with so many different perspectives and perceptual differences,
+[1294.080 --> 1298.960] video doesn't lie. It's just raw data. And I think it's so helpful to see yourself on
+[1298.960 --> 1303.200] video and reflect. Do we need to do more of that, in terms of recording ourselves? Because,
+[1303.200 --> 1305.600] I mean, I'm a big fan of it. Push for this. It's going to do lots of good. Yeah.
+[1305.760 --> 1311.760] I mean, it's just — yeah, sit down. And the truth is, people
+[1311.760 --> 1315.600] will talk about the observer effect with video and all that. It goes away in like 10 minutes.
+[1315.600 --> 1319.440] If you have a small iPhone, you forget about it, and you're actually seeing patterns
+[1319.440 --> 1326.560] of behavior. And that's something I say to my wife a lot: like, oh, let's get a video of it
+[1326.560 --> 1332.560] and see what it was actually like, right? Because it's so difficult — you ask two people to
+[1332.560 --> 1338.560] recall an event, and it's just wildly different. It's like, what? And then you show the video, and
+[1338.560 --> 1342.640] you see that it's somewhere in the middle of what those two stories were. And I just find
+[1342.640 --> 1347.280] that fascinating. So I'm obsessed with video. I like the feedback. When we're giving a presentation,
+[1347.280 --> 1354.880] we watch a video to see ourselves, our articulation. So that's another big problem. For presentations,
+[1354.880 --> 1359.120] we had a program called Dynamic Presentations. I used to do it in New York City for like five
+[1359.120 --> 1362.880] years, and a lot of corporate stuff around it. And people are obsessed with recording the person
+[1362.880 --> 1367.840] on stage. What's more interesting is recording the audience. Because the truth is, I'm always asked,
+[1367.840 --> 1372.320] how did my presentation go? I don't know. Let's see the audience. A presentation is for that group
+[1372.320 --> 1377.120] of people. So what often happens is a lot of communication experts will watch a presentation,
+[1377.120 --> 1380.720] and they'll go, well, I think you should move your hands more, or less, or I think you should speak
+[1380.720 --> 1385.280] like this — they're doing that through their perceptual lens. They're not optimizing for the engagement of
+[1385.280 --> 1391.120] the audience. So I used to record my presentation and the audience, every third presentation, for like three
+[1391.120 --> 1395.680] years. It was fascinating. Why don't we take that approach? I mean, comedians effectively take that
+[1395.680 --> 1400.320] approach without recording the audience, because it's based on, oh, that joke got a laugh. I'm going to
+[1400.320 --> 1405.360] use that next time. That joke fell flat. I'm not going to use that next time. The feedback loop
+[1405.360 --> 1410.000] is instant. So that's where the value was. Like when I was teaching psychology at CUNY,
+[1410.000 --> 1415.200] I was speaking like 80 to 100 hours a week, both at my office and in class, with an instant feedback
+[1415.200 --> 1420.320] loop of what story worked, what story didn't work. Did that land? Did that offend somebody? And you
+[1420.320 --> 1424.160] just start to develop this quick repertoire of things that actually work. But that comes from
+[1424.160 --> 1429.760] that audience interaction. But most people, when giving a presentation, are not even present enough
+[1429.760 --> 1434.240] to do that, because they're so in their head about the presentation. So it's sort of a skill set that
+[1434.240 --> 1439.120] comes after you've become more comfortable being on stage: to be able to process and sort of predict
+[1439.120 --> 1442.960] the behavior of an audience. What's the biggest thing that gets in people's way when they're
+[1442.960 --> 1451.040] presenting? Really just the social construction that a presentation is something different. So
+[1451.040 --> 1455.520] it's got this whole cultural narrative: oh, you've got your big presentation coming up.
+[1455.520 --> 1460.240] It's hyped up as this different thing. You're just talking to a group of people, and they're
+[1460.240 --> 1465.120] responding by shaking their heads and nodding, and you're updating that neural net. I think that's the
+[1465.120 --> 1469.440] first construct that needs to be broken. And then also, people just don't put in the reps.
+[1469.440 --> 1474.880] That's something that just takes time, and most people work so hard for a presentation, and
+[1474.880 --> 1478.240] they do it, and it's like, oh, a flood of release — where they should have just done
+[1478.240 --> 1483.040] it every day for the three weeks leading up to the presentation. It'd be so much better.
+[1483.040 --> 1488.240] What does putting in the reps mean? Does that mean crafting your story and positioning it for the
+[1488.240 --> 1494.240] audience? Does it mean your intonation? How do you actually go about working on that? Like, how
+[1494.240 --> 1500.000] would you make me an expert presenter if you had three weeks and you had one hour a day of my time?
+[1500.000 --> 1505.120] So it's cool that you framed it that way. My question would always be: what's the constraint?
+[1505.760 --> 1510.800] So if you said three weeks, one hour a day of your time, the first week would probably be
+[1510.800 --> 1516.400] reps of just: let's get you comfortable.
So the thing is, with a lot of nonverbal behavior stuff
+[1516.400 --> 1522.000] and movement, I have found, reliably, that the most effective version of someone is when they're
+[1522.000 --> 1527.600] the most comfortable. Bar none, every single time. So the whole joke is people think I teach like,
+[1527.600 --> 1531.920] oh, stand this way. No: step one is to get you to the level where you're the most comfortable,
+[1531.920 --> 1537.440] where you feel the most free, and then build on top of that. So I try to get you there first.
+[1537.440 --> 1541.920] And I wouldn't be focusing on — I mean, it really depends. If you're doing like a TED talk that was
+[1541.920 --> 1545.360] like 20 minutes, I'd probably tell you just to rehearse it and get that down. But if we're doing
+[1545.360 --> 1550.160] an hour presentation, or most presentations that people have to do, it would be all outlines:
+[1550.800 --> 1557.440] repeat, repeat, repeat, repeat. And it's a careful balancing act to understand where you're at,
+[1557.440 --> 1563.280] because with some people with a lot of anxiety, or some people that are trying to get it exactly right,
+[1563.280 --> 1568.400] I won't be focusing on little details. It's a way more dynamic process. Some people
+[1568.400 --> 1572.400] have these facial things — getting better at presentations is different for every
+[1572.400 --> 1576.400] person, because if you're telling someone, listen, you're moving your hands too much,
+[1576.400 --> 1579.360] they're going to get in their head about moving their hands too much. They're going to start
+[1579.360 --> 1584.400] looking all weird. And some people can take a cue and immediately change it. And with other people,
+[1585.360 --> 1589.200] just get them comfortable, just get them comfortable. And then using video —
+[1589.440 --> 1594.000] viewing yourself is fascinating. So what do you show people? Video of themselves. I once
+[1594.000 --> 1600.560] worked with this woman — I hope she's hearing this, because I love her, not to call her out —
+[1600.560 --> 1607.120] and she gave one of the worst initial presentations I've ever seen in my entire life. She was
+[1607.120 --> 1612.960] extremely flat. She was moving her hands. She literally spoke like this for an entire 20
+[1612.960 --> 1617.040] minutes. And it was painful to watch. And at the end of the video, I was like, okay, so let's see
+[1617.040 --> 1621.360] what we're working with. And I put her video on the projector. And the first thing she says to
+[1621.360 --> 1628.480] me is, I need a nose job. And it just shows you where that person's perception is
+[1628.480 --> 1634.240] focused. We're focused on these weird little things that no one else recognizes
+[1634.240 --> 1640.320] or no one else cares about. And I truly believe that the world-class, best presenters are
+[1640.320 --> 1645.120] truly about their audience and not about themselves. They're not trying to come across a certain way.
+[1645.120 --> 1650.720] I even feel that now: I'm stepping more into my own self after the first 20
+[1650.720 --> 1654.640] minutes. At first, it's a little bit, you know, a little different. I'm trying to be
+[1654.640 --> 1659.680] more measured. Now it's more me coming out of it. And the question is, how do you get to that immediately
+[1659.680 --> 1664.160] and go from there right away?
And I want to switch gears a little bit and talk about
+[1664.640 --> 1672.160] workplaces and sort of power structures and social dynamics. How can you teach me to understand
+[1672.160 --> 1677.360] the power structure at work and social dynamics? How would you go about that? So, power structures.
+[1677.360 --> 1685.360] Oh man, that's such a big question. They are these invisible things. That's the thing: when we talk
+[1685.360 --> 1688.640] about reading the room in a corporate structure, that's what we're talking about. We're talking about
+[1688.640 --> 1693.840] power structures, we're talking about permissions, all these things. The first way to do it is an
+[1694.080 --> 1699.920] exercise where you sort of build a decision tree of the possibilities, like show people what the potential
+[1699.920 --> 1706.800] landscape could be. So for example, let's say all of a sudden a new CEO gets pulled in. And we
+[1706.800 --> 1714.320] want to say, okay, what is this CEO going through? Was this CEO just pushed in by the PE company? Does
+[1714.320 --> 1719.360] the CEO have performance-based incentives? Like, what are they trying to do? And just map out
+[1719.440 --> 1725.040] everything that, quote-unquote, is possible. And then start using the data and evidence that's coming in on a
+[1725.040 --> 1731.360] daily basis to cross off which one it is. And then sometimes just straight up ask. I think that's
+[1731.360 --> 1736.160] something that a lot of organizations don't do. I can't tell you the amount of times where I'm just like...
+[1736.160 --> 1742.240] so, I have a really cool perspective because I often work with the entire C-suite. So like the
+[1742.240 --> 1747.040] COO, the CTO, like everybody. And it's like, you two need to talk about this because this is
+[1747.040 --> 1751.200] blocking you; you two need to talk about this. But the amount of communication that just doesn't
+[1751.200 --> 1757.840] happen at like a personal level, or just a level that's blocking decision making, it's kind of
+[1757.840 --> 1764.960] crazy. I think organizations need to talk way more than they do in these siloed environments sometimes.
+[1766.080 --> 1770.720] If you were just able to have those conversations, you would be able to navigate and see the power
+[1770.720 --> 1775.600] structures way easier. And people just don't have that social skill set, the people skills, to
+[1775.600 --> 1780.720] sit down with someone. And I've just seen a lot of people get power structures wrong. Oh,
+[1780.720 --> 1788.160] I'll give you a good one. If you are falling in line with a power structure, it's often very
+[1788.160 --> 1793.840] difficult to navigate it. Meaning if it's like, oh my God, this person is this and this person is
+[1793.840 --> 1798.720] this, and I'm just this, you're very rarely going to be able to see eye to eye with that person,
+[1798.720 --> 1803.600] because you perceive them here and you perceive yourself here. And I feel like people do that a lot
+[1803.600 --> 1807.920] inside of organizations, and it doesn't give them the creative freedom to actually read what's going
+[1807.920 --> 1815.520] on. So the delta between where you are and where you perceive the other person to be, does that
+[1815.520 --> 1824.240] influence how you act? I mean, just in terms of what you have, quote-unquote, permission to do or say,
+[1824.240 --> 1829.600] it's all perception.
Like, I've worked with executives, I've worked with CEOs where
+[1830.240 --> 1836.160] all of their behavior suggests that they're the most open, honest, come-to-them-
+[1836.160 --> 1840.560] with-problems people. But people don't come to them with problems, because they're the CEO. Yeah, but they say
+[1840.560 --> 1844.400] it over and over and over again. And I ask why. And they're like, I don't want to bother the CEO
+[1844.400 --> 1849.920] with this. I'm like, they said seven times this year, come to me with this specific kind of problem.
+[1850.720 --> 1855.760] Yeah, you're right. But I just don't know. You get in your head like that. How much of that do you
+[1855.760 --> 1862.480] think is cultural too? Because I worked with a CEO who said that, but the minute you came to him
+[1862.480 --> 1866.880] with a problem, he would basically scream at you. That's the kind of stuff that I correct.
+[1866.880 --> 1872.560] So that's the bulk of my work: when you say something but your behavior contradicts it. A lot of these people often
+[1873.120 --> 1878.000] just don't understand, a lot of executives don't understand, the impact of their own behavior.
+[1878.720 --> 1885.440] So I have met people that are wonderful, wonderful, wonderful people. Yeah. But the way they give feedback,
+[1885.440 --> 1891.600] oh my god. It just rips the person apart. And they're like, no, I love them. They're one of my
+[1891.600 --> 1896.080] best people. I think they're great. I'm like, well, let's take responsibility for what that interaction
+[1896.080 --> 1901.120] looked like. And that's why Zoom video is so important for me. Because sometimes when you work with
+[1901.120 --> 1905.760] an executive, or you work with anybody, and you tell them something, they don't see it the way
+[1905.760 --> 1910.720] that you described it. But when you show them that feedback on video, I'm like, listen, go back 20 years
+[1910.720 --> 1916.240] in your career. If you were given this feedback, how would you feel? They're like, yeah. And I do
+[1916.240 --> 1922.160] this cool thing. It's an exercise that really works. So, you know, leadership principles and all
+[1922.160 --> 1926.320] that stuff. I'm not there to tell somebody how to lead. I'm not there for any of that. I'm there
+[1926.320 --> 1931.360] just to make sure that their intent is aligned with their behavior. So I do this thing where I'm like,
+[1931.360 --> 1936.880] close your eyes and imagine you're at your funeral. And everybody you've ever worked with in your
+[1936.880 --> 1941.600] entire life is there. What are the stories and things that they're saying about you? And I just
+[1941.600 --> 1946.160] make sure that those things are in alignment with their behavior. And they choose and solidify what
+[1946.160 --> 1950.480] those things are. And then I kind of hold them accountable to making sure that they're carrying out
+[1950.480 --> 1955.120] those things. It's kind of interesting, right? Because it's almost like a destination analysis,
+[1955.120 --> 1959.840] which is like, there's a difference between getting what you want and then wanting
+[1959.840 --> 1965.440] what's worth wanting. And then also the way that you employ a strategy to go about getting that
+[1965.440 --> 1971.440] thing. So like, you want the right destination and you have to know how to get there, but then also
+[1971.440 --> 1976.560] it's like, am I getting it in a way that I'm going to be happy with at the end of my life?
And you
+[1976.560 --> 1981.120] can think of Ebenezer Scrooge. Yeah. A great example of somebody who went after goals that were,
+[1981.120 --> 1986.000] consciously or unconsciously, about sort of being the wealthiest, most well-respected, most well-known
+[1986.000 --> 1991.280] person in his community, and accomplished all of those goals. But what did he want at the end of his
+[1991.280 --> 1997.920] life? He just wanted a redo, because the way that he pursued those goals was mutually exclusive
+[1997.920 --> 2002.800] with a life of meaning, as he later determined, which is sort of like this deathbed
+[2002.800 --> 2007.760] test, right? I'm calling it the Ebenezer exercise. I'll credit you with that, because that's
+[2007.760 --> 2013.840] the perfect sort of analogy. And also, about that destination principle, I think there's a major gap
+[2014.640 --> 2021.280] in the leadership and organizational culture world between theory and application.
+[2022.000 --> 2027.600] So somebody reads a book about radical candor or some concept like that, and then the way that they
+[2027.600 --> 2032.720] apply it is completely different. So for example, some people just have tonal aspects of their
+[2032.720 --> 2038.560] voice that society, or 80% of people, perceive as harsh. And they'll say, listen, I'm just being
+[2038.560 --> 2044.320] totally honest here, but... And it's like, okay, that's coming across a little bit strong, a little
+[2044.320 --> 2051.680] bit this. And what I've been fascinated by is not everybody sees that. So some people see it,
+[2051.680 --> 2056.080] and some people don't. And like, I'll play it back for them. And they go, I'm just giving them advice.
+[2056.080 --> 2060.160] And I'm like, you really don't hear the difference there? And they're like, no. And that's why
+[2060.160 --> 2064.000] you get some people in an organization that are hyper-literal and some that are more contextual, and
+[2064.000 --> 2068.800] they just clash. And that's why it's like, you sent a one-word answer, you said this, you said
+[2068.800 --> 2075.440] that, back and forth. But it's all this cool narrative of everybody seeing the
+[2075.440 --> 2079.360] world differently. And it's like my job to solve the puzzle of documenting that and bringing
+[2079.360 --> 2084.640] them together and showing them. And that's usually the best step. Like, I really believe in those
+[2085.360 --> 2090.080] personal operating manuals. I've seen some people do them that are so
+[2090.080 --> 2096.000] good. Just like, listen, here are my quirks, here are the things about me. You get it all out there
+[2096.000 --> 2101.200] first so that you create the narrative, and not somebody else imagining the narrative.
+[2101.200 --> 2106.000] Are there different techniques to enhance your communication? I'm thinking specifically with
+[2106.000 --> 2111.840] people who are hyper-literal. Because if you're not a hyper-literal person, you don't tend to think
+[2111.840 --> 2116.880] that way by default. It's harder. So it's easier to deal with the people that are more contextual,
+[2116.880 --> 2121.360] and dial them down, than the hyper-literal. It becomes like certain people are on spectrums,
+[2121.360 --> 2128.720] multi-axis spectrums. I sometimes have worked with people where I have to keep it
+[2128.720 --> 2136.160] basic, basic. And they're like, interesting.
So I had a client once, this is years ago, this
+[2136.160 --> 2142.240] may be like 14 or 15 years ago, in a bar in New York City at three o'clock in the
+[2142.240 --> 2146.640] morning. Not in a bar, I'm sorry, in a diner. He walks into the diner, and I was on a double
+[2146.720 --> 2151.600] date. And he walks in and he goes, Blake, how are you? And introduces himself to everyone. Hey,
+[2151.600 --> 2157.520] how are you? How are you? How are you? How are you? Everybody starts hysterically laughing. And I'm like,
+[2157.520 --> 2163.280] good, good. And in our session later, I was explaining to him, like, this is why the context was
+[2163.280 --> 2168.480] different. And I'm drawing circles of all the layers of context in that dynamic. And he's like,
+[2168.480 --> 2173.440] okay, I understand it now. And I was like, it was inappropriate for that dynamic. And I have so
+[2173.440 --> 2178.800] much empathy for people that don't see the world that way, because it's so, so, so hard. It's
+[2178.800 --> 2182.720] almost learned for people like that, right? Like, now next time he knows in that situation,
+[2182.720 --> 2187.120] but it's not intuitive for him. It's a learned behavior, an algorithm he's following.
+[2187.120 --> 2191.760] And it isn't intuitive for some people. And right now in this room, there are just these
+[2191.760 --> 2195.760] invisible norms of how I should act on a podcast, how you should be, like,
+[2197.040 --> 2203.040] and if you don't know those norms, you're sort of ostracized. And sometimes one of the things
+[2203.040 --> 2209.840] that happens is these hyper-successful people who violate those norms get modeled.
+[2210.800 --> 2215.440] And it's like, whoa, whoa, whoa, Elon Musk can violate all the norms he wants. Steve Jobs could
+[2215.440 --> 2219.760] too, but you can't. You don't have that authority. You don't have that contextual
+[2219.760 --> 2224.400] understanding of who you are. So you have to sort of play the game in the beginning. This always
+[2224.400 --> 2229.040] comes up with small talk. Like, people hate small talk. I hate small talk. Well, small talk is a
+[2229.040 --> 2233.040] path to big talk. So you give like a couple of seconds, a couple of minutes of small talk. You don't
+[2233.040 --> 2236.720] just walk up to someone like, hey, so tell me, what's the biggest conflict between
+[2236.720 --> 2241.200] you and your wife right now? It's like, what? It's very off-putting, right? There's a gradual
+[2241.200 --> 2247.280] process. But if you don't understand those things, life is hard. Totally. What strategies can we use
+[2247.280 --> 2255.040] to enhance our nonverbal communication at work? Our ability to communicate? Yeah. I think one,
+[2255.120 --> 2258.800] step one, obviously, and I've been repeating this over and over and over again, is get as much video
+[2258.800 --> 2264.640] as possible of you interacting. So everyone out there listening to this, if you work, I promise you,
+[2264.640 --> 2269.760] you have a good amount of video on Zoom, on whatever. You just basically want to rewatch those
+[2269.760 --> 2275.200] videos and make sure that your intent is aligned with it, with or without sound. Definitely
+[2275.200 --> 2278.880] with sound. What are you watching? Are you watching you? Are you watching other people's reactions?
+[2278.880 --> 2284.960] So the exercise of watching other people reacting to you is very valuable.
+[2285.040 --> 2288.640] In the sense that you understand when you're losing people, when people are disinterested,
+[2288.640 --> 2293.200] when they're engaged, so on and so forth. But also for understanding aspects of you. I believe that
+[2293.200 --> 2299.840] for most people, you need somebody else to help you with this. Yeah. People look at the things
+[2300.880 --> 2308.000] that don't matter. They'll say, I say "like" so much. Or I say "um," or I say
+[2308.000 --> 2312.640] "ah." Like, you're missing the picture of what you're trying to convey. So part of the process
+[2312.640 --> 2317.440] is coming up with almost like a series of words for how you want to
+[2317.440 --> 2321.520] come across in this interaction. Well, I want to be enthusiastic. I want to be interested. I want to
+[2321.520 --> 2326.000] be to the point. And then making sure that your behaviors are in alignment with that and landing
+[2326.000 --> 2331.600] for the people in that way. But a lot of it is breaking down video and just analyzing video.
+[2331.600 --> 2337.360] Like, that's the best way by far. Okay. And if we don't have access to video for whatever reason?
+[2337.360 --> 2346.400] If you don't have access to video, if I had to answer that, I would say I'd want
+[2346.400 --> 2352.080] you to record yourself, not in a work context. Record yourself with someone where there's true
+[2352.080 --> 2357.840] unconditional positive regard, with someone that you know is not judging you, a close friend.
+[2357.840 --> 2362.480] And I'd like you to look at your tonal patterns, your movement, how you are. That's how you should be
+[2362.480 --> 2369.200] at work. Okay. That's true for like 95% of people. And every once in a while, there's a certain
+[2369.200 --> 2375.120] percentage that's not, but that's, I think, a good standard to follow. And do you consider writing
+[2375.120 --> 2379.760] nonverbal, or is it verbal, because we sort of read it with that little voice in our heads? Yeah, I mean,
+[2379.760 --> 2384.720] I'll take nonverbal as everything. So all of our software analyzes language
+[2384.720 --> 2391.040] patterns, it analyzes tonality, everything I go into. You can't really separate nonverbal from verbal.
+[2391.040 --> 2396.240] I mean, maybe the only place you could do this is poker and a couple of other things. The truth is,
+[2396.240 --> 2400.640] the model for how we read behaviors is we're looking at behavior within a context,
+[2400.640 --> 2406.880] and we're coming up with the reason for why it occurs. If you don't have the words, you don't understand
+[2406.880 --> 2411.280] the context. That's why a lot of these body language people take a
+[2411.280 --> 2416.960] 10-second clip and go, you know, because my feet are sitting this way, that means this or
+[2416.960 --> 2423.920] that, and they just build this narrative. Words allow you to understand the consistency of
+[2423.920 --> 2428.880] that narrative. So it's so much better to have the full picture, as much data as possible.
+[2428.880 --> 2434.400] I like that. Is there a way that we can look at our emails and evaluate our communication by
+[2434.400 --> 2439.360] sort of looking outside in, or questions we can ask other people to evaluate how effective
+[2439.360 --> 2444.000] our communication is? So that's probably the best way: asking other people questions.
+[2444.720 --> 2447.680] But see, this is where it gets me. What would we ask? So this is where it gets interesting,
+[2447.680 --> 2452.880] right? So you have to have a leadership style and a presence and a history of being the kind
+[2452.880 --> 2456.240] of person that could sit there and be like, hey, everyone, I'm really trying to improve how I am
+[2456.240 --> 2460.320] in emails. So I just want to know, like, when I send you an email, do you have any weird stories
+[2460.320 --> 2464.640] of an email that I sent you where you thought I was being a certain way, or I was frustrated in a certain
+[2464.640 --> 2470.560] way? But a lot of leaders, a lot of executives, can't even create the space or the dynamic to
+[2470.560 --> 2475.840] do that. Most people will be like, no, you're fine. So I love these questions. We do all
+[2475.840 --> 2482.160] this perception research, right? So you go like this, you say, how do you think that interaction
+[2482.160 --> 2486.640] went, on a scale from one to 10? People are horrible at answering it. Like, do you like that person,
+[2486.640 --> 2490.560] on a scale from one to 10? And they're like, I don't know. But you ask somebody a question like,
+[2491.120 --> 2496.560] on a scale from one to five, how likely are you to invite this person to a dinner with your
+[2496.560 --> 2502.400] closest friends? And it's so easy for them to answer. So I like questions that are about
+[2502.400 --> 2507.920] predictive types of things, right? Not just about a trait, because it allows them to conceptualize
+[2507.920 --> 2513.280] and give a better answer. So I would be asking questions like, you know,
+[2513.280 --> 2518.880] just tell me, on a scale from one to five, how often do you leave an email from me feeling
+[2518.880 --> 2522.960] frustrated? Because they can easily give you the answer instead of searching for it.
+[2523.760 --> 2528.560] But let's be real. A lot of people don't want to do that work. Yeah. They're just going to
+[2528.560 --> 2535.200] protect themselves from that. And listen, also, I think we have a problem in this kind of self-help,
+[2535.200 --> 2543.200] personal development world, where I think a lot of, let's say, thought leaders in
+[2543.200 --> 2547.680] leadership are disconnected from what leaders actually go through. And the reason
+[2547.680 --> 2552.320] why I'm saying that is, all this stuff is easy in theory. But I see some of these people,
+[2552.320 --> 2556.800] like, they're pulled in six different directions. Their board wants them to do this. Their stock is
+[2556.800 --> 2564.000] at this. This team is this. And it's so much harder than just, oh, you know, be a little bit
+[2564.000 --> 2570.880] more happy in the morning. And I feel like, as a coach, you have to figure out,
+[2571.600 --> 2574.800] you know, what's the 80/20? Like, what are the small things that you're going to do that are going to
+[2574.800 --> 2580.480] have the greatest impact on the team? And then also, I really just struggle with this, in the sense that
+[2580.480 --> 2589.040] I've seen certain leaders be horrible and get incredible things done. They violate every book
+[2589.040 --> 2595.200] ever written about leadership, and work a team to complete death, and play this Machiavellian game of,
+[2595.200 --> 2600.400] just when they're about to be fired,
just when they're about to quit, giving them enough reinforcement.
+[2600.400 --> 2604.240] And I see the dark side of everything that I'm teaching, and I'm like, they know exactly
+[2604.240 --> 2607.840] what they're doing here. And I call them out, and I'm like, you know exactly what you're doing.
+[2607.840 --> 2612.320] And then you look at their KPIs and their metrics and how the organization is structured, and
+[2612.320 --> 2618.800] they're actually in perfect alignment with how they should be acting. It's tricky.
+[2618.800 --> 2622.080] It's a tricky thing. And I don't think that's spoken about enough.
+[2622.080 --> 2627.600] Do you think all behavior is contextual or environmental in that case? Because their environment is
+[2627.600 --> 2631.920] those KPIs. The culture and the operating environment of the organization is
+[2631.920 --> 2637.120] your environment. So the way to address that is to change the environment. Yeah. I think all
+[2637.760 --> 2643.120] optimal behavior is within a construct of your environment. So the more context you have,
+[2643.120 --> 2647.920] the more you understand the environment, the more your behavior can be designed or
+[2648.960 --> 2655.120] modified to navigate that environment. Are there things that we can do in our environment, that we
+[2655.120 --> 2660.640] control individually, outside of the context of an organization, that we can use to improve our
+[2660.640 --> 2667.680] behavior, that come to mind for you? Set up cameras. Also, just set it up so that you win.
+[2667.680 --> 2673.440] Like, I feel like certain structures and certain environments, just the way an office is
+[2673.440 --> 2679.520] laid out. I worked for one person a long time ago. He had a
+[2679.520 --> 2683.920] really large desk, and then there was a chair. And every time somebody came to the office,
+[2683.920 --> 2688.160] he would walk out from behind his desk, shake their hand, and then bring them to a couch
+[2688.160 --> 2693.120] setup where they were facing each other one-on-one. And that's before Zoom and before COVID. This is how he
+[2693.120 --> 2697.760] did all his one-on-one sessions. And it was just a simple way of making sure the person
+[2697.760 --> 2701.840] feels really heard, because you're just aligned with them. And you're looking at them and you're
+[2701.840 --> 2705.440] facing that direction. I was like, oh, that's a really cool way of doing that. And I mean, we do this
+[2705.440 --> 2710.400] all the time with our kids, with friends. We grab our phone, and we're talking to them, but
+[2710.400 --> 2716.560] we're not really talking to them. We're not. There's a huge advantage to be gained there.
+[2716.800 --> 2721.040] I remember somebody who met Bill Clinton said something. And I was like, everybody's got so many
+[2721.040 --> 2726.080] Bill Clinton stories. No, but this one's not about that, though. Yeah. Well, so this was like,
+[2726.080 --> 2730.880] and I was like, oh, what did you take away from that? And this person looked me in the eye
+[2731.120 --> 2738.160] and they said, I felt like the most important person in the room for 45 seconds. How did he do that?
+[2738.160 --> 2745.120] How do we create that? So the amount of people that have given me that Bill Clinton story, I'm like... and I'm
+[2745.120 --> 2751.040] so curious, because I've seen some footage of him in interactions.
A close friend told me this story
+[2751.040 --> 2758.000] once: he was with him and they walked into like a conference center. And there
+[2758.000 --> 2763.360] was a cleaning lady cleaning. And she got really, like, I'm not supposed to be here. And he walked
+[2763.360 --> 2768.320] right up to her, and he looked at her with a level of presence and focus for two and a half or
+[2768.320 --> 2773.520] three minutes that people were like, huh, you know? And part of me is like, first,
+[2773.520 --> 2777.200] there's definitely this halo or this thing of, he's the president or former president of the United
+[2777.200 --> 2783.200] States, right? But there are also these nonverbal things, just the piercing eye contact and the way
+[2783.200 --> 2788.320] that he looks deep into your soul. There's definitely a balance of both. I am so curious,
+[2788.320 --> 2793.520] though. That was what my friend said. Yeah. He never broke eye contact. He was just very intense.
+[2793.520 --> 2799.280] But it was like a warm intensity, if that makes sense. Not like a... There's also, I think this is
+[2799.280 --> 2805.280] all about that perceptual bell curve. So I think there are qualities in tonality, the
+[2805.280 --> 2811.280] gaze, the shape of someone's face. It's kind of like those whole old ancient
+[2811.280 --> 2816.000] practices of determining if somebody's going to be a criminal or not based on their facial structure.
+[2816.000 --> 2820.640] And the interesting thing about that is, our culture supports that. Like, Willem
+[2820.640 --> 2826.080] Dafoe is usually the villain because he's got that really angular type of face.
+[2826.080 --> 2830.240] He was perfectly cast in one of the Spider-Man movies, right? And there are certain things
+[2830.240 --> 2835.920] about... there's a professor out of UPenn that's done a lot of stuff about impressions
+[2835.920 --> 2840.720] and the structure of someone's face. And that stuff's an advantage in life. Like, some people have
+[2840.720 --> 2845.840] a face that's going to read as more trustworthy, or a face that's going to be more aligned with attraction,
+[2845.840 --> 2849.840] or whatever it is. So I definitely think it's multivariable, right? There are so many different
+[2849.840 --> 2854.960] variables, and when they come together, you get that really gifted communicator that just has
+[2854.960 --> 2858.800] that ability. But I will say one thing. One of the big things that I've noticed in the best
+[2858.800 --> 2866.880] communicators is they have range. So they have that ability. There are a lot of shifts in their tonality.
+[2866.880 --> 2872.400] There's a lot of movement. You kind of can't predict what the next word is going to be.
+[2872.400 --> 2875.520] I just think there's something there that the brain loves; it loves the chaos.
+[2875.520 --> 2879.120] Because you can't predict it. Exactly. And when you're not, it's like autocomplete in Google.
+[2879.120 --> 2882.800] Exactly. No, that's the best way of thinking of it. Like, you know what's going to happen. And then
+[2882.800 --> 2889.120] I think there was one cool study, I think out of UPenn, that found that people that talk
+[2889.120 --> 2895.520] faster are listened to at a greater level, even though what they're saying is nonsense.
Right,
+[2895.520 --> 2900.560] like that happens every once in a while. I'll listen to somebody and I'll be like, that sounds really good.
+[2900.560 --> 2904.640] And then I replay it, and they say the same thing three times. I even catch myself doing that. I'm
+[2904.640 --> 2909.680] like, I just said the same thing three times. But I'm passionate about it, so it sounds good.
+[2909.680 --> 2914.880] You know? Yes. I mean, how are we on the other side of this, though?
+[2914.880 --> 2921.920] Like, one of the biggest things that you can do in life is pick out people who are incompetent
+[2921.920 --> 2926.880] but sound competent from people who are competent. How do we go about doing that?
+[2926.880 --> 2934.960] So it's tricky. It's a tricky one, in the sense that I believe that certain people
+[2935.680 --> 2940.720] speak with total conviction. I was a lot like this when I was younger. I would talk about things so confidently
+[2940.720 --> 2945.680] that I knew nothing about. And my wife really changed my perspective on this when we first started
+[2945.680 --> 2950.080] dating. She was like, you have a responsibility with that level of conviction. And I was like,
+[2950.080 --> 2955.440] you're 100% right. Now I'll definitely say, I'm not really sure, or, this is my opinion about
+[2955.440 --> 2960.960] this thing, not like it's absolute fact. And I think when you find somebody
+[2961.040 --> 2967.840] like that... I like to look for "I don't know." So whenever I talk to any expert,
+[2967.840 --> 2971.920] if there's always an answer... sometimes it's nice to hear, I don't know, or, there's not always an
+[2971.920 --> 2976.160] answer to something, right? Because I feel like there's a compulsion to always
+[2976.160 --> 2982.880] contribute, or to always be confident. And the truth is, it's just not possible. And then also, there are
+[2982.880 --> 2990.000] just little traits of humility, of when someone was wrong or how they were wrong or why they were
+[2990.000 --> 2995.760] wrong, that come up in language and are not prompted. Like, it's refreshing to hear an expert say,
+[2995.760 --> 3002.400] I was really wrong about that. I think this is one thing that Huberman does so well, of just
+[3002.400 --> 3012.240] being this kind of senior, very distinguished professor, tons of research, but having this
+[3012.240 --> 3018.720] boyish kind of passion for science. And I think that comes across.
+[3019.360 --> 3024.800] That's the new expert. Not someone that's just, I have all the answers, trust me, I'm
+[3024.800 --> 3030.640] right, but someone that's able to have humility and adapt over time. I like that. Are there
+[3030.640 --> 3037.520] other things that stand out in terms of identifying incompetence or even deception? And I'm relating
+[3037.520 --> 3042.160] those two because sometimes people are trying to deceive us, and sometimes they're deceiving us in
+[3042.160 --> 3046.880] part because they're playing, you know, a fake-it-till-you-make-it sort of thing,
+[3046.880 --> 3052.480] where it's not necessarily intentional deception, but it is sort of masking a
+[3052.480 --> 3060.400] base level of incompetence. Yeah. So I think the best way...
So on the nonverbal behavior +[3060.400 --> 3065.920] spectrum, dating is probably the easiest paradigm to understand and deception is the hardest, +[3066.560 --> 3072.320] because it's so multifaceted. It's so complex. I don't know if there ever will be a system that +[3072.320 --> 3076.800] can predict whether or not someone's lying or not, because it's just, it's a nuance that's very +[3076.800 --> 3081.920] difficult to encapsulate. I think the easiest way is to throw out fake information and see how people +[3081.920 --> 3088.480] respond. So I've done this in academic settings where I meet someone that I just, I think they're +[3088.480 --> 3092.720] a little bit full of it. So I just like make up studies. I'm like, have you read that? Like, +[3092.720 --> 3097.600] Dillington study in like 2014 and they're like about that. And then I'll say something. They're like, +[3097.600 --> 3101.280] yeah, I think I've read it. And I'm like, yeah, well, they did the double blind and like, +[3101.280 --> 3106.960] yeah, yeah, it was, it was great. I just made it up. And this is the trick though. It's not to make +[3106.960 --> 3113.040] a judgment call about that person. So I don't do something like that and go liar. How can it? But +[3113.760 --> 3119.360] I know that in terms of how they're willing to be seen, they're willing to sacrifice that +[3119.440 --> 3123.840] for being right in on the know, right? And they want to be perceived that way. So it's an +[3123.840 --> 3128.320] interesting sort of character thing. And there's questions like, you know, do you want your, +[3128.320 --> 3133.440] your head of business development, your salesperson to respond that way? Like, you know, it gets +[3133.440 --> 3138.000] interesting. But I think that's the easiest way. Because ultimately, I mean, there's a very innate +[3138.000 --> 3144.800] biological function in us that is self-preserving and the self-preserving instincts that we have +[3144.880 --> 3150.720] mean that we want other people to like us because for thousands of years, if we weren't liked, +[3150.720 --> 3156.400] we died. And by liked, I mean, we got kicked out of the tribe. So sometimes we would fake things +[3156.400 --> 3162.080] that would imagine to stay within the tribe rather than get exhumed a cato from the tribe, +[3162.080 --> 3169.120] which is certain death. Yeah. People have to be reminded. We're so bad as humans of looking at time. +[3169.760 --> 3176.400] But the amount of time we've been around on this planet, utilizing exactly what you're talking +[3176.400 --> 3181.040] about is like this. And then the time that we're navigating with these crazy structures, +[3183.680 --> 3189.840] and it plays such an important and valuable role in how we function. I always put my wife owns +[3189.840 --> 3196.400] a sleep optimization company. And she's all about circadian rhythms and very anti-these lights, +[3196.560 --> 3201.520] and what's so fascinating is like, if you look, we have had our sleep dictated by the sun +[3201.520 --> 3206.880] in the same way for so long. And just over the past like 90 years or 80 years or whatever it's been +[3206.880 --> 3213.680] completely altered. Yeah. That's so, so, so new. And just we are the time that we're going into right +[3213.680 --> 3221.920] now is the most socially complicated ever. There's nothing ever like it. We've got, you know, +[3222.880 --> 3228.240] tons of political everything. 
Everything is so nuanced right now that it's not just
+[3228.240 --> 3232.720] about your survival. It's about survival and looking good when you post this Instagram post, and
+[3232.720 --> 3237.120] making sure it doesn't offend this person and that person. It's getting more complex, not less.
+[3237.120 --> 3240.480] And it's happening quick. It's like that, you know, hockey-stick curve of just,
+[3240.480 --> 3244.320] whoop, complexity is driving up. Well, we did that right before we started. I moved a book,
+[3244.320 --> 3248.080] right? Because I didn't want that book on YouTube. And I didn't want people commenting on it.
+[3248.080 --> 3251.680] So that was totally my perception of how people would respond to it.
+[3251.680 --> 3255.680] Yeah. I mean, all these weird things. Like, I'm wearing two different socks right now,
+[3255.680 --> 3259.440] two different kinds of socks. And I was like, no, maybe I'll cover my sock. And then I'm like, I don't
+[3259.440 --> 3267.920] care about my socks. Like, we do that. We over-index on these things that other people
+[3267.920 --> 3273.440] don't even notice. So when I was teaching psychology at CUNY, I was wearing
+[3273.440 --> 3279.200] the same J.Crew pants every day because they were really comfortable. And I started thinking,
+[3279.200 --> 3283.680] I'm just wearing the same pants every single day. And I have like two pairs. And I was like,
+[3283.680 --> 3288.080] I think people are going to know, but I couldn't find these pants anymore. So I was in my head
+[3288.080 --> 3293.680] about these pants, and my students looking at my pants. So I was like, no, let me ask. So I basically
+[3293.680 --> 3298.080] set up this cool thing, like a month into the course, where I was like, hey, everyone. I have like 140
+[3298.080 --> 3306.240] students or whatever. I was like, there's something about me that hasn't really changed. What is it?
+[3306.240 --> 3312.000] And I had everybody fill out a little form and give it to me. And only one person knew;
+[3312.000 --> 3318.960] guess what, they worked at J.Crew. And they said, you really love our pants. You must have so many.
+[3319.680 --> 3326.480] And it was just this moment where I was like, no one cares. No one really cares. And I think we
+[3326.480 --> 3332.080] spend so much time on those things, and not enough time on connection, and not enough time on really
+[3332.080 --> 3337.200] getting the other person, getting out of our own heads and into what's actually occurring.
+[3337.200 --> 3342.320] So it's interesting, because I have a different approach to this, which is, I believe nobody cares
+[3342.320 --> 3349.040] in day-to-day life and interactions. But a video lasts forever. So what people care about today and what
+[3349.040 --> 3355.440] somebody's going to pick out in 10 or 15 years are two different things. And then that adds a
+[3355.440 --> 3363.360] layer of complexity to thinking about how things play across time, not just about what people remember
+[3363.360 --> 3367.840] or their experience in the moment. Now I can go back and analyze that video and
+[3367.840 --> 3373.280] nitpick everything. Well, this is my big problem too. So I've had to get a lot of personal coaching
+[3373.280 --> 3378.000] with my coaches about video. So if I had to give a presentation in Madison Square Garden
+[3378.720 --> 3384.560] tonight, I'd be ecstatic.
But if I have to make a video that I know is going to be out there
+[3384.640 --> 3390.720] on the internet, I overthink it. I overanalyze. I mean, there are videos of me saying things... because also, I mean,
+[3390.720 --> 3396.240] I've changed my own opinions on things. So I have videos out there that I made in 2016 about
+[3396.240 --> 3401.280] poker tells where, over the past eight years, I've completely changed my mind and thoughts.
+[3401.280 --> 3407.280] There are videos of me saying things that I completely disagree with today. Oh yeah,
+[3407.280 --> 3412.880] I struggle with the same thing. I think I have to go with... what's the Rick Rubin quote? Rick Rubin had a
+[3412.880 --> 3417.200] great quote about how the creative process is like you're creating something for the time.
+[3417.920 --> 3422.960] But I struggle with the same thing with video, of just, oh no, how is this going to
+[3422.960 --> 3428.640] be perceived? Or, you know, I'll be more effective at this kind of conversation a year from now. This
+[3428.640 --> 3434.400] is my second podcast in God knows how many years, those types of things. But do people really
+[3434.400 --> 3439.920] care? I think we care more than they do. You brought up dating. What are the things that
+[3439.920 --> 3446.000] we can take away from this conversation and apply to dating, whether we're a guy, a girl,
+[3446.000 --> 3453.120] it doesn't matter? How can we read into the first few interactions with somebody,
+[3453.120 --> 3457.920] whether I'm trying to determine if I want to spend my life with this person or see them more? Or
+[3458.640 --> 3463.920] how do we use this information in that specific context? Yeah, dating is a tricky one,
+[3463.920 --> 3470.400] because dating is still this weird game. Like, in the beginning, it's this game
+[3470.400 --> 3478.160] of delayed attraction and delayed gratification. And there's so much nuance to it. I'll tell you
+[3478.160 --> 3483.920] one thing. So I used to do this; I used to teach a dating class in New York City.
+[3484.800 --> 3489.520] And one of the cool things I would do is ask people in the room, what's your ideal
+[3489.520 --> 3494.720] person? Write a list on a piece of paper. And then I go, okay, the last three
+[3494.720 --> 3500.160] relationships in your life, how many of them met that criteria? And they would all laugh, and
+[3500.160 --> 3506.240] it would never be one-to-one. Right? I think people have this really weird concept; it's almost like
+[3506.240 --> 3510.400] mimetic desire. There's a gap between what you think you want and what you actually want.
+[3510.400 --> 3515.040] And I feel like the first step in dating is really, what do you want? And if
+[3515.040 --> 3518.640] somebody's super young, in their 20s or whatever, you don't know. It's going to take
+[3518.640 --> 3524.480] some time to figure it out. But I think doing that work of, what makes me happy?
+[3524.480 --> 3529.680] What am I looking for in somebody else? And not just going for surface traits. I mean,
+[3529.680 --> 3533.280] attraction is an instant quality for the most part, right? You see somebody, you know if they're
+[3533.280 --> 3537.120] attractive or you're attracted to them or not. But there are these deeper things. Like,
+[3537.120 --> 3543.680] I remember how me and my wife met; it was so funny.
We met like 12 years ago;
+[3543.680 --> 3547.680] she came to my class. I had a class called Body Language Explained. She was in the second class ever.
+[3548.320 --> 3554.880] And she walks in, and I was like, okay, what do we got here? She sits down, and then
+[3554.880 --> 3562.080] during our conversations, she saw my bookshelf, and she said, you know,
+[3562.080 --> 3566.000] you remind me of Tim Ferriss a little bit, like a New York City Tim Ferriss. And I was like,
+[3567.040 --> 3571.840] huh, this attractive girl. This is before Tim was TIM FERRISS, right? The 4-
+[3571.840 --> 3577.040] Hour Body had just come out, after The 4-Hour Workweek. And she had really read that book.
+[3577.120 --> 3580.560] She's like, oh, I love him. I love his books. And I was just kind of like,
+[3580.560 --> 3585.680] huh, it was this qualifier that was different. And I was like, yeah...
+[3585.680 --> 3589.920] I never had the concept that my partner was going to share my interests at that
+[3589.920 --> 3594.880] age, right? Just because the women I had met weren't interested in this kind of stuff. And I met
+[3594.880 --> 3598.640] somebody that was really interested in self-development. And I was like, I didn't realize how
+[3598.640 --> 3603.840] important this was to me. And it was so funny, because then it was all these things that were
+[3603.840 --> 3609.600] truly important to me at the time, and continued to grow, that solidified this relationship.
+[3609.600 --> 3616.160] And then there's all the other game-type stuff that you've got to do. And it's very
+[3616.160 --> 3622.080] difficult to even articulate. I can tell you what not to do. I can tell you: try not to be someone
+[3622.080 --> 3629.120] you're not. Everybody's got a song playing, and sometimes you just need to dial the volume up
+[3629.120 --> 3633.920] and down. Some people try to play a completely different song. So they try to become someone
+[3633.920 --> 3638.000] completely different, to be more of a chameleon for that person. And after three or four
+[3638.000 --> 3643.280] dates, it's just like, this is not really me. You don't want to be too much yourself, too
+[3643.280 --> 3649.040] authentic and too straightforward, either. But it's funny, on first dates: my first study ever was on dates.
+[3649.040 --> 3654.560] So I rented a restaurant in New York City. This was in 2008, during the subprime mortgage crisis. I put an
+[3655.040 --> 3660.800] ad on Craigslist: $50 to set you up on a 30-minute date. I got like 300 applications. And I
+[3660.800 --> 3664.800] set up a series of dates. People came in for a 30-minute date, and then the guy would leave, and I would
+[3664.800 --> 3670.480] ask what was going on. And what we found that was so interesting is, when you show people these
+[3670.480 --> 3675.680] dates, what they perceive to be the best date is not the best date. So the best date is the one
+[3675.680 --> 3681.040] that was a little bit more awkward, a little bit weird, but it had more depth to it. And those people
+[3681.040 --> 3686.080] connected at a much deeper level than the surface ones. And then you also see
+[3686.080 --> 3692.480] people's biases. So I show people two people on a date, and in one, the female is, by
+[3692.480 --> 3697.360] society's standards, far more attractive than the male.
And people immediately don't think that she's
+[3697.360 --> 3702.560] interested in him for that reason. So it's just you projecting all your stuff onto the
+[3702.560 --> 3707.360] world around you every single second. And that study really showed it. Are there questions we can ask
+[3707.360 --> 3714.560] on dates that are more revealing of who the other person is? I don't... the best paradigm
+[3714.560 --> 3720.400] for dates is storytelling, never questions. Because you fall into this question-and-answer mode. All right,
+[3720.400 --> 3726.000] so if we write a story out, let's say, before we started this, you were
+[3726.000 --> 3731.280] telling me a story about your kids, right? And we list that out. There were so many threads of
+[3731.280 --> 3735.600] connection between me and you in that story. It would have been impossible for you to do that in
+[3735.600 --> 3740.240] a question and answer. So even when I was helping people with
+[3740.240 --> 3744.560] dates, I would say, start off with a story. As soon as you get to the table, it's just like, hey,
+[3744.560 --> 3749.040] you know, the craziest thing just happened... just get into some sort of story.
+[3749.040 --> 3756.240] A story still is the single most powerful communication tool, bar none. The problem is people
+[3756.240 --> 3761.520] in our society are now going into performance storytelling. Yeah, yeah, which is a lot different
+[3761.600 --> 3766.160] from social storytelling. So performance storytelling is, "when I was seven years old,
+[3767.120 --> 3771.360] something happened that changed the course of my life." And I find it kind of icky.
+[3772.480 --> 3775.520] A real story is just telling a story like you tell your friends, but it has a beginning and an
+[3775.520 --> 3781.120] end. And I find, tell more stories in dating. It will really show you how you connect to
+[3781.120 --> 3788.320] someone. That's fascinating advice. How do nonverbal styles work across cultures? Oh, they're so
+[3788.320 --> 3794.080] different. The trick is this. So I have this thing called the anywhere-on-the-planet approach.
+[3794.080 --> 3799.760] And the concept is, you can be dropped anywhere on the planet and you can observe and you
+[3799.760 --> 3806.640] can look at things. So you can look at the proxemic differences between a Midwestern interaction
+[3806.640 --> 3812.960] versus the Middle East. In the Middle East, people talk closer together. The behavior will be perceived
+[3812.960 --> 3816.720] as maybe a little bit more aggressive in the way that they move. It's a cultural construct.
+[3816.720 --> 3821.040] It's how they interact. It's not aggressive to them. Not aggressive to them. They're fine. In New York
+[3821.040 --> 3827.040] City, the way I was raised around my friends, we were brutal to each other, constantly making fun
+[3827.040 --> 3832.560] of each other, insulting people left and right. It was part of the cultural construct. Or even the way
+[3832.560 --> 3837.600] you walk in New York. Oh, I mean, I can still pick out a tourist just by how they're walking,
+[3837.600 --> 3842.480] right? I mean, still to this day, everywhere I go, I'm like, can you hurry up already? Exactly.
+[3842.720 --> 3847.520] Where do you come from? Yeah. Like, what are you doing? I still get frustrated.
+[3847.520 --> 3852.080] How does one person take up the whole sidewalk? Yeah.
Like that. So that's one of the
+[3852.080 --> 3856.400] interesting things. Now I just get really interested. So I see someone on the elevator,
+[3856.400 --> 3862.480] and there are two people and they're blocking the other side. And I'm just like, you really don't
+[3862.480 --> 3867.440] have the concept that other people exist? And I don't think people do. I don't think people have that
+[3868.080 --> 3873.040] theory of mind that other people are interacting in this world. And that's why it's so infuriating
+[3873.040 --> 3878.640] for the people who do have it. Like, even on the plane yesterday, somebody was literally
+[3878.640 --> 3884.320] holding everybody back. Everybody is waiting in this line, and they're trying to get their little
+[3884.320 --> 3889.040] luggage out, and then their wallet falls, and they have to pick their wallet up
+[3889.040 --> 3893.680] and tuck it into their... there's a world happening around you. What's going on?
+[3894.640 --> 3900.000] That's my one prompt for everybody listening: there's a world happening around you.
+[3900.000 --> 3906.880] And I think it's very healthy. Just sometimes make your behavior about others and not yourself.
+[3906.880 --> 3912.480] So how can you optimize your behavior for the people around you, as opposed to what you're going
+[3912.480 --> 3917.360] through? It's a very helpful exercise for people that are in their heads. And I think the best
+[3917.360 --> 3921.520] communicators and the people that are the most well-liked are consciously doing this. They're
+[3921.520 --> 3926.800] putting other people first, and it's just sort of like a reaction, a way of doing it.
+[3926.800 --> 3932.400] I mean, I walked in here, and just as a default approach, they were unloading gear,
+[3932.400 --> 3937.440] and I was like, hey, do you need help? And I picked up a stand and brought it in, just
+[3938.000 --> 3941.920] because I know what it's like to carry all these damn stands down the block. It's just putting
+[3941.920 --> 3946.720] others first. It can change your life and change what you get back. You just can't do it from a place
+[3946.720 --> 3951.600] of trying to get something. It's super weird if all of a sudden I bring this stand in and I'm like,
+[3951.600 --> 3956.560] hey, guys, I have a shoot after this, would you be willing to help? Because you'd be like, I see where
+[3956.560 --> 3961.760] this is going, right? So it's insincere. It's totally insincere. Right? There's an agenda to it. And I think
+[3962.960 --> 3967.200] approaching life from that perspective, I promise you, I always say this, you can never measure the
+[3967.200 --> 3975.040] ROI of a social interaction. You have no idea what one interaction will lead to. I mean, I went to
+[3977.040 --> 3982.000] a meetup and I met you, and now I'm here, right? You just never know what one thing is going to do.
+[3982.000 --> 3987.280] So it makes sense that you want to show up in a way that's about others as much as possible in
+[3987.280 --> 3992.560] these interactions, because it comes back. And even if it doesn't come back, I
+[3992.560 --> 3996.720] tend to think most people feel better about themselves knowing they're for others than
+[3996.720 --> 4002.720] for themselves.
My good friend Peter Kaufman has a saying, which is go positive and go first, and
+[4002.720 --> 4008.640] you really unlock the world in a way that you can't even anticipate the second, third, fourth,
+[4008.640 --> 4014.400] fifth-order consequences of that. But his theory is most people don't go positive and go first.
+[4014.400 --> 4017.600] They want to go positive, but they want the other person to go positive first.
+[4017.600 --> 4022.880] Yes. So they sit around waiting for people to recognize their potential, for the world to give them
+[4022.880 --> 4027.040] what they're owed. And because they're doing that, nothing happens, because there's
+[4027.040 --> 4032.880] no action, so there's no response. That's absolutely genius. And in my opinion, that's what leadership
+[4032.880 --> 4039.680] is. Leadership is stepping up and doing that, not waiting for it. I'm going to lead.
+[4039.680 --> 4043.360] And I love it, because everybody can be a leader. You've been on an elevator and it's
+[4043.360 --> 4047.200] awkward, and the person who speaks up first and everybody laughs, that's leadership
+[4047.200 --> 4052.400] right there. You broke the cultural norm and you said something. I always do this sometimes;
+[4052.560 --> 4056.240] I'm like, oh, it's the awkward elevator silence. It's perfect, everybody laughs.
+[4056.240 --> 4061.120] Laughs instantly. Right. I want to switch gears a little bit. What's the Rockefeller method?
+[4062.080 --> 4069.200] Okay. So I read, I think it was Titan, the book, and there were a lot of lessons in that book,
+[4069.200 --> 4075.040] but there was one story that fascinated me. John D. Rockefeller was in an oil-
+[4075.040 --> 4082.000] barreling facility, and he was watching his group of people barrel these oil barrels. And,
+[4082.000 --> 4086.640] back in the day, you would take tar and you'd put tar all around the barrel, and then you'd
+[4086.640 --> 4091.760] hammer it in. And he's sitting there watching this, and he's going, why do you use nine or
+[4091.760 --> 4097.440] ten or whatever pieces of tar? And they were like, I don't know, Mr. Rockefeller, it's just what we do.
+[4097.760 --> 4102.960] And he was like, well, can you find out? Do a little study and find out how
+[4102.960 --> 4109.920] few you can use without ruining the integrity of it. And I was like, that's absolutely
+[4109.920 --> 4114.160] fascinating, that way of thinking. And it's so cool, just Rockefeller sitting
+[4114.160 --> 4120.160] there as a titan of industry, looking at this really small process and asking, how can we
+[4120.160 --> 4124.960] optimize that? So after reading that, I made this Rockefeller method internally, where every quarter
+[4124.960 --> 4131.040] I would view things from that perspective, right? And it comes up in weird, like, SaaS ways
+[4131.040 --> 4135.680] right now, where it's like, oh, wow, we spent a lot of money on this. Reach out to them and see
+[4135.760 --> 4141.200] if we could maybe get a bulk discount. And the amount that comes back from that method is absurd.
+[4141.200 --> 4146.160] It just works. And then I also just think about it in my own life: where am I
+[4146.160 --> 4153.360] putting extra tar on that I don't need to put tar on? And it's so helpful, because I have a very big
+[4153.360 --> 4158.560] control, do-it-myself ethos.
It's often hard for people to work for me, because I'm like,
+[4158.560 --> 4162.640] I'll just do it. I'll just do it. And I almost have to get people around me where I'm like, listen,
+[4162.640 --> 4166.720] the thing I value the most is when you say, no, I'll do it, and I'll do it better than you.
+[4167.280 --> 4171.760] And then you back that up. I need that sort of culture around me. But the Rockefeller
+[4171.760 --> 4177.920] method was very helpful for that. What else did you take away from that book, Titan? Or lessons
+[4177.920 --> 4186.800] from Rockefeller? So many. Like, when he was discussing how he didn't know how to
+[4186.880 --> 4192.960] give away his wealth, I thought it was the most interesting problem ever. He's like, I need to
+[4192.960 --> 4198.400] build a whole operation; I don't know how to do this anymore. And how people were constantly asking him. Just
+[4199.360 --> 4207.120] the ruthlessness of some of it all, the early monopolies, back when that was not regulated, and just squeeze
+[4207.120 --> 4212.640] out, squeeze out, squeeze out everything. I found that really interesting. Yeah, I mean, the funny
+[4212.800 --> 4217.360] thing is, the thing I take away most from that book is that one story, the Rockefeller method,
+[4217.360 --> 4221.920] out of everything. Also, I learned this in graduate school from a really good professor. I wish
+[4221.920 --> 4227.120] I remembered his name. He was teaching, it was like a terrorism class or something like that.
+[4227.120 --> 4231.840] And the way we did it was we had to read six books in the class. Terrorism. Yeah, so I got
+[4231.840 --> 4236.240] my certificate in terrorism studies. Okay. Because I went to John Jay, which is a criminal justice
+[4236.240 --> 4240.880] school. Yeah, yeah. Sorry. Sorry. It was really cool. We had people from
+[4241.760 --> 4246.960] intelligence agencies come in and give presentations and do all these things. So what he had us do is,
+[4246.960 --> 4252.800] he had us read like six books. And the only assignment for the entire class was to read a book,
+[4252.800 --> 4257.920] take five passages out of the book, highlight them, and write why they're important to you.
+[4259.440 --> 4264.800] Genius. I still do that to this day. I have
+[4264.800 --> 4269.040] a Kindle connection where I just highlight certain things, because you highlight so much and
+[4269.040 --> 4273.760] then it all goes in one ear and out the other. But if you highlight five things that could actually
+[4273.760 --> 4278.640] improve or impact your life in a certain way, or change your perspective, you get so much more tangible
+[4278.640 --> 4283.280] value from a book. So tell me your workflow. You go from Kindle to Notion, if I remember correctly.
+[4283.280 --> 4291.600] Yeah, good. Kindle to Readwise, Readwise to Notion. Okay. And then what do you do with it? So
+[4291.600 --> 4296.560] I have this system where different colors mean different things. I'm trying to improve my writing right
+[4296.560 --> 4303.280] now. So blue is, I like the way a sentence or a paragraph is structured. Red is, I need to do
+[4303.280 --> 4308.800] more research on this. Yellow is one of those things that I want to take away from
+[4308.800 --> 4313.760] a lifetime perspective. And I also have a difficult time retaining a lot of what
+[4313.760 --> 4319.280] I read.
I'll remember aspects of this day for the rest of my life. Like I'm very good at like +[4319.280 --> 4324.160] experiential memory. But like things just go in one ear and out the other ear. So I've been doing a +[4324.160 --> 4328.480] better job of like highlighting things that I want to remember. And I've been wanting to use +[4329.440 --> 4334.000] this like Japanese method of flashcards. Yeah. It's just basically creating flashcards for my events. +[4334.000 --> 4337.360] Because I guess it's hard to keep things. So I remember when I used to teach, I used to like have all +[4337.360 --> 4341.920] these like cool things in my head that I could pull from. And now that I'm doing more like podcasts, +[4341.920 --> 4346.800] like I'll know a ton of researchers that I could reference. But I forget their names. Yeah. +[4346.800 --> 4351.120] So, yeah, that person from UPenn, like I'd love to know their name. So just trying to be more +[4351.120 --> 4355.520] intelligent about how I remember the sources and the things that are leading up to it. And then +[4355.520 --> 4362.240] there's just certain like people or writers or authors that I use as like anchor points. Like one +[4362.240 --> 4367.760] of them's like Robert Sapolsky, the way he gets into all sorts of determinism and behavior. He's +[4367.760 --> 4375.200] like, while he's deep in biology, he's pretty cross-discipline, like he'll pull from different areas. +[4375.200 --> 4379.200] So it sends you down all these rabbit holes. And I like to document it in the best way that I +[4379.200 --> 4384.960] possibly can. But I think AI is making that a lot easier now. Well, let's switch to you as a +[4384.960 --> 4394.160] researcher. I'm curious about your process around using ChatGPT to get up to speed on something, +[4394.160 --> 4401.680] or how do you leverage AI to go about learning a new subject? Yeah, it's ridiculous. Because six +[4401.680 --> 4406.800] months ago I was like, I don't know, like it would just make things up. And now it's great. So one +[4406.800 --> 4412.400] of my big things is, I have a prompt that, if I'm looking at an academic discipline that I +[4412.400 --> 4418.560] don't know a lot about, I say, I want you to imagine this academic discipline as a tree. I +[4418.560 --> 4425.920] want you to imagine the +[4425.920 --> 4432.400] base of the tree as the discipline, the branches as the subdisciplines, and the leaves as the +[4432.400 --> 4437.520] academics that correspond to them. And it does it for you. So like it goes into neurobiology, +[4437.520 --> 4442.960] like I've been really interested in predictive processing, which is this sort of universal theory for +[4442.960 --> 4449.600] how consciousness really is, like how we process our world. And it's a rabbit hole to say the least, +[4449.600 --> 4455.280] right? So it has all these different philosophical approaches. For all of that, in order to +[4455.280 --> 4460.160] understand that world, I need to understand the bigger macro principles behind it and like who the +[4460.160 --> 4466.240] key players are in it. And I mean, normally you'd still have to do that research, or have a research team +[4466.240 --> 4470.240] and have somebody do this. And now I literally do it in a couple of prompts. It's absurd. +[4470.800 --> 4476.800] It's absurd for workflows. So then what do you do with it?
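A minimal sketch of the "discipline tree" prompt just described; the wording is a paraphrase of what's said above, and `ask_llm` is a hypothetical placeholder for whatever chat-completion call you use:

```python
TREE_PROMPT = """I am new to the academic discipline of {discipline}.
Imagine the discipline as a tree:
- the base of the tree is the discipline itself,
- each branch is a subdiscipline,
- each leaf is a key academic working in that subdiscipline.
Lay the whole tree out for me."""

def ask_llm(prompt: str) -> str:
    # Placeholder: substitute a real chat API call here.
    raise NotImplementedError

if __name__ == "__main__":
    print(ask_llm(TREE_PROMPT.format(discipline="predictive processing")))
```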
You have the leaves, you have the people's +[4476.800 --> 4482.560] names, and then I try to look for, so I try to look for what are the three biggest sources of conflict. +[4482.560 --> 4487.200] So what are they disagreeing about? Like where's the big fight in there? I look for that. +[4487.200 --> 4494.960] Are you asking ChatGPT? I lately have been asking ChatGPT, and the responses are pretty good. +[4494.960 --> 4499.760] It's pretty good. The problem... So I sort of back-tested this and did it on things that I +[4499.760 --> 4505.520] have intimate knowledge about, right? So I was like, what are the discrepancies in the research on +[4505.520 --> 4512.640] universal facial expressions and emotionality? And I'm like, oh, whoa. But I did it like six months +[4512.640 --> 4516.720] ago and it wasn't, oh, whoa, it was bad. I was like, you just made up a person. This is not real. +[4516.720 --> 4521.360] This is misappropriated. It is getting considerably better, to the point where I'm really +[4521.360 --> 4526.000] starting to trust this tool. Six months from now, I think I'll have full confidence in these things. +[4526.000 --> 4530.400] And also you could do like cool things. Like we have a pretty robust database of all the +[4530.400 --> 4534.320] PDFs on non-verbal behavior from every academic journal that I've just been collecting. I could +[4534.320 --> 4538.160] build my own little language model on that and just ask questions and query that. I mean, there's +[4538.160 --> 4542.720] different ways of doing it. But I mean, AI has helped me with behavioral coding more than anything +[4542.720 --> 4547.520] else. So I started this other company called Behavioral Robotics, with the goal of teaching machines +[4547.520 --> 4552.160] to read human behavior, because if reading human behavior is all about these complex decision +[4552.160 --> 4556.480] trees, there's no reason from a first-principles perspective that a machine can't do the same thing. +[4556.480 --> 4561.040] In fact, it should be way better than we are. It should be way better because the camera on you +[4561.040 --> 4565.760] is not just Blake's worldview. It's all these modeled-out worldviews that can predict and understand, +[4565.760 --> 4569.280] and I think it can be something really special. It's going to take a lot of time to get there. +[4569.280 --> 4576.000] But you know, my first big study on Beyond Tells, I spent like a stupid amount of money, like maybe a +[4576.000 --> 4581.760] quarter million dollars, manually coding. Like we counted 550,000 blinks. So we had a +[4581.760 --> 4586.160] team of like 70 people sit there, and every time someone blinked, they clicked M on a keyboard. +[4586.960 --> 4592.720] And it got cross-validated and made sure it was right and all that. Not anymore. My machines do it. +[4592.720 --> 4596.800] I mean, just run it. Amazon Web Services. Every blink, exactly when it happens. +[4597.440 --> 4600.720] Accuracy is incredible. It's amazing. +[4601.920 --> 4606.160] So how are you using it aside from that? Like what else are you doing with it? Because you had +[4606.160 --> 4612.560] some interesting takes on like how you're leveraging AI. Yeah. So like I am using it to develop +[4612.560 --> 4618.560] inventories and scales to better predict people that have social challenges. So for example, +[4618.560 --> 4623.600] like we're building something right now that's like a facial heat map.
So basically, +[4624.640 --> 4629.920] it could understand what I'm saying, not just by the words, right? So basically, +[4630.560 --> 4635.360] our system takes all your movement and breaks it down to raw data. Coordinate data. So like where +[4635.360 --> 4640.400] the hands and fixed points of the hands are moving. Facial data. And a lot of this stuff, you can +[4640.400 --> 4645.280] do it open source. But we're starting to refine it. So we're starting to understand like the composition +[4645.280 --> 4649.840] of wrinkles in people's faces, and then understanding how their facial movement changes the wrinkles, +[4649.840 --> 4656.000] to better classify behavior. And we know every word that everybody says at every second it's said. +[4656.000 --> 4661.360] So then in interactions, we could easily create summaries and inventories for like, +[4662.000 --> 4666.640] this is something where this person should have shaken their head or shown some sort of facial +[4666.640 --> 4673.200] reaction, but they didn't. So a great example of this is a personal example. My dad passed away +[4673.200 --> 4678.560] like literally two weeks ago, from a two-year battle with ALS, right? So horrible disease, horrible. +[4678.560 --> 4684.720] But to see how people handle death and react to death, from my level of expertise, has just been +[4684.720 --> 4689.120] really fascinating. So some people are like, oh, so like, why aren't you coming? I'm like, +[4689.120 --> 4695.200] oh, my dad passed away. And they don't have the mimicking, like the, I'm so sorry. They +[4695.200 --> 4699.440] just don't do that. They're like, oh, okay. And then I'm like, I have to know. So I ask, and they're like, +[4699.920 --> 4703.600] in that moment, I just feel really weird about death. I'm like, okay, thanks, +[4703.600 --> 4708.000] but you should probably tell people that, because that's going to impact things. So we can use machines to sort +[4708.000 --> 4712.320] of identify that, right? We could basically know that there's a low level of facial animation +[4712.320 --> 4717.920] in this person's face when this person said something where there should have been social coordination. +[4717.920 --> 4721.680] And that's the truth. Like, I think all of these, like, I mean, +[4722.880 --> 4727.200] so it's almost like blind-spot identification for people, because it's like, hey, you should have +[4727.200 --> 4733.920] responded in this one way. You responded in a different way. That's not a judgment on how you +[4733.920 --> 4738.720] responded, but we're going to tell you how that's perceived. Exactly. So everything we do is about +[4738.720 --> 4743.280] perception, not meaning. The biggest problem with this industry of non-verbal behavior and body +[4743.280 --> 4748.320] language is it's pushed towards meaning, not perception. And the truth is, it's just understanding +[4748.320 --> 4753.120] that your behavior is outside the distribution of what would be perceived as socially relevant. +[4753.120 --> 4757.520] And that's a lot more complex. But there's also another angle to this. +[4757.520 --> 4766.240] As we keep talking about this, this is amazing for classifying behavior, but our society progresses +[4766.240 --> 4772.240] because of people who are outside of the norm. Like on an individual basis, it might be very +[4772.240 --> 4778.480] detrimental to you, but on a societal basis,
it's usually advantageous to society to have people who +[4778.480 --> 4784.400] operate outside the norms. So I wonder, like, to what extent, if we try to rein people in to be more +[4784.400 --> 4790.320] normal, we're actually giving up, sort of, we're almost putting a ceiling on progress. Yeah, so this +[4790.320 --> 4795.040] is my whole thing. So we want to be on the right side of the bell curve, not the left. Yeah. So I'm +[4795.040 --> 4800.320] never pushing people towards normal. I'm getting them to understand normal so they can be themselves in the most +[4800.320 --> 4805.840] powerful way possible, and to make sure they don't have behavioral blind spots that are completely +[4805.840 --> 4810.640] going to get them really ostracized by society. Like, okay, you can't do that kind of thing. But +[4810.640 --> 4816.160] there's nothing I love more than when a person is just themselves, just themselves. Like, +[4816.160 --> 4821.360] that's a very attractive quality. That's like, that's the definition of charisma in a lot of +[4821.360 --> 4824.960] ways. Like somebody just walks in the room and they own it. And you're like, who is this person? +[4824.960 --> 4831.120] Like they must be someone. They must be something. But the truth is, first you want to understand what +[4831.120 --> 4836.960] normal is before you can break normal, right? Like that's how you know you're doing something that's +[4836.960 --> 4841.040] on the right side of the bell curve. And I say that because when you try something that's on the +[4841.040 --> 4844.800] right side of the bell curve and it doesn't work, you become the left side of the bell curve. So +[4844.800 --> 4848.240] that's why you're always trying to push in the right way. Yeah. I like that a lot. +[4848.880 --> 4855.520] Are there any other things you use ChatGPT for? I should use it more and more. I mean, +[4855.520 --> 4860.960] this is kind of meta, but I'm using ChatGPT to +[4860.960 --> 4865.840] build some of the language models that index and label communication, because it's phenomenal at +[4865.840 --> 4874.320] that. So, for example, I was trying to create a mechanism for determining assertion in text. +[4874.960 --> 4883.680] So I pasted into ChatGPT like 300 separate, not words, phrases, that were from +[4883.680 --> 4888.720] a conversation. And I said, I want you to create an inventory on a scale from one to five +[4888.800 --> 4892.480] measuring assertion. First of all, what do you think is the opposite of assertion? And it's like, +[4892.480 --> 4898.080] oh, probably passive. So, all right, so assertive and passive, rank all of these. It was phenomenal at +[4898.880 --> 4904.160] it. It like perfectly mimicked what I perceived to be a nuanced understanding of +[4904.160 --> 4908.800] the way someone structures a phrase. And then you ask it, I said, okay, better yet, give me the +[4908.800 --> 4915.120] rationale behind why. And it was like, because this answer is a little bit short and the other +[4915.120 --> 4920.560] person's is a little bit longer. And it's like, yeah, you got it. It's phenomenal. But it goes into +[4920.560 --> 4924.880] like hundreds of details that we can't even exactly comprehend. We can't even begin to. And like, to me, +[4924.880 --> 4930.400] this is my world of nuance. So I'm like, oh, this thing gets it. So I'm using it, and I can do that to +[4930.400 --> 4936.960] build out inventories super quickly.
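A minimal sketch of that assertion-rating setup; `ask_llm` is again a hypothetical placeholder, and the one-to-five anchors simply restate the scale described above:

```python
import json

RATING_PROMPT = """You are rating conversational phrases on assertiveness.
Scale: 1 = very passive ... 5 = very assertive.
For each phrase, return an object like {{"phrase": "...", "score": 3, "rationale": "..."}}.
Return a JSON list, one object per phrase.
Phrases:
{phrases}"""

def ask_llm(prompt: str) -> str:
    # Placeholder: substitute a real chat API call here.
    raise NotImplementedError

def rate_assertion(phrases):
    numbered = "\n".join(f"{i + 1}. {p}" for i, p in enumerate(phrases))
    return json.loads(ask_llm(RATING_PROMPT.format(phrases=numbered)))
```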
Like it would have taken a lot of time and effort to do that. +[4936.960 --> 4941.840] I'd probably have hired some PhDs to like sit there and build it. And now I just, like, +[4942.480 --> 4947.680] it's incredible. I don't know how else to describe it. I mean, yes, it's not AGI. Yes, +[4947.680 --> 4953.360] there's so many other things. Yes, there's flaws. Yes, there's this. But as a tool, it is incredible. +[4954.400 --> 4960.080] You use coaching as a tool. You have a lot of coaches. Tell me about the process you use to +[4960.080 --> 4968.480] select coaches. And what's the difference between a great coach and a good coach for you? That's a +[4968.480 --> 4976.400] great question. Okay. So, process, not so much. I always look for people that either I +[4976.400 --> 4981.760] have had some sort of experience with, I like to follow people also, or I like to be coached by people +[4981.760 --> 4990.000] that I believe are living in alignment with their values. Like I know coaches out there in +[4990.000 --> 4994.240] the world that I'm like, you're saying that, but your whole team hates you. Like, how could you ever +[4994.240 --> 4999.200] coach that? So I'm looking for that first. Do they show up? Because I tend to stop listening to +[4999.200 --> 5005.040] somebody if I see that they're teaching something and not applying it. I also just like +[5007.280 --> 5012.000] maybe like a tougher coach. I want somebody that's going to call me out on my bullshit, because I +[5012.000 --> 5016.720] can be very convincing and I can argue, and they'll be like, stop, that's bullshit. Like one of my +[5016.720 --> 5023.120] coaches, Tasha, she is very quick to call me out on anything. She's like, that doesn't +[5023.120 --> 5028.000] sound right. Like she just gets to the point and gets to the heart of what's going on. And my other +[5028.000 --> 5032.160] coach, John McImmorgan, is very good at saying, like, well, you said this two weeks ago and now you're +[5032.160 --> 5037.200] not saying this. Like I think it's very difficult to get people to hold us accountable, and for us to +[5037.200 --> 5042.640] be our word and all that. And then I also just have coaches for other things, like running, and I just +[5042.640 --> 5048.000] like coaching. Yeah. So if you're a coach and you don't have three coaches, you should stop being +[5048.000 --> 5051.440] a coach. Well, it's really interesting, right? Because it's a shortcut to sort of +[5052.400 --> 5056.560] expertise in a way, which is like, if they don't have the expertise, they shouldn't be a coach. +[5057.280 --> 5061.280] But from your point of view, it's like, I can hire a coach who's done this, taught it. +[5062.000 --> 5066.080] I can get up to speed rather quickly, no matter what the subject is, whether it's holding me +[5066.080 --> 5071.520] accountable or like learning how to run longer distances or whatever objective I'm trying to achieve. +[5071.520 --> 5077.440] I now have access to sort of a better quality of thinking than I have in my immediate vicinity. +[5078.320 --> 5084.320] 100%. And there's so many different, and also just like, I'm at point A, I want to get to point B. +[5084.960 --> 5089.760] And I want to get to point B, but I want to minimize the suffering in getting to point B. +[5090.640 --> 5095.200] Find people to help me along that way. It's also my biggest... So if I were to go back, I would say
So if I were to go back, I would say +[5095.200 --> 5098.960] this, I would go back 21 years old and it was like what advice would you give your 21 year-old +[5098.960 --> 5106.240] self? I would say get a coach, but my 21 year-old self would probably tell my 38-year-old self to draw +[5106.240 --> 5111.040] off. I don't need to make that right. Well, let's go back to that a little bit. I did +[5111.040 --> 5116.400] want to touch on this before we wrap this up, which is you were a terrible student up until +[5116.400 --> 5122.560] university. Yeah. What changed? I felt like a complete failure. I think I had this identity that I +[5122.560 --> 5127.040] was a smart person and I went to school and at this moment, where this kid next to me was going +[5127.040 --> 5132.080] to West Point, kid to the left was going to Harvard. And I was like I can't believe I just wasted +[5132.080 --> 5135.920] this whole thing and I was like this is when everything changes and literally it was the +[5137.520 --> 5143.520] it was a massive identity shift for me. I had two big identity shifts. That that moment +[5143.520 --> 5147.680] when I shifted and when I started teaching psychology, those are the two big things that shifted me. +[5147.680 --> 5153.120] Go back to high school. Yeah. What was the shift? I was just like I'm going to take school very +[5153.120 --> 5157.280] seriously. I'm going to go to Harvard. I'm going to get my JDMBA. I made a decision that this is +[5157.280 --> 5163.840] what I want and it went a completely different way. But I still worked like that. Then your actions +[5163.840 --> 5169.840] aligned with that identity? Yeah. 100%. I was sitting I remember vivid conversations with my mom. +[5169.840 --> 5175.280] She's like Blake, like was the difference between like a 95 and an 88? I'm like you don't understand. +[5175.280 --> 5180.000] Like I need my GPA to be four. It was like the first time I really like applied myself to something. +[5180.000 --> 5184.160] I mean, except for like maybe gaming or a couple of other things that I did when I was younger. +[5184.880 --> 5191.120] But I over corrected to say the least. And I was like stressed and dealt with a lot of other +[5191.120 --> 5197.280] things. And I think I calibrated it like 21 or 22 to like, okay, now I know not to. And also +[5197.280 --> 5202.560] there's just like weird systemic things. So in like city university, if anybody's still going +[5202.560 --> 5210.960] there, like CUNY has the weirdest grading system where an A is 92 and above from a numerical +[5211.520 --> 5218.960] point. So there's no point in getting 100 or 93. There's no difference. So like if you're studying +[5218.960 --> 5224.800] to get 100 in a test, it's stupid. Like study for 90, get 90s across everything. Be nice to the +[5224.800 --> 5229.600] professor. You'll get an A. So it was just like seeing all this type of stuff. And then also like +[5230.480 --> 5235.200] being in school had me like question a lot about psychology, question a lot about research. +[5235.200 --> 5239.360] I learned how to critically think. And then I applied that critical thinking to the which I +[5239.360 --> 5245.200] think was what you're supposed to do. But I'm like, this study seems kind of like bullshit. +[5245.200 --> 5249.600] Like there's not that many people here and like what's going on. And it really opened my eyes to +[5249.600 --> 5257.280] like the flaws of research and how data can be manipulated. 
I had a very early experience with +[5257.280 --> 5262.960] that, where my professor was an adjunct and he was the head of data analysis at the MTA at +[5262.960 --> 5268.000] the time. And I had just done a study in my like experimental design class. And I was like, +[5268.000 --> 5271.120] yeah, I'm doing this study. He goes, you seem really happy with your study. I was like, yeah. +[5271.120 --> 5276.240] He's like, give me the data set. I vividly remember handing him a flash drive that had my SPSS +[5276.240 --> 5279.840] data set. He goes, what do you want it to show? I was like, significance, of course. And he's +[5279.840 --> 5284.960] like, watch this. And literally 10 minutes later, he's like, there you go. And it blew my mind. +[5284.960 --> 5289.200] It blew my mind. That man blew my mind about how data could be manipulated. And he taught me at a +[5289.200 --> 5294.800] very young age, he was like, this is happening everywhere, all around. And he gave an entire +[5294.800 --> 5302.560] class on how data is manipulated in day-to-day life. And I remember being like, oh my god, +[5302.560 --> 5306.400] like this is so cool. I guess this is such a... well, imagine now, right? You just +[5306.400 --> 5310.160] go to ChatGPT. And you're like, I want to show this. Give me the logic. Give me the +[5310.160 --> 5315.280] reasoning. Lay it out with references. And you've got a draft paper right there. Oh, that's another +[5315.280 --> 5324.240] ChatGPT point. So I probably spent maybe $150,000 on hiring data scientists to clean and to produce +[5324.240 --> 5331.440] dashboards for Beyond Tells. ChatGPT? I could have done it all in a weekend. 20 bucks. Yeah. Yeah, +[5331.440 --> 5339.760] just sitting there. The insights are incredible. Yeah. I like the point about, you know, after 93, +[5339.760 --> 5345.600] it doesn't matter. You might as well just optimize for 93. But that's the systemic problem +[5345.600 --> 5350.400] in organizations. You set the metrics and people are like, oh, okay. So I need to do this more than +[5350.400 --> 5354.640] this if I want to move up, right? Well, so this is interesting. One of my kids came home and he +[5354.640 --> 5359.920] had a science fair project. And I was like, oh, that's an interesting topic. But it didn't seem like +[5359.920 --> 5364.240] something he was super interested in. I was like, why'd you pick that? He's like, well, if I picked what I +[5364.240 --> 5369.200] wanted to, I'd probably win. And then I'd have to present at like the regional science fair. And I +[5369.200 --> 5373.040] don't want to present at the regional science fair. But I want to get a good grade. But I don't want +[5373.040 --> 5378.480] to get like a really good grade. So he's already thinking in terms of like optimizing for it. +[5378.480 --> 5383.920] It's a really interesting approach. I mean, that's just like a sign of hyperintelligence in my +[5383.920 --> 5388.960] opinion. Like, it really is. Well, no, but that's like the joke where a lot of people, +[5388.960 --> 5393.280] well, like a lot of wealthy people would be like, oh, I wasn't smart, I was lazy. And I'm like, +[5393.280 --> 5399.200] yeah, but laziness and smarts, that's a little sophistication, right? I get to interview the best +[5399.200 --> 5405.760] people in the world. How can I ask better questions? Oh, well, these are really good questions. So one of +[5405.760 --> 5410.400] the things I'm doing right now, I can do it on you.
So we're doing a thing on how to ask better +[5410.400 --> 5415.280] questions. So we're taking some really good question-askers, like one of the people that I've seen +[5415.280 --> 5420.560] evolve is Tim Ferriss, his ability to ask questions in a certain way. And we basically take every +[5420.560 --> 5425.360] single interview he's ever had and we analyze all the behavior. And we're looking at the nonverbal +[5425.360 --> 5429.600] and contextual language patterns that do that. So I'll have to let you know, because this is something +[5429.600 --> 5435.760] I'm like, I could do it for you too. Yeah, yeah. Um, I think, oh, God, those early interviews are terrible. +[5435.760 --> 5440.560] But that's, that's like the beauty of it. So like, I've seen so many people, like, people that put +[5440.560 --> 5445.680] themselves out there, like, I find it so magical to see someone shift over years. And it's something +[5445.680 --> 5449.600] that I've been like, almost like afraid to do on video in a lot of ways, like not giving myself +[5449.600 --> 5453.760] permission to just be myself on video, unlike myself in person. But something about the video, +[5453.760 --> 5459.520] like we spoke about, is like, whoa. But you see people change and you see people like warm up and you +[5459.520 --> 5466.080] see people become different over time. And I feel like that process is so much cooler +[5466.080 --> 5470.880] than like just seeing a master. Like, anybody that puts themselves out there, and my props to you +[5470.880 --> 5476.640] for doing this right now, you're seeing their improvement from where they started to where they +[5476.640 --> 5482.720] ended. And I think the lessons in improvement are the greatest lessons. Like, why did they go from, +[5482.720 --> 5487.680] the first 10 podcasts, they're asking questions this way. Then the next 10, they shifted. Like, why? +[5488.240 --> 5495.040] Right. And understanding that context, oh, that's a good one, is like, ask somebody, +[5495.920 --> 5500.880] is there any context that you think I need to know in order for me to ask a better question or to +[5500.880 --> 5507.760] make this a better question? Because sometimes people don't give enough context and they have to be +[5507.760 --> 5512.000] prompted to, but once you give them the ability to give you more context, that works. And that's +[5512.000 --> 5516.000] also a quick tip. That's probably my single biggest communication tip for everybody in a corporate +[5516.000 --> 5523.040] culture. The more context, the better. It's just that simple. I like it, that works. One question before +[5523.040 --> 5527.920] the last question. So the penultimate question, I guess, is, we were talking about writing earlier +[5527.920 --> 5536.160] and the power of writing in terms of thinking. Can you expand on that? Over time I have seen more +[5536.160 --> 5541.840] and more that it is probably one of the most powerful self-development mediums, +[5541.920 --> 5546.400] over anything. I think that people are in an inner war, especially those who are thinkers. I know I'm +[5546.400 --> 5550.960] a thinker, I spend a lot of time in my head not being present and thinking and processing. +[5550.960 --> 5556.560] There's no checks and balances up here. There's no one to stop and say, hey, that's kind of not the +[5556.560 --> 5561.760] right idea or not.
When you write something out, you're creating reality; you're taking your thoughts +[5561.760 --> 5566.960] and you're putting them out into reality. And then there's an ability to critically examine that. +[5566.960 --> 5570.960] And I just feel like it's so helpful. It allows you to structure your ideas. +[5570.960 --> 5575.760] It makes you a better thinker. It makes you a better communicator. It's evidence of how you've +[5575.760 --> 5580.320] shifted your thoughts and your principles and ideas over time. I think a writing practice is so, +[5580.320 --> 5587.840] so, so important. I think of it in terms of reflecting as well, right? So, in mental reflection, +[5587.840 --> 5592.000] I tend to just keep going over the same things a lot. But when I write it down, it's like, that +[5592.000 --> 5596.480] doesn't make sense. You're checking your own thinking in a way, right? Because now you're, +[5596.560 --> 5600.160] and I prefer pen and paper, even if I like shred it or burn it after. +[5601.920 --> 5606.080] But you're checking your own thinking. And it's the process by which I discover I don't know what +[5606.080 --> 5610.640] I'm talking about. But it's also the process by which I know where to look for more information, +[5610.640 --> 5616.000] where to go about it and learn what I'm talking about in a way that I can convey clearly to other +[5616.000 --> 5621.200] people. But it's also how I get new ideas. And importantly, how I give up ideas, +[5621.760 --> 5627.200] which I don't think we do a lot of these days. And so there's like an ego thing to it, where it's +[5627.200 --> 5632.320] like, you're writing, and that's a really good sentence. You want to keep it in, then you're like, +[5632.320 --> 5637.520] oh, I've got to get rid of it, because it doesn't fit with the piece, right? But then you're giving +[5637.520 --> 5641.360] something up, and then you can give it to other people and get feedback. +[5641.360 --> 5646.400] Because they don't see all the thoughts that you have in your head that you see. And so, +[5646.400 --> 5650.160] like, they're giving you, oh, this doesn't make sense. And now all of a sudden, it's like, oh, +[5650.160 --> 5655.360] I've got something here. Yeah. And you talking about this makes me want to bring back this practice. +[5655.360 --> 5660.880] So I started journaling, and I started to realize that because I'm such an optimist, I lie in my +[5660.880 --> 5666.800] journals. So what I started doing was I made a video journal, and I sat in front of the camera +[5666.800 --> 5672.400] and I spoke to myself every day. And I have some of these videos where it's like, interesting, +[5672.400 --> 5676.880] the IRS is coming after me. Like, it was like the worst business moment ever. +[5676.880 --> 5682.160] Yeah. I remember. All this crazy stuff. And I have a video of myself like being like, +[5682.160 --> 5687.360] yeah, things are okay. And I have like big dark circles under my eyes. And I look like, +[5687.360 --> 5691.040] oh, you poor kid, things are not okay right now. Well, because you have a different perspective. +[5691.040 --> 5697.200] Exactly. Exactly. But also, if I would have watched even that video after I recorded it, +[5697.200 --> 5701.280] I probably would have seen that things are not so okay. With your analysis, of course. +[5701.280 --> 5706.240] You'd have been like, you poor kid. So I think it's a helpful one. But that optimism can be helpful, +[5706.240 --> 5710.800] right?
Which is how you get through it as an entrepreneur. I mean, you tell yourself you'll push through, right? +[5710.800 --> 5718.160] It's like how Elon saved Tesla, effectively. It wasn't sane. But, yeah, I won't say a lie, +[5718.160 --> 5725.360] but, you know, close to a lie, and he saved the company through that lie, raised money. And now we have +[5725.360 --> 5730.240] this. But had that not happened... Whether he was deceiving himself or deceiving other people, or +[5730.240 --> 5735.760] didn't even know he was deceiving himself or deceiving other people, objectively speaking, +[5735.760 --> 5742.480] it was sort of fiction that saved the company. So then do the ends justify the means? Do the... +[5742.480 --> 5748.720] I mean, I think so. As an entrepreneur, like, there's true belief in being able to +[5748.720 --> 5753.200] create something. And you're not necessarily knowing what the path looks like. But you're going to +[5753.200 --> 5759.840] create something, right? And then there's just flat-out, like, lies. And lying is, +[5759.840 --> 5764.880] I think, a lot different; like, intent is a lot of it, it's a different thing. But you know, a lot of +[5764.880 --> 5770.720] people, maybe they started off with... what is it, the road to hell is paved with good intentions? +[5770.720 --> 5775.520] I think that's what happens. I think people try to protect, right? They lose their values, and then +[5776.480 --> 5781.680] they're somewhere else. But yeah, I mean, everybody does it. Like everybody, any VC pitch I've seen. +[5781.840 --> 5787.200] Come on. It's like, it's a hockey stick right after this month, right? Exactly. It's like, all right. +[5787.200 --> 5793.200] I know, we're redefining what active users is. Yeah. But they're growing. Like, it's like, +[5793.200 --> 5800.480] come on, we all know the game. Yeah. The question we always end with is, what is success for you? +[5800.480 --> 5805.040] Success for me is pretty simple. You write things down and you accomplish it. That's success. I +[5805.040 --> 5811.440] believe success is a personal journey in whatever you want to do. But some people don't actually +[5811.440 --> 5816.320] know what their version of success is, and it just happens to them. So I think it's really helpful +[5816.320 --> 5820.960] to sit there and be like, you know, I want a relationship where there's no friction and I have +[5820.960 --> 5825.520] the love of my life. And like, I texted my wife, I was like, I miss you already. And we were gone +[5825.520 --> 5830.240] for like eight hours, and it was like, that's on the list. So I have a successful relationship. +[5830.240 --> 5834.400] The business right now, there are certain things that need to get changed this year. +[5834.400 --> 5838.640] There are certain things that need to change. Those can be listed, and I can measure if I'm successful +[5838.640 --> 5843.680] or not. Because the truth is, if you don't do that, you're likely playing somebody else's game. +[5843.680 --> 5848.400] Yeah. And it's just so hard to live that life. Well, thank you for taking the time +[5848.400 --> 5851.280] for this incredible conversation. This was a great, great conversation.
diff --git a/transcript/allocentric_Z550DeGoTgU.txt b/transcript/allocentric_Z550DeGoTgU.txt new file mode 100644 index 0000000000000000000000000000000000000000..af025579d17f2fbe43798635554c9559cb3011c5 --- /dev/null +++ b/transcript/allocentric_Z550DeGoTgU.txt @@ -0,0 +1,435 @@ +[0.000 --> 17.500] It's my privilege and my honor to be able to introduce our keynote speaker today. +[17.500 --> 19.000] And I just want to spend a couple of minutes. +[19.000 --> 22.380] I don't want to eat up too much of his time because it's already been long enough that I've +[22.380 --> 23.380] taken. +[23.380 --> 27.120] But I just want to tell you a few things about him. +[27.120 --> 30.640] He is a professor of neuroscience and director of the Kavli Institute for Systems +[30.640 --> 36.040] Neuroscience at the Norwegian University of Science and Technology in Trondheim, Norway. +[36.040 --> 39.240] He did most of his formative training at the University of Oslo with Per Andersen, +[39.240 --> 43.960] which I think he shares in common actually with many people in this audience. +[43.960 --> 48.840] He then did a postdoctoral fellowship with John O'Keefe and with Richard Morris at the University +[48.840 --> 52.480] of Edinburgh and UCL. +[52.480 --> 57.560] And the work that he's going to talk about today, the large corpus of work that he has +[57.560 --> 63.160] done in his career in collaboration with May-Britt Moser, focuses on how spatial memories +[63.160 --> 69.160] and spatial locations are encoded in the brain and the mechanisms that are required to formulate +[69.160 --> 73.480] some sense of where you are and how you can navigate in space. +[73.480 --> 77.400] Now this work was incredibly influential and transformative. +[77.400 --> 85.280] It earned him, May-Britt Moser and John O'Keefe the 2014 Nobel Prize in Physiology or Medicine. +[85.280 --> 88.680] Some of our speaker's more recent work, which I hope you will have an opportunity to talk +[88.680 --> 95.600] about today also, focuses on taking this premise of understanding neural computation underlying +[95.600 --> 100.960] space and memory in the brain to try and understand time, and understand how time is also processed +[100.960 --> 101.960] in the brain. +[101.960 --> 104.160] And maybe there are shared mechanisms there. +[104.160 --> 107.480] There are differences, I think; we'll have to wait to hear from him on that. +[107.480 --> 110.600] But that's something I'm particularly excited to hear from him about, these new directions +[110.600 --> 111.920] in the work. +[111.920 --> 114.880] One of the things I want to mention about our speaker also is that if you have a chance +[114.880 --> 120.080] to spend more than a couple of minutes with him, you'll realize something very, very special. +[120.080 --> 125.600] Aside from his global renown and his accomplishments, he's also one of the most humble people I have +[125.600 --> 127.120] ever met. +[127.120 --> 129.560] And I think you'll know this just by talking to him for a few minutes. +[129.560 --> 133.760] He's very generous with his time, with students, with colleagues. +[133.760 --> 138.120] And I've always found a listening ear when I've tried to reach out to him and chat about data +[138.120 --> 140.120] and science. +[140.120 --> 143.880] I also want to take this opportunity to thank Neuralynx, who have sponsored this +[143.880 --> 145.560] keynote lecture. +[145.560 --> 147.920] Neuralynx and our speaker actually have kind of a history.
+[147.920 --> 149.720] They go way back. +[149.720 --> 154.960] And this is something that I think is really just spectacular for us to be able to have +[154.960 --> 158.400] their support for this conference and in particular for this keynote lecture. +[158.880 --> 160.400] Now, I know you're in for quite a treat. +[160.400 --> 162.000] I don't want to take up any more of your time. +[162.000 --> 166.120] So with that, ladies and gentlemen, please help me give a warm welcome to our speaker, Edvard +[166.120 --> 167.120] Moser. +[167.120 --> 191.920] So thank you, Mike, for the nice introduction. +[191.920 --> 198.360] Thanks to both you and Manuel and everyone else who has organized and prepared this conference. +[198.360 --> 208.080] I think the size of the audience, the number of people here, testifies to... +[208.080 --> 213.080] You don't want to have that slide up the whole time? +[213.080 --> 220.960] Not only to the great work that is being done here, but also to the importance of this +[220.960 --> 228.840] center in the history of modern neuroscience, and especially with the focus on learning and +[228.840 --> 229.840] memory. +[229.840 --> 236.440] And my congratulations especially to Jim McGaugh for starting all of this and for leading it +[236.440 --> 245.240] for 35 years. +[245.240 --> 247.920] So my talk will be... +[247.920 --> 255.160] I was told explicitly when we started, when I prepared this, that this is a combined +[255.160 --> 260.440] public talk and scientific talk, which is something that's really hard to achieve, actually. +[260.440 --> 266.640] But I will start out at primary school level in the beginning and then I will go gradually +[266.640 --> 274.040] up, and during the second half of my talk, I will move into unpublished territory and +[274.040 --> 279.840] include some new principles of position coding in the entorhinal cortex, and then move +[279.840 --> 287.320] over, as Mike said, to time, which is the most recent work and which will serve as an +[287.320 --> 296.480] introduction to another talk that my former PhD student, Albert Tsao, will go into more detail +[296.480 --> 298.320] on tomorrow. +[298.320 --> 302.560] But let's begin with location and space. +[302.560 --> 314.360] So I thought I could not be worse than both Jim and Lynn, who both had pictures of Gall, +[314.360 --> 316.240] so I'll do the same. +[316.240 --> 319.320] And here's my Gall brain. +[319.320 --> 326.440] It shows the different faculties or abilities or properties and how they were labeled onto +[326.440 --> 335.840] the brain, and I think both speakers made the important point that this actually had tremendous +[335.840 --> 340.080] influence on neuroscience. +[340.080 --> 348.360] It set the stage and then was forgotten for many years, but actually today, with trajectories +[348.360 --> 355.720] into circuits and not only areas, and with principles for cooperation, collaboration, interaction +[355.720 --> 356.720] between many cells, +[356.720 --> 364.120] we are actually getting back to the point where we can start to understand some of the psychological +[364.120 --> 370.760] functions that are enabled by the brain and especially the cortex. +[370.760 --> 380.440] However, since the 19th century, concepts have moved forward too, and that's also one +[380.440 --> 391.920] of the reasons why now, with more conceptual advances and better ideas and models for how the +[391.920 --> 394.360] brain might work at the psychological level, +[394.360 --> 396.600] we are actually making some advances.
+[396.600 --> 402.240] But there's more advance in some areas than others, and one of those areas that over the +[402.240 --> 413.200] last 40 years or so really has seen a lot of advance is our understanding of how space +[413.200 --> 414.440] is represented. +[414.440 --> 421.360] Because this is, in mammals, one of the first high-order, non-sensory +[421.360 --> 429.080] and non-motor functions that are really beginning to be understood in neural language, in terms +[429.080 --> 435.920] of how cells work together and how cells have different functions and how this is all put +[435.920 --> 443.280] together to produce something that probably gives rise to our sense of location. +[443.280 --> 451.200] So, I want to start very simply, and now I will go to primary school level and simply +[451.200 --> 460.000] ask what it would be like if we didn't have this ability to conceive of space and where +[460.000 --> 461.760] we are in space. +[461.760 --> 469.000] So I have an animation that we made, not for this purpose, but I will show +[469.000 --> 478.760] this, and it takes about two minutes, and so let's begin with this. +[478.760 --> 482.600] Now sound should be on. +[482.600 --> 483.600] Wait. +[483.600 --> 485.600] Sound on? +[485.600 --> 490.400] Okay, try again. +[490.400 --> 497.320] Nope, I'll wait, because the sound is essential. +[497.320 --> 499.560] It worked one minute ago. +[499.560 --> 504.120] So I don't know. +[504.120 --> 506.560] Okay, try once again. +[506.560 --> 507.560] There. +[507.560 --> 508.560] Good. +[508.560 --> 509.560] Fantastic. +[509.560 --> 510.560] Yeah. +[510.560 --> 511.560] Yeah. +[511.560 --> 512.560] Yeah. +[512.560 --> 516.560] How does an animal navigate in space? +[516.560 --> 518.560] Navigation. +[518.560 --> 526.560] Life as we know it developed over time. +[526.560 --> 534.080] Abilities and traits that proved useful for survival are retained across generations through +[534.080 --> 539.080] a succession of species, from the common ancestor to its progeny. +[539.080 --> 543.080] These are the mechanisms of evolution. +[543.080 --> 548.080] Natural selection has favored the species with the best ability to navigate. +[548.080 --> 553.080] An animal that moves can escape from danger and find shelter. +[553.080 --> 559.080] Navigation also allows it to actively find food, +[559.080 --> 566.080] the safety of a flock, +[566.080 --> 573.080] or a suitable mate. +[573.080 --> 582.080] Scientists have discovered a navigation system in the brain that is common to many mammalian species: +[582.080 --> 588.080] rats, mice, monkeys, and even humans. +[588.080 --> 595.080] These commonalities suggest that this positioning system evolved from a common +[595.080 --> 596.080] ancestor of mammals. +[596.080 --> 607.080] We all share this system. +[607.080 --> 609.080] So where is this system? +[609.080 --> 615.080] Well, many parts of the brain are involved in space. +[615.080 --> 621.080] But still, as we have learned earlier today, there are two areas that have received a lot of attention +[621.080 --> 628.080] and that are critically involved in the representation of space. +[628.080 --> 635.080] It's the hippocampus and the entorhinal cortex. +[635.080 --> 639.080] Is this the pointer? +[639.080 --> 640.080] Let's see. +[640.080 --> 646.080] Okay. So this shows the human brain and this the rat brain.
+[646.080 --> 650.080] This is all from collaborative work with Menno Witter. +[650.080 --> 656.080] It shows the human brain and the red area, which is embedded deep under the cortex here. +[656.080 --> 659.080] And the blue area here is the entorhinal cortex. +[659.080 --> 665.080] In the rat brain, it's located somewhat differently, but very far back. +[665.080 --> 668.080] The hippocampus here, entorhinal cortex here. +[668.080 --> 682.080] And these areas have turned out to be important, but because it is all much easier to investigate in animals, +[682.080 --> 692.080] a lot of major advances were made about 45 years ago, as you've heard earlier in this meeting, +[692.080 --> 701.080] when John O'Keefe and Jonathan Dostrovsky started to record electrical activity, or action potential spikes, +[701.080 --> 710.080] from single neurons in the hippocampus of rats. +[710.080 --> 716.080] So this shows a rat that is walking freely in a box, or it could be other types of apparatus, +[716.080 --> 726.080] or mazes. But in any case, what John did was that he recorded activity from single cells +[726.080 --> 736.080] and viewed those on the screen, on the oscilloscope, and stored them, and then found that single neurons in the hippocampus +[736.080 --> 739.080] are responsive to the location of the rat. +[739.080 --> 742.080] So I will illustrate this with a movie. +[742.080 --> 745.080] Now we see the rat from above. +[745.080 --> 749.080] The rat is walking in a box. The box is one meter by one meter. +[749.080 --> 754.080] Into the box, there are occasionally thrown crumbles of chocolate, which rats like, +[754.080 --> 759.080] and which keep them walking around in the box and visiting every possible place. +[759.080 --> 765.080] And at the same time, we are recording cells from the hippocampus. +[765.080 --> 772.080] You will hear those soon as spike sounds, or noise; it sounds like noise. +[772.080 --> 778.080] But each time there is a sound, a popcorn sound, the cell is active. +[778.080 --> 784.080] And you will notice that this example cell is active only at certain places in the box. +[784.080 --> 789.080] So let's start the movie. +[789.080 --> 797.080] And each time the cell is active, or fires, you will also see a red dot up on the screen. +[797.080 --> 804.080] So you probably already noticed that this cell is active only at one place in the box, +[804.080 --> 809.080] in this case in the upper left part, and otherwise the cell is very silent. +[809.080 --> 818.080] This can also be illustrated with a heat map, a color code, where red is high activity and blue is low or no activity. +[818.080 --> 829.080] And it turned out, during the years to come after this discovery, that different cells have different preferred areas. +[829.080 --> 840.080] And together, it became clear that these cells cover the entire environment visited by the rat. +[841.080 --> 855.080] Based on these data, John O'Keefe and Lynn Nadel then suggested in 1978 that the hippocampus is actually the basis of a cognitive map, +[855.080 --> 860.080] or a Tolmanian map; you heard in the morning about Tolman's contributions. +[860.080 --> 874.080] A map that encodes locations in the environment, but more than that, also experiences associated with those locations. +[874.080 --> 878.080] This was a major conceptual advance. +[878.080 --> 885.080] It put things together; it tied up, as you heard this morning, a very chaotic literature.
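The heat maps described above are occupancy-normalized firing-rate maps. A minimal sketch of the standard computation, with synthetic data standing in for real tracking and spike positions:

```python
import numpy as np

def rate_map(pos, spike_pos, box=1.0, bins=20, dt=0.02):
    """Firing-rate map: spikes per bin divided by time spent per bin.

    pos: (T, 2) tracked positions, one sample every dt seconds.
    spike_pos: (S, 2) the animal's position at each spike.
    """
    edges = np.linspace(0.0, box, bins + 1)
    occupancy, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=[edges, edges])
    spikes, _, _ = np.histogram2d(spike_pos[:, 0], spike_pos[:, 1], bins=[edges, edges])
    with np.errstate(invalid="ignore", divide="ignore"):
        rate = spikes / (occupancy * dt)  # Hz; NaN where the rat never went
    return rate

# Synthetic demo: a random walk plus a cell firing only near (0.25, 0.75),
# like the upper-left field in the movie described above.
rng = np.random.default_rng(1)
pos = rng.random((30000, 2))
d = np.linalg.norm(pos - np.array([0.25, 0.75]), axis=1)
spike_pos = pos[(d < 0.15) & (rng.random(len(pos)) < 0.3)]
print(np.nanmax(rate_map(pos, spike_pos)))  # the peak sits in that field
```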
+[885.080 --> 903.080] So during the next decades, a lot was learned about place cells, but there were a few things that still weren't clear when we came into the picture, after a three-month visit with John, where he taught us all the essentials. +[903.080 --> 920.080] So when we started up in 1996, in our own lab in Norway, there were several questions that were interesting, but one, perhaps the most important one, which hadn't been resolved, was where and how is this place cell signal generated. +[920.080 --> 939.080] Because, remember, this is not sensory cortex. Yet these signals have properties that are as clear as you might see in sensory areas, because they really strictly respond to the location of the rat. +[939.080 --> 949.080] And you all know, you don't have space sensors on your fingers, not in your ears, not in your eyes. So how is this generated, where does it come from? +[949.080 --> 961.080] So to a large extent, this is probably generated inside the brain, by the brain itself, with the help of sensory inputs, but this was really not well understood. +[961.080 --> 975.080] And one idea that was around in the 1990s was that if anything, this signal, if it wasn't created in the hippocampus, it was at least enhanced quite significantly in the hippocampus. +[975.080 --> 998.080] And because the hippocampus operates to a large extent like a circuit, a unidirectional circuit consisting of subareas that project from one to the other in a loop through the hippocampus, in, through it, and out, and most of the cells had been recorded in CA1, which is one of the last stages of the circuit, +[998.080 --> 1009.080] it was believed by many people at that time that essential things happened in the earliest stages, just before the area where most of the activity had been recorded. +[1009.080 --> 1027.080] So an obvious thing to do, when we started out, was simply just to try to get rid of the inputs from CA3 that were postulated to be so important. +[1027.080 --> 1043.080] And what we found, which actually was in agreement with earlier work using other methods from the McNaughton and Barnes labs, was that a lot of the activity actually survived. +[1043.080 --> 1052.080] So this shows examples of seven different cells from a recording where CA3 is inactivated, or lesioned. +[1052.080 --> 1062.080] And you can see for these seven cells, this is the box seen from above, and color indicates the activity of the cell. +[1062.080 --> 1069.080] You see that these cells are still spatially selective. They still fire in certain areas and not in other areas. +[1069.080 --> 1082.080] So although the spatial firing was maybe not as strong as it is in the control animal, it was still not really noticeably different. +[1082.080 --> 1095.080] That then led us to get interested in the entorhinal cortex, an area that feeds most of the cortical input into the hippocampus. +[1095.080 --> 1109.080] And by that time we had strengthened connections with Menno Witter, who then participated in this study and was one of the world's experts on just this area. +[1109.080 --> 1129.080] And in this work with Menno, we showed that this was the case: the place signal survived in animals where there was absolutely no input from CA3 left, which we showed by using anatomical methods.
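"Still spatially selective" is often quantified with a spatial information score (bits per spike, in the style of Skaggs and colleagues); a minimal sketch, reusing a rate map and occupancy like those computed above:

```python
import numpy as np

def spatial_information(rate_map, occupancy):
    """Spatial information in bits/spike: sum_i p_i * (r_i / R) * log2(r_i / R),

    where p_i is the occupancy probability of bin i, r_i the firing rate in
    bin i, and R the occupancy-weighted mean rate.
    """
    p = occupancy / occupancy.sum()
    rate = np.nan_to_num(rate_map)
    mean_rate = np.sum(p * rate)
    nz = (p > 0) & (rate > 0)
    r = rate[nz] / mean_rate
    return float(np.sum(p[nz] * r * np.log2(r)))
```

A cell whose firing stays confined to a field keeps a high score; a cell firing uniformly everywhere drops toward zero, so the score gives one way to compare selectivity before and after a lesion.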
+[1129.080 --> 1144.080] So that led us to the entorhinal cortex, and we tried to record directly from that area, together with Menno and with the students Marianne Fyhn and Torkel Hafting. +[1144.080 --> 1152.080] And we put the recording electrodes into the dorsal part of the medial entorhinal cortex. +[1152.080 --> 1164.080] And this dorsal part is the area that has the strongest inputs into the dorsal hippocampus, where almost all the place cells had been recorded. +[1164.080 --> 1173.080] So it was an obvious area to go to, but at that time, in that part of the entorhinal cortex, I don't think anyone really had recorded yet. +[1173.080 --> 1176.080] So it was new territory. +[1176.080 --> 1183.080] And what happened was that cells in that area had a different type of pattern. +[1183.080 --> 1187.080] First of all, they were very strongly spatially modulated. +[1187.080 --> 1198.080] So what you see here now, to the bottom right, is the box again. Now it's a bigger box, in this case a 220 by 220 centimeter box. +[1198.080 --> 1204.080] The gray trace is where the animal walked, so it shows the path of the animal over half an hour. +[1204.080 --> 1213.080] And each black dot is where that one particular cell was active when the rat was running around. +[1213.080 --> 1228.080] So what you can see is that this cell, like other cells in the area, was active in certain places, but no longer just in one place; it was active in many places. +[1228.080 --> 1242.080] And the other thing that you may notice is how regular this pattern is, which you can see when you put these lines on top, as I did in the left diagram here: it is actually a repeating triangular, hexagonal pattern. +[1242.080 --> 1253.080] That in many ways expresses a metric that was certainly not present in the place cell signals of the hippocampus. +[1253.080 --> 1269.080] So apparently here we had another component of this spatial or cognitive map that contained information about distances and directions that was not so easily extractable from the hippocampus. +[1269.080 --> 1274.080] So this is now 2004-05. +[1274.080 --> 1280.080] So one of the things that became clear from the beginning is that there were many of these grid cells. +[1280.080 --> 1289.080] And they were especially abundant in the superficial layers of the medial entorhinal cortex, which project into the hippocampus. +[1289.080 --> 1301.080] But they varied in various ways. So they could have different phases, they could have different scales, or they could have different orientations relative to the environment. +[1301.080 --> 1308.080] Phase means that the grid patterns are shifted in XY space relative to each other. +[1308.080 --> 1312.080] So you see that illustrated here for a green grid cell and for a blue one. +[1312.080 --> 1318.080] And you can see that the peaks of the grid patterns are at different places. +[1318.080 --> 1323.080] Or they might differ in scale, which you see here with the blue one compared to the green one. +[1323.080 --> 1327.080] So you see the blue one has both larger fields and larger distances. +[1327.080 --> 1333.080] And the third is that they may be tilted relative to each other as well. +[1333.080 --> 1342.080] So we asked early on whether there is any organization according to these dimensions. +[1342.080 --> 1350.080] And both yes and no. So first of all, for the phase of the grid, there was no very striking organization.
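Phase, scale, and orientation are exactly the three knobs in the standard idealized grid-cell model, a rectified sum of three plane waves 60 degrees apart; a minimal sketch (the specific parameter values are arbitrary):

```python
import numpy as np

def grid_rate(xy, scale=0.4, orientation=0.0, phase=(0.0, 0.0)):
    """Idealized grid cell: rectified sum of three cosines at 60-degree intervals."""
    k = 4 * np.pi / (np.sqrt(3) * scale)  # wave number giving peak spacing `scale`
    total = np.zeros(xy.shape[0])
    for angle in (orientation, orientation + np.pi / 3, orientation + 2 * np.pi / 3):
        u = np.array([np.cos(angle), np.sin(angle)])
        total += np.cos(k * (xy - np.asarray(phase)) @ u)
    return np.maximum(total, 0.0)  # rectify to a nonnegative firing rate

# Evaluate on a 2.2 m box; the peaks form a hexagonal lattice, and changing
# `phase`, `scale`, or `orientation` shifts, stretches, or rotates the lattice.
grid = np.stack(np.meshgrid(np.linspace(0, 2.2, 111), np.linspace(0, 2.2, 111)), -1)
rates = grid_rate(grid.reshape(-1, 2), scale=0.4, orientation=0.1, phase=(0.2, 0.1))
```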
+[1350.080 --> 1364.080] That means that whatever we recorded, and this illustrates the recording electrodes, tetrodes, which I don't have to explain how that works... +[1364.080 --> 1373.080] But anyway, it picks up signals in a way that makes it possible to differentiate between cells and to isolate them from each other. +[1373.080 --> 1377.080] So here you have a blue cell, a green cell and a red cell. +[1377.080 --> 1381.080] And you can see the grid patterns of the three cells. +[1381.080 --> 1385.080] All of them have grid patterns, but they are shifted in XY space. +[1385.080 --> 1388.080] And this is pretty representative of what you get in most places. +[1388.080 --> 1399.080] So it is similar to what in sensory or visual neuroscience is often referred to as a salt-and-pepper organization, pretty mixed. +[1399.080 --> 1403.080] Whether it is totally mixed is still uncertain. +[1403.080 --> 1408.080] And there are various indications recently that there may be some organization to it. +[1408.080 --> 1413.080] There may also not be an equal distribution of phases. +[1413.080 --> 1425.080] But by and large, every location is represented at every place, every anatomical location, in the entorhinal cortex. +[1425.080 --> 1437.080] Which is very different from the spacing of the grid, because it was clear from the outset that spacing varied depending on how far up or down you are in the brain. +[1437.080 --> 1444.080] So this is a side view of the hippocampus and entorhinal cortex. +[1444.080 --> 1451.080] The hippocampus is this ear-like structure here, and the red structure here is the medial entorhinal cortex. +[1451.080 --> 1468.080] If you start at the top, which we often refer to as the dorsal part, and then go down towards the ventral part, or the bottom, what we typically see is that it starts out with only small-scale grid cells; the dots are small and very close to each other. +[1468.080 --> 1478.080] This is a box of 220 by 220 centimeters, and the distance here is down to something like 30 centimeters between each node. +[1478.080 --> 1498.080] As you go down, this increases, so when you get into the middle here, it may already be a meter or more, and if you go even further, then it's difficult to assess, because we don't have environments that are big enough, or didn't at least at that time. +[1498.080 --> 1506.080] There is a clear topographical gradient, where it begins with the smallest at the top and goes towards the largest at the bottom. +[1506.080 --> 1521.080] So this can be organized in many ways, but one important question was whether this is a continuous gradient, where you go smoothly from smallest to largest, or whether there are actually discrete steps. +[1521.080 --> 1529.080] So does this consist of subnetworks that each have their own scale? +[1529.080 --> 1555.080] Certain ideas about how grid cells arise actually require the latter, so we looked more into this, and this is work with Tor and Hanne Stensola in about 2012, who were able to record up to almost 200 grid cells from the same animal, which at that time was quite unique. +[1555.080 --> 1566.080] And by doing so, they were able to plot the scale of grid cells from the same animal in one diagram. +[1566.080 --> 1582.080] So what you have here is, on the x-axis, dorsal to ventral, so top to bottom in the entorhinal cortex, and then on the y-axis you have the scale of the grid, or the distance between the peaks, and then each dot is one cell.
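A minimal sketch of how one could look for discreteness in such a spacing-versus-depth cloud: sort the grid spacings, cut wherever the relative gap between neighbors is large, and compare the means of the resulting clusters. The spacings below are synthetic, planted at a constant ratio near the factor the talk reports next; the 15% cut threshold is an arbitrary choice for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic grid spacings (cm): four clusters whose means sit roughly a
# constant factor apart (38, 54, 77, 109), plus measurement noise.
spacings = np.concatenate([rng.normal(m, 1.5, 50) for m in (38, 54, 77, 109)])

# Crude 1-D clustering: sort, then cut wherever the relative jump is large.
s = np.sort(spacings)
cuts = np.where(np.diff(s) / s[:-1] > 0.15)[0]
modules = np.split(s, cuts + 1)
means = np.array([m.mean() for m in modules])
print("module means (cm):", means.round(1))
print("successive ratios:", (means[1:] / means[:-1]).round(2))  # roughly constant
```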
+[1582.080 --> 1592.080] And what you can see, first of all, is that, as I told you, as you go from dorsal to ventral, the scale generally gets larger and larger.
+[1592.080 --> 1606.080] But what you also see is that it is a step-like increase, where there is actually just a small number of scales present, and almost every cell can be put into one of these steps.
+[1606.080 --> 1616.080] These steps we call modules, and we call them module 1, the smallest one, and then module 2, module 3 and module 4.
+[1616.080 --> 1629.080] It turns out they even have a certain relationship, so we asked: what is the factor that you have to multiply M1 with in order to get M2?
+[1629.080 --> 1633.080] How much do you have to multiply M2 with to get M3, and so on?
+[1633.080 --> 1645.080] It turns out that it is actually a constant factor, and in this case, under those conditions in rats, the mean was approximately 1.42.
+[1645.080 --> 1658.080] Of course there is a lot of variation, but still the scale factor is the same, so that you can actually describe the scales of the grid
+[1658.080 --> 1668.080] cells, or the modules of the grid cells, as organized in something like a geometric progression (a small numerical illustration follows at the end of this passage).
+[1668.080 --> 1684.080] And what's the advantage of that? Well, that is still not clear, but it has been hypothesized, at least by various people, that this might be the best way to organize grid scales
+[1684.080 --> 1694.080] if you want to represent space in possibly the most efficient manner, with the fewest number of cells.
+[1694.080 --> 1702.080] So, I want to emphasize at least one major difference between the place cell map and the grid cell map.
+[1702.080 --> 1705.080] So, we go back to the place cells now.
+[1705.080 --> 1718.080] You heard, from the morning talks and also from Karl's talk, that one property of place cells is that they remap, as we say.
+[1718.080 --> 1730.080] That means that you have different maps, or different combinations of place cell maps, in different environments.
+[1730.080 --> 1743.080] So we can also say that the map is high-dimensional; what this means is that the maps are uncorrelated, or as different as possible.
+[1743.080 --> 1753.080] This was shown starting with Muller and Kubie, and has then been developed by many labs over the years.
+[1753.080 --> 1765.080] But I like to show this experiment that we did quite recently, because we demonstrated the effect in as many as 11 different recording rooms.
+[1765.080 --> 1770.080] So, here's a picture of the 11 rooms where rats were tested.
+[1770.080 --> 1775.080] And I like to show them because those rooms are so similar that I can't tell the difference.
+[1775.080 --> 1783.080] I have no way to tell the difference between room N8 and room N2, for example.
+[1783.080 --> 1797.080] But the question is whether rats are able to. So what Charlotte Alme in our lab did was to record many, many place cells from the same rat in sequence,
+[1797.080 --> 1812.080] where rats were tested sequentially in all these rooms: one familiar room, which has the label F, and then 10 novel rooms where they were exposed for the first time, labelled N1 to N10.
+[1812.080 --> 1830.080] And then we asked whether the place maps in those rooms are similar: is it one map that is carried over, or, as we expected because of the ability to remap from one room to another, are they uncorrelated?
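Here is the small numerical illustration of that module arithmetic: if successive module scales are related by a constant factor r, the k-th scale is m_k = m1 * r**(k-1). In this Python sketch the ratio 1.42 is taken from the talk, while the 40 cm base scale is only an assumed example value:

    # Illustration of the module arithmetic: successive grid scales related
    # by a constant factor, i.e. a geometric progression m_k = m1 * r**(k-1).
    # The ratio 1.42 is from the talk; the 40 cm base scale is assumed.
    m1, r = 40.0, 1.42
    modules = [m1 * r**k for k in range(4)]        # modules 1 to 4
    print([round(m, 1) for m in modules])          # [40.0, 56.8, 80.7, 114.5]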
+[1830.080 --> 1845.080] So, what we found in this experiment is that all combinations of maps in all of these rooms are actually as different as is possible.
+[1845.080 --> 1859.080] What you see here is, first of all, maps from place cell number 1, cell number 2, 3, 4, and so on until cell number N, and then they are correlated using a population vector approach.
+[1859.080 --> 1871.080] And this is a correlation matrix, with all the different rooms on one axis and all the different rooms on the other axis, and the color indicates the correlation between the maps.
+[1871.080 --> 1882.080] And of course, along the diagonal, when you correlate a room with itself, you get a correlation of 1, so that is no surprise: the same recording correlated with itself.
+[1882.080 --> 1897.080] But for all other combinations, you see that it is in the deep blue range, which is essentially what you would get by chance; it is absolutely not different from what you get if you just shuffle the data completely,
+[1897.080 --> 1904.080] except for a very few places here, which are all marked by a star, or asterisk.
+[1904.080 --> 1921.080] And those are the instances where the room was actually repeated, a second exposure to the same room. So that shows that it is not just that a new map is pulled up each time; the same map is actually re-expressed when they go back to the environment.
+[1921.080 --> 1938.080] But otherwise, those maps, or place cells, are as different as they can be, which is probably quite useful, and what you want to have in a structure that stores memories, including spatial memories, for many, many places.
+[1938.080 --> 1943.080] You don't want to mix them up; you want to keep them separate.
+[1943.080 --> 1951.080] So this is in line with all the work that has implied a role for the hippocampus in memory, which you heard more about this morning.
+[1951.080 --> 1965.080] This is quite different from what we see in the entorhinal cortex, because in the entorhinal cortex there is not this scrambling between environments.
+[1965.080 --> 1985.080] I illustrate this first with the same type of approach. You compare two different rooms: this is room A, this is room B. In one of the rooms it was a circle, in the other a square, but anyway, many cells were recorded at the same time in both rooms.
+[1985.080 --> 2000.080] And then cross-correlated, so a similar correlation between the maps for the different environments. And this shows the result for 1, 2, 3, 4, 5, 6 cells.
+[2000.080 --> 2014.080] The first row shows what happens when you correlate the map with the same environment, A versus A. And of course you get the grid map back, because there is no reason why it should change.
+[2014.080 --> 2037.080] And of course you also get a peak in the center, because there is no reason why the map should move. But if you now correlate A versus B, you also get a grid map, except that the map is shifted slightly, to the right in this case, which means that the cell has the same pattern, just slightly displaced in one direction.
+[2037.080 --> 2056.080] But the important thing here is that this shift is expressed in every single cell that was recorded. They all show the same shift, which means that you really have one map that is just shifted in X or Y, or maybe it could even be rotated. But it's the same map; the same map.
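As a rough sketch of the kind of room-by-room comparison just described, one can build a population vector per room from the stacked rate maps and correlate rooms against each other. This simplified variant flattens cells-by-bins rather than averaging bin-wise population-vector correlations as in the published analysis, and all names and the toy data are assumptions:

    import numpy as np

    def room_correlation_matrix(maps):
        # maps: array of shape (n_rooms, n_cells, n_bins) of rate maps.
        # Builds one population vector per room (cells x bins, flattened)
        # and returns the room-by-room Pearson correlation matrix.
        n_rooms = maps.shape[0]
        pv = maps.reshape(n_rooms, -1)
        return np.corrcoef(pv)

    # toy usage: 3 "rooms", 50 cells, 20 x 20 spatial bins each
    rng = np.random.default_rng(0)
    toy = rng.poisson(2.0, size=(3, 50, 400)).astype(float)
    print(np.round(room_correlation_matrix(toy), 2))
    # diagonal = 1; off-diagonal near 0 for unrelated maps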
+[2056.080 --> 2068.080] And this is even the case if you put all the cells on top of each other and cross-correlate them all together: again you get the same map, just slightly shifted.
+[2068.080 --> 2079.080] So what this suggests is that it's really just one map that is used over and over again, at least as long as you stay within one of the modules.
+[2079.080 --> 2088.080] At that time we didn't know about modules, but there are still reasons to believe that most of the cells were from one single module.
+[2088.080 --> 2107.080] So based on those data, which are essentially all from a study by Marianne Fyhn in 2007, in collaboration with Alessandro Treves, we asked more recently:
+[2107.080 --> 2121.080] is this a single map that is expressed even when the rat has no behavior, for example when it's sleeping and not walking around?
+[2121.080 --> 2136.080] So this is a study that we did, and the same result has also been shown in Laura Colgin's lab, where they did the same thing at the same time, with exactly the same result.
+[2136.080 --> 2150.080] So then I believe it. What we found, and this is Richard Gardner's work in our lab, what he did was first to compare pairs of cells that were in phase,
+[2150.080 --> 2161.080] meaning that the grid patterns are more or less overlapping. So you see two grid cells here, and you can see that the peaks are more or less in the same place.
+[2161.080 --> 2172.080] You can also see that from the color, but it is maybe easier to see here. So these are examples of two cells that fire in the same places when the rat is awake and walks around in the box.
+[2172.080 --> 2188.080] And as you would expect, if you cross-correlate those two cells in time, showing the probability of cell two firing when cell one is active, you get a strong peak around zero, because when one fires, the other fires too.
+[2188.080 --> 2203.080] So that's all as expected. But if you then record from the same cells in sleep, in slow-wave sleep, you get the same peaks. So the cells that fire together in the wake state also fire together in sleep.
+[2203.080 --> 2218.080] Conversely, if the cells are out of phase, if they have their dots or peaks at different places, then if they fire out of phase in the wake state, they also fire out of phase in sleep.
+[2218.080 --> 2233.080] So it's the same thing. And if you do this for 1267 combinations, or pairs, of grid cells, you can plot that with one line per cell pair.
+[2233.080 --> 2245.080] And you can then transform this plot to color, so that yellow is high cross-correlation and black is low.
+[2245.080 --> 2259.080] What you then find is that those pairs that have the highest cross-correlation in the wake state, when the rat is walking in this open field environment, are also the ones that have the highest in sleep.
+[2259.080 --> 2269.080] And those that have the lowest correlation in the wake state, when it walks in the box, are the ones that have the lowest correlation in the sleep state.
+[2269.080 --> 2281.080] And the same applies in a different type of sleep, REM sleep, which in humans corresponds to when we dream. You see the same thing again, a bit more noisy because there's much less data from that state.
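The pairwise temporal analysis described above can be sketched as follows: bin two spike trains and correlate them over a range of lags, once for wake data and once for sleep data. This is a minimal illustration, not the lab's pipeline; the bin and lag parameters are assumptions:

    import numpy as np

    def cross_correlogram(spikes_a, spikes_b, t_max, bin_s=0.01, lag_s=0.5):
        # Cross-correlogram of two spike trains (spike times in seconds).
        # In-phase grid-cell pairs show a peak near zero lag; out-of-phase
        # pairs show a trough. Run once on wake data, once on sleep data.
        edges = np.arange(0.0, t_max + bin_s, bin_s)
        a = np.histogram(spikes_a, edges)[0].astype(float)
        b = np.histogram(spikes_b, edges)[0].astype(float)
        a -= a.mean(); b -= b.mean()
        n_lags = int(lag_s / bin_s)
        lags = np.arange(-n_lags, n_lags + 1)
        cc = [np.dot(a[max(0, -k):len(a) - max(0, k)],
                     b[max(0, k):len(b) - max(0, -k)]) for k in lags]
        cc = np.array(cc) / (a.std() * b.std() * len(a))
        return lags * bin_s, cc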
+[2281.080 --> 2305.080] What these sleep data essentially show is that they confirm the suggestion that this entire map is really low-dimensional, with only one, or at least only a few, ways to express itself; very, very different from the hippocampal maps, which can appear in all kinds of combinations.
+[2305.080 --> 2328.080] And this is exactly as would be predicted by attractor-type models for grid cells, models that propose that grid cells actually arise as a consequence of how the network is wired together; those connections and interactions are present also in the sleep state, even when the animals don't walk around.
+[2328.080 --> 2346.080] So with that, that's a little bit of introduction about grid cells. I should also mention, though, that there are other types of cells, which some of you may already have heard about from the symposium earlier today, and which were also mentioned earlier.
+[2346.080 --> 2357.080] Not least the head direction cells. Head direction cells are cells that fire only when the animal's head is pointing in a certain direction.
+[2357.080 --> 2367.080] These cells were discovered by Jim Ranck, and then Jim Ranck with a student, Jeff Taube, followed up and characterized them.
+[2367.080 --> 2379.080] They found them originally in the dorsal presubiculum, which is adjacent to the dorsal medial entorhinal cortex.
+[2379.080 --> 2385.080] But it turned out that they are very abundant also in the medial entorhinal cortex.
+[2385.080 --> 2394.080] So this shows again a side view of the rat brain, and the area between the two red lines here is the medial entorhinal cortex.
+[2394.080 --> 2401.080] And what you see here is that these cells don't really have the grid dots that you saw in the other cells.
+[2401.080 --> 2409.080] But what they have, as you see here in this polar plot, which shows firing rate as a function of the direction of the rat's head:
+[2409.080 --> 2418.080] you can see that this cell, for example, is only active when the rat has its head pointing in the left, or west, direction.
+[2418.080 --> 2425.080] This cell is only active when the rat is walking from bottom right to top left.
+[2425.080 --> 2428.080] So these are strongly directionally tuned cells.
+[2428.080 --> 2431.080] Some of them are very, very sharply directionally tuned.
+[2431.080 --> 2439.080] Others are a little bit broader, and some of them can also be head direction cells and grid cells at the same time.
+[2440.080 --> 2455.080] There are also other cells, border cells we named them, cells that fire exclusively when the rat is walking along one or several borders of the local environment.
+[2455.080 --> 2459.080] So here again you see the box from the top.
+[2459.080 --> 2463.080] The color indicates activity, or firing rate.
+[2463.080 --> 2471.080] Red is the highest rate, and you can see an example here of a cell that fires only when the rat is at the right wall of the box.
+[2471.080 --> 2480.080] And that happens even if you stretch the box, either in the x or in the y direction.
+[2480.080 --> 2484.080] It still just fires along that particular wall.
+[2484.080 --> 2488.080] This shows the same cell in a different room.
+[2488.080 --> 2492.080] So now the cell chooses the left wall instead.
+[2492.080 --> 2501.080] And what you see here in the middle is that if a wall is inserted in the middle, then the cell actually fires along that wall too,
+[2501.080 --> 2503.080] and on the corresponding side.
+[2503.080 --> 2509.080] So on the right side here, just as it does on the right side of the peripheral wall.
+[2509.080 --> 2512.080] So it's a very different type of cell.
+[2512.080 --> 2519.080] A grid cell is never a border cell, and a border cell is never a grid cell, at least not in our hands.
+[2519.080 --> 2530.080] So these are different classes of cells, and as some of you may have heard in the morning, they respond differently to sensory inputs: visual inputs versus locomotion, for example.
+[2530.080 --> 2539.080] But these cells coexist. They are intermingled in the superficial layers of the entorhinal cortex,
+[2539.080 --> 2549.080] and very closely associated also with the head direction cells, which are also there but shifted slightly more into the deeper layers.
+[2549.080 --> 2556.080] And there are more cell types; some of you may have heard about speed cells.
+[2556.080 --> 2561.080] These are cells that, as you'll see for the 12 example cells here,
+[2561.080 --> 2564.080] don't really have a preferred location of firing.
+[2564.080 --> 2570.080] The color-coded heat maps here show that they are active anywhere in the box.
+[2570.080 --> 2587.080] But what the line diagrams here show is that their firing rate is strongly, linearly correlated with the running speed of the animal.
+[2587.080 --> 2589.080] And that's also clear from the examples here:
+[2589.080 --> 2595.080] seven different cells, shown in different colors, on the background of the speed of the rat.
+[2595.080 --> 2601.080] So the speed is shown in gray over a period of two minutes, and the color shows the firing rate of the cell.
+[2601.080 --> 2609.080] And you can see, for example if you focus on the yellow one here, how closely the cell's firing rate actually follows the speed of the rat.
+[2609.080 --> 2613.080] It's extremely closely tied to the speed (a small regression sketch follows at the end of this passage).
+[2613.080 --> 2627.080] The existence of these cells is also kind of predicted, because self-motion information is necessary for updating the grid cells.
+[2627.080 --> 2641.080] Grid cells are thought to use path integration to decide where to fire, and a self-motion or speed input is just as essential as the direction input.
+[2641.080 --> 2650.080] And finally, there are many cells that have spatially localized firing fields but aren't really anything:
+[2650.080 --> 2655.080] they're not grids, they're not borders, they're just blobs of firing at particular locations.
+[2655.080 --> 2669.080] Of course there could be grid cells among them, cells that for some reason have such large grid patterns that you just see one peak, or whose other grid fields are so low in rate that you don't see them.
+[2669.080 --> 2676.080] But nonetheless they are hard to explain, and there are many, many of them. They have been known for a long time.
+[2676.080 --> 2680.080] But until quite recently, at least, I thought they were mostly just garbage,
+[2680.080 --> 2684.080] cells that you couldn't really put into any category.
+[2684.080 --> 2698.080] But I changed my mind slightly when Chenglin Miao in our lab showed that they are modulated in a different way than many of the other spatial cells.
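Here is the small regression sketch of the speed-cell relationship mentioned above: the analysis is essentially a linear rate-versus-speed fit. The data below are simulated stand-ins, and the simple Pearson "speed score" is an assumption in the spirit of the analysis, not the study's exact metric:

    import numpy as np

    # Toy version of the speed-cell analysis: correlate a cell's
    # instantaneous firing rate with running speed and fit a line.
    rng = np.random.default_rng(1)
    speed_cm_s = rng.uniform(0, 50, 1200)                  # running speed samples
    rate_hz = 0.3 * speed_cm_s + rng.normal(0, 2, 1200)    # a toy speed cell

    speed_score = np.corrcoef(speed_cm_s, rate_hz)[0, 1]   # Pearson r
    slope, intercept = np.polyfit(speed_cm_s, rate_hz, 1)  # linear tuning fit
    print(f"speed score r = {speed_score:.2f}, gain = {slope:.2f} Hz per cm/s")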
+[2698.080 --> 2727.080] What Miao found, and you can see an example of it here, is that if you silence somatostatin-expressing interneurons using chemogenetic methods, then, as you see in the middle column here, those cells under blockade of the somatostatin-expressing interneurons
+[2727.080 --> 2734.080] actually have much more dispersed firing; and when the drug is out of the body again, they go back to what they were.
+[2734.080 --> 2744.080] And this happens only to these cells. A grid cell, for example, would not respond to that treatment.
+[2744.080 --> 2755.080] Conversely, if you block another type of interneuron, the parvalbumin-expressing interneurons, then there is no effect on these cells, as you can see here,
+[2755.080 --> 2765.080] but there is a very strong effect on grid cells instead. So it seems like these are actually different classes of cells that are modulated separately.
+[2765.080 --> 2783.080] All in all, this then brings me back to the movie where I started, which suggests that these cells are widely expressed.
+[2783.080 --> 2792.080] Actually, they are present in many species. They were found first in rats, and then in mice, not surprisingly.
+[2792.080 --> 2803.080] But then they were discovered in bats, by the Ulanovsky group. And bats are on a completely different branch of the mammalian evolutionary tree.
+[2803.080 --> 2819.080] And then grid cells, or at least grid-like cells, were found in monkeys, with Betsy Buffalo's work, and finally in humans, by Josh Jacobs and Itzhak Fried.
+[2819.080 --> 2829.080] The fact that they are spread around among mammals probably suggests that they arose quite early on and are present widely among mammals.
+[2829.080 --> 2838.080] And this applies not only to grid cells but at least to several other types of cells too, like border cells and head direction cells.
+[2838.080 --> 2847.080] So that was my long, long introduction, but I did want to save some time for a few new things.
+[2847.080 --> 2865.080] One of the first questions that probably comes up for everyone here who is not working in the field is: why do they only test these animals in empty boxes? Rats don't really walk around in empty boxes in their natural lives.
+[2865.080 --> 2879.080] So how about more realistic environments? And how are realistic environments different from empty boxes? Well, at least they contain objects; there are things in the environment.
+[2879.080 --> 2893.080] There is some precedent from other approaches suggesting that rats, or animals in general, may actually use objects for navigation.
+[2893.080 --> 2915.080] That includes behavioral work, especially the work of Tom Collett, which is illustrated here. The five-second version of it is that they tested gerbils in an arena which contained two landmarks, the two circles here, and the X indicates a location where they could dig for food.
+[2915.080 --> 2941.080] They were tested over and over and over again, but then on a test trial the two landmarks, the two objects, were pulled apart; and what they observed was that the animals did not search in the middle here, but actually searched at a certain distance away from each of those objects, suggesting that they encoded the distance and direction from the individual objects to find the food.
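The logic of that gerbil experiment can be written out directly: if the animal stores a vector from each landmark to the food location, then when the landmarks are pulled apart it should search at landmark-plus-vector rather than at the midpoint. A sketch with made-up coordinates:

    import numpy as np

    # Training: food at `food`, two landmarks; the stored cue is the vector
    # from each landmark to the food. All coordinates here are made up.
    l1, l2 = np.array([0.0, 0.0]), np.array([50.0, 0.0])
    food = np.array([25.0, 30.0])
    v1, v2 = food - l1, food - l2          # stored landmark-to-food vectors

    # Test: landmarks pulled further apart.
    l1t, l2t = np.array([-25.0, 0.0]), np.array([75.0, 0.0])
    print("search near landmark 1:", l1t + v1)     # [ 0. 30.]
    print("search near landmark 2:", l2t + v2)     # [50. 30.]
    print("midpoint (not used):   ", (l1t + l2t) / 2)

This reproduces the observed behavior: two search spots, each displaced from one landmark by the stored vector, rather than a single search at the middle.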
+[2941.080 --> 2967.080] This behavioral result, together with theoretical work that was partly inspired by it, including the work of McNaughton et al. (Jim Knierim was also on that paper), suggested that there must be cells somewhere in the hippocampal system that respond to locations defined by distances and directions, or vectors, from the
+[2967.080 --> 2995.080] individual objects. The idea of vector encoding was also proposed by O'Keefe and Burgess based on their work, but they suggested it was walls, or boundaries, that were used by animals to encode positions in open space.
+[2995.080 --> 3024.080] So the idea was there. Based on this, Øyvind Høydal, a PhD student in our lab, recorded from mice while these mice were running around in very simple environments like the ones we have already seen, but with an object, a very prominent tower-like object, now placed in the environment. And it turned out that there were indeed very many cells that responded to the location
+[3025.080 --> 3047.080] of the object. They did not fire at the location of the object, but at some distance away from it, in a certain direction. Such a cell, an example cell, is shown here: you see it has one single peak, one single area of firing, and that area is displaced from the object in a certain direction.
+[3047.080 --> 3076.080] The typical design is like this: it starts out with a no-object trial, where there is no object in the circular environment; then an object is introduced somewhere near the middle, and then the object is displaced. What he sees, as in the two examples shown here, is that the cell starts to express a strong field, a strong area of activity, at a certain place defined relative to the object, in this direction;
+[3077.080 --> 3106.080] in this case on the north side of the object, some 20-30 centimeters away. Then the object is moved, in this case downwards, you see the white circle here, and still the cell fires some 20-30 centimeters north of the object; and the same for this cell shown here. And that can be plotted: you can plot the firing rate as a function of distance from the object and orientation relative to the object.
+[3107.080 --> 3122.080] And you can measure that on trials with the object in different places, and then you find for these cells that the correlations between those two trials are way beyond what you would expect by chance.
+[3122.080 --> 3151.080] So how are these object-vector fields distributed? This shows just part of the data, there are actually many more cells now than when I made this figure, but the point still holds. This shows one line per cell, and color indicates firing rate; this axis shows orientation relative to the object. And what you can see is that essentially all orientations
+[3152.080 --> 3170.080] are expressed. If it were perfectly distributed you would have a line along the diagonal here. So this is the distribution of orientations: a slight bias towards 90 degrees, but actually that bias is almost gone in the larger data set.
+[3170.080 --> 3193.080] This shows the distribution of the distances: you can see the firing field is typically about 5, 10, 15 centimeters away from the object, but it can be anything up to 45, probably more; we couldn't test beyond that, because the environment wasn't really larger than that if you wanted to have the object in different places.
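A minimal sketch of how firing rate as a function of distance and direction from the object might be computed; the binning choices and names here are assumptions, not the published method:

    import numpy as np

    def object_vector_map(pos, spikes_idx, obj, d_bins, a_bins):
        # pos:        (n_samples, 2) tracked positions (cm)
        # spikes_idx: indices into pos at which the cell spiked
        # obj:        (2,) object location (cm)
        # Returns a (distance x angle) map of spikes per occupancy sample;
        # an object-vector cell shows a single hot spot at its preferred
        # distance and direction from the object.
        rel = pos - obj
        dist = np.hypot(rel[:, 0], rel[:, 1])
        ang = np.arctan2(rel[:, 1], rel[:, 0])
        occ = np.histogram2d(dist, ang, bins=[d_bins, a_bins])[0]
        spk = np.histogram2d(dist[spikes_idx], ang[spikes_idx],
                             bins=[d_bins, a_bins])[0]
        with np.errstate(invalid="ignore", divide="ignore"):
            return spk / occ          # NaN where the animal never went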
+[3193.080 --> 3222.080] The response is not dependent on the exact type of object. This shows 13 different types of objects; many of them are quite similar, tower-like, they could be prisms or cylinders, but they looked quite different, and if you use several of them, or replace them, it doesn't really matter. This is shown here for five example cells; you can for example see cell number two here:
+[3223.080 --> 3252.080] it responds in the same way to two objects placed here, there are two circles, and you can see it fires on the left side, at a certain distance away from each of the two objects. You can also see it here: for this cell, three different objects, and again fields on the southeast side of each of them. And with this one here you can even construct a grid pattern if you like, because if you put the objects in a certain arrangement you get fields, in this case on the northeast side of each.
+[3253.080 --> 3281.080] So this suggests that it's not really the identity of the object that is encoded, but rather positions, actually vectors, directions and distances, away from any prominent object in the environment. And that even includes objects that are very different, like the flat cylinder here, and even a wall like this.
+[3281.080 --> 3310.080] So is this something that has to be learned? It turned out not to be, because these cells, which were recorded multiple times in familiar environments, were also tested in a novel environment, a novel room with a novel object, and you can see that you get exactly the same type of firing both in the familiar and in the novel room, with, if anything, just very, very minor differences in the information content,
+[3311.080 --> 3318.080] that is, in how spatially selective they are.
+[3318.080 --> 3330.080] We also wondered whether the intrinsic relationship between different cells of this kind is maintained. What you see here are two simultaneously recorded cells.
+[3330.080 --> 3358.080] One has its field on the southeast side and one has its field on the northeast side in room A. If the rat is then recorded in room B, it all rotates: this cell flips by almost 180 degrees, and now you see that the firing field is on the northwest side; and this one also flips by almost 180 degrees. The same happens to a head direction cell recorded at the same time.
+[3358.080 --> 3387.080] So this also suggests that the intrinsic relationships between these cells, and even between these cells and other directionally oriented cells, are maintained between environments. Again, this is part of the low dimensionality of the entorhinal map, where both grid cells and head direction cells turn out to be more or less one map that is maintained across environments.
+[3388.080 --> 3394.080] So are these cells different from other cell types, like grid cells?
+[3394.080 --> 3416.080] Well, largely yes. We calculated scores for border cells, grid cells, speed cells and head direction cells, using criteria that we have used previously to identify such cells, and essentially what you can see here is that, for example, grid scores are around zero.
+[3416.080 --> 3444.080] That means they are not different from what you would expect by chance, and the same for the head direction tuning. What you see in the middle column is the object-vector cells when there's no object, and to the right you see the object-vector cells when the object is present; what you see in the left column is the rest of the cells, so they are not really different from the population.
+[3444.080 --> 3454.080] So: low head direction tuning, low grid tuning, and definitely not more border-like activity than border cells.
+[3454.080 --> 3473.080] However, and I'll skip ahead here, there is some overlap with border cells, and that may not be so surprising, because a border is also an object. How do you really distinguish those? A border is just an object that is elongated in one direction, so when does an object become a border?
+[3474.080 --> 3503.080] This is an object becoming a border; I think it is not totally obvious where the boundary lies. But using the criteria we have used to identify both object-vector cells and border cells, we do find that there's a small subset, 11 out of approximately 150 cells, that actually satisfy both criteria, and you see some examples over here. If you look at the bottom first, you see a typical border cell recorded with no object present; then the object is introduced.
+[3503.080 --> 3532.080] When the object is introduced here in the middle, you see a white dot here, the cell adopts a field on one side of it, just like it has for the border. And here at the bottom you see another cell of the same type, which fires along the border of this cylinder, and when you introduce the object in the middle, you get another field as well. But many, many don't, most actually don't, and you see an example at the top here,
+[3532.080 --> 3560.080] where with no object there is a border field along one border here, and you also see that in the neighboring cell, the one below; you introduce the object and nothing happens. So what the difference is between these cells is not quite clear, but there are many, many things suggesting that object-vector cells are not just border cells:
+[3560.080 --> 3589.080] they are different in many ways. What we show here is the relationship between the orientations of firing of these cells, the direction of firing relative to the border. This is for the border cells, and you can see that they essentially line up along the orientations of the walls, as expected.
+[3590.080 --> 3619.080] But when it comes to the object-vector cells, which you see here to the right, they have all kinds of orientations. And the same with the distance from the object versus the wall: this shows the orientation of the object-vector field versus the orientation of the border field, for those cells that had fields of both kinds, and you can see there is really no correlation; you would expect bands parallel with the diagonal if the two were the same.
+[3620.080 --> 3643.080] And the field distance: this is the distance from the wall, or border, and this is the distance from the object, for those cells that fired in relation to both, and you can see that the distance to the object is much larger. This could partly be because of the way the cells are defined, but nonetheless they are different in many ways.
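For reference, the grid score used in classifications like the one above is conventionally based on the rotational symmetry of the spatial autocorrelogram: a hexagonal grid correlates highly with itself rotated by 60 and 120 degrees, and poorly at 30, 90 and 150 degrees. A compact sketch, assuming a precomputed, finite-valued autocorrelogram (this is the standard approach in the literature, not necessarily this study's exact implementation):

    import numpy as np
    from scipy.ndimage import rotate

    def grid_score(autocorr):
        # Rotational-symmetry grid score of a 2D spatial autocorrelogram.
        def rot_corr(deg):
            r = rotate(autocorr, deg, reshape=False, order=1)
            m = np.isfinite(autocorr) & np.isfinite(r)
            return np.corrcoef(autocorr[m], r[m])[0, 1]
        on = min(rot_corr(d) for d in (60, 120))        # high for grids
        off = max(rot_corr(d) for d in (30, 90, 150))   # low for grids
        return on - off        # near zero: no hexagonal symmetry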
+[3643.080 --> 3672.080] To try to tidy this up, we have ongoing work, Bastian Anderson's work, where he tried to manipulate the shape of the objects to make them more or less border-like. First of all he asked whether it's the height of the objects that matters: he had small objects and then big objects, put them in the same place in the same environment, and could see that if they're very, very small,
+[3673.080 --> 3696.080] they sometimes don't elicit object-vector fields, but they do so consistently as they get larger. And the same when he changes the width of the objects: this shows anything from about 2 cm width to 30 cm width, and you can see that the cells fire in the same orientation, in the same way, regardless.
+[3696.080 --> 3725.080] He even tried to morph the objects from what we originally called an object into a border, or a wall: you see the object here getting bigger and bigger and then going back. Essentially the cell fires all the time; but this cell, which clearly is an object-vector cell by the definition as we had it, even though it fires along most of this inserted wall, never fires along the peripheral walls.
+[3726.080 --> 3740.080] So I think these cells have many properties that distinguish them, although it is still yet to be finally determined what the difference is between an object and a border.
+[3740.080 --> 3769.080] Finally, before I leave this topic, I just want to say that these cells have some similarities with cells that have been recorded before. First of all I want to mention the object cells, not object-vector cells but the object cells, of the lateral entorhinal cortex, which Jim Knierim and his students have observed for many years; but those cells essentially fire at or around the object.
+[3770.080 --> 3798.080] So they are different in that sense. The object-vector cells are more similar, possibly identical or at least more similar, to what they call landmark vector cells in the hippocampus, which are cells that also fire displaced from the object. This shows four objects, and this shows the firing fields, which are in this case on the southeast side of two of the objects.
+[3798.080 --> 3815.080] They are different in some ways: for example, many of them fire only in relation to some objects and not others, and as I understood it, it also took quite a while for many of them to develop, whereas the ones in the entorhinal cortex are expressed from the outset.
+[3815.080 --> 3844.080] So finally, to sum up and come back to where I started with these cells: they suggest that the medial entorhinal cortex may encode position in several ways, not only by a metric defined by a regular grid, but also by using a completely different, vector-based principle, based on locations relative to objects in the environment.
+[3845.080 --> 3874.080] I will now move on from individual objects in the environment. I promised to come back at the end to another dimension: time. We heard this morning, especially from Lynn Nadel's talk, that the hippocampus is very important for episodic memory, where space has an absolutely essential role; but space isn't all there is to episodic memory, there is also a time component,
+[3875.080 --> 3897.080] and our understanding of how time is encoded has not been at the level of our understanding of space.
So this is the work of Albert Tsao, who was a PhD student in our lab, in collaboration with the Knierim lab, which contributed some of the data.
+[3897.080 --> 3918.080] This will be presented in more detail tomorrow in a symposium, so I don't want to steal the whole show from Albert. I will just present it very briefly and put it into context, and then hopefully many of you will find an opportunity to listen to Albert himself tomorrow.
+[3918.080 --> 3932.080] But let's give it some background. What do we know about encoding of time in the hippocampus? There are at least two aspects worth emphasizing.
+[3932.080 --> 3961.080] First we have the so-called time cells, which are cells that were described initially by Pastalkova et al. from the Buzsáki lab and then followed up more extensively by a series of studies from the Eichenbaum lab. In the original task, rats run in a certain pattern, like the figure-eight pattern you see here,
+[3962.080 --> 3976.080] and then stop at a running wheel and run there through a delay, until they continue to run in the maze again. During that delay, the cells actually fire at certain times in the interval.
+[3976.080 --> 4000.080] This is plotted here from neuron number 1 to neuron 30, against the time in the wheel while they run, and what you can see is that these cells fire at specific times during the interval, and that it is very orderly firing: just as cells fire in an orderly sequence in space when rats run on, for example, a linear track, they fire in a certain order when they run in the wheel,
+[4000.080 --> 4012.080] even though they don't move through space at all; and this even happens when they control for movement.
+[4012.080 --> 4024.080] So this was proposed to show that hippocampal cells can actually also express time, the very same cells that express space in other contexts.
+[4024.080 --> 4042.080] So these are the so-called time cells, and there is a lot of attention on that now; but nonetheless, this has been described across time scales of not much more than 10 seconds, or a little bit more, and probably it also has to be learned.
+[4042.080 --> 4066.080] But then there is a totally different expression of time in the hippocampus. Going back first to studies again by Eichenbaum: in a study from 2007, rats were trained in an odor sequence memory task, but the essence of it is shown in this figure.
+[4066.080 --> 4083.080] This is the trial lag, or the distance between trials, and on the y-axis you have the differences in the population activity, and what you can see is that, regardless of where they actually find their food, the population distance increases with time.
+[4083.080 --> 4094.080] So slowly there is a change in which cells are active at any given time in the hippocampus; that could be an expression of time (a sketch of this lag analysis follows at the end of this passage).
+[4094.080 --> 4116.080] Then work from Jill Leutgeb's lab, and also a study by Yaniv Ziv when he was with Mark Schnitzer, showed that in CA1 this is strong, yes, but it is even stronger in the CA2 area of the hippocampus.
+[4116.080 --> 4145.080] This is where we were when Albert Tsao started, but we wanted to find out more about this, where it comes from, and to understand how such a code is expressed outside of the hippocampus. We then directed our attention to the lateral entorhinal cortex, which I haven't talked much about at all today, but where cells are not really very strongly spatially selective, unlike in the hippocampus.
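Here is the sketch of that lag analysis: the slow drift can be quantified as population activity becoming more different with trial lag. A minimal illustration with a correlation-distance curve; the data structure and names are assumed:

    import numpy as np

    def pv_distance_by_lag(activity):
        # activity: (n_trials, n_cells) mean firing rate per trial.
        # Returns, for each trial lag, the mean correlation distance
        # (1 - Pearson r) between population vectors of trial pairs;
        # a slowly drifting code gives distance growing with lag.
        n = activity.shape[0]
        out = {}
        for lag in range(1, n):
            d = [1 - np.corrcoef(activity[i], activity[i + lag])[0, 1]
                 for i in range(n - lag)]
            out[lag] = float(np.mean(d))
        return out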
+[4146.080 --> 4162.080] That lack of spatial selectivity was shown by Jim Knierim's group around the same time as we found the grid cells. But we wondered whether much of the activity of the lateral entorhinal cortex could actually be explained by a role in the coding of time.
+[4162.080 --> 4191.080] What Albert did was to test rats in a sequence of trials extending over a period of more than one hour, alternating between two types of environments, a black environment and a white environment, meaning that the walls are either black or white but otherwise totally similar, in a semi-random sequence over a series of 12 trials, with rest epochs in between the trials.
+[4192.080 --> 4201.080] So that totals 24 different recording epochs and, as I said, in total a little bit more than one hour.
+[4201.080 --> 4210.080] Then he asked how the activity of cells in the lateral entorhinal cortex behaves across this time sequence.
+[4210.080 --> 4234.080] First of all, he did find some cells in the lateral entorhinal cortex that are strongly modulated by time; that means that their firing rates change in various ways across the experiment, and this is not due to instability of the recordings, because he showed in many ways that they are totally, totally stable.
+[4234.080 --> 4262.080] The activity of the cells is shown here for four different types of cells, across the alternating black and white trials, and what you can see, though it is made a little bit difficult to see, is that the cells ramp either up or down within trials, as in this one: firing begins low and gets higher and higher, then the next trial begins low again and it gets higher and higher, and so on.
+[4262.080 --> 4283.080] Or a cell might have activity that ramps up or down over the whole one-hour-plus session, or you may have combinations, where the activity ramps up or down just in the black boxes, or just in the white boxes, and so on.
+[4283.080 --> 4309.080] Based on this, he then performed a generalized linear model (GLM) analysis and identified the fraction of cells that have significant modulation by the various factors that he put into the analysis, which included the color of the wall (black or white), the position of the rat, the combination of the two, time, and mixtures of them all.
+[4309.080 --> 4338.080] And what we found, first of all, beginning at the bottom with the medial entorhinal cortex and with CA3: not surprisingly, there is a very strong influence of position, which you can see both here in the medial entorhinal cortex and to some extent in CA3, and of course not much in the lateral entorhinal cortex. When it comes to the color of the wall, it is quite low in all of them, but there is some in CA3 and in the lateral entorhinal cortex.
+[4339.080 --> 4358.080] But when it comes to time, you see that both CA3 and the medial entorhinal cortex are quite low, whereas in the lateral entorhinal cortex a very high proportion shows significant modulation: about 25 to 30% passed the significance threshold.
+[4358.080 --> 4374.080] That doesn't mean that the others don't contribute; they may just have weaker influences. CA1 and CA2 are somewhat in between, but none of them really reach the level of the lateral entorhinal cortex.
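A minimal sketch of a GLM of this general kind, using Poisson regression on toy regressors; the actual design matrix in the study was surely richer, and everything here is an illustrative assumption:

    import numpy as np
    import statsmodels.api as sm

    # Model a cell's spike counts from wall colour, position and elapsed
    # time, then check whether the time regressor is significant.
    rng = np.random.default_rng(2)
    n = 2400                                    # time bins across the session
    time_s = np.linspace(0.0, 4000.0, n)        # elapsed-time regressor
    colour = (np.arange(n) // 100) % 2          # alternating black/white epochs
    pos_x = rng.uniform(0.0, 1.0, n)            # toy position regressor
    counts = rng.poisson(np.exp(-1.0 + 4e-4 * time_s))   # a time-ramping cell

    X = sm.add_constant(np.column_stack([colour, pos_x, time_s]))
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    print(fit.pvalues)       # last entry: p-value of the time regressor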
+[4374.080 --> 4386.080] But then you might ask: these are individual cells, but could it be that the cells that don't pass that threshold also contribute to the coding? So he took a totally different approach.
+[4386.080 --> 4406.080] He looked at the whole population instead and used a machine learning approach, a linear support vector machine, to determine the coding of time in the three areas: lateral entorhinal cortex, CA3 and medial entorhinal cortex.
+[4406.080 --> 4425.080] What he did was to chop the data into 10 blocks, train the classifier on 9 of them, and then use it to predict, for the 10th one as a test case, at what time the activity was actually recorded, across the 24 epochs (a sketch of this decoding approach follows at the end of this passage).
+[4425.080 --> 4452.080] And the success of that is shown here, in this confusion matrix: what you have is the predicted epoch on the horizontal axis and the actual epoch on the y-axis, and the color indicates the proportion of hits. You can see that almost every case is a hit here, and very few were actually misses.
+[4452.080 --> 4464.080] So you could almost always predict when the recording actually took place. This is not the case in CA3, as you can see here, and only weakly the case in the medial entorhinal cortex.
+[4464.080 --> 4493.080] This was corrected for the size of the sample; I don't want to go into that. And he was not only able to predict epochs, or blocks of trials; he could even go within trials and predict the right 20-second block, or even 10-second block, or even one second, of course with much lower success if it was one second, but you still see the steep line here at the bottom.
+[4494.080 --> 4523.080] So it actually means that this representation of time was present at multiple time scales. Finally, one could ask: is this an internal clock that is present in the lateral entorhinal cortex, a clock-like thing that goes on regardless of what happens, or is it actually dependent on the experience of the animal? It may turn out to be the latter, which I will show in this final set of data.
+[4524.080 --> 4540.080] So, the final slide, which now involves a new type of task: the rat is not walking in the open field box like it did before, but running in a maze in a figure-eight pattern, alternating left and right every second trial.
+[4540.080 --> 4569.080] What he found in this task, and also in one other task where the animals just run in a circle over and over again, is that if you decode the identity of the trials, which trial a recording was from, the success is actually lower than it was in the open field; it is much reduced in those two tasks, the figure-eight task and the circle track,
+[4569.080 --> 4579.080] compared to the task where the rat was walking in the open field, and you can see here there is very little along the diagonal, really very little.
+[4579.080 --> 4598.080] But at the same time as the success in hitting the right trial, whether it was trial number five or seven or nine, was lower, the picture for decoding when in the trial the recording was from was reversed.
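Here is the sketch of that decoding approach: a cross-validated linear SVM predicting the recording epoch from population activity, tallied into a confusion matrix. The 24 epochs match the talk; all other numbers and the toy data are assumptions:

    import numpy as np
    from sklearn.model_selection import cross_val_predict
    from sklearn.svm import LinearSVC

    # Toy population decoding: each sample is a population vector, each
    # label one of 24 recording epochs, with a weak drift signal added.
    rng = np.random.default_rng(3)
    n_samples, n_cells, n_epochs = 240, 80, 24
    labels = np.repeat(np.arange(n_epochs), n_samples // n_epochs)
    rates = rng.normal(0, 1, (n_samples, n_cells)) + labels[:, None] * 0.05

    pred = cross_val_predict(LinearSVC(max_iter=10000), rates, labels, cv=10)
    conf = np.zeros((n_epochs, n_epochs))
    for actual, p in zip(labels, pred):
        conf[actual, p] += 1              # confusion matrix, as in the talk
    print("decoding accuracy:", (pred == labels).mean())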
+[4598.080 --> 4616.080] For the within-trial analysis, what he did was to chop up segments each time the rat passed a certain point on the track, for every lap that the rat ran, over and over, and then ask: is this from an early lap or a late lap in that trial? There the success is actually reversed.
+[4616.080 --> 4626.080] In this task, that hit success is now much higher than it was when the rat was running freely in the open field.
+[4626.080 --> 4645.080] This then suggests that the encoding of time in the lateral entorhinal cortex is not a fixed thing; it depends on the experience that the animal has, and this network's representation of time can actually be adapted to what the animal experiences,
+[4645.080 --> 4674.080] going from a kind of absolute, in quotation marks, representation of time when it is just running freely, to one where time is encoded relative to some temporal landmark, for example the start of the trial or the passage of a certain point on the track. And that then brings me back to the person who introduced me: Mike was kind enough to allow me to show one slide of his own data.
+[4674.080 --> 4693.080] I asked him for it because Maria Montchal from his lab has done work in humans, human fMRI studies, where they find data that are entirely consistent with the data from the lateral entorhinal cortex in rats.
+[4693.080 --> 4704.080] Very, very briefly summarized, what they did was to let subjects view a movie, a famous TV thing that I had never heard about.
+[4704.080 --> 4717.080] The point is that after the movie, they were asked to place on a timeline when a still image from the movie had been shown, was this early or late or wherever, and then they could measure how well they hit.
+[4717.080 --> 4727.080] And the hit rate was actually very highly correlated with activity in the lateral entorhinal cortex, with no correlation in the medial, and it was also high in the
+[4727.080 --> 4749.080] perirhinal cortex, but not in the parahippocampal cortex; and the perirhinal cortex is very strongly linked to the lateral entorhinal cortex. So I think those two sets of data fit very well together, and with that I come to the conclusion. These are the people from the institute and from the lab, as Mike mentioned.
+[4749.080 --> 4773.080] May-Britt Moser participated in all of it. There are also other people listed here, so many that I can't really mention them all, but for the new work I would again mention Albert Tsao, who is going to present it tomorrow, and also, for the object-vector cells, especially Øyvind Høydal, and for the more recent work, Bastian.
+[4773.080 --> 4788.080] And a lot of people paid for this, including the sponsors of the lecture. So with that I'm done; thank you for your patience sitting here so long, and I hope you have a nice evening.
diff --git a/transcript/allocentric_Z8ckbP8bHSs.txt b/transcript/allocentric_Z8ckbP8bHSs.txt
new file mode 100644
index 0000000000000000000000000000000000000000..7eb96c6c8ab526a5fd705e2c2e1075d67778d53b
--- /dev/null
+++ b/transcript/allocentric_Z8ckbP8bHSs.txt
@@ -0,0 +1,447 @@
+[0.000 --> 26.080] Welcome, dear participants.
+[26.080 --> 32.520] In today's module we shall discuss chronemics, which is the study of time in the context of
+[32.520 --> 35.720] non-verbal communication.
+[35.720 --> 42.640] Chronemics is a subcategory of the non-verbal aspects of communication which has emerged as the
+[42.640 --> 45.120] studies in this field broadened.
+[45.120 --> 51.840] Conventionally, time has been treated as an abstract concept, and it is in this context
+[51.840 --> 58.280] that linguistically we have responded to this idea, representing it in different idioms
+[58.280 --> 59.320] and phrases.
+[59.320 --> 65.520] For example, "quality time" or "time and tide wait for none".
+[65.520 --> 72.560] However, we find that as the studies in the field of non-verbal aspects of communication
+[72.560 --> 79.840] started to broaden their perspective in the areas of organizational behavior, business
+[79.920 --> 87.080] communication as well as anthropology, people started to study the dimensions of time in
+[87.080 --> 89.800] particular contexts.
+[89.800 --> 97.520] A communication-based study of time depends on how people in different cultures, in different
+[97.520 --> 103.600] work cultures, perceive and structure time in their interactions with others, in their
+[103.600 --> 107.600] dialogues as well as in their relationships with others.
+[107.600 --> 114.040] In the area of communication we also study the different ways in which people respond to time
+[114.040 --> 119.720] and thereby what type of non-verbal messages they try to communicate with it.
+[119.720 --> 125.800] Our values in the context of time are reflected in our attitudes as well as in other aspects
+[125.800 --> 133.280] of non-verbal communication, and these can be understood in terms of how we spend our
+[133.280 --> 139.720] time: do we waste it, do we keep on postponing things, are we able to utilize our time to
+[139.720 --> 141.720] its maximum?
+[141.720 --> 147.280] There are of course individual variations in the way we respond to our understanding of
+[147.280 --> 154.080] time and evaluate it, but at the same time we find that the cultural impact on this aspect
+[154.080 --> 157.440] of NVC is also palpable.
+[157.440 --> 164.760] As human beings we have a complex temporal identity which is constructed at different
+[164.760 --> 171.280] levels: at the personal as well as the social, cultural and professional levels.
+[171.280 --> 178.280] All types of verbal messages as well as non-verbal messages have their own temporalities: they
+[178.280 --> 183.120] have a point of beginning and a point at which they end.
+[183.120 --> 188.320] There has been something happening before that point, and of course something
+[188.320 --> 190.760] else would take place after it.
+[190.760 --> 197.440] So our communication in the context of time, or in the context of the larger phenomena of
+[197.440 --> 203.320] non-verbal aspects of communication, does not stand outside of context.
+[203.320 --> 210.720] Chronemics asks for a more dynamic way of studying our professional interactions, alongside the emotional
+[210.720 --> 217.360] understandings and connotations which we have individually, socially and culturally with
+[217.360 --> 219.520] time.
+[219.520 --> 225.800] Studies of chronemics have developed from the interdisciplinary literature on time, and they
+[225.800 --> 234.360] have also been supported by research in diverse fields such as biology, sociology,
+[234.360 --> 237.800] psychology as well as anthropology.
+[237.800 --> 243.800] People have always been associated with studies of time in different ways.
+[243.800 --> 250.320] But before we started using the term chronemics, and even before we applied these understandings
+[250.320 --> 255.800] in the area of business and professional communication, a number of scholars have to
+[255.800 --> 261.280] be listed to acknowledge their contribution to the development of this idea.
+[261.280 --> 268.040] From the modern perspective, we find that the idea was first developed by E. Robert
+[268.040 --> 275.040] Kelly, who is better known as E. R. Clay, and the same idea was carried forward by William
+[275.040 --> 282.600] James, whom we students of English literature recognize primarily for his use of the phrase
+[282.600 --> 286.720] "stream of consciousness" technique in his works.
+[286.720 --> 292.320] The idea was also carried forward by George Herbert Mead, and these leading developers of
+[292.320 --> 301.200] the study of human acts and presentness alerted us to the idea that time is not governed
+[301.200 --> 304.800] only by the external clock time.
+[304.800 --> 311.280] William James suggested that there is also an internal dimension of time, distinct from
+[311.280 --> 312.680] the external one.
+[312.680 --> 318.440] Another thinker whom we have to acknowledge at this stage is Harold Innis, the famous
+[318.440 --> 326.880] Canadian communication theorist, who published his famous book Changing Concepts of Time in 1952.
+[326.880 --> 336.640] He studied the impact of time as well as space on the development of civilizations.
+[336.640 --> 344.320] The ideas of Harold Innis were further enriched by Marshall McLuhan, who discussed time and
+[344.320 --> 347.240] human communication in several works.
+[347.240 --> 354.320] We primarily know McLuhan for introducing us to the term "global village" in his works,
+[354.320 --> 359.640] but he has also talked about the concept of time.
+[359.640 --> 367.360] In 1952, the same year in which Harold Innis published his book, Edward T. Hall
+[367.360 --> 372.040] also published his book The Process of Change.
+[372.040 --> 378.320] Hall was to write periodically about time and socio-cultural relations over the next
+[378.320 --> 386.200] four decades, and his ideas have encouraged other researchers to take up similar studies.
+[386.200 --> 393.920] The actual term chronemics was coined in 1972 by Fernando Poyatos, a Canadian linguist
+[393.920 --> 396.000] and semiotician.
+[396.000 --> 401.200] In dealing with the communication system of the speaker-actor, Poyatos briefly discussed
+[401.200 --> 407.080] chronemics as concerning our conceptions and handling of time as a biopsychological and
+[407.080 --> 410.720] cultural element of social interactions.
+[410.720 --> 416.640] He introduced this idea in his cross-cultural study of paralinguistic alternants
+[416.640 --> 422.520] in face-to-face interaction, which was published in 1975.
+[422.520 --> 430.680] As examples of chronemically significant aspects of communication, he included the cross-cultural
+[430.680 --> 436.860] differences in the duration of ordinary social visits, and response latency among different
+[436.860 --> 443.740] cultural groups when a question is asked or, for example, a decision is to be made.
+[443.740 --> 450.620] He also looked at conversational silences and pauses as part of cultural chronemics.
+[450.620 --> 457.540] Continuing with these observations, later scholars have suggested that, since temporal
+[457.540 --> 462.900] experience depends on the changing of something, chronemics is probably best conceived of
+[462.980 --> 467.300] as a kind of paralinguistic or suprasegmental feature.
+[467.300 --> 474.660] Tom Bruneau wrote the first article on time and non-verbal communication in 1974, and
+[474.660 --> 482.060] he also attempted to define chronemics and outline its characteristics in 1977.
+[482.060 --> 490.700] So it is in this decade of the 1970s that the impact of chronemics
+[490.780 --> 494.900] was most extensively discussed by various research scholars.
+[494.900 --> 502.460] Since these early works, we find that a number of works and commentaries have come out on
+[502.460 --> 507.060] the significance of chronemics in the field of professional communication.
+[507.060 --> 514.620] I will base my initial discussion of this concept on the findings of Edward T. Hall.
+[514.620 --> 522.540] He recognized three time systems and named them technical, formal and informal.
+[522.540 --> 527.740] Technical time, according to him, is the scientific measurement of time, which is associated with
+[527.740 --> 529.980] precision in keeping time:
+[529.980 --> 537.980] the way different mechanical devices, clocks and watches primarily, are used to keep
+[537.980 --> 538.980] time.
+[538.980 --> 546.100] Formal time is the time which we learn on the basis of our social conditioning.
+[546.100 --> 552.060] Scholars have quoted the example of the USA and have talked about how American
+[552.060 --> 555.740] society is governed by the clock and the calendar.
+[555.740 --> 561.380] People have been socially conditioned to think that when it is 1 p.m. it is normally the
+[561.380 --> 567.540] time to work, and when it is 1 a.m. it is normally the time to sleep.
+[567.540 --> 575.740] At the same time, we find that in our contemporary cultures our arrangement of time is broadly fixed
+[575.740 --> 577.820] and rather methodical;
+[577.820 --> 584.340] so to say, the majority of people follow similar patterns at the workplace and in their
+[584.340 --> 585.940] personal lives also.
+[585.940 --> 591.260] Informal time is normally our understanding of time at a personal level.
+[591.340 --> 599.620] Hall included three different concepts within it, and these are duration, punctuality and activity.
+[599.620 --> 606.540] Duration is related to the time which is formally allocated to a particular event.
+[606.540 --> 614.020] For example, in a meeting we might have allocated 40 minutes for a particular agenda item.
+[614.020 --> 621.980] But at the same time, in certain cultures our estimates can normally be imprecise,
+[621.980 --> 628.620] whereas in some cultures, as we will later see, these estimates have to be as close to
+[628.620 --> 634.620] precision as possible; and at the same time there are personal definitions also.
+[634.620 --> 640.140] For example, if I say I will be there within 2 minutes, then what exactly do I mean by these
+[640.140 --> 648.740] 2 minutes: would it be 1 hour, or exactly 2 minutes, or maybe somewhere around 15 to 20 minutes?
+[648.740 --> 655.620] Another aspect which Hall associates with informal time is punctuality, which is basically
+[655.620 --> 660.140] the promptness associated with the way we keep time.
+[660.140 --> 665.580] We are normally considered to be punctual when we arrive at the designated place at the
+[665.580 --> 667.260] given time.
+[667.260 --> 674.820] Some people are tardy and habitual latecomers, and at the same time there are cultural associations
+[674.820 --> 676.140] also.
+[676.140 --> 682.700] In certain cultures, for example, punctuality is not exactly a value, because latecoming
+[682.700 --> 687.700] is often associated with status and perceptions of power.
+[687.700 --> 694.260] Activity is also another chronemic value; our use and management of time is defined in a cultural
+[694.260 --> 695.260] manner too.
+[695.260 --> 701.380] Other aspects which may be associated with our concept of time are our willingness to
+[701.380 --> 710.740] wait, the way we maintain time during our interactions, and to what extent the use of time, punctuality
+[710.740 --> 717.020] etc. are a reflection of our status and a part of the power game.
+[717.020 --> 724.700] The way we look at time, the way we maintain our association with it and the way we value it affects
+[724.700 --> 726.100] our lives.
+[726.100 --> 733.020] It is also a reflection of our own work culture, and at a larger scale it becomes a reflection
+[733.020 --> 736.420] of the work culture of an organization.
+[736.420 --> 741.620] It also affects our communication and professional relationships in the long run.
+[741.620 --> 747.780] Hall has also pointed out that time can be an enigmatic characteristic as far as our
+[747.780 --> 750.340] social pressures are concerned.
+[750.340 --> 756.700] We are encouraged to use time wisely, and at the same time we may also be cautioned not
+[756.700 --> 759.100] to be too obsessive about it.
+[759.100 --> 765.900] The way different cultures understand the function of time can be understood from several
+[765.900 --> 768.300] different angles.
+[768.300 --> 775.420] Hall has treated time as a language, as a thread which runs through cultures.
+[775.420 --> 782.460] In his opinion it acts as an organizer, and at the same time it also acts as a message
+[782.460 --> 783.660] system.
+[783.660 --> 789.780] It reveals how people treat each other, and at the same time it also tells us about the
+[789.780 --> 791.940] things which people value.
+[791.940 --> 800.220] Hall has taken a historical perspective as far as the human concept of time is concerned.
+[800.220 --> 806.820] He suggests that our consciousness of time has emerged from the way we learned to respond
+[806.820 --> 813.220] to natural rhythms which were associated with changes in the seasons, with changes during
+[813.220 --> 819.700] the day, annual cycles of different crops, etc.
+[819.700 --> 825.940] Though the hidden dimensions of time remain exceedingly complex, basic time systems
+[825.940 --> 832.700] can be described as possessing either monochronic or polychronic orientations.
+[832.700 --> 838.900] Hall suggests that most of our cultures are either monochronic or polychronic.
+[838.900 --> 845.740] Although these patterns, which are almost polar opposites, cannot be applied rigidly to all
+[845.740 --> 852.500] cultures, a given culture is likely to have a preference for either one of these
+[852.500 --> 855.500] and would be more inclined towards it.
+[855.500 --> 862.820] However, there may be cultural and ethnic variations: a particular culture may be inclined towards
+[862.820 --> 867.060] a particular preference or orientation in terms of time.
+[867.060 --> 870.620] But within that culture we may find some smaller groups,
+[870.620 --> 878.700] for example, ethnic groups or subcultural groups, who are disposed in a different manner and
+[878.700 --> 882.300] have retained a different association with time.
+[882.300 --> 888.620] In general, Hall suggests that northern European and American cultures are monochronic and
+[888.620 --> 891.900] Mediterranean cultures are polychronic.
+[891.900 --> 897.580] So, how do we look at the differences between the monochronic and polychronic orientations
+[897.580 --> 899.260] of time?
+[899.260 --> 906.180] A monochronic understanding of time is linear, and it is governed by the clock.
+[906.180 --> 913.820] In comparison to it, a polychronic culture is a non-linear one, and it is more oriented
+[913.820 --> 915.460] towards people.
+[915.460 --> 922.020] It prefers relationships over the idea of keeping time.
+[922.020 --> 928.620] Monochronic culture also has a short-term orientation, in contrast with the polychronic,
+[928.620 --> 930.780] which has a long-term orientation.
+[931.340 --> 938.140] Whereas monochronic cultures prefer precision, we find that the polychronic cultures understand
+[938.140 --> 940.740] that time has a particular flow.
+[940.740 --> 947.220] The basic difference between these two orientations has been beautifully summed up by McCool,
+[947.220 --> 953.420] when he says that the monochronic cultures are based primarily on clock time, whereas
+[953.420 --> 958.220] polychronic cultures are typically based on people time.
+[958.220 --> 963.500] And this is by far the most significant difference between the two.
+[963.500 --> 970.020] These cultural orientations towards the way we value time as people are reflected in
+[970.020 --> 973.540] our day-to-day activities also.
+[973.540 --> 981.020] A culture which has a monochronic orientation assumes a linear order of things, and it suggests
+[981.020 --> 986.580] that things have to be completed in a sequential pattern.
+[986.580 --> 995.380] One thing has to follow the other: task A should always precede task B, and A should end before
+[995.380 --> 998.060] task B begins.
+[998.060 --> 1004.660] And therefore, monochronic cultures value those tools and systems which increase focus
+[1004.660 --> 1007.540] and help us in saving time.
+[1007.540 --> 1015.540] They look at time as money, as value which has to be structured, and therefore
+[1015.540 --> 1022.740] the work cultures in these monochronic cultures are governed by well-structured
+[1022.740 --> 1025.620] and well-defined schedules.
+[1025.620 --> 1032.300] The focus in these cultures is somehow to reduce distractions during planned interactions,
+[1032.300 --> 1038.140] and they always try to save time as much as possible.
+[1038.140 --> 1044.660] The non-verbal cues which can be associated with this orientation are linked with certain
+[1044.660 --> 1049.900] tendencies which are exhibited in individuals and indeed in whole cultures.
+[1049.900 --> 1056.620] For example, the capability and tendency to plan ahead, to schedule things, to schedule meetings,
+[1056.620 --> 1057.620] etc.,
+[1057.620 --> 1061.420] so that there is no fuzziness during the day.
+[1061.420 --> 1067.060] Punctuality as a value has to be there, and at the same time, there is a tendency to push
+[1067.060 --> 1072.620] things through the agenda so that things can end on time.
+[1072.620 --> 1077.820] And at the same time, they do not want to juggle with too many things simultaneously,
+[1077.820 --> 1081.140] and they prefer to do one thing at a time.
+[1081.140 --> 1087.540] The countries which are typically associated with a monochronic orientation are most of
+[1087.540 --> 1094.060] the countries in northern Europe, the Scandinavian countries, Germany, the USA and Japan.
+[1094.060 --> 1099.940] Hall has also pointed out that the monochronic perceptions and preferences in the cultures
+[1099.940 --> 1104.860] of northern Europe and the USA are not natural.
+[1104.860 --> 1112.660] They are learnt social and cultural values, and at the same time, they happen to be arbitrary.
+[1112.660 --> 1118.900] He has traced the development of this attitude to the early days of the Industrial Revolution,
+[1118.900 --> 1126.540] which occurred during 1760 to 1820, and some people stretch it to 1840 also, in Europe
+[1126.540 --> 1128.300] and in the USA.
+[1128.300 --> 1135.240] Factory life required that the labour had to report at a given time, and the appointed
+[1135.240 --> 1141.180] hour was always announced using different types of bells or whistles, etc.
+[1141.180 --> 1147.740] This punctuality was necessary to maintain and sustain the Industrial Revolution, and gradually
+[1147.740 --> 1151.780] these attitudes have seeped into these cultures.
+[1151.780 --> 1158.660] And therefore monochronic cultures place a paramount value on schedules, on tasks, on
+[1158.660 --> 1165.260] completing things by the deadline, and therefore Hall has gone to the extent of saying that
+[1165.260 --> 1172.140] in the American business world, the schedule is sacred and time is tangible.
+[1172.140 --> 1177.340] Because our preference for the monochronic attitude encourages us to take up only one
+[1177.420 --> 1183.380] thing at a time, people who are governed by it do not like to be interrupted, and also
+[1183.380 --> 1188.020] do not prefer to suddenly change the pre-decided schedule.
+[1188.020 --> 1194.340] Hall has also been able to point out certain constraints which are associated, in his opinion,
+[1194.340 --> 1197.580] with the monochronic preference for time.
+[1197.580 --> 1205.500] He says that this perception of time seals people off from one another, and as a result intensifies
+[1205.580 --> 1208.980] some relationships at the cost of others.
+[1208.980 --> 1215.660] He has suggested that this time preference is like a room in which some people are allowed
+[1215.660 --> 1219.020] to enter while others are kept out of it.
+[1219.020 --> 1226.540] The rigidity and the focus on keeping schedules intact condition people to think that those
+[1226.540 --> 1233.860] people who do not subscribe to a similar value system in the context of time are basically
+[1233.940 --> 1240.540] inefficient and unreliable, and at the same time rather disrespectful.
+[1240.540 --> 1245.660] Hall feels that even though most of the Western cultures are dominated by the monochronic
+[1245.660 --> 1252.140] perception of time, it is not a natural outcome of the way human beings have evolved,
+[1252.140 --> 1259.140] and in his opinion this preference seems to violate many of humanity's innate rhythms.
+[1259.860 --> 1265.420] It does not mean, however, that he prefers a different perception of time.
+[1265.420 --> 1271.260] It is a part of his analysis only and has to be perceived in the same manner.
+[1271.260 --> 1280.260] In contrast, we find that the polychronic orientation encourages a certain flux and non-linearity.
+[1280.660 --> 1288.660] These cultures value relationships and traditions more than they value rigidity towards time.
+[1289.620 --> 1296.460] There is always more emphasis on finishing the natural agenda first rather than keeping
+[1296.460 --> 1299.660] the schedule in a mechanical manner.
+[1299.660 --> 1305.380] For example, if two people who belong to this culture meet on the street corner after
+[1305.380 --> 1311.260] a long time, they would prefer to catch up on what is going on in each other's lives first rather
+[1311.260 --> 1314.060] than rushing to a 10 o'clock meeting.
+[1314.060 --> 1318.140] A slight delay is understandable.
+[1318.140 --> 1325.140] The non-verbal cues which seep into the work environment in such cultures are reflected
+[1325.140 --> 1329.260] in being non-punctual during meetings.
+[1329.260 --> 1336.260] Non-punctuality is not necessarily related to a negative work culture; rather, it has
+[1336.260 --> 1342.220] to be understood with a certain empathy if people tend to be late.
+[1342.220 --> 1345.260] Meetings are used for building relationships.
+[1345.260 --> 1350.700] The focus on finishing the agenda is typically not there.
+[1350.700 --> 1357.780] In these cultures we find that multitasking is considered a value, and therefore a certain
+[1357.780 --> 1361.500] flexibility is encouraged.
+[1361.500 --> 1368.220] In Latin American countries, in most of the African and Arabic countries, as well as in some
+[1368.220 --> 1376.060] countries and certain segments in South Asia, we find that a polychronic orientation towards
+[1376.060 --> 1378.220] time is followed.
+[1378.220 --> 1385.140] It is also followed in those sections of society the world over which are basically
+[1385.140 --> 1392.260] rural and agrarian, because they follow the larger cycles of crops and production, etc.
+[1392.260 --> 1397.460] And at the same time, in those societies which rigorously follow religious calendars, this
+[1397.460 --> 1400.020] orientation is normally found.
+[1400.020 --> 1407.900] In those cultures where a polychronic understanding of time is prevalent, multiple timelines are
+[1407.900 --> 1409.860] routinely followed.
+[1409.860 --> 1416.500] It is understood if people are not able to meet the deadlines because they have preferred
+[1416.500 --> 1420.580] to do some other thing within the allotted hour.
+[1420.580 --> 1427.940] The tendency, from a monochronic perspective, is to view this attitude as
+[1427.940 --> 1430.540] basically chaotic or random.
+[1430.540 --> 1436.200] The monochronic cultures are also primarily known as the clock cultures, because for them
+[1436.200 --> 1439.460] time is measured and it is of the essence.
+[1439.460 --> 1444.340] The punctuality which is practiced there and the precision which is preferred in these
+[1444.340 --> 1448.740] cultures are reflected in various routines also.
+[1448.740 --> 1454.400] For example, the timekeeping of public transport is a reflection
+[1454.400 --> 1457.140] of this cultural preference also.
+[1457.140 --> 1463.820] In the context of the business world, sometimes we find that too much of an emphasis on the monochronic
+[1463.820 --> 1470.820] perspective can backfire in a multicultural setting, because the idea that it may sometimes
+[1470.820 --> 1477.540] take years to develop a loyal customer base is not understood by such people.
+[1477.540 --> 1484.580] The different ways in which cultures respond to punctuality and other time-related values
+[1484.580 --> 1490.860] are nicely displayed in this video.
+[1490.860 --> 1496.900] I guess we all believe that time is pretty constant, but around the world attitudes to
+[1496.900 --> 1499.380] it differ greatly.
+[1499.380 --> 1504.700] While you can set your watch by Swiss trains, not all cultures break the day down into minutes
+[1504.700 --> 1506.420] and seconds.
+[1506.420 --> 1517.460] For other cultures punctuality is a very different matter.
+[1517.460 --> 1521.660] A German sales executive trying to open doors in a number of African countries scheduled
+[1521.660 --> 1523.500] two meetings a day.
+[1523.500 --> 1526.260] For him, quite easy-going.
+[1526.260 --> 1529.980] His first meeting didn't even take place till a day later.
+[1529.980 --> 1534.180] By the end of his trip he was so stressed out he could hardly operate.
+[1534.180 --> 1541.940] He mistakenly thought his hosts would look at time like he did.
+[1541.940 --> 1547.860] In Africa, like in the Middle East or South America, they work in blocks of time, half
+[1547.860 --> 1550.700] a day maybe, certainly not in minutes.
+[1550.700 --> 1554.860] As long as they can achieve what they need in that block of time, then exactly when is
+[1554.860 --> 1556.580] less important.
+[1556.580 --> 1560.460] That's not to say that they're less efficient or effective; it's just that they work at
+[1560.460 --> 1562.260] their own pace.
+[1562.260 --> 1567.020] If you work in seconds, then you need to adapt, otherwise you're going to set yourself
+[1567.020 --> 1586.300] up for a lot of resistance from your hosts and you're going to get constant disappointment.
+[1586.300 --> 1588.740] And then there are cultural anomalies.
+[1588.740 --> 1592.580] In French society absolute punctuality is not the highest priority.
+[1592.580 --> 1598.140] But if you arrive late at a French restaurant, don't expect a warm welcome.
+[1598.140 --> 1603.380] The French take their food very seriously and consider lateness a sign of disrespect
+[1603.380 --> 1605.580] for their culinary efforts.
+[1605.580 --> 1609.580] You'd better pay some serious compliments to the waiters if you want to get back in their
+[1609.580 --> 1621.620] good books.
+[1621.620 --> 1626.460] The American expression time is money can be taken very literally in the US.
+[1626.460 --> 1632.220] A chatty bank teller whose line's moving slowly will cause customers to become impatient.
+[1632.220 --> 1635.620] And you'll also get an earful if the line has to wait because you haven't filled out
+[1635.620 --> 1638.420] your forms ahead of time.
+[1638.420 --> 1644.540] Certain tendencies of monochronic and polychronic orientations which we have already discussed
+[1644.540 --> 1647.660] are related to punctuality.
+[1647.660 --> 1652.940] The monochronic orientation prefers punctuality, which is considered to be almost sacred.
+[1652.940 --> 1658.780] So a ten o'clock meeting means that the discussions have to begin at ten o'clock.
+[1658.780 --> 1664.980] On the other hand, polychronic cultures are more people-centered, and for them a ten o'clock
+[1664.980 --> 1670.580] meeting means that at ten o'clock people would start assembling there and start greeting each
+[1670.580 --> 1671.580] other.
+[1671.580 --> 1677.900] In the polychronic orientation, punctuality largely yields to the rhythm of the people.
+[1677.900 --> 1683.200] And the rigid adherence to completing the projects and deliverables according to a
+[1683.200 --> 1686.700] rigid schedule is sometimes overlooked.
+[1686.700 --> 1692.740] The cultural variations in the perception of time are also discussed in this particular
+[1692.740 --> 1693.260] video.
+[1695.140 --> 1698.180] Every culture has its own perception of time.
+[1698.180 --> 1702.460] Every culture views the perception of time in a separate light.
+[1702.460 --> 1706.220] In some countries people dedicate their lives to building strong relationships with their
+[1706.220 --> 1708.500] families, like the Arabic people.
+[1708.500 --> 1714.060] Others mainly dedicate their lives to their careers, like the Japanese.
+[1714.060 --> 1716.500] "I must rush," says the American.
+[1716.500 --> 1717.500] "My time is up."
+[1717.500 --> 1722.140] The Arab, scornful of this submissive attitude to schedules, would only use this expression
+[1722.140 --> 1724.740] if death were imminent.
+[1724.740 --> 1729.580] The Western European and North American countries view time in a linear fashion: time
+[1729.580 --> 1731.540] has a beginning and an end.
+[1731.540 --> 1734.900] This culture is fast-paced compared to other cultures.
+[1734.900 --> 1738.340] When Western cultures make a decision about business, they will see it as final when
+[1738.340 --> 1739.860] they come to an agreement.
+[1739.860 --> 1743.500] And so, they don't have to rethink or adjust the agreement.
+[1743.500 --> 1748.020] They want to do as much as possible in the time they have.
+[1748.020 --> 1752.500] The Arabic countries view time in a flexible fashion; being late to an
+[1752.500 --> 1756.780] appointment or taking a long time to get down to business is the accepted norm in most
+[1756.780 --> 1758.420] Arabic countries.
+[1758.420 --> 1762.980] For flexible-time cultures, schedules are less important than human feelings.
+[1762.980 --> 1767.980] When people and relationships demand attention or require nurturing, time becomes a subjective
+[1767.980 --> 1771.340] commodity that can be manipulated or stretched.
+[1771.340 --> 1776.220] Meetings should not be rushed or cut short for the sake of an arbitrary schedule.
+[1776.220 --> 1778.420] Time is an open-ended resource.
+[1778.420 --> 1782.220] Communication is not regulated by a clock.
+[1782.220 --> 1787.220] In Asia, people view time in a cyclical fashion.
+[1787.220 --> 1790.340] This culture takes the concept to the next step.
+[1790.340 --> 1794.380] When the process of life ends, it will start at birth again.
+[1794.380 --> 1798.260] The Asian countries are slower-paced than the Western European countries.
+[1798.260 --> 1801.940] For instance, when the Chinese people make an appointment, for example for a business
+[1801.940 --> 1805.020] deal, they will always arrive early so they won't be wasting your time.
+[1805.020 --> 1807.660] They have more focus on their career.
+[1807.660 --> 1811.220] When Asian people make a decision, they will always revisit their decision later on
+[1811.220 --> 1813.220] to see if it's still the right choice.
+[1813.220 --> 1817.980] If this is not the case, they will adjust accordingly.
+[1817.980 --> 1822.060] For instance, when European businessmen want to make a deal or sign a contract with
+[1822.060 --> 1826.300] Chinese businessmen, they expect to make the deal fast and only think about the future.
+[1826.300 --> 1830.060] While the Chinese businessmen will always look for long-term solutions and rethink the
+[1830.060 --> 1831.060] deal several times.
+[1831.060 --> 1836.780] If the deal isn't made quickly, the Western cultures will see it as a waste of time.
+[1836.780 --> 1844.060] Our cultural preferences, as far as our understanding of time is concerned, are reflected not only
+[1844.060 --> 1849.900] in our relationships with other people, but also in our relationship with technology.
+[1849.900 --> 1855.940] A clear example of it is the way global websites are designed.
+[1855.940 --> 1862.060] We find that monochronic users are quick and decisive and usually task-oriented, and they
+[1862.060 --> 1865.340] design websites in the same manner.
+[1865.340 --> 1871.860] On the other hand, we find that polychronic users emphasize process over results and prefer
+[1871.860 --> 1877.180] to gain a high level of understanding over a practical implementation.
+[1877.180 --> 1885.300] And this difference is easily visible in the way technology is used by different cultures.
+[1885.300 --> 1892.260] In the fast-changing pace of our work cultures, where we may have to work with people from
+[1892.260 --> 1894.620] different cultural backgrounds,
+[1894.620 --> 1900.860] our awareness of how time is perceived differently in different cultures has become almost
+[1900.860 --> 1902.940] a must.
+[1902.940 --> 1908.780] People who work at an international level must know what the different definitions
+[1908.780 --> 1913.980] of time are and how people relate to it differently.
+[1913.980 --> 1920.100] A particularly interesting word which is used in Latin American countries is mañana.
+[1920.100 --> 1927.620] In the Middle East, a synonymous word is bukra, which indicates a particular attitude:
+[1927.620 --> 1932.740] that what cannot be done today would be done tomorrow.
+[1932.740 --> 1940.860] So this laid-back attitude in terms of time is a cultural aspect of looking at our values
+[1940.860 --> 1943.660] and our relationships with other people.
+[1943.660 --> 1949.980] In the monochronic cultures, we find that time is divided and further subdivided into
+[1949.980 --> 1951.900] identifiable units.
+[1951.900 --> 1957.500] However, in polychronic cultures, we find that time is a happy mixture of past, present
+[1957.500 --> 1962.940] and future, and these segments are not strictly segregated.
+[1962.940 --> 1969.700] So we have to understand whether the people with whom we work look at time in a formal
+[1969.700 --> 1975.500] and task-oriented fashion, or whether they look at time as an opportunity to spend time together and
+[1975.500 --> 1978.340] develop interpersonal relationships.
+[1978.340 --> 1987.820] In some cultures, we find that lack of punctuality is associated with social prestige.
+[1987.820 --> 1993.900] It is very common in certain societies as well as in certain organizations to make the subordinates
+[1993.900 --> 2000.580] wait for their appointments so that they can internalize the significance and importance
+[2000.580 --> 2003.740] of the higher rank of their superior.
+[2003.740 --> 2010.580] Power and dignity are often shown by arriving late, and it is also used as a tactic in certain
+[2010.580 --> 2016.220] countries; particularly, we can refer to the work culture of the Middle Eastern countries.
+[2016.220 --> 2022.900] However, we find that in monochronic cultures, lack of punctuality is always frowned upon.
+[2022.900 --> 2028.980] A very interesting example is that of Michael Jackson, who angered the judge when he arrived
+[2028.980 --> 2033.700] late at one of the courts in 2005.
+[2033.700 --> 2040.580] Punctuality is considered by monochronic cultures as a value, and it is not relaxed even for those
+[2040.580 --> 2046.300] people who are considered to be social or cultural leaders in different fields.
+[2046.300 --> 2052.180] It is interesting to note that in certain international situations, the name of a country
+[2052.260 --> 2058.100] is also inserted after the time of the meeting is given, and the insertion of the name of
+[2058.100 --> 2066.420] a country indicates that one also has to understand how the particular country associates itself
+[2066.420 --> 2067.260] with time.
+[2067.260 --> 2071.980] The insertion of the name of a country allows the participants from different cultural
+[2071.980 --> 2079.020] backgrounds to understand if the time is fixed or fluid as far as the invitation is concerned.
+[2079.020 --> 2085.580] I take this example from Martin and Chaney, who have cited this example of an invitation
+[2085.580 --> 2091.740] where the meeting is announced at 9 a.m., within quotes, Malaysian time.
+[2091.740 --> 2098.540] Now, Malaysian time is an indication that punctuality would be practiced in a fluid
+[2098.540 --> 2099.540] fashion.
+[2099.540 --> 2104.500] Work time and personal time are strictly separated in monochronic cultures.
+[2104.500 --> 2111.060] However, in polychronic cultures, we find that work time and personal time are not
+[2111.060 --> 2113.300] strictly separated.
+[2113.300 --> 2116.380] They often intertwine with each other.
+[2116.380 --> 2124.380] These cultural aspects percolate further into different organizations, and they are reflected
+[2124.380 --> 2126.500] in their work culture.
+[2126.500 --> 2133.300] For example, how much time is given during a work day to company tasks and how much
+[2133.620 --> 2136.380] time is given to socializing?
+[2136.380 --> 2143.740] In monochronic cultures, we find that the division is typically 80 percent task and 20 percent
+[2143.740 --> 2144.900] social.
+[2144.900 --> 2152.500] On the other hand, in polychronic countries, we find that it may be rather skewed.
+[2152.500 --> 2159.140] Understanding appropriate connotations of time is therefore important in international situations.
+[2159.140 --> 2164.820] Globalization of business is influencing how the concept of time is viewed around the
+[2164.820 --> 2170.700] world, particularly at the level of the individual and at the level of the organization.
+[2170.700 --> 2177.100] So, more than the country, we find that it is the organization which reflects the
+[2177.100 --> 2180.140] cultural associations with time.
+[2180.140 --> 2186.820] It is interesting to note that the work cultures and the offices of the same company which
+[2186.900 --> 2191.620] are located in different countries may follow different patterns.
+[2191.620 --> 2198.620] A head office situated in a country where the preference is for monochronic attitudes would
+[2200.380 --> 2207.060] work in a different atmosphere in comparison to another office which is situated in a
+[2207.060 --> 2211.060] country which is governed by the polychronic attitude.
+[2211.060 --> 2217.420] These differences alert us to the manner in which time is perceived in different ways,
+[2217.420 --> 2223.980] the extent to which we are conditioned by our social and cultural parameters,
+[2223.980 --> 2230.180] and at the same time, the necessity of adapting ourselves in an empathetic manner to different
+[2230.180 --> 2235.540] viewpoints as far as our associations with time are concerned.
+[2235.540 --> 2242.380] The differences of attitude between monochronic and polychronic individuals can be further understood
+[2242.380 --> 2243.860] with the help of this video.
+[2265.540 --> 2273.460] In this scenario, we have Bob.
+[2273.460 --> 2275.660] Bob is what we call polychronic.
+[2275.660 --> 2279.460] Polychronic people are frequently late, are easily distracted, and do many things at
+[2279.460 --> 2280.460] once.
+[2280.460 --> 2284.100] For Bob, it is normal to quickly change appointments and schedules and to not meet deadlines.
+[2284.100 --> 2290.100] This behavior is common in Latin America and in the Middle Eastern countries.
+[2291.100 --> 2297.100] When monochronic and polychronic people interact in groups, the results can be frustrating.
+[2297.100 --> 2302.940] Monochronic people can become distressed by how polychronic people seem to disrespect deadlines
+[2302.940 --> 2306.500] and schedules.
+[2306.500 --> 2310.780] In order to work together smoothly, monochronic members need to take responsibility for
+[2310.780 --> 2312.700] the time-sensitive tasks,
+[2312.700 --> 2316.580] while accepting that polychronic members will vary their promptness based on the nature
+[2316.580 --> 2318.580] and importance of a situation.
+[2318.580 --> 2326.020] As this video very aptly suggests, time is not only a measuring instrument.
+[2326.020 --> 2329.260] It also indicates human behavior.
+[2329.260 --> 2332.780] It also indicates our cultural preferences.
+[2332.780 --> 2338.900] It also indicates our attitudes towards relationships.
+[2338.900 --> 2345.420] Business and other professional activities are planned within time, and diverse understandings
+[2345.420 --> 2349.100] about our preferences can also cause confusion.
+[2349.100 --> 2355.860] For an American, time is truly money, and therefore it is always considered to be precious,
+[2355.860 --> 2359.740] because this society is basically a profit-oriented society.
+[2359.740 --> 2365.740] Germans link time with their sense of order, tidiness and planning.
+[2365.740 --> 2371.940] In certain other cultures, for example in the Spanish culture as well as in Italian and
+[2371.940 --> 2378.940] Arabic cultures, we find that the considerations of time are usually subjected to human feelings.
+[2378.940 --> 2384.940] The understanding of the French as far as punctuality is concerned is also closer to
+[2384.940 --> 2386.900] a polychronic attitude.
+[2386.900 --> 2393.740] Our understanding of time helps us to organize our non-verbal communication in a better
+[2393.740 --> 2401.220] way and to modulate our dialogue and conversations in such a way that other people can also
+[2401.220 --> 2404.020] empathetically understand it.
+[2404.020 --> 2404.340] Thank you.
diff --git a/transcript/allocentric_Zd71719_G8Y.txt b/transcript/allocentric_Zd71719_G8Y.txt
new file mode 100644
index 0000000000000000000000000000000000000000..271522f17009c9db15800517763bd4dfe20daf3c
--- /dev/null
+++ b/transcript/allocentric_Zd71719_G8Y.txt
@@ -0,0 +1,65 @@
+[0.000 --> 20.000] When we park in a big parking lot, how do we remember where we parked our car?
+[20.000 --> 26.000] Here's the problem facing Homer, and we're going to try to understand what's happening in his brain.
+[26.000 --> 30.000] We start with the hippocampus shown in yellow, which is the organ of memory.
+[30.000 --> 35.000] If you have damage there, like in Alzheimer's, you can't remember things, including where you parked your car.
+[35.000 --> 38.000] It's named after the Latin for seahorse, which it resembles.
+[38.000 --> 41.000] Like the rest of the brain, it's made of neurons.
+[41.000 --> 44.000] The human brain has about 100 billion neurons in it.
+[44.000 --> 52.000] The neurons communicate with each other by sending little pulses or spikes of electricity via connections to each other.
+[52.000 --> 56.000] The hippocampus is formed of two sheets of cells, which are very densely interconnected.
+[56.000 --> 68.000] Scientists have begun to understand how spatial memory works by recording from individual neurons in rats or mice while they forage or explore an environment looking for food.
+[68.000 --> 75.000] We're going to imagine we're recording from a single neuron in the hippocampus of this rat here.
+[75.000 --> 80.000] When it fires a little spike of electricity, there's going to be a red dot and a click.
+[80.000 --> 87.000] What we see is that this neuron knows whenever the rat has gone into one particular place in its environment.
+[87.000 --> 91.000] It signals to the rest of the brain by sending a little electrical spike.
+[91.000 --> 97.000] We could show the firing rate of that neuron as a function of the animal's location.
+[97.000 --> 105.000] If we record from lots of different neurons, we'll see that different neurons fire when the animal goes into different parts of its environment, like in this square box shown here.
+[105.000 --> 113.000] Together they form a map for the rest of the brain, telling the brain continually where am I now within my environment.
+[113.000 --> 120.000] Place cells are also being recorded in humans, so epilepsy patients sometimes need the electrical activity in their brain monitoring.
+[120.000 --> 124.000] Some of these patients played a video game where they drive around a small town.
+[124.000 --> 133.000] Place cells in their hippocampus would fire, become active with sending electrical impulses, whenever they drove through a particular location in that town.
+[133.000 --> 139.000] How does a place cell know where the rat or person is within its environment?
+[139.000 --> 144.000] These two cells here show us that the boundaries of the environment are particularly important.
+[144.000 --> 151.000] The one on the top likes to fire midway between the walls of the box that the rat is in.
+[151.000 --> 154.000] When you expand the box, the firing location expands.
+[154.000 --> 159.000] The one below likes to fire whenever there's a wall close by to the south.
+[159.000 --> 169.000] If you put another wall inside the box, then the cell fires in both places, wherever there's a wall to the south, as the animal explores around in its box.
+[169.000 --> 178.000] This predicts that sensing the distances and directions of boundaries around you, extended buildings and so on, is particularly important for the hippocampus.
+[178.000 --> 187.000] And cells are found which project into the hippocampus which do respond exactly to detecting boundaries or edges,
+[187.000 --> 192.000] at particular distances and directions from the rat or mouse as it's exploring around.
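In practice, a firing-rate map of the kind described here is computed by binning the arena and dividing spike counts by the time the animal spent in each bin. Here is a minimal Python sketch of that analysis; the function name, bin count and sampling interval are illustrative assumptions, not details from the talk:

```python
import numpy as np

def firing_rate_map(positions, spike_positions, arena_size=1.0, bins=20, dt=0.02):
    """Occupancy-normalized firing-rate map for one recorded neuron.

    positions       : (T, 2) array of tracked (x, y) positions, one sample per dt seconds
    spike_positions : (S, 2) array of the positions at which the cell spiked
    """
    edges = np.linspace(0.0, arena_size, bins + 1)
    occupancy, _, _ = np.histogram2d(positions[:, 0], positions[:, 1], bins=[edges, edges])
    spikes, _, _ = np.histogram2d(spike_positions[:, 0], spike_positions[:, 1], bins=[edges, edges])
    seconds = occupancy * dt                       # time spent in each spatial bin
    with np.errstate(divide="ignore", invalid="ignore"):
        rate = spikes / seconds                    # spikes per second, bin by bin
    return np.where(seconds > 0, rate, np.nan)     # undefined where the animal never went
```

In such a map, a place cell shows a single concentrated peak, while the boundary-responsive cells described next show elongated fields hugging one wall.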
+[192.000 --> 205.000] The cell on the left, you can see it fires whenever the animal gets near to a wall or a boundary to the east, whether it's the edge or the wall of a square box or the circular wall of a circular box,
+[205.000 --> 209.000] or even the drop at the edge of a table which the animals are running around.
+[209.000 --> 217.000] The cell on the right there fires whenever there's a boundary to the south, whether it's the drop at the edge of the table or a wall or even the gap between two tables that have been pulled apart.
+[217.000 --> 222.000] That's one way in which we think place cells determine where the animal is as it's exploring around.
+[222.000 --> 230.000] We can also test where we think objects are, like this gold flag, in simple environments, or indeed where your car would be.
+[230.000 --> 236.000] We can have people explore an environment and see the location they have to remember.
+[236.000 --> 243.000] If we put them back in the environment, generally they're quite good at putting a marker down where they thought that flag or their car was.
+[243.000 --> 249.000] On some trials, we could change the shape and size of the environment, like we did with the place cell.
+[249.000 --> 257.000] In that case, we can see how where they think the flag had been changes as a function of how you change the shape and size of the environment.
+[257.000 --> 266.000] What you see, for example, if the flag was where that cross was in a small square environment and then you asked people to say where it was but you've made the environment bigger,
+[266.000 --> 272.000] where they think the flag had been stretches out in exactly the same way that the place cell firing pattern stretched out.
+[272.000 --> 278.000] It's as if you remember where the flag was by storing the pattern of firing across all of your place cells at that location.
+[278.000 --> 287.000] Then you can get back to that location by moving around so that you best match the current pattern of firing of your place cells with that stored pattern.
+[287.000 --> 290.000] That guides you back to the location that you want to remember.
+[290.000 --> 293.000] We also know where we are through movement.
+[293.000 --> 300.000] If we take some outbound path, perhaps we park and we wander off, we know, because of our own movements, which we can integrate over this path,
+[300.000 --> 303.000] roughly what the heading direction is to go back.
+[303.000 --> 310.000] Place cells also get this kind of path integration input from a kind of cell called a grid cell.
+[310.000 --> 318.000] Grid cells are found, again, on the inputs to the hippocampus, and they're a bit like place cells, but now as the rat explores around,
+[318.000 --> 329.000] each individual cell fires in a whole array of different locations, which are laid out across the environment in an amazingly regular triangular grid.
+[330.000 --> 343.000] If you record from several grid cells, shown here in different colors, each one has a grid-like firing pattern across the environment, and each cell's grid-like firing pattern is shifted slightly relative to the other cells.
+[343.000 --> 348.000] The red one fires on this grid and the green one on this one and the blue one on this one.
+[348.000 --> 360.000] So together, it's as if the rat can put a virtual grid of firing locations across its environment, a bit like the latitude and longitude lines that you'd find on a map, but using triangles.
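The "amazingly regular triangular grid" can be captured, to a first approximation, by summing three cosine gratings at 60-degree angles. This is a standard textbook idealization rather than anything specified in the talk, and the parameter names are illustrative:

```python
import numpy as np

def grid_cell_rate(x, y, spacing=0.5, orientation=0.0, phase=(0.0, 0.0)):
    """Idealized grid-cell firing map: three cosine gratings 60 degrees apart.

    The peaks of the summed gratings form a triangular lattice with the given
    spacing; `phase` shifts the whole lattice, as for the differently colored
    cells described above.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    k = 4.0 * np.pi / (np.sqrt(3.0) * spacing)   # wave number giving the desired peak spacing
    rate = np.zeros_like(x)
    for i in range(3):
        theta = orientation + i * np.pi / 3.0    # grating directions 60 degrees apart
        u = np.cos(theta) * (x - phase[0]) + np.sin(theta) * (y - phase[1])
        rate = rate + np.cos(k * u)
    return np.maximum(rate, 0.0)                 # rectify: firing rates are non-negative
```

Evaluating this on a mesh of positions reproduces the shifted-lattice picture: cells sharing `spacing` and `orientation` but differing in `phase` tile the environment between them.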
+[360.000 --> 371.000] And as it moves around, the electrical activity can pass from one of these cells to the next cell to keep track of where it is, so that it can use its own movements to know where it is in its environment.
+[372.000 --> 386.000] Do people have grid cells? Well, because all of the grid-like firing patterns have the same axes of symmetry, the same orientations of grid, shown in orange here, it means that the net activity of all of the grid cells in a particular part of the brain
+[386.000 --> 392.000] should change according to whether we're running along one of these six directions or running along one of the six directions in between.
+[393.000 --> 407.000] So we can put people in an MRI scanner and have them do a little video game like the one I showed you and look for this signal, and indeed you do see it in the human entorhinal cortex, which is the same part of the brain in which you see grid cells in rats.
+[407.000 --> 421.000] So back to Homer. He's probably remembering where his car was in terms of the distances and directions to extended buildings and boundaries around the location where he parked, and that would be represented by the firing of boundary-detecting cells.
+[421.000 --> 443.000] He's also remembering the path he took out of the car park, which would be represented in the firing of grid cells. Now, both of these kinds of cells can make the place cells fire, and he can return to the location where he parked by moving so as to find where it is that best matches the firing pattern of the place cells in his brain currently with the stored pattern from where he parked his car.
+[443.000 --> 453.000] And that guides him back to that location irrespective of visual cues like whether his car is actually there. Maybe it's been towed, but he knows where it was, so he knows to go and get it.
+[453.000 --> 470.000] So beyond spatial memory, if we look for this grid-like firing pattern throughout the whole brain, we see it in a whole series of locations which are always active when we do all kinds of autobiographical memory tasks, like remembering the last time you went to a wedding, for example.
+[470.000 --> 485.000] So it may be that the neural mechanisms for representing the space around us are also used for generating visual imagery, so that we can recreate the spatial scene, at least, of the events that have happened to us when we want to imagine them.
+[485.000 --> 500.000] So if this was happening, your memories could start by place cells activating each other via these dense interconnections, and then reactivating boundary cells to create the spatial structure of the scene around your viewpoint, and grid cells could move this viewpoint through that space.
+[500.000 --> 514.000] Another kind of cell, head direction cells, which I didn't mention yet, fire like a compass according to which way you're facing. They could define the viewing direction from which you want to generate an image for your visual imagery, so you can imagine what happened when you were at this wedding, for example.
+[515.000 --> 533.000] So this is just one example of a new era, really, in cognitive neuroscience, where we're beginning to understand psychological processes like how you remember or imagine or even think in terms of the actions of the billions of individual neurons that make up our brains.
+[533.000 --> 535.000] Thank you very much.
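The homing idea that closes this talk, moving so that the current place-cell pattern best matches the stored one, can be sketched as a simple search over candidate headings. Everything here (Gaussian place fields, 36 candidate directions, the variable names) is an illustrative assumption, not the speaker's model:

```python
import numpy as np

def homing_direction(position, stored_pattern, place_fields, step=0.1):
    """Choose the heading whose next position best matches a stored place-cell pattern.

    place_fields   : list of (center, width) pairs, one Gaussian field per modeled cell
    stored_pattern : population activity recorded at the goal (e.g. where the car was)
    """
    def population_activity(p):
        return np.array([np.exp(-np.sum((p - c) ** 2) / (2.0 * w ** 2))
                         for c, w in place_fields])

    def similarity(heading):
        candidate = position + step * np.array([np.cos(heading), np.sin(heading)])
        a = population_activity(candidate)
        denom = np.linalg.norm(a) * np.linalg.norm(stored_pattern) + 1e-12
        return float(a @ stored_pattern) / denom   # cosine similarity of the two patterns

    headings = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
    return max(headings, key=similarity)
```

Repeatedly stepping along the returned heading is a toy version of Homer walking back to where the match between current and stored firing is best, with no need to see the car itself.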
diff --git a/transcript/allocentric__n_vDvne5yo.txt b/transcript/allocentric__n_vDvne5yo.txt
new file mode 100644
index 0000000000000000000000000000000000000000..67f4e1a97a39f72586d2fe76e95523fcfaefd976
--- /dev/null
+++ b/transcript/allocentric__n_vDvne5yo.txt
@@ -0,0 +1,181 @@
+[0.000 --> 7.000] Okay, it's a real pleasure to be here presenting to everyone.
+[7.000 --> 10.360] And let's start things off, since we're talking about navigation.
+[10.360 --> 16.560] Most of you drove; possibly a small number of you walked or biked from somewhere else.
+[16.560 --> 23.160] How many of you used GPS, a mobile device on your phone or in your car, to get you here?
+[23.160 --> 24.160] How many are willing to?
+[24.160 --> 29.280] I did not because I live in Davis, but okay, so a lot of you did not, okay, that's good.
+[29.280 --> 37.280] Okay, so you used some kind of navigational computer assist today.
+[37.280 --> 39.280] So maybe about 50% of you.
+[39.280 --> 48.080] Do any of you remember the days when you moved to a new city and you had to go to a gas station and you bought a map, or even worse, those
+[48.080 --> 52.080] books of maps, and you would look it up in that?
+[52.080 --> 59.080] You get lost and you're pulled over on the side of the road and you're trying to figure out where that street is.
+[59.080 --> 65.080] And then you stick it in your glove compartment, and the next time you get it out, you can't read the city because it's all illegible.
+[65.080 --> 74.080] So many of you have the experience now that mobile devices like your phone can really enhance your ability to navigate.
+[74.080 --> 80.080] But now an interesting question in navigation, and we're going to just sort of touch on this briefly in this talk,
+[80.080 --> 87.080] is: is this destroying your ability to represent and learn about your spatial environment?
+[87.080 --> 96.080] And there are even some people in my field of studying human spatial navigation whose advice for healthy aging is: turn off your GPS.
+[96.080 --> 98.080] I'm not of that ilk, by the way.
+[98.080 --> 104.080] I think your brain is involved in a lot of different things, not necessarily just navigation.
+[104.080 --> 108.080] That would be a little myopic of me, given that I study navigation.
+[108.080 --> 111.080] But that is an opinion that's out there.
+[111.080 --> 118.080] Now why do we think that representing spatial environments and learning to do that is so important?
+[118.080 --> 128.080] Well, if you imagine learning your surrounding environments, if you live in the city of Davis, remembering the basic layout and organization of the city of Davis,
+[128.080 --> 133.080] why do we think that that's important, and why is that a difficult task for your brain in the first place?
+[133.080 --> 141.080] Well, imagine when you walk around even just campus, there's a wealth of different things that you see at different times.
+[141.080 --> 148.080] You may see one landmark stand out to you, you know, these funny upside-down heads and other things that stick out.
+[148.080 --> 151.080] You see these at different points, these bridges.
+[151.080 --> 157.080] And what you try to do in your head is come up with something rough like a mental map.
+[157.080 --> 160.080] A cartographic map would be the more accurate version.
+[160.080 --> 168.080] What you're trying to do is construct some rough idea of the distance and direction of things as you experience them.
+[168.080 --> 177.080] And the problem is not easy from the standpoint of translating behavior into brain, which is that we may experience multiple routes from different angles.
+[177.080 --> 184.080] Many of you entered this building from the parking lot, but you could just as easily have entered the building from one of the back doors.
+[184.080 --> 187.080] There are many different ways you could experience the same location.
+[187.080 --> 193.080] And your brain needs a way of taking that very different visual information and knowing that it's the same location.
+[193.080 --> 203.080] We need to take these buildings that stand out and realize this is a useful landmark, and then ignore all the other things that aren't useful landmarks,
+[203.080 --> 209.080] like, say, bikes that could change their location, brightly colored cars, which are not going to be constant and thus not a useful landmark.
+[209.080 --> 212.080] So we need to figure out what to use as a landmark.
+[212.080 --> 221.080] And we're going to learn this information at different times, if we have lived in the city of Davis for much of our life, or Sacramento, or somewhere else.
+[221.080 --> 226.080] We experience this information in some cases over decades, over a lifetime.
+[226.080 --> 232.080] And so our brain needs a way of taking all this information and fitting it together into a way that is accurate.
+[232.080 --> 236.080] And, in principle, it does not require us to use our phones.
+[237.080 --> 243.080] Now, in the literature, we've often referred to this idea of a mental map as something called a cognitive map.
+[243.080 --> 253.080] And those of you who are familiar with the Nobel Prize will know that a couple of years ago, the Nobel Prize was awarded for work on the mental map, the cognitive map.
+[253.080 --> 255.080] But all that work was in rodents.
+[255.080 --> 262.080] So the interest of my lab is understanding how this applies to the more interesting species, in my opinion:
+[262.080 --> 267.080] us. No offense, Alex, and others who study rodents.
+[267.080 --> 270.080] Nothing's wrong with rodents. Nothing's wrong with them.
+[270.080 --> 275.080] But ultimately, we are interested in us, so let's help.
+[275.080 --> 278.080] So I'm going to give you a quick crash course on navigation.
+[278.080 --> 285.080] And I'm going to try to give you an understanding of why we might think turning off our GPS device could be a good idea.
+[286.080 --> 293.080] So there are several different ways we know from the study of navigation that we can learn where things are in our environment.
+[293.080 --> 299.080] And we believe that the optimal way to do this is something called allocentric coordinates.
+[299.080 --> 308.080] It sounds like a really technical term. It is. It means a coordinate system, a way of thinking, that is referenced outside of our body position.
+[308.080 --> 317.080] So that's exactly what a cartographic map is. That map you buy at the gas station tells you how landmarks are arranged relative to each other.
+[317.080 --> 327.080] So for example, knowing that Davis is approximately 10 miles south of Woodland, and approximately 15 miles west of Sacramento,
+[327.080 --> 332.080] would be a landmark-referenced, or allocentric, way of thinking about things.
+[332.080 --> 337.080] Okay, allocentric again means referenced to something outside of our body position.
+[337.080 --> 347.080] Okay, so here's an example of this.
If we want to remember where something is, we can remember it based on the relative position from these other landmarks.
+[347.080 --> 357.080] Okay, and landmarks can be cities, they can be buildings, anything that stays constant over time that we can use to remember where things are.
+[357.080 --> 367.080] So the essence of the cognitive map, and what we think the most effective form of spatial memory is, is an allocentric form of memory.
+[367.080 --> 372.080] Now, you're not going to be surprised to know that there are some other forms of spatial representation.
+[372.080 --> 381.080] And one of these is called the egocentric form. It sounds bad, right? You don't want to be an egocentric person, right?
+[381.080 --> 387.080] And similarly, in general, we don't think that egocentric coordinates are a great way to remember how to navigate.
+[387.080 --> 395.080] Now, egocentric coordinates are extremely useful in many, many contexts in everyday life, which would be where things are in front of your body.
+[395.080 --> 401.080] So if you want to reach for a cup of coffee, or you need to put your glasses down on your bedside when you go to bed,
+[401.080 --> 409.080] anything like that involves egocentric coordinates, right? Because you need to know where to reach your hand relative to the current position of your body.
+[409.080 --> 419.080] When you get up from this chair, if you were oriented 180 degrees egocentrically incorrectly, that could cause a big problem, because you could ram into the chair behind you.
+[419.080 --> 427.080] So it's very important to know the position of your body relative to objects. Now, for navigation, there's a problem.
+[427.080 --> 437.080] Egocentric coordinates change constantly as you move, right? So the position of everyone in this audience relative to me right now, defined in egocentric coordinates, is one system now.
+[437.080 --> 447.080] And as soon as I move, that changes. So imagine you've driven from Vacaville to here: your egocentric coordinates are constantly changing, right?
+[447.080 --> 455.080] The relative position of Vacaville is constantly changing relative to your body as you're moving. But its allocentric coordinates are staying constant, because Vacaville,
+[455.080 --> 465.080] at least as far as we know, unless there's some major seismic activity, is not changing its position relative to the map of Davis over time. But your body position is constantly changing.
+[465.080 --> 473.080] So egocentric coordinates are extremely important for things like reaching for things in front of us, but they're not in general a great way to navigate.
+[473.080 --> 487.080] Now, perhaps, given the navigation literature, the worst way to navigate is what's called a response strategy or a beacon strategy.
+[487.080 --> 501.080] What is that? That's essentially what GPS is giving you. So that would be that you want to walk, say, to the gas station across the street, and then the next thing you're going to do is walk to the Carl's Jr. across the street from there,
+[501.080 --> 513.080] and the next thing you're going to do is walk back to the Center for Neuroscience. You haven't had to use any relative positions of things. All you've had to do is just remember a sequence of things in turn.
+[513.080 --> 529.080] And that's essentially what GPS is giving you. GPS is saying, take a left at this place, take a right at this junction.
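To make the contrast concrete, here is a small Python sketch of the coordinate bookkeeping: the allocentric (map) position of a landmark is fixed, while its egocentric coordinates have to be recomputed after every step and every turn. The Vacaville numbers and the angle convention are made-up illustrations:

```python
import numpy as np

def to_egocentric(landmark_allo, body_pos, heading):
    """Express a fixed allocentric landmark position in egocentric coordinates.

    heading: facing direction in radians (0 = east, counterclockwise positive).
    Returns (right, ahead): how far the landmark lies to the right of and in
    front of the body. These change with every movement; `landmark_allo` never does.
    """
    d = np.asarray(landmark_allo, dtype=float) - np.asarray(body_pos, dtype=float)
    forward = np.array([np.cos(heading), np.sin(heading)])
    rightward = np.array([np.sin(heading), -np.cos(heading)])
    return float(d @ rightward), float(d @ forward)

vacaville = np.array([0.0, 18.0])   # hypothetical allocentric map coordinates, in km
print(to_egocentric(vacaville, body_pos=[0.0, 0.0], heading=np.pi / 2))  # dead ahead
print(to_egocentric(vacaville, body_pos=[5.0, 5.0], heading=0.0))        # behind, to the left
```

The allocentric input never changes between the two calls; only the body's position and heading do, which is exactly why a map-like code is the stabler thing to memorize.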
So we tend to think that a beaconing strategy, also called a response strategy, in general is not a great way to engage your brain in a meaningful way with all the things around.
+[529.080 --> 549.080] So these are the three fundamental ways that we believe people typically navigate in the wild. And the current thinking is that we've switched a lot of our ways of navigating to these other forms, like the egocentric and beacon forms of navigation.
+[549.080 --> 559.080] Again, a beacon would just be like a big thing that's in front of you and you just walk to it. You don't really remember where anything is placed relative to that.
+[559.080 --> 569.080] So in my lab, what we try to do is we try to understand how the brain remembers spatial locations and how the brain uses these different forms of representations.
+[569.080 --> 580.080] So some of our research naturally involves virtual reality, because virtual reality is something we can build on a computer and have a lot of control over.
+[580.080 --> 593.080] So I'm going to show you an example of a technique that we've been developing in my lab to allow people to navigate, in the lab, in an environment that is generated on a computer.
+[593.080 --> 604.080] So we can get a really detailed sense of how people navigate in new environments. And you might wonder, why not just have people walk around downtown Davis right now, and we can see how they learn this?
+[604.080 --> 613.080] Well, there's a problem there from an experimental standpoint: some of you probably have had more exposure to downtown Davis than others. Some of you may have viewed maps.
+[614.080 --> 623.080] So ideally, as experimentalists, as scientists, we want to have a situation where we can control for your exposure and knowledge of the city.
+[623.080 --> 634.080] And there are other parts about real-world environments that are a little bit complicated, like you're walking around and someone asks you for directions, or you have to stop for a car, or something like that.
+[634.080 --> 638.080] So ideally, we want to have people just walk around and navigate as much as possible.
+[638.080 --> 649.080] So here's an experimental setup that we've been working on in my lab. You can see this thing here; it looks a little bit like a disc that my student is standing on, and he's wearing goggles here.
+[649.080 --> 655.080] So what I'm going to show you is what he experiences when he walks on this treadmill.
+[655.080 --> 668.080] So this is what he is seeing through the goggles. And the image is actually fused.
+[668.080 --> 676.080] So what's happening in your retina is you're seeing two different pieces of the environment, but your brain is actually fusing these two images together.
+[676.080 --> 682.080] So it's appearing as one big environment, even though the way we've rendered it is two separate images.
+[682.080 --> 689.080] So what my student is doing is he's walking around this environment; you can see how he's doing it. He is moving his feet on this treadmill.
+[689.080 --> 693.080] So you can imagine now we have a situation.
+[693.080 --> 699.080] He's running. It's going to go backwards in a second.
+[699.080 --> 706.080] So you can imagine now we have a situation where we can control a lot of variables that previously we did not have an ability to control.
+[706.080 --> 713.080] And we can start to study how people learn large-scale environments in the wild,
+[713.080 --> 717.080] so to speak.
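As an aside on the two-image rendering just described: the two views differ only in that the virtual cameras sit at the two eye positions, offset by the interpupillary distance, and the viewer's brain fuses the results. A minimal sketch, with an assumed y-up coordinate convention and an illustrative 64 mm default; this is not the lab's actual rendering code:

```python
import numpy as np

def eye_positions(head_pos, heading, ipd=0.064):
    """Camera positions for the left- and right-eye images.

    head_pos : (x, y, z) head position in metres, with y pointing up
    heading  : facing direction in radians within the ground plane
    The renderer draws the scene once from each returned position; the two
    images differ only by this horizontal offset.
    """
    head_pos = np.asarray(head_pos, dtype=float)
    # Unit vector pointing to the wearer's right, perpendicular to the facing direction.
    rightward = np.array([np.sin(heading), 0.0, -np.cos(heading)])
    return head_pos - 0.5 * ipd * rightward, head_pos + 0.5 * ipd * rightward
```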
+[717.080 --> 723.080] So what we do is we have people walk around this environment on the treadmill.
+[723.080 --> 728.080] And then we have them point to the locations of objects in this environment.
+[728.080 --> 735.080] So we have you do a classic task in the navigation literature called the judgments of relative direction task.
+[735.080 --> 742.080] And this is tapping largely into your allocentric knowledge of where things are located in your environment.
+[742.080 --> 748.080] So it's providing a relatively stripped-down test of how well you've learned where landmarks are placed.
+[748.080 --> 755.080] So what you're doing is you're imagining standing at one store, with your back facing another, and you're pointing to another.
+[755.080 --> 762.080] So imagine, for example, you are standing in downtown Davis facing south.
+[762.080 --> 768.080] And then you want to point to approximately where Sacramento is.
+[768.080 --> 770.080] So you just imagine yourself in that situation.
+[770.080 --> 772.080] Those are the types of questions that we're asking.
+[772.080 --> 777.080] What we do is we have this done repeatedly, over and over again, throughout the experiment.
+[777.080 --> 780.080] And we do something a little tricky, too.
+[780.080 --> 783.080] The environment is shaped like a big rectangle.
+[783.080 --> 791.080] And we have people answer questions where their body is either aligned or misaligned with the surrounding boundaries.
+[791.080 --> 803.080] And the reason why we do this is we're going to see to what extent people start to form knowledge that is based on the structure or the shape, the allocentric nature, of the environment.
+[803.080 --> 813.080] And to make sure that people really know this environment before they get into it, we have them study a map beforehand, just to make sure that they really know where things are.
+[813.080 --> 818.080] And we compare that with a situation where they don't study a map beforehand.
+[818.080 --> 823.080] And not surprisingly, when you study a map beforehand, in general your knowledge is better.
+[823.080 --> 830.080] You make fewer errors when pointing to the locations of landmarks or stores in the environment.
+[830.080 --> 840.080] And what we find is that over trials, not surprisingly, as you walk through this environment and then point to the locations of objects, you get better and better at the task.
+[840.080 --> 849.080] But interestingly, on the first couple of trials, we find that people typically do not use the shape of the environment to anchor their knowledge.
+[849.080 --> 859.080] So this suggests that it takes a certain amount of time to integrate what you've learned as you freely navigate through an environment with the surrounding structure of the environment.
+[859.080 --> 863.080] So in other words, learning how to represent stuff takes time.
+[863.080 --> 873.080] And that is why we might sound the cautionary note about GPS. When you have your GPS on, you have your mobile phone on, you are short-circuiting some of that normal process.
+[873.080 --> 880.080] Okay, I'm going to skip through the conclusions, because I did want to talk about healthy aging and navigation.
+[880.080 --> 884.080] I know many of us are interested in this topic.
+[884.080 --> 888.080] So what happens as we age with regard to our ability to navigate?
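Returning to the judgments-of-relative-direction task described above: scoring it comes down to computing the correct egocentric bearing of the target and wrapping the pointing response into a signed error. A sketch under assumed conventions (positive angles mean "to your right"; the store layout is made up):

```python
import numpy as np

def jrd_error(stand_at, facing, target, response_deg):
    """Signed pointing error for one judgments-of-relative-direction trial.

    Imagine standing at `stand_at`, facing the landmark at `facing`; the correct
    answer is the egocentric bearing of `target` (0 = straight ahead,
    positive = to your right). `response_deg` is the participant's pointing angle.
    """
    stand_at, facing, target = (np.asarray(p, dtype=float) for p in (stand_at, facing, target))
    face_dir = facing - stand_at
    target_dir = target - stand_at
    face_angle = np.arctan2(face_dir[1], face_dir[0])        # allocentric facing angle
    target_angle = np.arctan2(target_dir[1], target_dir[0])  # allocentric bearing of target
    correct_deg = np.degrees(face_angle - target_angle)      # egocentric bearing of target
    return (response_deg - correct_deg + 180.0) % 360.0 - 180.0  # wrap to (-180, 180]

# Hypothetical trial: stand at one store, face a second, point to a third.
print(jrd_error(stand_at=[0, 0], facing=[0, 10], target=[10, 0], response_deg=80.0))  # -10.0
```

Averaging the absolute value of this error across trials gives the improvement-over-trials curves described in the experiment, and splitting trials by aligned versus misaligned body orientation tests the boundary-anchoring effect.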
+[873.080 --> 880.080] Okay, I'm going to skip through the conclusions because I did want to talk about healthy aging and navigation.
+[880.080 --> 884.080] I know many of us are interested in this topic.
+[884.080 --> 888.080] So what happens as we age with regard to our ability to navigate?
+[888.080 --> 906.080] And one of the ideas in the literature is that, because of some changes in the structure of your brain, we switch from these allocentric strategies to non-allocentric strategies, like the egocentric position strategy and what I called the beaconing strategy.
+[906.080 --> 912.080] And the issue is that these ultimately may not be the best way to navigate.
+[912.080 --> 924.080] And remember, I said that the goal we hope happens when people navigate is that they form rich representations that tell them about the relative positions of objects within their environment to each other.
+[924.080 --> 936.080] So if we think that one of the things that happens with aging is that there is a decrement in this process, is there a way that we could potentially try to reverse or stop this?
+[936.080 --> 953.080] And I've been collaborating with Beth Ober in the Department of Human Development on this issue, where we show young healthy undergraduates, and then individuals who are 80 and over,
+[953.080 --> 963.080] maps, as well as have them navigate in an environment just like what you saw with the treadmill, but instead of being on a treadmill, they're using a joystick.
+[963.080 --> 971.080] That makes life a little easier; that treadmill can be a little crazy for some people. Although my son, who's six, loves it.
+[971.080 --> 974.080] That says something about virtual reality.
+[974.080 --> 981.080] So for example, what we find is that when we show individuals 80 and older
+[981.080 --> 989.080] maps of an environment, in general they tend to use this information quite well compared to when they freely navigate an environment.
+[989.080 --> 995.080] So there appears to be a benefit to showing older adults a map compared to having them freely navigate,
+[995.080 --> 1004.080] compared to younger people, who admittedly generally do better on the task but don't show the same proportional benefit from studying a map.
+[1004.080 --> 1022.080] So one of the areas that we are starting to investigate is whether we can use maps and the general structure of environments, because many of our cities are shaped like rectangles or have grid shapes to them, or other shapes, and that could be very useful for remembering where things are.
+[1022.080 --> 1034.080] Can we use this type of iterative training to help rescue or encourage use of allocentric spatial memory strategies?
+[1034.080 --> 1045.080] That's something we're currently investigating in the lab. We've been very fortunate to get a small amount of money from the UC Davis Alzheimer's Center to investigate this issue, thanks to Charlie DeCarli and others at that center.
+[1045.080 --> 1050.080] But we're really just getting started on this and we hope there's a lot more that we can learn.
+[1050.080 --> 1065.080] So I did want to talk a little bit about the brain, and I want to give some of what I think is good news, and some major changes that I think have happened in how we think about the brain more generally and what that could mean for at least many of you.
+[1065.080 --> 1077.080] So our classic way of thinking about the brain is what we would call the localizationist perspective, and it's essentially this: one brain region, one function.
+[1077.080 --> 1089.080] And this works somewhat well in some contexts. So we have vision here at the back of the brain. In general we know that there are many neurons responsive to visual features in the back of the brain.
+[1089.080 --> 1099.080] And if we damage the back of the brain, called visual cortex, people will be blind.
We also know if we damage an area called the cerebellum that we severely impair motor control.
+[1099.080 --> 1109.080] We know the cerebellum is important for motor control. But how about some of these higher cognitive functions that I've been talking about, like allocentric navigation or egocentric navigation?
+[1109.080 --> 1120.080] Can we stick them in one part of the brain? Well, we used to think that, and we spent decades investigating that issue. And in general I think the answer is that it has not been borne out the way that we thought it would.
+[1120.080 --> 1133.080] And increasingly we have started to move to a different perspective on the brain. I don't know if any of you have been on a flight lately, been through an airport, maybe hopefully not.
+[1133.080 --> 1145.080] But if you have had that displeasure or pleasure, depending on your perspective, you may have glanced at one of these maps of how airlines are interconnected in the continental United States.
+[1145.080 --> 1156.080] We have these things called hubs, which is what we would call the areas where airlines typically have most of their flights taking off and landing.
+[1156.080 --> 1162.080] And then we have other areas, depending on the airline, where they just don't have as many flights to them.
+[1162.080 --> 1171.080] So we already think about air travel, and a lot of travel, in this highly interconnected, dynamic fashion. What do I mean by dynamic?
+[1171.080 --> 1181.080] If we looked at this airline map of the United States at any given time, things would look really, really different. We might see a lot of flights coming into Phoenix, of course Southwest.
+[1181.080 --> 1191.080] We might see fewer flights going into areas in the Pacific Northwest. But in general, looking at any given time would reveal very different things. So that's what we mean by dynamic.
+[1191.080 --> 1203.080] And there's a new method that has been developed really in the last two decades called graph theory analysis, which lets us take these highly interconnected maps and try to make some sense of them.
+[1203.080 --> 1214.080] Now you might not be surprised to hear that the brain also has some of these similar properties. In other words, there are areas that serve as hubs that are highly interconnected with other areas.
+[1214.080 --> 1227.080] And that this can be highly dynamic depending on what we are doing with our brain. So an area that my graduate students and I have been investigating is: can we apply these methods to understanding something like memory?
+[1227.080 --> 1233.080] And in particular, our memory for spatial locations and the order in which things happen.
+[1234.080 --> 1244.080] And again, remember that we used to think in a very localized fashion about the brain. And one of the areas we've historically focused a lot on is called the hippocampus.
+[1244.080 --> 1248.080] We used to think if you lose your hippocampus, you lose all your memory.
+[1248.080 --> 1259.080] The new perspective that is starting to emerge in cognitive neuroscience is that that is simply not true. There are many other areas that participate meaningfully in memory.
+[1259.080 --> 1275.080] So the good news is that if you suffer, hopefully not, but if you do at some point in your life, damage to any part of your brain, other parts of your brain may be able to dynamically reconfigure and take over for some of that lost function.
+[1275.080 --> 1285.080] And that is a new emerging area which my lab is very interested in.
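(Aside: the graph-theory analysis mentioned above often starts with something as simple as degree centrality, which measures how connected each node is. Below is a toy sketch using the real networkx library on an invented airport-style network; the same call works on a brain-region connectivity graph.)

```python
# Toy "hub" analysis with degree centrality on a made-up airline network.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("PHX", "LAX"), ("PHX", "SMF"), ("PHX", "DEN"), ("PHX", "SEA"),
    ("DEN", "ORD"), ("ORD", "JFK"), ("SMF", "SEA"),
])

# Fraction of other nodes each node connects to directly.
centrality = nx.degree_centrality(G)
hubs = sorted(centrality, key=centrality.get, reverse=True)[:2]
print(hubs)   # PHX comes out as the biggest hub in this toy network
```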
And again, this really contrasts with how we used to think about the brain, as a more static structure.
+[1285.080 --> 1296.080] We used to think one brain region, one function; lose that brain region, lose that function. We are now starting to see behavior and cognition as a more distributed phenomenon.
+[1296.080 --> 1302.080] In other words, many other parts of the brain can take over for that lost function.
+[1302.080 --> 1314.080] So we still view brain areas like the hippocampus as important for memory. But increasingly we are starting to see that other brain areas are playing critical roles in how this works, too.
+[1314.080 --> 1319.080] And if you're interested in a more technical discussion about that, I'm happy to have it.
+[1319.080 --> 1327.080] But the important implication of this is that there is a possibility for other brain structures to take over for lost function.
+[1327.080 --> 1337.080] And you might ask, how? The two areas that are active areas of research in many labs, including my lab, are cognitive rehabilitation and neurostimulation therapy.
+[1337.080 --> 1342.080] So I'm going to show you a future direction, and then I should take some questions, because I think I'm already over.
+[1342.080 --> 1344.080] That's the sign of a good talk.
+[1344.080 --> 1351.080] So let me show you what my lab has just started working on. And this is taking people on our treadmill.
+[1351.080 --> 1364.080] Taking people on the treadmill, what we're going to do is record from their brain while they navigate in these large-scale environments.
+[1364.080 --> 1372.080] So this is an individual who is wearing a cap for scalp EEG. It has electrodes that can pick up signals from the brain.
+[1372.080 --> 1380.080] He's walking on the treadmill. You can see what he is seeing as he is navigating. And you can see the brain signals that we are continuously recording while he navigates.
+[1380.080 --> 1388.080] So this will give us new insight into how the brain codes things like spatial distance and spatial direction.
+[1388.080 --> 1394.080] Okay, since I'm out of time, I'm going to mention really quickly my very generous sponsors from the federal government.
+[1394.080 --> 1399.080] But I should mention I'm fortunate to be further along in my career, where I've been able to get some of these things.
+[1399.080 --> 1408.080] But the early work in my career was in part funded by generous donations from family foundations and people like you.
+[1408.080 --> 1418.080] And I can't emphasize enough how important these sources are for fueling innovation and new ideas, because government funding is drying up, unfortunately.
+[1418.080 --> 1425.080] And in addition, government funding doesn't tend to fund the high-risk types of things.
+[1425.080 --> 1429.080] All right, I want to thank you very much and I'm happy to take any questions.
diff --git a/transcript/allocentric_akfatVK5h3Y.txt b/transcript/allocentric_akfatVK5h3Y.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ced2008bdd31c26f0305a33eb89489866dba48c8
--- /dev/null
+++ b/transcript/allocentric_akfatVK5h3Y.txt
@@ -0,0 +1,47 @@
+[0.000 --> 6.000] Hello friends, I am Surbi and once again welcome to my channel Key Differences.
+[6.000 --> 13.000] Today in this video tutorial I am going to explain to you the difference between verbal and non-verbal communication.
+[13.000 --> 16.000] So friends, let's get started.
+[16.000 --> 29.000] After watching this video you will be able to understand what communication is, what the process of communication is,
+[29.000 --> 43.000] and what its types are, what verbal communication is and its types, what non-verbal communication is and what the types of non-verbal communication are.
+[43.000 --> 49.000] Lastly, what is the difference between verbal and non-verbal communication?
+[49.000 --> 53.000] Now come, let's understand the meaning of communication.
+[53.000 --> 57.000] Communication is the process of interacting with people.
+[57.000 --> 65.000] Whether you speak something or not, your behavior, attitude or body language conveys a message to the other party.
+[65.000 --> 73.000] Meaning that communication is not dependent on words. It is possible even without the use of words.
+[73.000 --> 78.000] So now we are going to understand the process of communication.
+[78.000 --> 90.000] In the process of communication, the sender encodes a message and sends it through a proper channel, that is, email, phone, SMS, etc., to the receiver.
+[90.000 --> 101.000] The receiver decodes the message and, after interpreting it, gives proper feedback through a proper channel to the sender.
+[101.000 --> 105.000] So in this way the process of communication continues.
+[105.000 --> 110.000] So now we are going to understand the types of communication.
+[110.000 --> 120.000] So on the basis of channel there are two types of communication: verbal communication and non-verbal communication.
+[120.000 --> 125.000] Let's understand the meaning of verbal communication.
+[125.000 --> 136.000] The communication in which we use words and language to communicate the intended message to the other party is called verbal communication.
+[136.000 --> 140.000] It can be performed in two ways.
+[140.000 --> 146.000] That is, oral communication and written communication.
+[146.000 --> 159.000] Oral communication is communication through spoken words. That is, face-to-face communication, voice chat, video conferencing or communication over the telephone or mobile phone.
+[159.000 --> 173.000] On the other hand, written communication entails the use of letters, documents, emails, SMS, various chat platforms, social media, etc. to interact with people.
+[173.000 --> 183.000] What is non-verbal communication? Non-verbal communication is wordless communication, as it does not use words.
+[183.000 --> 196.000] The communication takes place through signals such as facial expressions, body language, nodding of the head, gestures, postures, eye contact, physical appearance and so forth.
+[196.000 --> 201.000] Now come, let's understand the types of non-verbal communication.
+[201.000 --> 214.000] The communication through body language, facial expressions, gestures, postures and eye contact is called kinesics.
+[214.000 --> 227.000] In artifacts, you learn how the appearance of a person speaks a lot about his personality. That is, the way he or she is dressed, accessories carried by him, etc.
+[227.000 --> 237.000] Proxemics. The distance maintained by a person while communicating with another tells you a lot about their relationship.
+[237.000 --> 249.000] Chronemics is the use of time in communication. It tells you about how punctual or disciplined a person is, or how serious the person is regarding the matter.
+[249.000 --> 260.000] Vocalics. The volume, tone of voice and pitch used by the sender to transmit information is called vocalics.
+[260.000 --> 270.000] The use of touch in communication to express emotions and feelings is called haptics.
+[270.000 --> 284.000] Come, let's discuss the difference between verbal and non-verbal communication. Meaning: verbal communication is the process of communication in which words and language are used to transmit the message to another person.
+[284.000 --> 299.000] Whereas in non-verbal communication, we do not use words. Instead, we use signals to transmit the message. The signals can be facial expressions, eye contact, body language, paralanguage, sign language, etc.
+[299.000 --> 308.000] Next. In verbal communication, the transmission of the message is very fast and feedback can also be provided instantly.
+[308.000 --> 317.000] Whereas non-verbal communication relies on the understanding of the receiver, so it consumes a lot of time.
+[317.000 --> 325.000] When it comes to delivery of the message, there are far fewer chances of confusion in the case of verbal communication.
+[325.000 --> 335.000] Contrary to this, the chances of confusion and misunderstanding are relatively high in non-verbal communication.
+[335.000 --> 344.000] In verbal communication, the presence of both the parties, that is sender and receiver, at the place of communication is not necessary.
+[344.000 --> 353.000] As against this, in non-verbal communication, the presence of both the parties at the time of communication is a must.
+[353.000 --> 362.000] The best thing about verbal communication is that the message can be clearly understood and feedback can also be provided immediately.
+[362.000 --> 375.000] Whereas the best thing about non-verbal communication is that it complements verbal communication. That is, it helps in understanding the lifestyle and emotions of the sender.
+[376.000 --> 385.000] Okay guys, this is all for this video. Now if you want to study the topic in detail, you can visit our official website, that is keydifference.com.
+[385.000 --> 395.000] Here you can find a detailed comparison of the two types of communication along with their definitions.
+[395.000 --> 398.000] We have also provided the links in the description below.
+[399.000 --> 404.000] So friends, I hope you enjoyed watching this video. Please like and share this video.
+[404.000 --> 409.000] And if you have any queries or feedback for me, don't hesitate to comment below.
+[409.000 --> 415.000] And please like our channel to never miss a video from Key Differences. Okay then, bye bye for now.
diff --git a/transcript/allocentric_bQLya0OLd2A.txt b/transcript/allocentric_bQLya0OLd2A.txt
new file mode 100644
index 0000000000000000000000000000000000000000..46986cf4cc630209898db85d31d37cb5004228a3
--- /dev/null
+++ b/transcript/allocentric_bQLya0OLd2A.txt
@@ -0,0 +1,45 @@
+[0.000 --> 13.660] Everyone, a quick look now at the nonverbal communications, which are absolutely brilliant.
+[13.660 --> 17.680] So I'm going to click on My Status.
+[17.680 --> 28.040] Okay, so there we are, we've got a wonderful: happy, surprised, faster, sad, confused, slower.
+[28.040 --> 29.520] And agree and disagree.
+[29.520 --> 32.120] So if you want to do instant polls, are you happy?
+[32.120 --> 33.520] Are you understanding?
+[33.520 --> 34.720] Are you with me?
+[34.720 --> 36.040] I can agree.
+[36.040 --> 39.760] Now we can see here that I agree.
+[39.760 --> 43.680] My iPad might disagree.
+[43.680 --> 46.840] So he is not happy at all.
+[46.840 --> 48.720] But you can't see that.
+[48.720 --> 52.520] So let's go to the Magic Purple button.
+[52.520 --> 53.520] And there we are.
+[53.520 --> 55.280] Look, it went straight to People.
+[55.280 --> 57.920] It knew what we were looking for.
+[57.920 --> 61.840] And you can see that Carola agrees, but the iPad disagrees.
+[61.840 --> 67.480] Okay, now then, if the iPad is happy. Oh, I managed to click surprised.
+[67.480 --> 73.120] If the iPad is happy, there we are, the iPad is suddenly happy.
+[73.120 --> 77.960] And I might be happy here.
+[77.960 --> 82.080] If I'm sad, it shows up.
+[82.080 --> 83.640] Look at that little sad face.
+[83.640 --> 90.680] And if my iPad is confused, there we are, we've got a very confused face.
+[90.680 --> 92.440] So that's really useful.
+[92.440 --> 94.160] Now there's one more thing.
+[94.160 --> 96.960] And there's a hands-up symbol.
+[96.960 --> 97.960] There you are.
+[97.960 --> 102.960] The iPad put his hand up.
+[102.960 --> 104.960] Okay.
+[104.960 --> 110.160] And even though I wouldn't generally do it as a teacher, I can raise my own hand.
+[110.160 --> 111.160] And there you are.
+[111.160 --> 113.360] You can see that there are hands up.
+[113.400 --> 118.520] I can lower the iPad's hand when I've answered his problem.
+[118.520 --> 120.960] And hopefully he's no longer confused.
+[120.960 --> 123.800] And I can lower my own hand.
+[123.800 --> 128.240] So, buttons at the bottom.
+[128.240 --> 133.080] Put my hand up.
+[133.080 --> 134.600] Look at the attendees.
+[134.600 --> 138.280] I've got my hand up.
+[138.280 --> 142.920] If I close all the buttons, and my iPad puts a hand up,
+[143.800 --> 147.040] I get a notice.
+[147.040 --> 152.760] Okay, yeah, that's more or less covered those nonverbal communications.
+[152.760 --> 155.560] So I can lower the iPad's hand.
+[155.560 --> 156.840] Right? Thanks very much.
+[156.840 --> 159.840] Bye for now.
diff --git a/transcript/allocentric_c-N8Qtz_g-o.txt b/transcript/allocentric_c-N8Qtz_g-o.txt
new file mode 100644
index 0000000000000000000000000000000000000000..5385399ebc0f6bc5fb50790de222e1525d04ed11
--- /dev/null
+++ b/transcript/allocentric_c-N8Qtz_g-o.txt
@@ -0,0 +1,929 @@
+[0.000 --> 7.400] I can be very loud.
+[7.400 --> 8.680] So thanks, yeah. So, quickly.
+[8.680 --> 13.720] I mean, the only thing I would add to the background there is just that, in retrospect,
+[13.720 --> 17.920] in thinking about the things I've done, I've realized that there's a real theme.
+[17.920 --> 21.560] And this is probably true for many of us: in the things that I did, there is a lot
+[21.560 --> 24.800] of hardware and software and interaction and sorts of things that really attracted me
+[24.800 --> 27.360] to virtual reality long ago.
+[27.360 --> 31.200] Some of you, or most of you, don't know, there are very few of you in this room who would
+[31.200 --> 38.680] know, that my life and everything I've done has been very much shaped by this sort of dual
+[38.680 --> 42.080] personality that I have from my mother and my father, my father being a very accomplished
+[42.080 --> 45.720] musician and my mother being a mathematician and computer programmer.
+[45.720 --> 51.760] And so she's German heritage and my father's sort of French Irish and, yeah, here's some
+[51.760 --> 52.760] music somewhere.
+[52.760 --> 55.400] So the funny thing is, and this is what people don't know.
+[55.400 --> 58.880] When I graduated from Purdue, I had three or four job offers.
+[58.880 --> 62.880] I turned them all down and I went to New York City to work for a music production company
+[62.880 --> 67.680] to do audio stuff for live performance.
+[67.680 --> 73.980] So I worked, at that point in time, on the international tour of Peter Gabriel, the So
+[73.980 --> 75.180] tour, and set up the audio.
+[75.180 --> 76.180] You'd think I'd have fixed this by now, right?
+[76.180 --> 77.180] It's been a long time.
+[77.180 --> 81.120] It's been a very long time.
+[81.120 --> 83.960] And so we worked on the beginning of the So tour a little bit.
+[83.960 --> 86.880] This is what Peter Gabriel looked like at that time.
+[86.880 --> 91.040] And then I also worked on the rig for The Cure when they were going out on their world
+[91.040 --> 92.840] tour at that time.
+[92.840 --> 96.720] And this is what The Cure looked like at that time.
+[96.720 --> 97.800] A little more interesting.
+[97.800 --> 98.800] All right.
+[98.800 --> 102.000] So then I decided it wasn't the life for me.
+[102.000 --> 105.120] Literally hopped in my car, drove from New York City to Los Angeles and started working
+[105.120 --> 107.040] for NASA, and haven't looked back.
+[107.040 --> 108.040] All right.
+[108.040 --> 113.160] So what I want to talk about today, though, is a bunch of stuff where I want to make sure
+[113.160 --> 117.480] I give some credit to other people that I work with, including Gerd, who is here somewhere
+[117.480 --> 120.760] in the audience, Professor Gerd Bruder, my closest collaborator now.
+[120.760 --> 126.720] So a lot of the things I talk about today in some way involve contributions to work
+[126.720 --> 131.000] from these various people and these funding agencies.
+[131.000 --> 132.480] All right.
+[132.480 --> 136.640] So this is something that, I realized, I was telling Gerd before my talk:
+[136.640 --> 139.600] it's a little bit of a risk here, what I'm going to talk about, because I have thought
+[139.600 --> 143.120] about it for a long time, but I've never tried to put it into words.
+[143.120 --> 146.640] And now I'm going to put it in words and pictures, and we'll see how it goes.
+[146.640 --> 148.160] I don't have any demonstrations for you.
+[148.160 --> 149.240] I am the only demo.
+[149.240 --> 153.080] So I hope I won't break during the talk.
+[153.080 --> 154.760] All right.
+[154.760 --> 158.360] So David Blaine, anybody here know who he is?
+[158.360 --> 159.880] A few people know who he is.
+[159.880 --> 160.880] All right.
+[160.880 --> 164.640] I want to show you just a quick little sort of AR-ish demo from David Blaine.
+[164.640 --> 168.040] And the audio on this clip is the lowest of all the clips that I have.
+[168.040 --> 169.040] So hopefully it'll be okay.
+[169.040 --> 170.640] You might have to listen carefully.
+[171.640 --> 175.640] We're going to try something with the book.
+[175.640 --> 176.640] Seems like your book.
+[176.640 --> 178.640] This is kind of interesting.
+[178.640 --> 182.640] Think of how old you are.
+[182.640 --> 183.640] 30?
+[183.640 --> 185.640] Let's try to think of Megan.
+[185.640 --> 187.640] Let's not use Megan's whole name.
+[187.640 --> 188.640] Let's use her initial.
+[188.640 --> 189.640] And okay.
+[189.640 --> 191.640] So let's just visualize that letter.
+[191.640 --> 192.640] Okay.
+[192.640 --> 197.640] And now I want you to, as you're turning to page 30, for one page, reach here; you're just
+[197.640 --> 198.640] going to hold the book out.
+[198.640 --> 199.640] I just don't want to be near it.
+[199.640 --> 200.640] Hold it out.
+[200.640 --> 201.640] It's a little bit.
+[201.640 --> 206.640] Turn to page 30 and you're going to feel something and see something happen.
+[206.640 --> 209.640] Oh, I see it.
+[209.640 --> 211.640] Oh, my God.
+[211.640 --> 212.640] Oh, my God.
+[212.640 --> 213.640] Oh, my God.
+[213.640 --> 216.640] So, I've never seen how he does parts of that.
+[216.640 --> 217.640] And that's amazing.
+[217.640 --> 222.760] But the thing that always fascinated me about him is the magic that he does, like most
+[222.760 --> 224.160] magicians, appears to be reality.
+[224.160 --> 225.640] I'm not there with them, but it's real.
+[225.640 --> 227.360] That's what makes it so fascinating.
+[227.360 --> 230.480] When we put on virtual reality headgear,
+[230.480 --> 234.920] the magic that happens there is maybe less magical in some ways, because we're conscious
+[234.920 --> 236.440] that we put something on.
+[236.440 --> 238.200] We do, we do all this stuff to get ready for this experience.
+[238.200 --> 239.840] So yes, it's going to be different.
+[239.840 --> 243.400] So why, you know, what's magical about VR and AR?
+[243.400 --> 246.000] It's that we can basically do anything.
+[246.000 --> 249.560] Most of what we do, not all of it, is compatible with real-world things.
+[249.560 --> 251.160] It doesn't have to be.
+[251.160 --> 257.480] But most people, and most things they do, are in some way compatible with it.
+[257.480 --> 261.600] We can do almost anything we want, like I said before.
+[261.600 --> 266.480] What I find really interesting, what I've been thinking a lot about, is the sense of,
+[266.480 --> 270.240] and I'm going to touch on this throughout my talk, what it means to have a virtual reality
+[270.240 --> 274.560] experience, and why that is so distinct from our real-world experience, and why that is,
+[274.560 --> 276.680] and whether we want that, and why.
+[276.920 --> 283.200] But really, I'm thinking of VR in some sense as being very close to magic.
+[283.200 --> 285.280] And I'm thinking about it in several ways.
+[285.280 --> 288.680] It's in the way it's practiced, in the way it's done.
+[288.680 --> 293.480] So one way, one way I've been thinking about it is, every time you want to use some VR,
+[293.480 --> 297.680] typically somebody has to be given the crown and somebody has to be given the scepter,
+[297.680 --> 300.440] and they're allowed to do the thing, and nobody else can kind of do it.
+[300.440 --> 306.000] There are group VR experiences, but there's still the sense that you have to be chosen
+[306.120 --> 310.120] and given the opportunity to do this virtual experience.
+[310.120 --> 316.040] And that was true in the very beginning, from the first time that Ivan Sutherland and his
+[316.040 --> 322.200] students built a head-mounted display system at MIT and in Utah in the late 60s.
+[322.200 --> 323.360] And it's still true today.
+[323.360 --> 325.560] So hopefully this video is not too loud.
+[325.560 --> 327.000] Coming in.
+[327.000 --> 329.000] So we're going to show you some virtual reality today.
+[329.000 --> 332.600] It's really hard to show people what it's like to be in virtual reality without having them
+[332.600 --> 333.960] try it for themselves.
+[333.960 --> 337.520] Filming you in the green screen studio is just the best way we found to help everyone
+[337.520 --> 339.640] else understand what it's like to be in VR.
+[339.640 --> 340.640] Any questions?
+[340.640 --> 341.640] Can I go first?
+[341.640 --> 343.640] One person gets to do it.
+[343.640 --> 344.640] All right, go crazy.
+[344.640 --> 345.640] Hi.
+[345.640 --> 350.640] And it's magical for that person; everybody else...
+[350.640 --> 352.640] Oh, you know what?
+[352.640 --> 356.640] Oh, I feel like I'm just sitting there.
+[356.640 --> 357.640] That's pretty good.
+[357.640 --> 358.640] I'm not saying that.
+[358.640 --> 359.640] I'm not saying that.
+[359.640 --> 360.640] I'm not saying that.
+[360.640 --> 361.640] I'm not saying that.
+[361.640 --> 362.640] I'm not saying that.
+[362.640 --> 363.640] I'm not saying that.
+[363.640 --> 364.640] Yes, it.
+[364.640 --> 365.640] Come on.
+[365.640 --> 369.440] Oh, you're so cool.
+[369.440 --> 374.880] So I've been thinking now about what it is that's been sort of, I'd say, bugging me.
+[374.880 --> 375.880] Maybe bugging's a little strong.
+[375.880 --> 379.640] But the thing I've really been thinking about is why it is that in our discipline, it
+[379.640 --> 384.880] seems like in many disciplines, we tend to have, and I'm going to touch on this later,
+[384.880 --> 389.440] almost tribes, or groups of individuals who work together and focus on something and
+[389.440 --> 391.840] sort of exclude everybody else.
+[391.840 --> 393.720] And it's sort of my tribe and your tribe.
+[393.720 --> 396.200] And we don't necessarily work together.
+[396.200 --> 399.360] We don't necessarily think across tribes and things like that.
+[399.360 --> 402.200] And so we have this, I call it the purification of the disciplines.
+[402.200 --> 407.680] It's almost a self-fulfilling convergence of those groups of researchers over many
+[407.680 --> 409.400] years.
+[409.400 --> 414.760] So some of this special status, I think about as being inherent in the way we think about
+[414.760 --> 416.680] ourselves and the things we do.
+[416.680 --> 421.560] So those of us who practice VR, do research in VR, and when I say VR and AR, I mean broadly
+[421.560 --> 425.040] HCI-related things, user interaction.
+[425.040 --> 426.040] So we're sort of the wizards.
+[426.040 --> 429.120] We know how to do it, and everybody else, they're sort of muggles, and they don't have any
+[429.120 --> 430.120] idea what's going on.
+[430.120 --> 432.400] And we have to kind of show them how to do it.
+[432.400 --> 433.400] And we're special.
+[433.400 --> 435.480] We think of ourselves as maybe a little special.
+[435.480 --> 439.360] We may not consciously think about it that way, but we do think about it a little bit
+[439.360 --> 440.360] that way.
+[440.360 --> 443.480] So it's also inherent in the way we think about it.
+[443.480 --> 445.400] And this is something that really struck me.
+[445.400 --> 451.600] Many of you would know this continuum, but what's interesting to me is: here's virtual,
+[451.600 --> 453.520] and it's way far away from real.
+[453.520 --> 462.520] And so people will get into arguments over where does my work fit in this particular framework.
+[462.520 --> 464.840] Does it fit way down here, is it over here? And people argue about where it is.
+[464.840 --> 470.440] So we think inherently, at least in this construct, about placing our work somewhere
+[470.440 --> 475.920] in here, instead of allowing it to be necessarily many places at once, which some people would
+[475.920 --> 477.120] call mixed reality.
+[477.120 --> 480.720] But I still think we're classifying it.
+[480.720 --> 484.200] And I am not someone who loves classifying things.
+[484.200 --> 488.040] I recognize that classifying things is useful sometimes, absolutely.
+[488.040 --> 489.800] But I don't like to limit myself.
+[489.800 --> 494.760] I don't want our thinking to be limited by the way we classify things.
+[494.760 --> 500.720] So this has happened, and it happens, continues to happen, in this way.
+[500.720 --> 503.520] And again, I'm not saying it's a gloom-and-doom and bad thing, but I'm just thinking
+[503.520 --> 504.520] about it.
+[504.520 --> 507.280] We have all of these different research communities.
+[507.280 --> 508.320] We have our different journals.
+[508.320 --> 510.240] We have our different conferences.
+[510.240 --> 517.320] And these tend to sort of, I think, solidify and converge ideas and thinking
+[517.320 --> 519.320] in those areas.
+[519.320 --> 524.240] I'll touch on this again later, but there's a sense sometimes of, if I'm a reviewer
+[524.240 --> 525.240] for CHI,
+[525.240 --> 528.360] I'd say, oh, this doesn't belong here.
+[528.360 --> 529.360] This belongs somewhere else.
+[529.360 --> 534.560] And so we're inherently partitioning things and spreading them apart so that they fit
+[534.560 --> 536.120] in those particular domains.
+[536.120 --> 542.080] But in doing so, there's a little concern that we may be limiting our thinking.
+[542.080 --> 544.200] So we've seen this sort of thing before.
+[544.200 --> 546.160] This is a little silly, but I wanted to go back and think about it.
+[546.160 --> 548.880] In the beginning of the forming of the earth, there were no humans.
+[548.880 --> 551.480] Then eventually there were humans.
+[551.480 --> 553.440] And it was just a bunch of humans walking around.
+[553.440 --> 558.600] Humans just migrated all over the earth and kind of settled down in different places.
+[558.600 --> 564.680] And at some point, they started building communities, families, communities, larger communities,
+[564.680 --> 566.480] and building up nations.
+[566.480 --> 568.000] So now we have our individual countries.
+[568.000 --> 570.400] We each belong to a country, right?
+[570.400 --> 572.600] And not your country, my country.
+[572.600 --> 574.600] So we're all separated in that way.
+[574.600 --> 576.640] We even have our sports teams.
+[576.640 --> 580.920] I'm not too into sports, but I thought some people here might be.
+[580.920 --> 581.920] Right?
+[581.920 --> 584.480] It's our team, and the other team's evil.
+[584.480 --> 589.640] And we think about it in that sort of clannish way, or that tribe-ish way.
+[589.640 --> 596.440] So in the beginning, of course, when people started doing, maybe in the 60s, VR-ish, AR-ish
+[596.440 --> 600.080] sorts of things, we didn't have these distinct research communities.
+[600.080 --> 602.960] We were just people doing research in that area.
+[602.960 --> 606.600] And then over time, some people started saying, I want to work on head-mounted displays.
+[606.600 --> 609.080] Other people said, I want to work on these user interface parts.
+[609.080 --> 610.480] Other people said, I want to work on displays.
+[610.480 --> 613.200] So they kind of divided up.
+[613.200 --> 618.080] And so now we have, and this is just a short sampling of, a few conferences some of you
+[618.080 --> 622.080] might know about, and how long roughly they've been going on.
+[622.080 --> 630.000] And so you can see across these, when you look at them, that there is some duplication.
+[630.000 --> 636.760] I mean, many of us who work, who participate a lot in IEEE VR and ISMAR, and even SUI and 3DUI
+[636.760 --> 639.920] when it was around, and in VR, there's certainly a lot of overlap.
+[639.920 --> 646.560] I'm not saying it's a bad thing, but there are still these distinct communities.
+[646.560 --> 651.240] And of course, we in this HCI domain aren't alone in this respect.
+[651.240 --> 655.320] If you look, for example, at the whole previous session,
+[655.320 --> 660.200] which I thought was awesome, because wearables and robotics here at SUI as a part of this is
+[660.200 --> 664.480] exactly what I'm going to be talking about at the end of my talk, about bringing those
+[664.480 --> 667.560] things in, which I think is wonderful.
+[667.560 --> 671.400] Whether they're curated or whether they just happen by chance, I still think it's
+[671.400 --> 672.400] fantastic.
+[672.400 --> 678.920] So, right, there are just a few wearable conferences, organizations.
+[678.920 --> 681.760] It's a little more hairy if you go look at, for example, robotics.
+[681.760 --> 687.840] Okay, here are, these are current robotics conferences, just a few of them there.
+[687.840 --> 694.880] And what area, broadly, sort of roughly related to things we do, do you think is the most
+[694.880 --> 702.800] prolific, or has the most events, most conferences?
+[702.800 --> 703.800] Christian, got it.
+[703.800 --> 704.800] Computer vision.
+[704.800 --> 706.000] I mean, look at computer vision.
+[706.000 --> 709.640] These are 2018 computer vision and image processing related events.
+[709.640 --> 713.520] So, each one of these is their own event, their own community; they're all, you know,
+[713.520 --> 720.200] thinking about what belongs in their venue and what doesn't belong in their venue.
+[720.200 --> 724.120] A friend of mine, a University of Maryland administrator, and I were talking about this
+[724.120 --> 726.760] once, and he said, you know, disciplines are defined by their boundaries.
+[726.760 --> 729.720] That's what makes us, that's what makes us who we are.
+[729.720 --> 734.200] We get to say what's at the edge and what's beyond, what is considered part of
+[734.200 --> 736.680] our community and what's not.
+[736.680 --> 743.080] And this is where, for those of you who have served on program committees for, you know, a conference,
+[743.080 --> 747.880] you've probably heard reviewers or seen reviewers write things like, this isn't X, or this
+[747.880 --> 751.000] isn't Y, it doesn't belong here, it belongs somewhere else.
+[751.240 --> 756.600] I'm not saying that's wrong; sometimes that is useful and has to be done. But I do,
+[756.600 --> 764.920] again, think about the cost of excluding some of those things.
+[764.920 --> 770.560] You know, I think about this as a dual-edged sword. It's useful because
+[770.560 --> 773.480] it allows us to focus on common things.
+[773.480 --> 778.320] But I worry that it makes us a little parochial, or a little nationalistic, in our thinking,
+[778.320 --> 782.240] that is, we don't think beyond sort of where we're working at that moment.
+[782.240 --> 785.440] So we may miss some bigger opportunities.
+[787.360 --> 790.560] All right, so here I am, painting a picture of gloom and doom.
+[790.560 --> 793.280] We're all off in our communities and we're converging on something and we don't want to
+[793.280 --> 796.640] talk to anybody else; we're just going to work on whatever it is, tracking, whatever it is.
+[797.440 --> 799.760] So is that going to change?
+[799.760 --> 800.560] It is changing.
+[800.560 --> 802.160] Nothing stays the same; everything changes.
+[802.160 --> 806.560] So it is changing, but I want to talk a little bit about some things I've thought about
+[806.560 --> 808.720] that could impact this.
+[810.400 --> 814.720] So what might disrupt this? And what is disruption? Of course, it's a buzzword; everybody
+[814.720 --> 816.880] talks about disruptive forces.
+[817.680 --> 822.160] One thing I've been thinking about here is disruption occurs, can occur, on many scales,
+[822.160 --> 826.240] right? There can be global disruption: a new thing, a new transistor,
+[826.240 --> 828.240] a new technology that changes the way everything's done.
+[829.120 --> 835.520] And it can be local; it can be in communities, or in research communities, or research groups for that matter.
+[836.880 --> 841.840] So one trend I've been focusing on for a while, I'm thinking about, and it's reflected in a lot
+[841.840 --> 849.040] of the work I've done over the past many years, and in particular work with Gerd, is how it seems to
+[849.040 --> 855.040] me that in general many things about VR, and again, I use that phrase very broadly, are becoming
+[855.760 --> 860.080] closer and closer to something that you would consider, or we might someday believe, is real.
+[860.880 --> 866.880] While at the same time I see a lot of things happening in the real world, changes, technological
+[866.880 --> 870.960] changes, things that one could think about as being virtual, because what is virtual?
+[870.960 --> 875.120] The virtual, remember, has this richness and flexibility that allows us to do anything we want.
+[875.120 --> 879.760] So can we do that in the real world? Well, things maybe in the future are changing in a way
+[879.760 --> 885.200] that would do the same thing from the opposite direction. So I'll give you a couple of examples
+[885.920 --> 891.360] from my own work with Gerd and others, and things we've been thinking about here. This is something we
+[891.360 --> 897.600] called Wobbly Table. This is a VR paper from 2016. And you might think about this as just mixed reality,
+[897.600 --> 902.160] but that wasn't the point here. There's a real table up front, and she is virtual, and she's
+[902.160 --> 906.720] got a virtual table back there. And yes, when Kangsoo, sitting in front here, moves the table,
+[906.720 --> 912.080] her virtual table moves, it's sensed; and now the flip side of this is she can actually move
+[912.080 --> 923.760] his table. So that's the real reason why they play a game of 20 questions. The important thing is that
+[923.760 --> 929.360] she is aware of what's happening. She's aware of when the table moves. She exhibits behaviors:
+[929.360 --> 932.960] she looks down and she knows he's leaned on the table. Just like when you're in a restaurant,
+[932.960 --> 936.000] right? And the table's wobbly and kind of bugs you, and you're going to shove a napkin under it
+[936.000 --> 941.520] or something like that. So you think of that as a nuisance, but in this case we did it as a contrived
+[941.520 --> 945.840] way to see if it could establish a social connection between you and the virtual human.
And what we
+[945.840 --> 951.040] learned is it really makes a difference in how the users feel about her when there's this physical
+[951.040 --> 956.160] connection. When she is aware that the table rocks, and if she leans on it and pushes the table and
+[956.160 --> 960.400] you feel it, it really changes the way people feel about it. So that's the point here, not the mixed
+[960.400 --> 965.840] reality; the point is about that awareness and ability to affect things. So here's another
+[965.840 --> 972.320] example, where we used a wind sensor hidden back in here, and we had a nice oscillating fan,
+[972.320 --> 976.960] not unlike the ones in the corners over here, that was blowing on the subjects as they sat here
+[976.960 --> 983.440] and talked to Katie. And when the fan was pointed away, the paper that she has would be still; when the
+[983.440 --> 988.960] fan moved toward it, the paper would flutter, the virtual paper would flutter. And then at some
+[988.960 --> 993.520] point she would notice it; she would try to push the paper down and kind of look over at it.
+[993.520 --> 999.120] So again, same thing, totally unnecessary, totally contrived. We inserted that, injected
+[999.120 --> 1004.320] that physical-virtual connection, to try and cement or reinforce the
+[1004.320 --> 1010.160] relationship between those two individuals, the physical and the virtual individual.
+[1011.680 --> 1016.960] This one's a little different, but it's still the physical-virtual aspect. What we did here
+[1016.960 --> 1024.080] were experiments to see if witnessing Katie, the virtual human, having a conversation with Michael,
+[1024.080 --> 1030.560] a real human, when you walk into the room, makes you feel differently about her. So again, she is
+[1030.560 --> 1035.120] aware, that is, she appears to be aware; she's not really, obviously. She appears to be aware that he's
+[1035.120 --> 1039.600] there. He's having a high-level conversation with her, they're laughing, they're telling jokes,
+[1039.600 --> 1045.360] and then he says, oh, your visitor's here, I'll see you later, and he leaves. So now you have this
+[1045.360 --> 1049.920] sense, without saying anything, that she is aware of what's happening in the room, she's aware of
+[1049.920 --> 1058.000] people there, and again, perhaps that she can influence it. So, no surprise, it again makes a significant
+[1058.000 --> 1065.280] difference in how people feel about her. So this is physical and virtual connecting to change the way
+[1065.280 --> 1072.240] people feel about, in this case, virtual humans. So we have little gadgets we played around with a
+[1072.240 --> 1076.480] lot in these cases, little wind sensors and other little devices, and this has led me,
+[1076.480 --> 1083.760] and I'm not the only person, to think a bit about the coming of the internet of things. And yes,
+[1083.760 --> 1090.400] it's a cliche and all of that, but it is happening, and it is in some sense disruptive because we're not
+[1090.400 --> 1094.640] driving that, right? It's a little, it could be a little disruptive; it's happening outside of anything
+[1094.640 --> 1100.160] we do, and there are people who think they've invented the ideas of network appliances or whatever,
+[1100.160 --> 1105.440] and they're off just making all this stuff happen.
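(Aside: the physical-virtual coupling behind the wobbly table and the fan can be reduced to a simple sense-then-mirror loop. The sketch below is a heavily simplified, invented stand-in, with a simulated tilt sensor in place of real hardware and an invented threshold; it is not the authors' implementation.)

```python
# Toy sense-then-mirror loop: a (simulated) tilt sensor on the real table
# drives the virtual table's pose, and a large change triggers an awareness
# behavior in the virtual human. All names and thresholds are invented.
import random
import time

def read_table_tilt_deg():
    """Stand-in for a real IMU or tilt sensor attached to the physical table."""
    return random.uniform(-3.0, 3.0)

virtual_table_tilt = 0.0
for _ in range(5):
    tilt = read_table_tilt_deg()
    delta = abs(tilt - virtual_table_tilt)
    virtual_table_tilt = tilt              # mirror the real pose in VR
    if delta > 2.0:                        # someone leaned on the table
        print("agent: glance down at the table and steady it")
    time.sleep(0.1)
```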
So we could sit there and say,
+[1105.440 --> 1110.560] tribally, oh, that's not our field, we don't do IoT, we just do, let's say, user
+[1110.560 --> 1115.600] interfaces or VR; or we can look at this and say, wow, is this an opportunity for us?
+[1116.560 --> 1119.920] You know, I was thinking about this, like, where the Alexas are; let's not talk about Alexa.
+[1119.920 --> 1123.920] Yesterday someone was talking about activating the physical world. He said that, and I was like,
+[1124.880 --> 1134.320] this is maybe a way of thinking about that. So a side effect is that, if you allow that IoT
+[1134.320 --> 1140.080] could become useful in our spatial user interfaces and in our daily AR and VR experiences,
+[1140.080 --> 1147.360] if you allow that, what that can start to do is transform the experiences that we're used to
+[1147.360 --> 1151.840] having in AR and VR, which are typically very egocentric. So I put on a head-mounted display;
+[1151.840 --> 1155.280] you know, Kyle doesn't get to see it, it's me, I get everything. I get the sounds,
+[1155.280 --> 1160.080] I get the sights, everything is for me. Into something that is off my head and now more into the
+[1160.080 --> 1165.360] real world. And so the sounds that I hear, or the effects that happen, could be happening from things
+[1165.360 --> 1169.760] in the real world; the real-world objects could be sensing me, so I don't even need to have my
+[1169.760 --> 1174.960] head-mounted display on. My dog comes in, you know, the IoT device could sense that, and my AR agents,
+[1174.960 --> 1182.720] for example, could be aware of that when something happens. So we're looking historically at a trend
+[1182.720 --> 1188.960] there, at least as somebody in this mindset, myself, standing back and looking at these things that
+[1188.960 --> 1194.160] have started. And so one of the visions I've had for a while, that Gerd and I have been
+[1194.160 --> 1199.920] formulating for a while, is this idea of what we call augmented reality input and output devices.
+[1199.920 --> 1204.720] The idea would be that you'd have a component that's like a tracking system, but it'd be a
+[1204.720 --> 1208.880] box that you just set down in different places in the real world. And these devices would talk to
+[1208.880 --> 1216.560] each other, they would network with each other, and they would set up a sort of separate subsystem
+[1216.560 --> 1221.440] of awareness and effectiveness. So an AR application, for example, could ask that network and say,
+[1221.440 --> 1225.440] let me know if there's motion over here, let me know if there's a sound over here, tell me where
+[1225.440 --> 1230.240] it is, tell me what it is, listen for these words, look for these actions throughout that space.
+[1230.240 --> 1235.920] And then they could also output for you, so they could provide sound, some of them could provide
+[1235.920 --> 1240.320] liquids, some of them could provide, you know, airflow, and so on. And so imagine a training
+[1240.320 --> 1246.480] scenario like this, training nurses and physicians. One of the difficulties in using AR/VR in
+[1246.480 --> 1250.320] something on a scale like this is everybody has to wear a head-mounted display, everybody's running around.
+[1250.320 --> 1255.200] There's a bunch of real-world stuff; there are actors who come in who just need to pretend to
+[1255.200 --> 1260.640] be the patient or pretend to be a paramedic; they don't need to wear a head-mounted display.
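(Aside: the "ask the network" idea just described maps naturally onto a publish/subscribe pattern. The sketch below is an invented illustration of that pattern, not the envisioned system's actual interface; the class and event names are hypothetical.)

```python
# Minimal pub/sub sketch of what sensing units spread around a room could
# offer an AR application: register interest in events, get called back.
from collections import defaultdict

class SensorNetwork:
    def __init__(self):
        self.subscribers = defaultdict(list)   # event type -> callbacks

    def subscribe(self, event_type, callback):
        self.subscribers[event_type].append(callback)

    def publish(self, event_type, **info):     # called by a sensing unit
        for cb in self.subscribers[event_type]:
            cb(info)

net = SensorNetwork()
net.subscribe("motion", lambda e: print("AR app: motion at", e["where"]))
net.subscribe("speech", lambda e: print("AR app: heard:", e["words"]))

net.publish("motion", where=(2.5, 0.0, 1.0))           # a unit saw movement
net.publish("speech", words="I need help over here")   # a unit heard this
```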
+[1260.640 --> 1267.120] So what we envision here is these ARIO units, these AR input/output units, would be spread out throughout objects,
+[1267.120 --> 1270.640] so they could make a bed shake. So even though you've got a virtual human on the bed,
+[1270.640 --> 1274.400] if you had an ARIO unit somewhere sitting on this structure, or on the rolling structure,
+[1274.400 --> 1279.360] if the virtual patient is supposed to be shuddering because they're having some sort of seizure,
+[1279.360 --> 1282.560] then this unit could be shaking the bed, it could be vibrating the bed a little bit. So it could
+[1282.560 --> 1287.680] provide that haptic part while your head-mounted display provides the visual part; a little bit of
+[1287.680 --> 1293.520] sneezy moist nastiness coming from a child here who sneezes on you, some vomit over here,
+[1293.520 --> 1299.360] some blood, different things. So an ability to sense: so again, if a real person somewhere waves
+[1299.360 --> 1303.440] their hand and says, I need help over here, but they don't have a head-mounted display, this system
+[1303.440 --> 1308.640] ought to be able to sense that, and the AR/VR system ought to be aware of that; it's not right now.
+[1308.640 --> 1314.240] AR/VR systems are very closed in their sort of awareness of what's going on in the world,
+[1314.240 --> 1321.440] typically, not all but typically. So what do I mean when I talk about the collapse? I don't mean
+[1321.440 --> 1326.640] falling apart; that's not what I mean when I talk about the collapse. I mean collapse as in wrapping
+[1326.640 --> 1332.720] around, and I'm not sure wrapping around is quite the right way to think about it. It's not
+[1332.720 --> 1337.680] satisfying to think about it as becoming a big soup of stuff that doesn't have any organization,
+[1337.680 --> 1343.760] because that's not satisfying either. But I am conscious, again, of the cost of spreading things
+[1343.760 --> 1348.320] out like this. So there are people who've thought about other continuums, other ways of thinking
+[1348.320 --> 1354.560] about this. So here's an example from Microsoft, thinking about it in more of a Venn sense:
+[1354.560 --> 1360.480] mixed reality in the middle, the real environment, the human, the computer, some other things where they
+[1360.560 --> 1369.200] overlap. Some of you I know in this room know Chris Stapleton. And so here is a version of that
+[1369.200 --> 1374.560] sort of notional diagram from Chris. And one of the things I think is unique, and I think interesting,
+[1374.560 --> 1381.040] here is that he includes imagination. It's maybe a little unsatisfying for some of us that it's so big,
+[1381.040 --> 1386.240] because we don't control imagination; at least we don't think about controlling it right now.
+[1386.640 --> 1393.200] It's a whole other topic, but it's something that some people, psychologists, certain people in
+[1393.200 --> 1399.520] our field, do think about when they do experiments or create experiences. They think about priming
+[1400.320 --> 1405.680] the individuals for what they're about to receive before they do it. So they try to steer them
+[1405.680 --> 1411.520] mentally into a particular place before they have the experience. So that is, you know, there is
+[1411.520 --> 1417.520] something there; maybe you could control the imagination, but it seems pretty hard. All right, so virtual
+[1417.520 --> 1421.120] becoming real. I'm just going to go back and forth between these two things a little bit: virtual becoming real.
+[1422.160 --> 1426.320] So what, you know, what does this mean? There are so many different ways that I've been thinking about
+[1426.320 --> 1432.720] this. So one is with respect to just the actual experience of doing something virtual. So,
+[1432.720 --> 1436.720] has anybody in this room read this book, or seen the book, Infinite Reality? A couple of hands, people
+[1436.720 --> 1441.040] who read the book. So one of the very first things Jeremy and Jim talk about at the beginning of the book
+[1441.040 --> 1445.520] is how virtual experiences are real experiences. They're real for that person who's experiencing it
+[1445.520 --> 1450.560] at that moment. And he also talks a lot about how, you know, in some sense, the brain doesn't really
+[1450.560 --> 1454.880] care much about what it's processing, whether it's real or virtual; we can be influenced
+[1455.760 --> 1463.040] in the same way. So there's a psychological, or psychology, aspect of that virtual influencing
+[1463.040 --> 1469.440] the real. Jeremy has done, for those of you who don't know, I don't know, hundreds of experiments on
+[1470.240 --> 1476.000] how virtual things can affect real behaviors. So things like cutting down a virtual tree with a
+[1476.000 --> 1483.040] haptic chainsaw device causes the subjects to conserve more paper towels after they leave
+[1483.680 --> 1490.160] the experiment room. Another example is saving a child. He has this one
+[1490.560 --> 1494.560] experiment where you're Superman. They put you in a head-mounted display and you fly through a city
+[1494.560 --> 1498.240] that's been evacuated because something bad has happened. There's one child that nobody can find;
+[1498.240 --> 1502.640] the child is diabetic, needs insulin, so, like, you've got to find the kid. And so you do this virtual experience,
+[1502.640 --> 1507.280] you find the kid. And then afterwards, in a, you know, contrived way, when you're filling out the
+[1507.280 --> 1510.240] questionnaire after you've done the study, you think you're all done, you're sitting down at the
+[1510.240 --> 1515.120] table doing a questionnaire, and the researcher accidentally knocks over a bin full of pencils.
+[1515.120 --> 1519.520] And they fall all over the table and the ground. And it's amazing statistically how many people who
+[1519.520 --> 1523.760] had the Superman experience helped pick up the pencils compared to the people who didn't.
+[1525.440 --> 1533.040] I'm involved with a couple of, some of you know, I've been trying to start some workshops
+[1533.040 --> 1538.160] with some other folks, called VR for Good, AR for Good, kind of mixed things that cross
+[1538.160 --> 1544.240] VR and AR, and I'm involved with some other organizations looking at VR and AR for the social good. And
+[1544.800 --> 1551.920] people think about, how do we, how can we use VR and AR? And the most obvious thing people think
+[1551.920 --> 1557.440] about is, oh, show me the starving child, show me this, show me whatever it is. And I agree,
+[1557.440 --> 1560.240] except the problem is, how do you get people to look at that? Who's going to want to look at that?
+[1560.240 --> 1564.160] Right? You can't force them to, and the thing that comes to mind for me is, anybody here know the movie
+[1564.160 --> 1569.040] A Clockwork Orange? Yeah.
You can all think about that scene in A Clockwork Orange where he's strapped in
+[1569.040 --> 1573.040] the theater, and their whole desire was to make him sit and watch, you know, these terrible videos
+[1573.120 --> 1579.360] trying to influence him. So there's a psychological aspect of this virtual becoming real. There's of course
+[1579.360 --> 1585.120] a visual aspect, or, you know, the sort of traditional visual appearance. Has anybody seen this?
+[1585.120 --> 1591.040] Can you just read it? I'll just do this, and this is very, very new. So, a little video here
+[1593.040 --> 1598.080] from Magic Leap. Let's see if this works. Nearly two years ago we had interesting progress in
+[1598.080 --> 1604.480] Mica's development. After focusing on realistic eye gaze, that is, her eye movement and gaze,
+[1604.800 --> 1610.960] we set up Mica on our current prototype. AI components were then added to track the user and
+[1610.960 --> 1615.840] look them in the eye. Additional AI elements were added for body language and posture.
+[1619.520 --> 1624.480] So, you know, you heard them say AI, gaze, body posture; of course, keep in mind, an important thing
+[1624.480 --> 1629.840] is she has to be aware of your body language and your posture and what you said in order for her
+[1629.840 --> 1635.040] to exhibit things that are responsive. So again, is there a role for broader
+[1636.320 --> 1642.480] allocentric sensing via IoT or other devices? How can they play a role in helping our agents?
+[1642.480 --> 1646.480] And this is AI, this is Magic Leap. So the idea is you put on your Magic Leap goggles and
+[1646.480 --> 1650.720] you're walking through your house and, you know, she appears over here and you talk to her. Well,
+[1650.800 --> 1655.200] for it to be effective, she probably has to know what I'm doing and where I am, you know,
+[1655.200 --> 1659.920] and those sorts of things. So again, that environmental allocentric sensing may play a role.
+[1660.560 --> 1666.240] There is a physical side in terms of taking the computer graphics out. So Kyle mentioned it,
+[1666.240 --> 1671.600] but there's some work I did years ago with Ramesh Raskar and others to develop something
+[1671.600 --> 1679.840] we ended up calling spatial augmented reality. This is in the 1990s. So the idea was to take the richness
+[1679.840 --> 1684.960] of computer graphics, and these were just stills, like Photoshop still images,
+[1684.960 --> 1689.440] carefully projected onto white wooden blocks. Suddenly the white wooden blocks look like they
+[1689.440 --> 1693.600] have color. The basic idea here was, when we see color on things, we see it because white light
+[1693.600 --> 1698.560] generally is reflecting off of something that's passing blue light back to me. So I see blue, and
+[1698.560 --> 1701.680] that's blocking the other wavelengths. But you can do the same thing by having colored light up there
+[1701.680 --> 1706.160] and having the object be white. You just transfer where the filtering is being done.
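(Aside: that last point, transferring where the filtering happens, is easy to verify numerically. In the toy sketch below, what reaches the eye is modeled as projected light multiplied channel-wise by surface reflectance; the RGB values are illustrative only.)

```python
# Toy numeric check of the spatial augmented reality principle: the eye
# receives (projected light) x (surface reflectance), so a white surface lit
# with "blue" light looks the same as a blue-painted surface lit with white.
import numpy as np

def radiance(projector_rgb, reflectance_rgb):
    return np.array(projector_rgb) * np.array(reflectance_rgb)

white_block = [1.0, 1.0, 1.0]
blue_paint = [0.1, 0.2, 0.9]

# Ordinary case: white light on a blue-painted block...
print(radiance([1, 1, 1], blue_paint))      # -> [0.1 0.2 0.9]
# ...versus SAR: "blue" light projected onto a white block.
print(radiance(blue_paint, white_block))    # -> [0.1 0.2 0.9], same appearance
```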
So we +[1706.160 --> 1710.240] worked on this for a while, and I think one of the things I realize now, looking back on +[1710.240 --> 1714.720] this, and it's really hard to articulate for those of you who've done any work in this area, +[1714.720 --> 1718.960] it's a very weird and special feeling to be in front of something, without any head-mounted display +[1718.960 --> 1723.680] on or anything, that is changing color and changing maybe apparent shape, and things are changing +[1723.680 --> 1728.240] about it right in front of you. It is really a compelling feeling, and I'm not saying that just because +[1728.240 --> 1733.280] I had something to do with it. It just feels different, and it's very hard to convey to somebody who's +[1733.280 --> 1739.920] never felt this or experienced it what it feels like. But I tell you, that combination of +[1739.920 --> 1748.320] nothing on me, out in the field, dynamic virtual things happening, is compelling. So we're using this now for +[1750.640 --> 1754.480] a lot of other things, people use the concept or the paradigm all over the world, we're doing a +[1755.120 --> 1759.840] patient simulator. So we've got projectors and cameras and things and touch sensing going on +[1759.920 --> 1763.920] in the patient here. So nurses, these are a couple of, uh, Garrett's and my colleagues, +[1764.880 --> 1770.320] Desiree and Mindy, who are pediatric nursing professors, and so they can walk up and touch +[1770.320 --> 1774.320] the patient and it's got touch sensing built into it, these curved surfaces. You can do things like +[1774.320 --> 1779.040] pull down on the lid, pull down on the eyelid, or take a limb, or do whatever you want to the kid. +[1779.040 --> 1783.040] It's got temperature control all over the body so we can change the temperature of his head, his +[1783.040 --> 1787.120] hands, things like that, provide pulse and breathing sounds and all sorts of things. +[1787.920 --> 1792.880] Happy to talk to anybody who wants to about that; there are all kinds of exciting things in there. +[1793.680 --> 1798.240] I don't know how many of you know about this. Has anybody in this room ever seen this work? Do you +[1798.240 --> 1804.560] know of this work? A few people. So this idea of passive haptics, and I don't know that Fred Brooks and +[1804.560 --> 1810.080] Brent Insko and Mary were the first to do this, but they're the first I'm aware of. So they went +[1810.080 --> 1815.600] out and very carefully measured Fred Brooks's kitchen in his house, very carefully modeled it in +[1815.600 --> 1820.800] a graphics model, all the surface properties, did everything they could at that time. This is +[1820.800 --> 1827.520] in the 1990s. And then they went and took a whole bunch of styrofoam blocks and some masonite wood and +[1828.160 --> 1833.840] created a crude physical representation of that same space, and would walk around in it, and so you +[1833.840 --> 1838.240] would feel, if you reached out and there was a virtual countertop, you would actually feel +[1838.240 --> 1841.520] something there, so that was sort of satisfying. I don't know that you could lean on it, but you +[1841.520 --> 1846.000] would feel it there. An interesting thing from this, one of the outcomes of the research, was that +[1846.640 --> 1852.960] the visual sense of shape, in the cases that they tested, seemed to capture or overcome your +[1852.960 --> 1857.200] tactile sense of shape.
Maybe not that surprising, because our hands are pretty low fidelity, but if +[1857.200 --> 1862.320] you see something that looks to you curved and you feel it and it's not curved, your sense, at +[1862.320 --> 1867.040] least in their studies, was that it's curved, even though it didn't feel like it. If you took the +[1867.040 --> 1871.840] head mount off and you felt it, you'd say, oh, that's a sharp edge. So that was kind of a cool idea. They +[1871.840 --> 1876.560] went on and did this, and I'm sure many of you know about the pit experiments. One was originally done +[1876.560 --> 1881.920] at UNC, again in the 1990s, but then they've been done all over the world. Some of you may not +[1881.920 --> 1887.760] know or may not have seen; how many in here have seen, or how many have done, a pit experiment, actually +[1887.760 --> 1892.240] done the demo? Okay, most everybody in here. So something that was really cool that they +[1892.240 --> 1896.560] did here was, for those of you who don't know, here's the virtual room. The subjects would walk into +[1896.560 --> 1900.160] the virtual room, they had a head-mounted display on. The door would open, they did some stuff. +[1900.160 --> 1904.080] They'd step out here, and you kind of see here there's an opening here, and all you can see down. That's +[1904.080 --> 1908.240] the pit. You can see down to the room down below here, and you're given a task: you had to walk out +[1908.240 --> 1912.240] on this little ledge right here, that little, like, diving board, which is what this is, and look down, +[1912.240 --> 1917.600] and you had to drop a ball onto a target. And the vection, the, you know, motion parallax, is +[1917.600 --> 1921.200] very powerful; as you're standing there and you move your head just a little bit, this whole downstairs +[1921.200 --> 1926.000] thing is moving a lot, and for most people it pretty much gets your attention. The thing they did that +[1926.720 --> 1933.040] pushed people over the edge, so to speak, part of the fun, was to add a plywood +[1934.080 --> 1938.240] structure around here that was maybe three centimeters tall or something like that, and so +[1938.240 --> 1943.200] people were walking on it. First of all, they stepped off carpet onto the wood, so it felt different to +[1943.200 --> 1948.160] their feet, and then when they got to the edge of it they could feel over the edge with their foot, +[1948.160 --> 1954.000] and it matched what they would see. So what they measured was heart rate and galvanic +[1954.000 --> 1958.320] skin response and several other things, and they could see some significant increases in people's +[1958.320 --> 1964.800] heart rate, and sweat, when they felt that edge. Again, bringing the physical +[1964.800 --> 1971.840] and the virtual together, and spreading it out into the real world a little bit. So, really +[1971.840 --> 1980.000] spreading it out: I don't know how many of you have seen these, but, you know, theme parks in general +[1980.080 --> 1985.680] are doing a lot of this, and I'll talk about that in a minute, but you can do this yourself now +[1985.680 --> 1990.960] as a consumer if you want to. When you use virtual reality at home, you're always trying +[1990.960 --> 1996.640] not to touch anything. Your real-world surroundings, the thinking goes, break the illusion of VR, +[1997.200 --> 2002.240] but it turns out that if you merge the two things, you end up with an experience that's far more +[2002.240 --> 2008.640] immersive.
At The Void, a quote "hyper-reality" facility, they're melding state-of-the-art VR tech +[2008.640 --> 2014.240] with real-world physicality, and in doing so they're leading a wave of location-based VR. +[2014.240 --> 2019.440] It's a little bit video game, a little bit laser tag, and feels more like an actual adventure +[2019.440 --> 2028.160] than anything else on the market. And we have fire. And bringing it back to magic: what is +[2028.160 --> 2032.800] the magic? There's magic in multiple respects here, but for all this tech, hyper-reality actually +[2032.800 --> 2039.600] uses some old-school magic theory. Virtual reality in its truest sense is a form of magic. +[2040.560 --> 2045.360] Magic is just creating a new reality for people using the tools that are available, whether that's +[2045.360 --> 2050.320] picking up a coin and using sleight of hand to make someone believe, just for a second, +[2050.320 --> 2055.840] that that coin could vanish. That's a little reality that you were able to create for them. VR is the +[2055.840 --> 2060.400] same way: we're trying to create new realities that people believe in, so it makes sense to use +[2060.400 --> 2065.680] magic principles to take that to a further extent and really get people embedded in an +[2065.680 --> 2071.280] immersive world. What I really love about that is that I live in Orlando, so you have Disney +[2071.280 --> 2077.440] World there, and Universal Studios, and one of my closest friends works for Disney. Those +[2078.560 --> 2082.240] teams of people who develop the experiences there, they don't care. They're not, +[2083.120 --> 2087.920] what's the word, they're not bigots about, oh, it's not VR, or it's not this, or it's not that. They don't +[2087.920 --> 2091.120] care. They'll use any trick, they'll use magic, they'll use deception, they'll use everything they +[2091.120 --> 2094.480] can to give you the experience they're trying to give to you, to make you afraid, to make you happy, to +[2094.480 --> 2099.440] make you, you know, smile, to make your kids happy. And so I really love that, and I'm conscious of the +[2099.440 --> 2103.760] fact that a lot of times when we do things in the academic community, we frown on, you know, these +[2103.760 --> 2107.840] sorts of tricks that people would play, or something that isn't intellectually deep or +[2107.840 --> 2112.400] something like that; it doesn't fit, it's not VR, it's something else, it doesn't belong. So again, +[2112.400 --> 2117.680] is there a cost to us doing that, or is there a place for people to do that and think about it and +[2117.680 --> 2122.480] pursue those particular things? Okay, so that was virtual becoming real. I want to cover a little +[2122.480 --> 2128.400] bit of real becoming virtual. So what do I mean by this? I mean, as I said earlier, virtual I think of +[2128.400 --> 2133.600] as mostly richness and flexibility of things that can happen. We can do that with head-mounted displays, +[2133.600 --> 2138.320] right, but where do you see this happening now in the real world? One +[2138.320 --> 2143.360] example is in robotics, so I've got a little short film here from Boston Dynamics. +[2178.240 --> 2181.840] That's good, because I was watching you guys when that happened. It's interesting, several people +[2181.840 --> 2185.440] were really going, oh, you know, and you feel bad, it's like you kicked a dog or something, right? +[2185.440 --> 2189.360] That's what it looks like. It's really cool; of course it's not, and then he's demonstrating how +[2189.360 --> 2195.280] stable it is. But, you know, it is interesting to me that this thing that's mechanical and physical +[2195.280 --> 2199.600] is in some sense becoming more virtual, in that it can do things that it couldn't do before. It can +[2199.600 --> 2205.280] climb and go places that, you know, 10 years ago even, we didn't think of robots doing those sorts of +[2205.920 --> 2209.360] things. You don't have to, you know, have a million dollars to buy something; you can go out to your +[2209.360 --> 2220.880] toy store and buy a Cozmo. +[2228.720 --> 2234.720] So you can start to think about, you know, how could this impact what I do, how could I make use of +[2234.720 --> 2239.200] these sorts of robots, is there a place, or what problems could we solve together if we thought about +[2239.200 --> 2243.520] this? And you can start thinking about adding, remember I showed you the spatial augmented reality a +[2243.520 --> 2253.280] little while ago, so what if you have this robotic thing but you can change the appearance of it +[2253.280 --> 2261.440] also? Disney's been doing it for a while. Robots here, so it's animation-based, again rich and +[2261.440 --> 2267.200] complex, right? Its body is also very capable, so it's a combination of those two things. Some of +[2267.200 --> 2273.200] you here, I know, have seen Sphero. Does anybody here remember? I remember, and +[2273.200 --> 2276.960] Christian, anybody else, when this came out? Today we want to introduce you to something really +[2276.960 --> 2281.360] special we've been working on, and it's called Sharky the Beaver. We've been working on augmented reality +[2281.360 --> 2286.000] technology for Sphero for over a year now. We're not talking about projected imagery, but +[2286.000 --> 2290.960] camera-based or color-based tracking. It's different: with this AR, this thing is moving around, you don't have to +[2290.960 --> 2294.960] use a printed-out marker. It's being tracked as it moves. A lot of augmented reality has +[2294.960 --> 2301.520] been all about introducing external markers into the scene; in the case of Sphero, this marker is +[2301.520 --> 2306.080] a robot. This AR marker isn't stationary, it isn't on a piece of paper that's stuck on the ground; this +[2306.080 --> 2310.720] AR marker can actually move around and drive around, and you can walk around your entire house and +[2310.720 --> 2316.000] play an augmented reality game. The reason why Sphero is so special is because we can put this character +[2316.000 --> 2320.800] in your living room and you can move him around and interact with him in the real world, and this +[2320.800 --> 2324.880] has just never been done before.
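A minimal sketch of the idea in that clip: virtual content anchored to a moving, tracked robot rather than to a fixed printed marker. The pose source below is a made-up stand-in (a robot driving a circle on the floor), not Sphero's actual API; the point is that the character's world-space anchor is re-derived from the robot's tracked pose every frame.

import math

def robot_pose(t):
    # Hypothetical stand-in for the tracker: the robot drives a circle.
    x, y = math.cos(t), math.sin(t)          # position in meters
    heading = t + math.pi / 2                # facing along the circle
    return x, y, heading

def character_anchor(t, forward_offset=0.15, height=0.20):
    # Place the virtual character slightly ahead of and above the robot.
    x, y, heading = robot_pose(t)
    cx = x + forward_offset * math.cos(heading)
    cy = y + forward_offset * math.sin(heading)
    return cx, cy, height, heading           # world-space anchor for rendering

for frame in range(3):                       # the "marker" moves; content follows
    print(character_anchor(frame / 30.0))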
So you think about it, and it's like, if I turn off the video and just +[2324.880 --> 2329.200] listen to what he said, I might tend to think, I can do that right now with AR, I can do it with anything else, +[2329.200 --> 2332.480] so why is this different? What's different about it being a robot? I'll give you one example. One +[2332.480 --> 2338.240] example is, my cat is lying here on the floor sleeping, and Sphero comes by. My cat's going to get +[2338.240 --> 2341.920] up and jump out of the way, which is going to change how I feel about this virtual thing +[2341.920 --> 2345.040] that's flying through the room, right? Instead of the virtual thing passing right through my cat, my cat's +[2345.040 --> 2349.120] going to, like, you know, freak out and scream and run somewhere else. I'm not trying to be +[2349.120 --> 2356.720] mean to my cat, but just saying, other things will react to this, and it will be, in a non-computer- +[2356.720 --> 2363.760] vision way, sensitive to, you know, floor height, bumps, things like that as it goes through. So again, +[2363.760 --> 2368.080] back to some work that Garrett and I and some others have been doing that I think is related to this, +[2368.080 --> 2372.400] thinking about, why does the physical thing matter? Does the physical thing matter? Does the +[2372.400 --> 2376.720] physical shape of the environment, or the relationship to the environment, matter? +[2376.720 --> 2380.320] So I'm not going to go deeply into this, but some recent experiments we did, and this is +[2380.320 --> 2386.800] being presented at ISMAR next week, looking at, so, Amazon Echo or Google Home or +[2386.800 --> 2393.040] Apple HomePod or whatever, right, so it's just a voice, like the movie Her, where the guy falls in +[2393.040 --> 2403.760] love with her, voiced by the actress Scarlett Johansson. So anyway, the idea here is you ask +[2403.760 --> 2411.520] your Echo, Alexa, please turn on the light, and so Alexa turns on the light, and you go, okay, fine, +[2411.520 --> 2417.520] it's great. Or, versus, you ask her to turn on the lamp and you see somebody there representing Alexa, +[2417.520 --> 2422.320] kind of like the Magic Leap video, so you see an assistant. And another condition we had was she +[2422.320 --> 2427.680] pulls out a device and she taps something, and the light responds. The third condition was +[2428.480 --> 2432.640] you see her and you say, please turn off the light, and she goes, okay, and she walks over to the light, +[2432.640 --> 2436.640] and, again using IoT, right, she appears to turn off the light, and she does, she turns off the light. +[2436.640 --> 2442.080] The light goes off. So the question is, how does that affect people? Well, it turns out it doesn't +[2442.080 --> 2445.280] matter so much when the light's in your room, but think about when the thing you ask for is +[2445.280 --> 2450.560] outside, you can't see it, so how do you feel about it then? Well, as it turns out, it makes a big +[2450.560 --> 2456.080] difference if she leaves the room, appears to leave the room, even out of sight, on purpose, right?
+[2456.080 --> 2460.480] The light's been turned off, but if it looks like she leaves to do something and comes back, you trust +[2460.480 --> 2466.640] that it's done more than you ever did before. So it matters in this case. You know, here's an example, +[2466.640 --> 2473.120] similar to that, where we did an experiment looking at privacy. So if it's just the voice agent, +[2473.120 --> 2478.640] right, there's a lot of privacy stuff in the news about, like, is Echo and others, you know, +[2478.640 --> 2482.880] right, collecting all the speech all the time, processing it off somewhere in the world and stealing +[2482.880 --> 2488.960] your life secrets? My life's boring anyway. So there's one thing if you don't see anything, +[2488.960 --> 2493.280] but what if you see her, and she says, okay, I'll give you a few minutes of privacy, +[2493.280 --> 2497.680] sure, and she appears to put on headphones, like she's sitting there listening to music. So now do you +[2497.680 --> 2502.320] believe more that she really is not listening? Or, she's there, and then she leaves the room, she says, +[2502.320 --> 2507.040] okay, I'm gonna leave, yell when you're ready and I'll come back. And it turns out, again, people are +[2507.040 --> 2512.160] more trusting that she's not listening when she physically appears to leave than when +[2512.160 --> 2518.720] she just says okay. So, and this may be a vestige of our real world now; maybe, you know, future generations +[2519.440 --> 2523.840] won't worry about it, but that's an example where it matters. So +[2524.880 --> 2534.080] this sort of melding of robots and AR and HCI and user interfaces, this is +[2534.080 --> 2537.760] something that I and many other people have been thinking about for a long time, but in particular I +[2537.760 --> 2543.360] was thinking about it a few years ago with a group of people, gosh, 18 years ago? Eight years +[2543.360 --> 2549.280] ago. Eight years ago, in a big grant proposal, a big team we put together +[2549.280 --> 2553.360] to work on this. And that's the advantage sometimes, for those of you who haven't written +[2553.360 --> 2558.160] grant proposals, students will be writing them: it's a pain in the neck, it's, I think, what +[2559.120 --> 2566.080] somebody called soul-sucking, which I agree, it is a soul-sucking drain. On the other hand, it forces +[2566.080 --> 2570.880] you to think about things, articulate them, write them down, and sometimes what comes out of that +[2570.880 --> 2575.120] lives on, you go somewhere else with it. And this has lived on. This is an effort that wasn't successful +[2575.120 --> 2580.080] from a funding standpoint, but a lot of things lived on from it. One of the coolest things +[2580.080 --> 2584.960] about it that I loved was, we were thinking about these primitives, sort of these, like, robotic +[2585.040 --> 2588.560] building blocks. You've probably seen many of these, I'll show you one here in a minute, but these +[2588.560 --> 2592.080] little robotic things that can reassemble, self-assemble. What if they could self-assemble, and what +[2592.080 --> 2596.880] if they could change their appearance using shader lamps or something else, in some way? And so that +[2596.880 --> 2601.600] was the kind of thing we were thinking about there. Many people since this time have started. +[2601.600 --> 2609.120] UPenn, these guys were some of the first. Here's a recent clip from them, I think. Our objective is to
+[2609.120 --> 2615.840] design self-assembling and self-reconfiguring robot systems. These are modular robots with the +[2615.840 --> 2622.960] ability of changing their geometry according to task, and this is exciting because a robot +[2622.960 --> 2630.160] designed for a single task has a fixed architecture, and that robot will perform the single task well, +[2630.160 --> 2635.120] but it will perform poorly on a different task in a different environment. If we do not know +[2635.200 --> 2640.640] ahead of time what the robot will have to do and when it will have to do it, it is better to consider +[2640.640 --> 2647.920] making modular robots that can attain whatever shape is needed for the manipulation, navigation, +[2647.920 --> 2653.120] or sensing needs of the task. So these, she's showing here, are static in color. Imagine +[2653.120 --> 2657.600] that they could change color, at least; imagine they could actually put imagery on the side. Imagine +[2657.600 --> 2661.520] what you could have. And there are people who've done, I forget the name of it, but there was a group +[2661.520 --> 2666.720] in Japan who did, like, lenticular displays on a cube, so it was an autostereo cube, and you +[2666.720 --> 2671.120] could look at it, so you couldn't see things outside the cube, but you could perceive things inside +[2671.120 --> 2675.280] the cube, and you could move it around. So a lot of interesting things maybe you could do here, all +[2675.280 --> 2679.920] changing the appearance of things. And think about, by the way, you need user interface elements, right? This +[2679.920 --> 2683.600] thing could reconfigure to whatever is appropriate at that time: it becomes a stick, it becomes a circle, +[2683.600 --> 2687.280] it becomes whatever it needs to be at that moment for you to do whatever it is you need to do. +[2688.240 --> 2692.160] Changing color, changing appearance, all that takes energy, it's view-dependent, it's all these +[2692.160 --> 2695.360] things. So what we were thinking about, and we're not the first people to think about this, there are +[2695.360 --> 2700.320] more groups starting to pick this up, but it turns out, this is again the advantage of trying to +[2700.320 --> 2707.120] get outside of your silo, I hate that cliche word, but again out of our tribe into other groups: +[2707.120 --> 2712.960] there are people in materials science who are studying how, for example, butterflies and cuttlefish +[2713.120 --> 2719.120] and other animals change their appearance with no apparent energy, or very low energy. And so in +[2719.120 --> 2726.560] the case of butterflies, it has to do with the nanostructure of the materials in their wings, +[2726.560 --> 2730.400] and that when you look at them very closely, they reflect or refract certain wavelengths of light, +[2730.400 --> 2734.560] and that's how we see color. So there are people actually developing nanomaterials that you can +[2734.560 --> 2739.520] apply a current to and change the nanostructure such that it changes the light properties, so you +[2739.520 --> 2744.480] basically can make a display. They're working on dynamic ones; the ones I know are +[2744.480 --> 2749.920] static, where you can apply a voltage and suddenly, here on the table, it's a different wood pattern, +[2749.920 --> 2755.200] and it is real, it is a real wood pattern, as real as wood. It's not real wood, but if I look at it with a +[2755.200 --> 2759.840] flashlight or move around it in different ways, you're going to see that the properties are really +[2760.880 --> 2766.640] there.
The nanostructure of this has changed so that it actually has +[2766.640 --> 2771.760] those colors; it's not an emissive display creating those colors. So I love that, and I, you know, +[2771.760 --> 2777.600] still wonder if there's something there to be done. All right, so now a little silliness here. +[2778.880 --> 2783.520] Anybody been to Orlando before, besides Garrett? A few others, I know. Anybody been to Disney World +[2783.520 --> 2790.000] or Universal there? Okay. Anybody seen the Harry Potter stuff? A couple. Fantastic. All right. +[2790.000 --> 2796.800] So I'm gonna show you just a little bit here. Think IoT, robotics, and think, +[2796.800 --> 2804.240] you know, making physical things virtual or virtual things real; either way, this is a really fun experience. +[2808.960 --> 2813.440] So you show up, you get this map, you've got to pay a lot of money, you get into the park, of course, +[2813.840 --> 2825.120] but you're in Diagon Alley. Go to Ollivanders wand shop, the whole ceremony, go through, +[2825.120 --> 2830.480] and they'll pick out a wand for you, just like in the movies, like Harry Potter. Lots of cool stuff +[2830.480 --> 2834.320] happens when you're in the wand shop. Now you have your wand. +[2844.400 --> 2849.840] So she's moving her wand, and it's causing things to happen. +[2853.440 --> 2858.000] So there's something sensing what she's doing, and then there's something actuated, and +[2858.000 --> 2866.080] something happens in the real world. I love that. Funny story about this: if you go to Universal Studios, +[2866.080 --> 2872.000] to Diagon Alley, you will see people walking around who look like employees, they look like the +[2872.080 --> 2877.520] Universal Studios people, but they're not. They're fanboys and girls who are so into Harry Potter, and +[2877.520 --> 2881.600] they have annual passes, and they show up there and they're like free docents. They walk around and +[2881.600 --> 2885.760] they'll help little kids, like, here's how you do it, you know, and I guess Universal just +[2885.760 --> 2892.720] lets them, you know, do their thing. But that's pretty fun. All right, so the last topic I want to +[2892.720 --> 2898.000] cover here is a little bit of a challenge, a little bit of thinking for us, and again, I don't +[2898.000 --> 2901.200] want you to think I think this is all new; other people have thought about some of this before, +[2901.200 --> 2905.200] but I'm thinking about it a lot more now, and so I want to bring it to your attention. So +[2906.320 --> 2911.680] one thing I've been thinking about is, you know, just VR, what we do right now, VR and AR, I'd call +[2911.680 --> 2917.840] it just pseudo-reality. I really feel like I should sing or something, we should all be swaying. +[2924.800 --> 2929.680] All right, so I would call that just regular, sort of, magic, "magic" in quotes, magic meaning +[2929.680 --> 2934.800] that we know it's not real, and it's just sort of, you know, a kind of thing that's happening. So now I +[2934.800 --> 2940.320] think, if your VR and AR things are somehow aware, and they're able to affect things, things can happen +[2940.320 --> 2945.840] in the world around you, doors can open and close, robots can come back together, I would call it a +[2945.840 --> 2951.920] little more real reality. And then super-reality is, now, if you have distributed sensing and control, +[2951.920 --> 2956.240] and you have all this information, like my calendar, my email, so all this comes together, that's +[2956.240 --> 2961.040] sensing,
right? It's not just sensing of physical things, but sensing of virtual things, of my life and +[2961.040 --> 2968.080] my data. Think about Jarvis, right, from Iron Man; that's +[2968.080 --> 2971.840] the way I think about it. So, right, my agent, it could be, tells me, Greg, your shoe's untied, you left your +[2971.840 --> 2976.320] window open, your mother-in-law's coming, I adjusted the thermostat for you. She'd be aware of all this +[2976.320 --> 2980.960] because it's in my calendar, and she's looking at my shoes, your fly's down, whatever it might be. +[2981.920 --> 2986.720] You know, she could even change the way I feel, right? She says, Greg, you're awfully handsome today, so +[2986.720 --> 2991.520] she made me feel better, right? So it's important for her to influence me socially. For me to believe that, +[2991.520 --> 2995.440] you know, all these other things have to happen: I have to be conscious that she can +[2995.440 --> 3000.640] affect things, that she's really aware of things, that she's smart; well, if she says that, then I know she's smart. +[3001.760 --> 3007.040] So, right, here's all the other stuff: my email, a coffee maker, of course everything that's going +[3007.040 --> 3014.560] on in my house, and, like I said, think of Jarvis. So where do we, you know, what should we be thinking +[3014.560 --> 3023.040] about? And again, Garrett and I and others have been thinking about this, and people in CS and engineering +[3023.040 --> 3028.640] and other groups have been thinking about what this could be like. So if you have distributed +[3028.640 --> 3033.120] things in this Internet of Things, some of them could be AR user-interface IoT objects, and +[3033.120 --> 3037.760] they could be just normal IoT objects, your coffee maker, things like that. So everything in my house, +[3037.760 --> 3042.560] every device I have, is going into the cloud, it's being analyzed, it's being analyzed locally, +[3042.560 --> 3048.720] for me, right? And then I've got all these user-interface-side things, appearance and interaction +[3048.720 --> 3057.280] with things over here. So things are continuously being analyzed, and things are +[3057.280 --> 3061.840] happening around me. My agent will follow me around the house, right? My agent's in my phone, my +[3061.840 --> 3070.960] agent's in my refrigerator, my agent's at work with me, and I can talk with them. So, convergence, I'm +[3070.960 --> 3076.560] going to talk about that word in a moment, but I think it's fun to just think about, you know, +[3076.560 --> 3081.120] IoT, robotics, all these things: is there something we could do together, is there something we're +[3081.120 --> 3088.080] missing because we're working in these individual communities, that we could have an opportunity to +[3088.560 --> 3094.640] play with, have fun, and do something meaningful, if we would only pay attention and work together on those? +[3095.440 --> 3101.680] So here's the start of my little bit of challenge. Some of you, or in the US, professors, would know +[3101.680 --> 3107.280] the National Science Foundation has this notion of what they call convergence, and this is something +[3107.280 --> 3111.920] very special. This is one of the top ten ideas, or the Big Ideas, they call it, +[3112.560 --> 3119.600] for where NSF, the National Science Foundation, is going to put money. If you read this, +[3119.600 --> 3124.240] what they're saying is, it's more than just multidisciplinary work, it's more than transdisciplinary work; +[3124.240 --> 3138.960] it
really means, in some sense, yeah, the music is going along with this, somebody should film me, +[3138.960 --> 3147.200] I'll do this, yeah. So it really is the idea that there's potential for forming a new +[3147.200 --> 3151.920] discipline, a new conference or new something that comes out of where there wasn't something +[3151.920 --> 3156.560] before, and people work together. So, ironically, it's sort of like you're creating a new community where +[3156.560 --> 3160.160] people are going to potentially be isolated again. So it's like, you know, we're going to take +[3160.160 --> 3162.080] something from here, something from here, we're going to put them together, and then they'll go off and do +[3162.080 --> 3166.400] their own thing and they'll be isolated. So, like I said, I'm not saying that that's necessarily bad, +[3166.400 --> 3171.520] that people work together in particular areas. I just think it's interesting to think about it the +[3171.520 --> 3175.200] way I think about it, because I think a lot about a lot of things in signal processing terms: +[3175.200 --> 3179.840] we could get stuck in local minima, right? And so a lot of what we do, our work, we're sort of focused +[3179.840 --> 3184.880] in on this one area, and if we don't take the opportunity to occasionally take excursions out +[3184.880 --> 3190.240] into these other disciplines or other areas, we sometimes miss opportunities to look at something +[3190.240 --> 3196.000] interesting or do something compelling. So I'm going to challenge you guys, and I think this is +[3196.080 --> 3199.920] already starting, it does happen, and it happens naturally through a lot of things we do, +[3199.920 --> 3206.400] but to take those excursions, especially for students: try and think about how you can go see +[3206.400 --> 3212.000] other conferences, go visit other talks that are not in your area, and see if you can learn something. +[3212.000 --> 3216.400] So these happen, this happens a lot: some of you might know about Dagstuhl seminars here in +[3216.400 --> 3222.960] Germany, and Shonan meetings in Japan, and other workshops. These are +[3222.960 --> 3226.560] like week-long sort of retreats where researchers go and they think about things and they talk +[3226.560 --> 3232.320] about things, and those of you who've been in them and organized them know that it is really +[3232.320 --> 3238.640] hard, but it's really important, to articulate and disseminate the ideas that you have. So we +[3238.640 --> 3243.600] all owe it to everyone else, since those of us who go to those events are fortunate enough to be +[3244.320 --> 3248.960] chosen to be able to go to those events, even though we pay for them, we really owe it to the +[3248.960 --> 3255.840] rest of the community to share what it is we think about. So then, co-locating conferences, I think, +[3255.840 --> 3263.040] really helps. So thumbs up for SUI, UIST, ISMAR, and AWE; it's awesome that +[3263.040 --> 3266.160] they're, you know, it's not perfect, there is no perfect, right? If they were on top of each other, +[3266.160 --> 3270.400] it'd be a bummer; if they're spread out, it's a bummer. So there is no perfect, there's just something +[3270.400 --> 3276.240] that is an attempt, and it works. Joint registrations are hard; financially, it's hard to cross money +[3276.320 --> 3281.680] between places, but I think it's worth it, we really need to work on it. This keynote is an +[3281.680 --> 3286.800] example. Complimentary
joint sessions: so people from UIST were invited to come here, so I love that. +[3288.480 --> 3293.440] Last thing is just on an individual basis. You know, I was thinking about, I'm glad Steve's here, +[3293.440 --> 3297.280] because I didn't see him before, and I was thinking, Steve is one of the, Steve Feiner, Professor Steve +[3297.280 --> 3301.600] Feiner from Columbia, for those of you who don't know him, is one of the people I know who's, like, at +[3301.600 --> 3307.520] every place I go. He's involved in many communities that I'm not involved in, and so he serves as a +[3307.520 --> 3313.680] bridge between them, and he doesn't just attend them, Steve is involved in the leadership of many of +[3313.680 --> 3319.120] these different conferences. So he will cross-pollinate ideas and thinking across +[3319.120 --> 3324.640] these organizations, and I think that's really, really important. I do wonder about, and think +[3324.640 --> 3331.040] about, whether conferences should make a more proactive effort to reach out to other communities and +[3331.040 --> 3335.840] bring one or two people to their conference, to be a part of it and experience it and learn and see +[3335.840 --> 3341.600] if there's a potential synergy. Next week at SIGGRAPH, Steve and Christian and some others know, we have +[3342.240 --> 3346.640] somebody from SIGCHI, who's the vice president of conferences or something, who is going to come and +[3346.640 --> 3352.800] meet with us and talk to some people. So it's an example, again, of that. And with that, I +[3352.800 --> 3360.720] am going to shut up, and happy to sing a song or take questions, whatever you guys like, but I'm finished. +[3361.040 --> 3385.520] Okay, thank you. Thank you for the talk, it was very inspiring. So do we have any questions? +[3386.080 --> 3395.200] And it's okay if you don't; I'll be around, we can talk over here, or hang out. Steve? +[3397.360 --> 3403.600] Great talk, Greg. This is less of a question, more of a kind of a comment. Okay. And that's that, if +[3403.600 --> 3410.800] you think back to the days of the 1965 ultimate display talk that Ivan Sutherland gave, +[3410.960 --> 3416.000] a lot of this resonates with that, except with the ultimate display talk, besides the fact that +[3416.000 --> 3421.440] it was only one room, it was indoors, only one person, there were no virtual people, but there were +[3421.440 --> 3425.760] chairs you could sit in, and bullets that could kill you, and handcuffs that could confine you. But he +[3425.760 --> 3432.000] had no idea how to do any of that stuff; it was just, all this is what the ultimate display would be. +[3432.000 --> 3436.800] And then years later, of course, there's a head-worn display, and he did at least a bunch +[3436.800 --> 3443.360] of very clunky but really cool 1960s stuff. So one thing that really impresses me here is that +[3443.360 --> 3450.160] here we are, of course, coming up, in months, on 50 years later, and we actually kind of know how to +[3450.160 --> 3456.640] do some of this stuff, and so it's not just this crazy wild dream, right, but it's really something that, +[3456.640 --> 3461.680] together, lots of folks in lots of different disciplines can turn into a reality at some time. +[3461.680 --> 3465.760] Absolutely, absolutely. And I think that's one of the things where, again, for me, the evolution of +[3465.760 --> 3470.240] robotics and IoT things, because it's happening independent of me, at least, I can't speak for +[3470.240 --> 3474.320] everyone, and so, you know, the fact that these
things are coming to fruition, they do play into +[3474.320 --> 3478.800] that vision, and nobody could do those at that time, and you're right, now we have the +[3478.800 --> 3482.480] luxury, maybe, of starting to be able to do or think about some of these things, and I think that's +[3482.480 --> 3491.280] just really cool. So yeah, it's a good observation. You guys get that? I'll just take this over for +[3491.280 --> 3494.160] David; apparently he was just going to sit down, I thought he was trying to take over. +[3500.000 --> 3504.160] I wanted to ask you a question, Greg, as we were talking about sort of the social contract stuff, and +[3505.040 --> 3510.160] Alex's work, the question I asked earlier about that. When you showed the, you know, the virtual +[3510.160 --> 3515.360] people sort of responding to you, and her leaving the room and stuff, that was really +[3515.760 --> 3521.840] an amazing example of this crossover, where we've got this device that can sense us. But now do +[3521.840 --> 3527.680] we expect the same things of the virtual person? What kind of contract do we have with them, or other +[3527.680 --> 3533.680] people, right? And do we treat them similarly? So, I didn't, you know, you mentioned it, but +[3533.680 --> 3539.760] Garrett and I and others, and Jeremy at Stanford, have done studies; for lack of a better term, I +[3539.840 --> 3545.200] call them, we call them, afterglow-related studies. So if I see a virtual human sitting in this +[3545.200 --> 3552.800] chair, and I'm talking with her, and then I take off my head-mounted display, for example, I just take it off, +[3552.800 --> 3558.880] do I behave now as if she's still there? Or if I shut it down, if I intentionally do something in my +[3558.880 --> 3563.440] head-mounted display to, say, turn it off, do I behave as if she's still there? So all these things that +[3563.440 --> 3569.360] have to do with not just how she reacts to me, like you're saying, Kyle, but how I feel about her: do I +[3569.360 --> 3575.440] avoid her, do I walk around her when I leave? And surprisingly enough, people do. And so, you know, +[3575.440 --> 3580.800] it's like, you think about it, do people subconsciously have, like, a separate dimension in which these +[3580.800 --> 3585.280] people exist, and they're still here, and just because I've taken off these glasses, it's like heat- +[3585.280 --> 3589.520] sensing glasses, like, I just can't see them right now, but I put my glasses back on, they're still +[3589.520 --> 3595.040] there, take them off... So I think there's, I'm not a social psychologist, which again is why I love +[3595.680 --> 3601.200] stepping outside of my area; working with someone like Jeremy, it makes it so much fun to learn. +[3602.480 --> 3610.240] Frank. Thanks, Greg, for the excellent talk, I really enjoyed it. Is it on? Yeah. Also a question +[3610.240 --> 3614.240] related to other fields which might come into our field. I observed in the last two years or so +[3614.240 --> 3618.640] that at every SIGGRAPH conference, every computer vision conference, there was a lot of machine learning +[3618.640 --> 3623.360] and deep learning for everything they developed. So far I haven't seen so much machine +[3623.360 --> 3628.000] learning in our field; there are rarely any papers within the last two years, at IEEE VR or VRST, which +[3628.000 --> 3633.120] made use of machine learning. So I want to ask you, what do you think would be interesting +[3633.120 --> 3637.120] applications for machine learning, and not virtual agents, because that's
pretty obvious, but what +[3637.120 --> 3642.080] are other options where we could use machine learning? I've got to tell you, before I answer the question, +[3642.080 --> 3645.440] because I have an answer for you, but I saw a funny article somewhere once, it was almost a +[3645.440 --> 3650.080] cartoon, where statisticians who were sort of bitter about machine learning getting +[3650.080 --> 3653.520] all the glory and getting all the money now, there was this little table where they said, like, +[3653.520 --> 3658.160] things like, you know, who gets credit for inventing certain things, and it's, like, the machine learning +[3658.160 --> 3661.920] people. It was basically sour grapes from the statisticians. It was kind of funny, it was like, you know, +[3661.920 --> 3665.360] who gets all the research funding, who has all the conferences? It's the machine learning people. +[3665.360 --> 3670.320] And we've been doing this stuff for 30 years and nobody cares, you know. So, you know, I just feel like +[3670.320 --> 3675.120] there are so many things. I mentioned earlier the spatial user interface that's adapting to whatever +[3675.120 --> 3679.840] I need at that moment: it needs to know what I'm doing, what I need. I tell people, it's like what I'd +[3680.880 --> 3687.760] expect from an agent: one of my sons has this, in my experience rare, ability to both +[3687.760 --> 3691.520] pay attention and be alert and be attuned to what's happening. So if I'm doing something, an +[3691.520 --> 3695.200] example comes to mind, I'm changing a light switch in my house, so, you know, there's some +[3695.200 --> 3699.040] electrical stuff there, I'm paying attention to it, I'm trying not to shock myself, I think I've turned +[3699.040 --> 3705.840] off the electricity, and I need a tool, and I kind of put my hand down, and I say, Carlos, could you get... +[3706.160 --> 3711.360] and he's already got the thing I need. He knows from the context, he knows what I'm doing, he knows I need to +[3711.360 --> 3716.640] remove this screw, so he's provided the tool for me, right there, before I've even finished saying it. +[3716.640 --> 3723.360] It is liberating, it's so amazing when my assistant is equally proactive and prescient +[3723.360 --> 3728.800] in that way. So machine learning, I think, at least for context understanding, for recognizing, +[3730.080 --> 3734.560] and it may be as simple as recognizing gestures. You know, one of the things that Garrett and I have +[3734.560 --> 3741.280] talked about, in the awareness, the AR avatar thing, is, think about, with Amazon Echo or +[3741.280 --> 3747.200] something like that right now, how clunky it is: you have to name every lamp, right? You get it, +[3747.200 --> 3751.280] the first day you get the Echo home, and you program it, and you set it up for this, and it says you +[3751.280 --> 3755.840] have to give it a name, and so you go, lamp. And then, you know, you go, okay, well, this is pretty cool, I'm +[3755.840 --> 3759.680] gonna put this on this lamp too, and you get asked to name it, and you're like, oh, I don't know, lamp two, +[3759.680 --> 3763.520] you know. We're not very creative, and we don't think it through, and we don't plan it out, and +[3763.520 --> 3768.400] so now you've got to remember, it's like, that's lamp two, that's Greg's lamp, this one's floor lamp, +[3768.400 --> 3772.000] you know, things are getting weird names and everything. But shouldn't it be the case that you should be able to +[3772.000 --> 3776.240] just say, turn off that lamp, and just gesture and point toward it, and have it turn off, kind of like the Xbox but with my hand? I shouldn't have to do all that.
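A minimal sketch of that kind of pointing-based reference resolution, under simple assumptions: the room's devices are registered at known 3D positions, and a tracked hand gives a pointing ray; "that lamp" is the device whose direction best aligns with the ray within a tolerance. The device names, positions, and threshold below are made up for illustration.

import numpy as np

# Hypothetical registered device positions in room coordinates (meters).
DEVICES = {
    "floor lamp": np.array([2.0, 0.0, 1.2]),
    "desk lamp":  np.array([0.5, 3.0, 0.9]),
    "tv":         np.array([4.0, 1.5, 1.0]),
}

def resolve_pointing(origin, direction, max_angle_deg=15.0):
    # Return the device the user points at, or None if nothing is close enough.
    direction = direction / np.linalg.norm(direction)
    best, best_angle = None, max_angle_deg
    for name, pos in DEVICES.items():
        to_dev = pos - origin
        to_dev = to_dev / np.linalg.norm(to_dev)
        angle = np.degrees(np.arccos(np.clip(direction @ to_dev, -1.0, 1.0)))
        if angle < best_angle:               # keep the best angular match
            best, best_angle = name, angle
    return best

# "Turn off that lamp" plus a pointing gesture from shoulder height.
print(resolve_pointing(origin=np.array([0.0, 0.0, 1.4]),
                       direction=np.array([2.0, 0.0, -0.2])))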
+[3776.240 --> 3781.120] So maybe in machine learning, I think, there's a +[3781.120 --> 3788.000] lot to be done about context, about what's going on, and about the state of the real physical environment, +[3788.000 --> 3796.720] including my body. +[3799.200 --> 3805.360] Yeah, thank you so much for the talk. I also want to point out how humble you are when +[3805.360 --> 3813.360] mentioning spatial AR, which basically influenced the whole field of projection mapping. And, ironically, +[3813.440 --> 3818.800] in Berlin this week we have the Festival of Lights, so tons of buildings are... +[3818.800 --> 3826.960] Yeah, it might be on today. Has anybody been to Lyon? I was there for the Fête des Lumières in +[3826.960 --> 3833.280] Lyon, that is amazing too. I didn't know they had that here. Yeah, so I think it might be on tonight +[3833.280 --> 3839.680] still. Yeah, wow, okay. Sorry, Alex. So for me it was, like, a 30-minute walk, which is +[3839.680 --> 3847.520] building after building. So thank you. Thank you. So, I was a little bit curious: there are a lot +[3847.520 --> 3854.880] of great examples from simulation, and, I mean, you're involved in +[3854.880 --> 3861.920] so many different fields, and I was especially interested in the medical applications, and how +[3861.920 --> 3867.360] you see some of these techniques being applied, not only in training, but actually how they could +[3867.760 --> 3877.200] improve medical practice, like surgery, etc., in real time, if that makes sense. Yeah, so, I'm not +[3877.200 --> 3881.360] sure about surgery in real time, I haven't thought about that too much, but I'll think about it. But +[3881.360 --> 3886.640] in terms of training, I've thought a lot about it. So there's a lot of, it turns out there's great +[3886.640 --> 3892.640] evidence that, let me back up: in the US at least, and I think this is pretty true worldwide, +[3893.360 --> 3896.720] physicians, that is, doctors, they go through schooling, and at some point they have an +[3896.720 --> 3902.000] internship, and then they're actually practicing in a real hospital or a real setting, with +[3902.000 --> 3904.640] somebody looking over their shoulders. So they're really working with somebody, but they're +[3904.640 --> 3909.920] actually doing stuff. Nurses don't have that, you know, for whatever reason, historically, and +[3909.920 --> 3913.600] there are many more nurses, and nurses show up at a hospital the first day they get hired, +[3913.600 --> 3917.520] and hospitals don't trust them, because they haven't had that experience. So their +[3917.520 --> 3922.480] internship, in a sense, begins right then. Of course, we don't want them practicing on us, right? Nobody +[3922.480 --> 3927.280] wants that, or practicing on our kids or our family members or something like that. So it's a +[3927.280 --> 3932.720] difficult situation. So there's lots of investment in training nurses in particular, and +[3933.600 --> 3938.720] there's lots of evidence that the realism of the experiential part of it plays a big role +[3938.720 --> 3944.640] in how much of the skills they retain, and therefore the outcomes once they start practicing. +[3944.640 --> 3948.480] And one of the things that I've been told by nurses and physicians that I've worked with over the +[3948.480 --> 3952.800] years is that there are many of the technical things they can train: you know, you can practice, you +[3952.800 --> 3956.320] can have
rubber arms, you can practice doing an IV or a central line, and you can do that over and +[3956.320 --> 3961.760] over again, you make mistakes and it's okay. But it's learning about talking to other people, about +[3961.760 --> 3966.400] the chaos of, say, an emergency room, where you're on call, you show up, and another doctor shows +[3966.400 --> 3970.160] up, and another nurse, and you don't know each other, and I don't know what your specialty is, I don't +[3970.160 --> 3974.240] know if I can trust you, I don't know whether you're gonna sit back and let me lead, whether +[3974.240 --> 3977.840] you're gonna take over. And so there are all these dynamics, and then there's chaos, you know, the kid's +[3977.840 --> 3982.160] screaming because it was a car accident, somebody, you know, the mother's upset or the father's upset, +[3982.160 --> 3985.680] and so there's all this stuff that people just aren't used to dealing with. And the same thing +[3985.680 --> 3990.240] happens within the military, training medics, for example, who have to deal with somebody getting +[3990.240 --> 3994.720] shot or something for the first time, and, you know, you don't want them to freeze out in the field, +[3994.720 --> 3999.840] and so you want them to get used to that. So, all of this stuff we talked about, and I showed the +[3999.840 --> 4006.640] picture there of the sort of trauma center: Garrett and I have an experiment, a project, going on, +[4006.640 --> 4011.040] where we're working with, actually, we're very fortunate to have a relationship with the UCF +[4011.040 --> 4017.040] police department at our university. It's a big university, it's 68,000 students, so it's +[4017.040 --> 4022.080] very big, the second-largest university in the US, maybe first now, so a very big police force, but very +[4022.080 --> 4027.120] forward-thinking. And so we're doing some work with them to look at the use of augmented reality +[4027.920 --> 4034.160] in our student union, which is this great big hall where students gather. So police, unlike the +[4034.160 --> 4042.000] military, have this situation where they know, quote-unquote, that this could be a target for some crazy +[4042.000 --> 4046.960] person or people someday, shopping malls, things like that, because a lot of people are concentrated in one place. +[4046.960 --> 4052.960] So what we're looking at is, how could the police do not just real-time simulations of a +[4052.960 --> 4058.400] bad event happening in the student union, so having virtual innocent people, having +[4059.360 --> 4065.200] the paramedics come in, and all sorts of stuff happen, but then also what I, for lack of a better +[4065.200 --> 4069.520] term, call pre-briefing, which is, they go to the union, and police officers just talk about that +[4069.520 --> 4074.240] structure and say, you know, like, imagine there's a guy with a gun over here: where would +[4074.240 --> 4078.560] you go, where would you take cover? Where they can look around, and they don't just learn +[4078.560 --> 4082.960] specifics of that place, but they learn about the types of things, where you would take cover, tables +[4083.040 --> 4089.040] are better than this, and something else is worse than that. So AR, the ability to sense in this +[4089.040 --> 4094.960] big area stuff that's happening, the ability to create specialized effects around in the environment, +[4094.960 --> 4100.800] all of that can be helpful in terms of increasing the realism of the experience, which lots of research +[4100.800 --> 4105.360] has shown
increases the effectiveness of those people in medical or military or teaching or +[4105.360 --> 4111.520] anything else they do after that. I hope that kind of answers the first question. Yeah, I mean, +[4111.600 --> 4116.720] actually, I was also curious a little bit more: so this is training before the actual +[4117.680 --> 4124.800] event, or before the skill is needed, and maybe it's easier to talk about that from a medical standpoint, but +[4126.320 --> 4134.400] for the nurse that needs to exercise these skills, is there a way, as you see it, to shorten +[4134.400 --> 4141.040] that training loop by basically having the technology as a real-time feedback loop while it's +[4141.040 --> 4146.880] happening, like a digital support system, so you're training in situ? That's a great, great idea. +[4148.560 --> 4152.720] Boy, I mean, it'd be interesting to talk to Nassir Navab about this a little bit; he +[4152.720 --> 4157.600] would probably have a better sense of it. For those of you who don't know, he's a professor at TU +[4157.600 --> 4162.640] Munich, who's also at Johns Hopkins University in the US, he goes back and forth teaching +[4162.640 --> 4170.000] at both places, and his focus area is AR in medicine. So it happens to be AR, which is not VR, which +[4170.000 --> 4176.800] is not this, which is not that. Two things come to mind. One is, it's not exactly what you're asking, but +[4176.800 --> 4182.800] I'll say it anyway: Garrett and I have another project we're working on, I had forgotten +[4182.800 --> 4188.080] about it. Do you guys know what moulage is, the word moulage? A French word, it means, like, fake wound +[4188.080 --> 4193.840] or fake injury. So if you put ketchup on me and something else, you can make it look like I'm +[4193.840 --> 4197.920] hurt, bleeding or something; that would be called moulage. It's this makeup that essentially makes +[4197.920 --> 4204.240] me look injured. So one of the things we're working on, I assume you either know or could +[4204.240 --> 4210.880] imagine what I mean when I say fake barf: a little rubberized pile of something, that could be +[4210.880 --> 4217.200] like a wound, that you put on your arm, and that has sensors and actuators inside it, so that if +[4217.200 --> 4222.800] I look down with my AR headset, I can lock on to that object, because +[4222.800 --> 4227.280] it's got markers on it, active or passive, and so I can see blood coming from it. If I apply +[4227.280 --> 4231.040] pressure to it, it can sense that I'm applying the pressure, so it can reduce the blood flow, +[4231.040 --> 4235.600] which changes the physiology of the patient and changes the visual aspects of it. So that's +[4235.600 --> 4239.680] bringing some of those things into the training. The real-time thing: the only experience I have, Alex, +[4239.680 --> 4245.840] that was a surprising experience to me, and maybe the biggest challenge for anything +[4245.840 --> 4252.480] related to your last question, is trust. And so I assume there are certain things that +[4253.360 --> 4258.000] are less critical and I can trust them more, but I did an experiment with Henry Fuchs and some +[4258.000 --> 4262.480] others years ago on telepresence for medical people. So it wasn't that the system was going +[4262.480 --> 4266.400] to do anything; it was a real human, it was 3D telepresence, was the idea. You'd +[4266.400 --> 4270.960] have multiple cameras watching while, if you're a paramedic, you're doing something really sensitive,
+[4270.960 --> 4274.160] it happened to be a cricothyrotomy, you're putting an airway in someone's throat, and it's something +[4274.160 --> 4278.480] that most paramedics and EMTs in the US don't do very often, and they probably don't have very much +[4278.480 --> 4283.760] training in, and if they don't do it, the person's going to die right there, and then they're +[4283.760 --> 4286.000] afraid to do it, because they're afraid they're going to kill the person, because they don't know how +[4286.000 --> 4292.800] to do it. So the idea was you could have a 3D viewpoint, live reconstructed 3D: a person back +[4292.800 --> 4296.800] at the hospital, a nurse or a physician, could be watching and coaching, and they'd say, that's right, it's okay, +[4296.800 --> 4300.320] that's expected, now's the time to do it; first of all, second of all, now put your hand there, do you feel +[4300.320 --> 4305.840] it, you're good. So kind of comforting you. And we thought, oh, this is going to be great. It turns out +[4305.920 --> 4313.040] the paramedics in the study we did, 100 paramedics, by and large didn't like the system, because +[4313.040 --> 4317.360] they were afraid, they wouldn't trust it. It wasn't that they didn't trust the doctors or the nurses; they didn't +[4317.360 --> 4323.760] trust that the system was providing the context for them, for what it was they were doing. So when +[4323.760 --> 4327.200] we talked to them, they said things like, you know, the patient's life is +[4327.200 --> 4331.120] in my hands, that's the way they think about it, this patient's going to die if I don't do the right +[4331.840 --> 4336.880] thing; you're a thousand miles away telling me what I should do, but you really don't understand +[4336.880 --> 4341.600] the whole situation, or I'm afraid you don't, and so I'm probably going to do what I think is the +[4341.600 --> 4347.360] right thing in the end anyway, and not listen to you. So I wonder about those real-time aids when it +[4347.360 --> 4351.680] comes to something, it depends on the medical circumstances, if it's something really +[4351.680 --> 4358.720] critical. I do know people use it in situ, obviously. I have a friend who's a surgeon at the children's +[4358.720 --> 4363.360] hospital in Orlando, and he's pretty forward-thinking; they do a lot of AR-based surgery, and they do +[4363.360 --> 4368.880] planning, everything, of the head. He's a maxillofacial, cranial surgeon, so he reconstructs +[4368.880 --> 4376.160] head bone, and so it was really hard for me to watch some of it, even in pictures. But they'll +[4376.160 --> 4381.520] use it in real time, and they'll watch using AR, essentially, which amazed me, because, again, that's +[4381.520 --> 4385.760] sort of one of these things that's, like, totally outside: it wouldn't surprise me to go to ISMAR and +[4385.760 --> 4389.600] see that presented as a paper, but to go to this hospital and see these people just using it, +[4389.600 --> 4394.400] it's like, wow, it has arrived or something, you know, people are actually using it. And they're +[4394.400 --> 4399.200] doing things like, he was telling me about, like, if they have to drill and they don't want to drill too far, +[4399.200 --> 4402.480] I mean, I have this problem when I'm drilling in my wall in my house, and I think it's probably more +[4402.480 --> 4408.160] important in a kid's head that you stop drilling at a certain depth, so they'll be using AR and different +[4408.160 --> 4413.520] sensing of the drill and all sorts of things to help them go just as deep as they need to,
but no deeper. +[4414.240 --> 4420.000] That's the closest I'm aware of; someone else in the room might have some other knowledge. +[4421.280 --> 4422.080] Yeah, thank you. +[4425.840 --> 4431.280] There's a little mundane thing: Gerd and I got approached by Orlando Fire Department +[4431.280 --> 4435.200] paramedics, the chief of all paramedics, who's concerned about, and this is really sort of +[4435.920 --> 4440.800] an IoT-ish mundane thing, but they have boxes with medicine in them, and they keep them in the +[4440.800 --> 4445.760] ambulance all day long, and certain medicines have to remain at a certain temperature, and they don't +[4445.760 --> 4450.800] know. This is where you learn about the real world; so many things are eye-opening. You know, +[4450.800 --> 4453.760] it's like, they have these boxes, and they have to know if the temperature goes above a certain +[4454.400 --> 4458.560] level; then they throw the medicine away and they replace it. And of course we're thinking, oh, just put +[4458.560 --> 4462.640] a wireless thermometer in there, you know, and they're thinking, we have no money and we don't +[4462.640 --> 4468.160] know how to do this. And so it's like, you know, how do we do this without money? So it's +[4468.160 --> 4473.920] eye-opening sometimes, and I've seen the same thing with the military, where you go in +[4473.920 --> 4478.480] and you have this great idea for something big, and I'm sure Steve and Tobias and we all see this, +[4479.280 --> 4486.480] and they're worried about something very small, very mundane. So that's what getting out of your +[4486.480 --> 4489.040] field sometimes takes: you have to find the right person, the right person who's under the +[4489.040 --> 4495.760] right circumstances, who's both able and willing to spend time with you and give up some of +[4495.760 --> 4502.560] their own desires and be flexible and be patient and invest in that relationship. +[4513.280 --> 4516.480] I've talked you to death, all right, and the music stopped. +[4516.720 --> 4526.560] Like I said, I'm here, you know, I'll leave tomorrow, but I'm here tonight, so I'm very happy to chat if +[4526.560 --> 4533.920] anybody wants to at some point; I'll be around. Any more questions? All right, let's all give a round of applause to Greg Welch. +[4541.120 --> 4542.000] Thank you. Thank you. diff --git a/transcript/allocentric_cM4ISxZYLBs.txt b/transcript/allocentric_cM4ISxZYLBs.txt new file mode 100644 index 0000000000000000000000000000000000000000..3ed43c7f69fddd448da497e809a37a610f50b0ad --- /dev/null +++ b/transcript/allocentric_cM4ISxZYLBs.txt @@ -0,0 +1,1352 @@ +[348.000 --> 352.000] I +[352.000 --> 356.000] have +[356.000 --> 359.960] on their face, which I think is asking a slightly different question. +[359.960 --> 363.080] And so I thought today I would try to answer that, +[363.080 --> 364.880] like, you know, at least at the beginning of this talk, +[364.880 --> 369.600] try to give you an idea of why my lab does the odd things that we do.
+[369.600 --> 372.240] And then to give you one example, like,
+[372.240 --> 375.960] in depth, of what we actually do in practice,
+[375.960 --> 378.000] which will unfortunately not involve brain decoding,
+[378.000 --> 379.880] but I'm happy to talk to you about that.
+[379.880 --> 382.360] So first, the introduction and motivation.
+[382.360 --> 385.160] You know, the brain is just a deep network,
+[385.160 --> 387.640] but it's a hideously complicated deep network
+[387.640 --> 390.720] that is very, very different from the deep networks
+[390.720 --> 393.200] you guys use in computer science.
+[393.200 --> 397.080] The brain has about 80 billion neurons; about 60 billion of those
+[397.080 --> 398.800] are actually back in the cerebellum,
+[398.800 --> 401.400] which is an anatomically very old structure
+[401.400 --> 404.960] that is, you might argue, relatively inefficient.
+[404.960 --> 408.040] And therefore, the brain kind of evolved
+[408.040 --> 410.320] a new method of building neural networks,
+[410.320 --> 415.040] which is the method that my cerebellum-studying student is, like,
+[415.160 --> 416.280] grimacing at me about.
+[416.280 --> 418.800] Look, we can argue about this forever, Amanda.
+[418.800 --> 420.160] 80 billion neurons.
+[422.160 --> 425.720] Anyway, so there's another alternative,
+[425.720 --> 428.800] another alternative that the brain has evolved,
+[428.800 --> 430.080] which is the cerebral cortex,
+[430.080 --> 431.560] where there are about 20 billion neurons
+[431.560 --> 433.960] wired together in these very complicated networks;
+[433.960 --> 435.680] each one of those neurons is communicating
+[435.680 --> 438.000] with 10,000 other neurons.
+[438.000 --> 440.920] The cerebral cortex, the outer part of the brain
+[440.920 --> 443.200] that you can see when you sort of just, you know,
+[443.200 --> 445.240] see a picture of the side of the brain,
+[445.240 --> 449.800] consists of about 300 different areas and modules.
+[449.800 --> 452.240] This network is very highly interconnected.
+[452.240 --> 455.960] If you record from a single brain area,
+[455.960 --> 458.000] it has about a 50% chance of being connected
+[458.000 --> 459.600] with every other brain area.
+[459.600 --> 461.160] And every one of those connections,
+[461.160 --> 462.800] every one of those feedforward connections,
+[462.800 --> 465.000] has a concomitant feedback connection.
+[465.000 --> 468.320] So the whole brain is this complicated series of loops.
+[468.320 --> 471.400] Everything, all information, is constantly looping around,
+[471.400 --> 474.960] and the time scales of these loops are slow,
+[474.960 --> 477.440] because, you know, we don't have electrical wires
+[477.440 --> 479.000] in the brain; we've got essentially
+[479.000 --> 481.120] electrochemical processes happening,
+[481.120 --> 482.560] and those are very slow.
+[482.560 --> 486.560] So if you have a neuron at the back of the brain
+[486.560 --> 490.400] communicating with, you know, a prefrontal brain structure,
+[490.400 --> 492.520] that might take 30 or 40 milliseconds
+[492.520 --> 494.240] for that loop to be completed.
+[494.240 --> 498.640] So the brain is, on a single-neuron level, relatively slow,
+[498.640 --> 500.280] and there are a lot of feedback loops,
+[500.280 --> 502.920] and these feedback loops have long delays.
+[502.920 --> 505.720] And so there's a lot of reverberant activity that happens,
+[505.720 --> 508.720] and the principles of a system that
+[508.720 --> 512.120] involves these oscillatory feedback loops with slow delays,
+[512.120 --> 514.280] this very high degree of connectivity, feedforward
+[514.280 --> 517.400] and feedback: the principles of operation of that thing
+[517.400 --> 519.520] are likely to be very, very different
+[519.520 --> 521.960] from the principles of operation of the
+[521.960 --> 524.960] conventional neural networks that we use in computer science.
+[524.960 --> 528.400] Because all these neural networks, like convolutional networks
+[528.400 --> 529.840] and more complicated kinds of things
+[529.840 --> 533.440] that people have devised, all grew out of essentially
+[533.440 --> 537.040] our understanding of how neural networks worked in World War II.
+[537.040 --> 537.280] Right?
+[537.280 --> 539.480] McCulloch and Pitts, and those early guys,
+[539.480 --> 543.440] set off on a trajectory that was picked up in AI and computer
+[543.440 --> 546.720] science and has evolved ever since, fairly slowly
+[546.720 --> 548.040] until 10 years ago.
+[548.040 --> 553.160] And now there's this giant burgeoning of evolutionary progress
+[553.160 --> 556.320] in artificial neural networks, on a completely different evolutionary
+[556.320 --> 560.640] path than the much slower mammalian evolutionary path.
+[560.640 --> 563.400] So transformer networks don't really
+[563.400 --> 565.520] have anything to do with how the brain operates.
+[565.520 --> 567.280] Attention in transformer networks basically
+[567.280 --> 569.360] has nothing to do with how attention operates in the brain.
+[569.360 --> 571.720] They're different things, both interesting,
+[571.720 --> 573.280] but different things.
+[573.280 --> 575.640] So my job is to understand the brain,
+[575.640 --> 576.760] because I'm a neuroscientist, which
+[576.760 --> 578.800] means I have to understand an architected system
+[578.800 --> 580.640] that somebody else built.
+[580.640 --> 585.960] And in that domain, we ask a lot of standard questions
+[585.960 --> 587.280] that everybody in the field asks.
+[587.280 --> 589.920] And they're the same subset of questions.
+[589.920 --> 592.400] First, how is the brain divided into parts?
+[592.400 --> 593.960] I mean, if you just look at the outside of the brain here,
+[593.960 --> 596.640] it's not obvious that there is more than one part here.
+[596.640 --> 602.520] But in fact, this folded structure
+[602.520 --> 604.240] is sort of like a beach ball that's
+[604.240 --> 605.800] had all the air sucked out of it so that it
+[605.800 --> 608.880] can fit inside your skull, and distributed across
+[608.880 --> 612.160] the surface of that structure are a lot of different areas
+[612.160 --> 614.520] that have different functional properties.
+[614.520 --> 616.600] What information is represented in each of these modules
+[616.600 --> 617.640] or areas?
+[617.640 --> 623.520] What is the neural code that instantiates these representations?
+[623.520 --> 624.760] How is information transformed
+[624.760 --> 627.800] as it passes through these circuits?
+[627.800 --> 631.920] How are top-down processes like expectation and priors
+[631.920 --> 634.760] and control processes and attention implemented?
+[634.760 --> 637.840] And how do they affect representations and information flow?
+[637.840 --> 640.960] And then how does all of this interact with memory?
+[640.960 --> 642.360] Obviously, any system that can do anything
+[642.360 --> 643.840] interesting is going to have to have memory.
+[643.840 --> 645.640] You have to store those memories.
+[645.640 --> 646.680] You have to learn things.
+[646.680 --> 648.480] You have to recall the memories
+[648.480 --> 651.880] and apply them with current experience to make decisions.
+[651.880 --> 655.480] All this stuff is basically an open question in the brain,
+[655.480 --> 657.200] one that people have been working on for,
+[657.200 --> 658.360] at this point, hundreds of years.
+[661.360 --> 664.280] So as you will see, it'll become very clear
+[664.280 --> 666.640] when I get to the actual example.
+[666.640 --> 668.960] Because this is neuroscience, and we're
+[668.960 --> 671.200] doing experiments on a structure that we're trying
+[671.200 --> 674.000] to understand, most of the questions we ask
+[674.000 --> 677.160] can be formulated as a regression problem.
+[677.160 --> 679.320] I've got some x variables, which are either the things
+[679.320 --> 682.840] I manipulate or the observed variables
+[682.840 --> 686.600] that I see in a naturally behaving animal.
+[686.600 --> 688.440] And I've got my y variables, which
+[688.440 --> 689.600] are the brain activity.
+[689.600 --> 691.240] And I want to understand
+[691.240 --> 693.320] the relationship between the x variables
+[693.320 --> 694.080] and the y variables.
+[694.080 --> 695.920] That's a regression problem.
+[695.920 --> 699.320] Most things in science are that kind of problem.
+[699.320 --> 702.320] If I'm an astrophysicist, and I point a telescope up
+[702.320 --> 705.640] at the sky, I detect some vanishingly small fraction
+[705.640 --> 707.240] of all the stars in the sky,
+[707.240 --> 710.360] and I have to make inferences about how the sky as a whole
+[710.360 --> 711.040] works.
+[711.040 --> 713.600] And we're basically in the same situation in neuroscience,
+[713.600 --> 716.760] except our instruments are different.
+[716.760 --> 719.360] They're microscopes instead of telescopes.
+[719.360 --> 722.760] But it's all a regression problem.
+[722.760 --> 724.800] And the one thing I would like to mention when I'm talking
+[724.800 --> 728.920] to engineers is that the criteria for success
+[728.920 --> 731.720] in solving this regression problem are kind of different
+[731.720 --> 734.040] in neuroscience than they are in engineering.
+[734.040 --> 736.080] Engineers want to build something.
+[736.080 --> 738.800] And the first requirement is the thing has to work.
+[738.800 --> 742.280] So engineers value prediction and generalization.
+[742.280 --> 745.320] And as you all know, you would like a proof for every system
+[745.320 --> 745.920] you design.
+[745.920 --> 748.760] But if you can't write a proof, and it seems to work,
+[748.760 --> 752.160] you just put really big safety margins on it,
+[752.160 --> 753.600] and you can deploy it anyway.
+[753.600 --> 756.360] And that's OK as a provisional model.
+[756.360 --> 758.880] In neuroscience, and most areas of science,
+[758.880 --> 760.520] it's actually the opposite.
+[760.520 --> 762.840] People actually never check predictions in neuroscience
+[762.840 --> 763.840] or psychology.
+[763.840 --> 765.120] They never check generalization.
+[765.120 --> 767.560] It's not a requirement of any paper that's published.
+[767.560 --> 771.440] And so what people value is an elegant explanatory model
+[771.440 --> 773.440] rather than a good prediction.
+[773.440 --> 776.960] Now this makes me sad, because I want both.
+[776.960 --> 779.480] I want good predictions and generalization
+[779.480 --> 781.400] and a beautiful, elegant model.
+[781.400 --> 784.400] But I have noticed that I'm in the minority because, of course,
+[784.400 --> 786.160] science is a social enterprise, and people
+[786.160 --> 790.760] have a vested interest in behaviors that I would consider
+[790.760 --> 791.360] not optimal.
+[791.360 --> 793.360] For example, pretending that statistical significance
+[793.360 --> 797.040] is important, or pretending that if the model
+[797.040 --> 799.040] fits really well within the data set you have,
+[799.040 --> 801.160] you don't need to have a separate held-out test set, things
+[801.160 --> 801.960] like that, right?
+[801.960 --> 803.160] Which are very common.
+[803.160 --> 807.440] In my lab, we try not to buy into those dysfunctions,
+[807.440 --> 810.280] and we try to make sure that all of the procedures
+[810.280 --> 812.320] that we use in the lab are adhering
+[812.320 --> 816.360] to the best possible standards in modern data science.
+[816.360 --> 818.520] I'm just going to mention one more thing.
+[818.520 --> 821.040] Because a lot of my colleagues think AI,
+[821.040 --> 823.120] until ChatGPT, was kind of useless.
+[823.120 --> 827.160] And I always like to point out, no, the whole reason
+[827.160 --> 829.400] we have data science is because of AI.
+[829.400 --> 831.840] Because by the time the 1990s came,
+[831.840 --> 835.680] AI was a seriously broken, horrible area.
+[835.680 --> 838.280] And the government was not going to fund it anymore.
+[838.280 --> 840.520] The government was like, we've spent 50 years of money
+[840.520 --> 841.280] on this thing.
+[841.280 --> 842.160] Nothing works.
+[842.160 --> 843.440] Why are we paying you?
+[843.440 --> 845.600] And the AI people, this is a cartoon, of course,
+[845.600 --> 847.360] the AI people all got together and said,
+[847.360 --> 848.640] we have to fix our problem.
+[848.640 --> 849.920] What's our problem?
+[849.920 --> 855.160] And they realized, look, science is politics with data.
+[855.160 --> 855.680] Right?
+[855.680 --> 856.960] It's some data,
+[856.960 --> 858.880] and then a sociological experiment
+[858.880 --> 861.960] that gets applied to the data, which is politics.
+[861.960 --> 863.240] What's the problem with AI?
+[863.240 --> 864.320] There's no data.
+[864.320 --> 865.760] It's just politics.
+[865.760 --> 869.600] If you don't have data, there's no ground truth.
+[869.600 --> 871.840] So if you don't have some way to keep yourself
+[871.840 --> 874.600] from political dysfunction, bad things will happen.
+[874.600 --> 877.600] So from the '90s to the late 2010s,
+[877.600 --> 880.520] the machine learning and AI community did a fantastic job
+[880.520 --> 882.840] of basically inventing modern data science,
+[882.840 --> 885.320] of, like, grafting statistics and computer science together,
+[885.320 --> 888.080] and making sure that they could create models that work,
+[888.080 --> 890.520] that predicted accurately and that generalized well.
+[890.520 --> 892.800] And all of modern data science, I personally
+[892.800 --> 894.640] think, came out of that dysfunction.
+[894.640 --> 896.760] And it was a marvelous success, as we
+[896.760 --> 899.200] see because of the great success of AI today.
+[900.200 --> 902.440] One of my regrets, however, is that a lot of those data
+[902.440 --> 906.000] science things have not yet leaked into other areas of science.
+[906.000 --> 908.840] And so those of you who are very young,
+[908.840 --> 910.640] when you're taking data science and you're maybe
+[910.640 --> 912.040] thinking of going into an experimental science
+[912.040 --> 916.320] like biology or psychology, just keep doing what you're doing.
+[916.320 --> 919.360] Do it the right way and wait till the old people die,
+[919.360 --> 921.200] and then everything will be fine.
+[921.200 --> 923.800] Science progresses one funeral at a time.
+[923.800 --> 925.560] All right.
+[925.560 --> 928.160] So one of the fundamental problems we have in neuroscience
+[928.160 --> 930.320] is that everything is data limited, right?
+[930.320 --> 933.320] Any research science is either theory limited or data limited,
+[933.320 --> 935.280] and at any given point in time, one of those two things
+[935.280 --> 937.320] is going to be the main limitation you face.
+[937.320 --> 940.560] And in neuroscience, there are plenty of ideas and theories.
+[940.560 --> 942.160] There's just no data.
+[942.160 --> 945.560] Just like in astronomy, where they're data limited, in neuroscience
+[945.560 --> 946.680] we're data limited.
+[946.680 --> 950.200] So every method we have for measuring the brain is limited,
+[950.200 --> 952.960] either in space or in time.
+[952.960 --> 954.920] And so if you're going to measure the brain,
+[954.920 --> 958.080] you need to make some decision about which kind of limitation
+[958.080 --> 961.400] you want to suffer from, right?
+[961.400 --> 965.200] And if you're working with humans, then you have to decide:
+[965.200 --> 967.120] do you want to do invasive things with humans, which means
+[967.120 --> 969.080] you're only going to get a very small amount of data
+[969.080 --> 971.600] from pre-surgical patients, or do you
+[971.600 --> 973.560] want to use non-invasive methods?
+[973.560 --> 977.040] For me, I would like to get high-quality data sets
+[977.040 --> 981.280] from neurotypical people, or at least people not suffering
+[981.280 --> 982.560] from a medical disorder.
+[982.560 --> 985.760] And so I want to have a method that gives me
+[985.760 --> 988.720] the most space and time information I can get.
+[988.720 --> 991.280] And so in my lab, we generally focus on this method
+[991.280 --> 992.480] called functional MRI.
+[995.560 --> 998.000] Here's another way to think about this problem.
+[998.000 --> 1000.360] You can think about this like a classic information theory
+[1000.360 --> 1001.080] problem.
+[1001.080 --> 1002.560] OK, I've got a brain here.
+[1002.560 --> 1004.800] It's got a bunch of bits of information in it.
+[1004.800 --> 1006.840] I want to extract all those bits of information
+[1006.840 --> 1010.000] and put them on my computer where I can analyze them.
+[1010.000 --> 1012.560] And ideally, I would get all the information out of the brain
+[1012.560 --> 1013.720] and put it on my computer.
+[1013.720 --> 1014.960] But I can't do that.
+[1014.960 --> 1017.760] Because I have to suck the information out of the brain,
+[1017.760 --> 1020.760] through some task the person's always doing,
+[1020.760 --> 1022.680] through this little tiny straw that's
+[1022.680 --> 1025.840] given by whatever lousy method I'm using for recording
+[1025.840 --> 1026.680] the brain.
+[1026.680 --> 1028.480] And that straw is going to determine
+[1028.480 --> 1031.680] how many bits of information I get from this brain per unit
+[1031.680 --> 1034.440] time, money, graduate student, which are the three factors
+[1034.440 --> 1038.440] that limit any engineering or science project
+[1038.440 --> 1040.440] at the university.
+[1040.440 --> 1042.280] So we want to optimize this pipeline
+[1042.280 --> 1044.360] to get as many bits of information per unit time, money,
+[1044.360 --> 1045.920] graduate student as we can.
+[1045.920 --> 1049.400] And we want that data to predict and generalize
+[1049.400 --> 1050.800] to the real world.
+[1050.800 --> 1053.400] So in my lab, generally, we focus on naturalistic
+[1053.400 --> 1056.000] stimuli and tasks, because the brain is a nonlinear dynamical
+[1056.000 --> 1056.760] system.
+[1056.760 --> 1058.240] If you have a nonlinear dynamical system
+[1058.240 --> 1061.040] and you want it to generalize to the natural world,
+[1061.040 --> 1063.080] you need to measure it in the natural situation.
+[1063.080 --> 1065.120] Otherwise, your nonlinearities will probably
+[1065.120 --> 1067.560] get you, and your predictions will fail.
+[1067.560 --> 1071.960] We try to get as much information as we can out
+[1071.960 --> 1074.520] of the brain per unit time, money, graduate student.
+[1074.520 --> 1076.200] We follow best practices of data science.
+[1076.200 --> 1077.680] We always do predictions.
+[1077.680 --> 1079.080] We always do cross validation.
+[1079.080 --> 1081.880] We always have generalization tests.
+[1081.880 --> 1084.480] We use an encoding model approach to analyze these data,
+[1084.480 --> 1085.360] I'll show you in a second,
+[1085.360 --> 1087.520] which is basically just multiple regression.
+[1087.520 --> 1092.400] And we perform subsequent statistical modeling
+[1092.400 --> 1096.040] after that to try to understand what we did.
+[1096.040 --> 1098.280] So I told you I'm going to use MRI in this talk.
+[1098.280 --> 1101.600] For those of you who have never thought about MRI,
+[1102.560 --> 1104.840] this is all you need to know about functional MRI.
+[1104.840 --> 1106.600] MRI, magnetic resonance imaging,
+[1106.600 --> 1108.400] is just a big chemistry experiment.
+[1108.400 --> 1111.120] We stick our sample, which in this case is your head,
+[1111.120 --> 1114.760] in a big magnetic field, and it aligns some vanishingly
+[1114.760 --> 1118.080] small number of protons along the main magnetic field.
+[1118.080 --> 1121.480] Then we apply an electromagnetic gradient,
+[1121.480 --> 1124.720] and that pushes the proton spins off axis.
+[1124.720 --> 1127.640] Then we turn off that gradient, and the protons
+[1127.640 --> 1131.240] precess back down to the base magnetic field.
+[1131.240 --> 1133.440] And when they do that, they give off energy.
+[1133.440 --> 1136.440] And we can look at that energy in two planes;
+[1136.440 --> 1139.640] it's three dimensions, so two planes give us all the information
+[1139.640 --> 1142.680] we need about how fast these protons precess down.
+[1142.680 --> 1146.240] And it turns out that oxygenated and deoxygenated
+[1146.240 --> 1148.920] hemoglobin have different magnetic properties,
+[1148.920 --> 1152.040] and therefore, the protons bound to oxygenated
+[1152.040 --> 1155.880] or deoxygenated hemoglobin spin down at different rates.
+[1155.880 --> 1160.440] So we can Fourier encode these electromagnetic gradients,
+[1160.440 --> 1163.920] and we can recover the three-dimensional positions
+[1163.920 --> 1168.400] in the brain of small volumes of protons.
+[1168.400 --> 1172.760] And then we can look at the rate at which they spin down
+[1172.760 --> 1176.080] to try to get an idea of how much oxygen there is
+[1176.080 --> 1176.960] in the bloodstream.
+[1176.960 --> 1178.040] Why oxygen?
+[1178.040 --> 1180.680] Because a neuron is a little chemical engine that
+[1180.680 --> 1185.520] burns oxygen with sugar to create a chemical called ATP,
+[1185.520 --> 1187.840] which is the fuel that drives the cells.
+[1187.840 --> 1191.520] So as neurons fire, they're constantly extracting sugar
+[1191.520 --> 1193.000] and oxygen from the bloodstream.
+[1193.000 --> 1194.480] And the more neurons that are firing,
+[1194.480 --> 1196.000] the more sugar and oxygen is being
+[1196.000 --> 1197.960] extracted from the bloodstream.
+[1197.960 --> 1199.920] Now, the problem with this is that the signal-to-noise
+[1199.920 --> 1202.080] is going to depend on how strong this main magnetic
+[1202.080 --> 1203.120] field is.
+[1203.120 --> 1206.920] And we don't want to put somebody in a magnetic field that's
+[1206.920 --> 1208.960] so strong they would levitate, or something bad
+[1208.960 --> 1209.960] would happen to them.
+[1209.960 --> 1213.560] So typically, we have only a vanishingly small fraction
+[1213.560 --> 1215.760] of the protons aligned, and our signal-to-noise
+[1215.760 --> 1216.640] is low,
+[1216.640 --> 1219.600] and we need to average over space to get good signal.
+[1219.600 --> 1223.240] And with the main magnet we use here at Berkeley,
+[1223.240 --> 1226.280] a standard three-tesla magnet, our spatial resolution
+[1226.280 --> 1228.040] is about two to three millimeters.
+[1228.040 --> 1231.640] There's a brand new magnet we just installed at Berkeley,
+[1231.640 --> 1233.400] called the NexGen 7T magnet, that
+[1233.400 --> 1236.640] goes down to a half a millimeter resolution.
+[1236.640 --> 1238.560] And that's just coming up to speed right now.
+[1238.560 --> 1240.120] I'm happy with that fMRI machine.
+[1240.120 --> 1242.440] Yeah.
+[1242.440 --> 1243.960] And it's the only magnet like it in the world.
+[1243.960 --> 1249.040] Right now, Berkeley has the best fMRI machine on the planet.
+[1249.040 --> 1249.320] OK.
+[1249.320 --> 1250.480] So now I'm going to go on.
+[1250.480 --> 1251.840] So that kind of gives you the background
+[1251.840 --> 1254.320] of why we do MRI, and tells you about the limitations you
+[1254.320 --> 1255.160] have to deal with.
+[1255.160 --> 1256.880] Now, I'm going to go into an actual experiment
+[1256.880 --> 1258.240] for the rest of this talk.
+[1258.240 --> 1261.680] So this experiment was spearheaded by Tianjiao Zhang,
+[1261.680 --> 1262.920] who I think's here somewhere.
+[1262.920 --> 1263.600] Is Tianjiao here?
+[1263.600 --> 1264.320] Yeah, he's back there.
+[1264.320 --> 1265.960] So if you have any questions, you can ask that guy,
+[1265.960 --> 1268.320] because I'm just the talking head.
+[1268.320 --> 1269.880] When I was getting this talk ready,
+[1269.880 --> 1272.720] I realized that Tianjiao is actually giving the same talk,
+[1272.720 --> 1274.400] probably with a different introduction,
+[1274.400 --> 1276.000] next Monday at Oxyopia.
+[1276.000 --> 1278.120] So if you want to go to that talk, you can,
+[1278.120 --> 1280.600] if you like this and you want to see more.
+[1280.600 --> 1283.440] So what did we decide to do in this particular experiment?
+[1283.440 --> 1284.680] We study language in the lab.
+[1284.680 --> 1285.880] We study vision.
+[1285.880 --> 1287.720] We study a lot of different things.
+[1287.720 --> 1290.200] But in this talk, I thought I would talk about navigation.
+[1290.200 --> 1292.760] Navigation's a cool task, because we all do it.
+[1292.760 --> 1293.640] We do it all the time.
+[1293.640 --> 1294.800] It's a naturalistic task.
+[1294.800 --> 1296.960] And there are also a lot of different brain subsystems
+[1296.960 --> 1298.600] involved in navigation.
+[1298.600 --> 1302.080] There has been some work on navigation in fMRI in humans,
+[1302.080 --> 1304.720] but that all involves very reduced environments.
+[1304.720 --> 1308.160] Like, you show people pictures of different places
+[1308.160 --> 1309.840] and ask them if they recognize them.
+[1309.840 --> 1311.680] Or you show them a picture of one place,
+[1311.680 --> 1313.960] then show them a picture of another place from a different angle,
+[1313.960 --> 1315.560] and ask them if they're the same or different.
+[1315.560 --> 1317.880] Very reduced kinds of situations.
+[1317.880 --> 1321.720] We wanted to do a naturalistic task as well as we could.
+[1321.720 --> 1325.400] So Tianjiao built a video game environment using Unreal Engine.
+[1325.400 --> 1327.480] It's about two kilometers by three kilometers.
+[1327.480 --> 1330.760] It has hundreds of buildings and roads inside it.
+[1330.760 --> 1333.920] And people have to learn this outside the scanner.
+[1333.920 --> 1337.800] It takes them about 10 hours to learn all the landmarks in this world.
+[1337.800 --> 1340.240] And then we put them inside the scanner.
+[1340.240 --> 1341.080] This is the scanner.
+[1341.080 --> 1342.680] It's just a big magnet.
+[1342.680 --> 1344.200] So Tianjiao's outside the magnet.
+[1344.200 --> 1345.600] Now he's going to slide in there.
+[1345.600 --> 1348.120] You notice he has optics that present the virtual reality
+[1348.120 --> 1351.080] to his eyes, and a steering wheel and foot pedals
+[1351.080 --> 1353.080] that we built, that Tianjiao built.
+[1353.080 --> 1355.760] Whenever I use "we" in this talk, I mean him.
+[1355.760 --> 1360.320] It's the royal we. They're magnet safe.
+[1361.320 --> 1365.160] OK, so we put people in the MRI machine,
+[1365.160 --> 1367.920] and we just do a taxi driver task.
+[1367.920 --> 1370.960] So you get a cue, "go to the grocery store,"
+[1370.960 --> 1373.120] and you just have to drive to the grocery store.
+[1373.120 --> 1375.720] And then eventually you arrive at the grocery store,
+[1375.720 --> 1377.840] and then you get another taxi driver task.
+[1377.840 --> 1379.680] And this is just like being an Uber driver.
+[1379.680 --> 1385.720] It's not that exciting a task, but it's a naturalistic task.
+[1385.720 --> 1387.760] There are other cars; there are pedestrians.
+[1387.760 --> 1390.280] There are different times of day, different traffic patterns.
+[1390.280 --> 1392.960] You have to use all that information in this task.
+[1392.960 --> 1396.880] So I always like to show a movie of brain activity in this task,
+[1396.880 --> 1400.280] because this really gives you an idea of what's going on.
+[1400.280 --> 1402.680] The brain is inconveniently folded up inside the skull,
+[1402.680 --> 1406.120] so we can extract it computationally and then flatten it out.
+[1406.120 --> 1407.600] And if we did something like that with your brain,
+[1407.600 --> 1410.560] we'd end up with something about the size of a large pizza.
+[1410.560 --> 1412.200] The visual system here is in the middle.
+[1412.200 --> 1414.600] The prefrontal cortex is at the far left and far right.
+[1414.600 --> 1417.080] The somatosensory strip is kind of here,
+[1417.080 --> 1419.800] and the auditory system is here and here.
+[1419.800 --> 1423.600] Now, red on this map means more brain activity,
+[1423.600 --> 1425.800] more metabolic activity, and blue means
+[1425.800 --> 1428.440] relatively less metabolic activity.
+[1428.440 --> 1430.200] So all that's happening here is the person
+[1430.200 --> 1433.120] is driving to some random destination, we don't know where,
+[1433.120 --> 1435.480] and we can follow the patterns of brain activity
+[1435.480 --> 1437.440] as the person is driving.
+[1437.440 --> 1441.760] And what you'll notice is that these patterns vary a lot,
+[1441.760 --> 1445.040] and they depend on what's going on outside.
+[1445.040 --> 1448.360] So here the person stopped behind another car,
+[1448.360 --> 1451.040] so we get activity in a brain network called
+[1451.040 --> 1453.320] the default mode network, which is the internal mentation
+[1453.320 --> 1455.960] network that is activated when you're
+[1455.960 --> 1457.200] talking to yourself.
+[1457.200 --> 1458.920] You'll see that when the person turns a corner,
+[1458.920 --> 1461.480] we'll end up with activity in the motor strip.
+[1461.480 --> 1463.000] When they have to brake and accelerate,
+[1463.000 --> 1465.080] you'll get activity in the motor strip.
+[1465.080 --> 1469.720] Anything that happens in this task must have some correlate
+[1469.720 --> 1472.520] in the brain, because we're neuroscientists here.
+[1472.520 --> 1473.920] If there's a soul, it's irrelevant to us.
+[1473.920 --> 1475.360] It all has to be in the brain.
+[1475.360 --> 1477.640] There's only one world we're dealing with here.
+[1477.640 --> 1480.760] So anything you see must be represented in the brain.
+[1480.760 --> 1483.760] Any motor action must be represented in the brain.
+[1483.760 --> 1486.640] Your intentions to drive, your cognitive plans
+[1486.640 --> 1490.760] about where you're going, your thinking about where you came from,
+[1490.760 --> 1492.960] all of that stuff must be represented in the brain.
+[1492.960 --> 1494.240] And that makes it very clear that this
+[1494.240 --> 1496.680] is just a giant multiple regression problem.
+[1496.680 --> 1498.160] I've got a bunch of x variables, which
+[1498.160 --> 1502.680] are the perception data, the controls data, and the task
+[1502.680 --> 1503.400] we gave you.
+[1503.400 --> 1505.080] And I've got a bunch of y data, which
+[1505.080 --> 1508.160] is this time series of brain activity
+[1508.160 --> 1510.120] over about 100,000 points in the brain.
+[1510.120 --> 1513.640] And I just have to figure out how they're related to one another.
+[1513.640 --> 1514.680] OK.
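To make that encoding-model framing concrete, here is a minimal sketch of the regression he describes, with synthetic stand-in data. The sizes, variable names, and scoring choice are illustrative only, not the lab's actual pipeline, which uses far more features and voxels:

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
n_time, n_feat, n_vox = 1000, 50, 200    # real data: ~2,500 features, ~100,000 voxels

X = rng.standard_normal((n_time, n_feat))              # x: stimulus/control/task features
W_true = rng.standard_normal((n_feat, n_vox))          # hidden "true" tuning, demo only
Y = X @ W_true + rng.standard_normal((n_time, n_vox))  # y: voxel time series

# Best practice from the talk: always hold out data, always check predictions.
X_tr, X_te, Y_tr, Y_te = X[:800], X[800:], Y[:800], Y[800:]
model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, Y_tr)

# Score each voxel by the correlation between predicted and actual held-out
# activity, a common way to report encoding-model performance.
pred = model.predict(X_te)
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_vox)]
print(f"median held-out prediction r = {np.median(r):.2f}")
```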
+[1514.680 --> 1517.680] So the first thing you often do when you get these kinds of data
+[1517.680 --> 1522.400] is you just plot the brain activity on flat maps to look at it,
+[1522.400 --> 1524.920] to see where things were activated.
+[1524.920 --> 1528.320] And this task activates a lot of things: the visual system,
+[1528.320 --> 1529.720] the frontal eye fields, the motor
+[1529.720 --> 1532.280] and somatosensory strip, and the parietal cortex, which
+[1532.280 --> 1534.960] is the part of the visual system
+[1534.960 --> 1537.000] that's involved in coordinate system transformations
+[1537.000 --> 1539.400] between your eye coordinates and your hands.
+[1539.400 --> 1542.240] There are a lot of different coordinate systems involved there.
+[1542.240 --> 1545.080] And there's some activity also here in prefrontal cortex
+[1545.080 --> 1546.520] that probably has to do with planning,
+[1546.520 --> 1549.440] some in audition that has to do with the sound in the MRI,
+[1549.440 --> 1552.440] the sound in the video that you could not hear,
+[1552.440 --> 1554.640] because I did not play it.
+[1554.640 --> 1557.200] So now, the video game environment: because this is a video game,
+[1557.200 --> 1559.000] we have ground truth about everything that happened
+[1559.000 --> 1559.760] in the video game.
+[1559.760 --> 1560.960] Is there a question?
+[1560.960 --> 1563.520] So you said that everything has to be
+[1564.480 --> 1565.320] in the brain.
+[1565.320 --> 1567.320] Sometimes we hear that there's a brain
+[1567.320 --> 1568.480] in the gut, right?
+[1568.480 --> 1569.960] There are neurons in the gut.
+[1569.960 --> 1572.320] How do you know that everything happening
+[1572.320 --> 1574.520] has to be brain-only?
+[1574.520 --> 1576.960] That's an amazingly cool question that nobody's ever asked
+[1576.960 --> 1577.480] before.
+[1577.480 --> 1579.920] Actually, there's one person that's asked it before.
+[1579.920 --> 1582.320] We're not recording from the gut.
+[1582.320 --> 1585.120] So if stuff's happening in the gut, it's irrelevant to us,
+[1585.120 --> 1587.800] because this is only in the head.
+[1587.800 --> 1592.960] So there could be a correlation between the gut and the brain.
+[1592.960 --> 1594.680] I suspect that when I'm driving home,
+[1594.680 --> 1596.560] like at the end of the day, if I'm hungry,
+[1596.560 --> 1599.480] I'm probably driving differently than if I'm not hungry,
+[1599.480 --> 1600.600] just to take an example.
+[1600.600 --> 1602.280] So I think that probably has an influence,
+[1602.280 --> 1603.280] but we won't see it.
+[1603.280 --> 1605.040] We won't know about it.
+[1605.040 --> 1608.320] So anyway, there are a bunch of different feature spaces here.
+[1608.320 --> 1610.680] We know where all the buildings are.
+[1610.680 --> 1611.880] We have the semantic segmentation.
+[1611.880 --> 1613.400] We know where all the surface normals are.
+[1613.400 --> 1615.200] We know what the inferred distance of all the buildings
+[1615.200 --> 1615.720] was.
+[1615.720 --> 1616.760] We have all that ground truth,
+[1616.760 --> 1618.400] and we can use that to make features.
+[1618.400 --> 1620.560] We also have the behavioral controls.
+[1620.560 --> 1621.800] We're measuring the steering wheel.
+[1621.800 --> 1623.160] We're measuring the foot pedals.
+[1623.160 --> 1625.280] And we're measuring, as shown here in this blue bubble,
+[1625.280 --> 1626.480] where your eye is.
+[1626.480 --> 1628.280] Where your eye is is very important.
+[1628.280 --> 1630.160] Because we don't have any direct measure of attention
+[1630.160 --> 1630.920] in this task.
+[1630.920 --> 1635.280] It turns out attention in mammals follows your eye movements,
+[1635.280 --> 1637.320] or precedes your eye movements, really.
+[1637.320 --> 1640.000] So we can use eye movements as a proxy for attention.
+[1642.960 --> 1647.840] So what we actually did in this experiment, what Zhang did,
+[1647.840 --> 1651.720] is he created 34 different feature spaces
+[1651.720 --> 1654.560] using these various variables.
+[1654.560 --> 1658.480] Some of these feature spaces are related to perception,
+[1658.480 --> 1663.520] like the gaze grid, eye tracking, motion energy, which
+[1663.520 --> 1665.840] is just how much motion energy occurs in different locations
+[1665.840 --> 1667.680] in the display,
+[1667.680 --> 1670.320] the spatial semantics, which are the labels of objects
+[1670.320 --> 1672.200] in the scene, the gaze semantics, which
+[1672.200 --> 1674.760] are the labels of objects that you're actually looking at,
+[1674.760 --> 1676.560] which are the behaviorally relevant things in the scene,
+[1677.080 --> 1684.400] the scene structure, the depth: all of these things were coded.
+[1684.400 --> 1686.760] Then we also have all the control information,
+[1686.760 --> 1689.640] like where your foot pedals were, where the steering wheel
+[1689.640 --> 1690.880] was, where the accelerator was.
+[1690.880 --> 1691.920] We have all that.
+[1691.920 --> 1694.840] And then we have a bunch of navigation information.
+[1694.840 --> 1696.480] And most of this navigation information
+[1696.480 --> 1698.920] comes from theories of navigation
+[1698.920 --> 1702.840] from the rodent literature, and also some from the human literature.
+[1702.840 --> 1705.000] So there are dozens of different theories
+[1705.000 --> 1707.720] about what kind of information about navigation
+[1707.720 --> 1713.600] might be represented: future path, navigation directions
+[1713.600 --> 1716.000] in various coordinate systems to the target
+[1716.000 --> 1717.960] that you're going to, all of those kinds of things.
+[1717.960 --> 1720.640] And all of this was coded in various feature spaces.
+[1720.640 --> 1723.560] In every case, the way this works is pretty simple.
+[1723.560 --> 1726.440] You take the information you have,
+[1726.440 --> 1728.400] you essentially create an embedding that
+[1728.400 --> 1730.880] reflects just the feature space you care about,
+[1730.880 --> 1733.920] and then you concatenate all these embeddings together.
+[1733.920 --> 1735.680] And that means you now have a regression problem
+[1735.680 --> 1737.000] where you have your training data,
+[1737.000 --> 1740.680] and you have a stack of features, where each feature space
+[1740.680 --> 1742.920] has, of course, a long list of features.
+[1742.920 --> 1744.800] And now you simply do ridge regression
+[1744.800 --> 1748.040] to find a set of model weights that map each of those features
+[1748.040 --> 1749.320] onto every voxel in the brain.
+[1749.320 --> 1751.760] So you've got 34 feature spaces,
+[1751.760 --> 1755.360] comprising about 2,500 or so features in total,
+[1755.360 --> 1757.680] and you've got 100,000 voxels in the brain.
+[1757.680 --> 1760.800] So you're doing 100,000 regression problems,
+[1760.800 --> 1766.040] where you're fitting a 2,500-long feature vector to each voxel.
+[1766.040 --> 1767.480] That's what the data is going to be.
+[1767.480 --> 1770.960] So every one of the 100,000 voxels has 2,500 weights
+[1770.960 --> 1772.200] in the regression model.
+[1772.200 --> 1774.760] This would be impossible if it was, like, 1980.
+[1774.760 --> 1777.720] But the kernel trick allows you to do all of this
+[1777.720 --> 1781.720] with a matrix of dimensions of the length of the experiment
+[1781.720 --> 1783.440] rather than the number of features.
+[1783.440 --> 1785.520] And so this is all done in kernel space,
+[1785.520 --> 1788.760] by some statistical miracle that I still can't quite
+[1788.760 --> 1789.600] fathom.
+[1789.600 --> 1790.600] It's amazing to me that this works.
+[1790.600 --> 1793.080] Just to say that a little differently,
+[1793.080 --> 1797.280] your goal is just to find out where in the brain
+[1797.280 --> 1800.200] the stimulus appears?
+[1800.200 --> 1803.520] Well, these features of the stimulus, right?
+[1803.520 --> 1806.000] We want to know where all these aspects appear.
+[1806.000 --> 1810.600] So, one more thing I should mention
+[1810.600 --> 1812.440] that makes this a little clearer.
+[1812.440 --> 1815.760] We don't know which of these feature spaces
+[1815.760 --> 1817.920] is represented in the brain and which isn't.
+[1817.920 --> 1820.280] And these feature spaces are all collinear, right?
+[1820.280 --> 1823.280] If I have scene semantics, which is a label of all the objects
+[1823.280 --> 1824.960] in the scene, and I have gaze semantics, which
+[1824.960 --> 1826.840] is the label of the objects that I'm looking at,
+[1826.840 --> 1828.640] those are correlated, right?
+[1828.640 --> 1829.960] So what we're really trying to do here
+[1829.960 --> 1833.120] is we're trying to find out what features are represented
+[1833.120 --> 1835.320] at what point in the brain, what perceptual, motor,
+[1835.320 --> 1836.560] and cognitive features.
+[1836.560 --> 1838.600] And we're trying to do that in as data-driven a manner
+[1838.600 --> 1839.720] as we can.
+[1839.720 --> 1842.760] So to do that, we fit more feature spaces than we need,
+[1842.760 --> 1845.160] and then we're going to interrogate the data afterwards,
+[1845.160 --> 1846.880] looking through the tea leaves to try
+[1846.880 --> 1849.000] to see what was actually represented.
+[1849.000 --> 1850.000] Is that clearer?
+[1850.000 --> 1851.280] Well, it's clear.
+[1851.280 --> 1853.080] Are the pedestrians up there?
+[1853.080 --> 1854.080] Yes.
+[1854.080 --> 1855.960] The pedestrians are going to appear somewhere
+[1855.960 --> 1857.440] on the right, basically.
+[1857.440 --> 1859.520] Well, not necessarily.
+[1859.520 --> 1862.280] Well, it's what's most likely, right?
+[1862.280 --> 1865.480] So that's the mapping.
+[1865.480 --> 1867.680] You want to find the mapping.
+[1867.680 --> 1868.640] Yes, but of everything.
+[1868.640 --> 1870.360] The pedestrians map to some voxels, right?
+[1870.360 --> 1871.600] Exactly.
+[1871.600 --> 1872.800] That's the problem.
+[1872.800 --> 1875.720] But it's not only that; the only reason I was correcting you
+[1875.720 --> 1877.160] was you were talking about visual things.
+[1877.160 --> 1879.160] But remember, this is a navigation experiment.
+[1879.160 --> 1881.120] We really care about the navigational variables.
+[1881.120 --> 1883.560] So you have some sense of how long it's
+[1883.560 --> 1885.520] going to take you to get where you're going.
+[1885.520 --> 1887.040] So that should be represented somewhere.
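To make the kernel trick he mentioned above concrete: for ridge regression, the primal solution (XᵀX + αI)⁻¹Xᵀy, which requires solving a features-by-features system, is identical to the dual solution Xᵀ(XXᵀ + αI)⁻¹y, which only requires solving a timepoints-by-timepoints system. When features outnumber timepoints, the dual form is far cheaper. A small numpy sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
T, F = 300, 2500                      # timepoints << features
X = rng.standard_normal((T, F))
y = X @ rng.standard_normal(F) + rng.standard_normal(T)
alpha = 10.0

# Primal ridge: solve an F x F system (expensive when F is large).
w_primal = np.linalg.solve(X.T @ X + alpha * np.eye(F), X.T @ y)

# Dual (kernel) ridge: solve a T x T system instead.
K = X @ X.T                           # linear kernel
coef = np.linalg.solve(K + alpha * np.eye(T), y)
w_dual = X.T @ coef                   # recovers the same weights

print(np.allclose(w_primal, w_dual))  # True: same answer, much smaller solve
```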
+[1887.040 --> 1889.280] You have a sense of the path you're going to take, right?
+[1889.280 --> 1890.520] This is a complicated map.
+[1890.520 --> 1892.200] You could take a lot of different paths.
+[1892.200 --> 1894.360] So when people start in this navigation experiment,
+[1894.360 --> 1897.600] they have a path that they start to take.
+[1897.600 --> 1899.280] They might deviate from that path later on
+[1899.280 --> 1900.560] if there's too much traffic or something.
+[1900.560 --> 1903.800] But there must be a cognitive map of the path, for example.
+[1903.800 --> 1907.160] So that's all the stuff we're really trying to pull out here.
+[1907.160 --> 1909.320] But the general idea is correct.
+[1909.320 --> 1911.840] So one thing I should mention: this
+[1911.840 --> 1916.840] is basically a big, ugly applied math problem.
+[1916.840 --> 1919.080] There are a lot of aspects to this problem,
+[1919.080 --> 1920.480] because it's a big-data kind of problem.
+[1920.480 --> 1923.600] It's got a lot of annoying things that have to be done.
+[1923.600 --> 1925.840] One of the annoying things is that all of these feature
+[1925.840 --> 1927.960] spaces have different signal-to-noise properties.
+[1927.960 --> 1931.280] The signal-to-noise is governed by how many examples
+[1931.280 --> 1934.560] of each of the features you acquired in your experiment.
+[1934.560 --> 1936.120] It has to do with where in the brain it occurs,
+[1936.120 --> 1937.320] because different places in the brain
+[1937.320 --> 1939.080] have different signal-to-noise properties
+[1939.080 --> 1941.680] because of MRI susceptibility artifacts.
+[1941.680 --> 1945.280] There are all kinds of factors that can affect the signal-to-noise
+[1945.280 --> 1947.080] for these different feature spaces.
+[1947.080 --> 1949.120] So this is going to be a ridge regression problem,
+[1949.120 --> 1951.960] where we're going to have a regularizer and some features,
+[1951.960 --> 1953.120] and we're going to put those together.
+[1953.120 --> 1956.160] We have to estimate the regularizer and then essentially
+[1956.160 --> 1959.400] use that to condition the data when we do our regression problem.
+[1959.400 --> 1962.320] Every feature space gets its own regularizer in our framework.
+[1962.320 --> 1966.960] So this is using a method called Tikhonov regression,
+[1966.960 --> 1968.520] of which we have a very specific implementation
+[1968.520 --> 1972.000] called banded ridge regression, and we have software for it
+[1972.000 --> 1974.360] that just allows these problems to be run really quickly
+[1974.360 --> 1975.480] on GPUs.
+[1975.480 --> 1977.200] That's the long story short.
+[1977.200 --> 1978.560] So we spend a lot of time in the lab
+[1978.560 --> 1980.360] basically solving these applied math problems,
+[1980.360 --> 1983.280] dealing with these big fitting kinds of issues.
+[1983.280 --> 1984.920] All right, so what do you get out of this experiment?
+[1984.920 --> 1987.960] Here's one example that will be helpful.
+[1988.600 --> 1992.160] The video game engine gives you 16 categories of features.
+[1992.160 --> 1995.080] That's just how the video game engine keeps track
+[1995.080 --> 1997.600] of the semantic structure of the scene.
+[1997.600 --> 2000.240] So, like, there are buildings, we can see them on here.
+[2000.240 --> 2003.920] There are sidewalks, road lines, foliage, ground,
+[2003.920 --> 2005.120] pedestrians, and so on.
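Before moving on to that 16-category example, here is a toy illustration of the banded ridge idea just described, where every feature space gets its own regularizer. It is a sketch, not the lab's GPU implementation: it uses the fact that penalizing each band b by alpha_b * ||w_b||^2 is equivalent to ordinary ridge with alpha = 1 after rescaling band b by 1/sqrt(alpha_b), and it picks the per-band penalties by cross-validation. All names and sizes here are made up:

```python
import numpy as np
from itertools import product
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
T = 500
X_vis = rng.standard_normal((T, 40))   # hypothetical "visual" feature band
X_nav = rng.standard_normal((T, 10))   # hypothetical "navigation" feature band
y = 0.1 * (X_vis @ rng.standard_normal(40)) \
    + X_nav @ rng.standard_normal(10) + rng.standard_normal(T)

def banded(alpha_vis, alpha_nav):
    # Rescaling each band, then fitting ridge with alpha=1, implements
    # a separate penalty alpha_b on each band's weights.
    return np.hstack([X_vis / np.sqrt(alpha_vis), X_nav / np.sqrt(alpha_nav)])

grid = [0.1, 1.0, 10.0, 100.0]
score, best = max(
    (cross_val_score(Ridge(alpha=1.0), banded(a, b), y, cv=5).mean(), (a, b))
    for a, b in product(grid, grid)
)
print("best per-band regularizers:", best, "cv R^2:", round(score, 2))
```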
+[2005.120 --> 2007.440] These are the features the video game engine gives us.
+[2007.440 --> 2012.200] So for every voxel, we can create a 16-long vector of weights
+[2012.200 --> 2015.760] that tells us how much that voxel cares about these various
+[2015.760 --> 2019.720] categories of objects that appear in the video game.
+[2019.720 --> 2022.480] Now we can basically take all of the voxels,
+[2022.480 --> 2024.280] and we can do principal components analysis on them,
+[2024.280 --> 2026.280] and take the first three principal components
+[2026.280 --> 2028.360] and apply them to the red, green, and blue channels
+[2028.360 --> 2033.280] of our display, and make a map by projecting those PCs
+[2033.280 --> 2035.560] back onto the surface of the brain.
+[2035.560 --> 2039.800] And now we see what semantic features each place in the brain
+[2039.800 --> 2040.680] represents.
+[2040.680 --> 2043.640] So these purple areas are representing pedestrians,
+[2044.120 --> 2045.640] which is what you mentioned.
+[2045.640 --> 2049.640] The greenish areas are representing roads and road lines.
+[2049.640 --> 2053.240] The yellow regions are representing foliage, and so on.
+[2053.240 --> 2055.640] So you can see a lot of places in the brain represent pedestrians
+[2055.640 --> 2057.840] and people, because we're social animals
+[2057.840 --> 2059.480] and people are important to us.
+[2059.480 --> 2061.360] A lot of places in the brain represent the structure
+[2061.360 --> 2062.480] of the environment.
+[2062.480 --> 2065.680] There are places that represent the road, signs, and so on.
+[2065.680 --> 2069.360] Now this is fine, but you can't do this for 34 feature spaces.
+[2069.360 --> 2070.640] You will go insane.
+[2070.640 --> 2072.800] It's just too much data.
+[2072.800 --> 2074.120] So you're going to do something else.
+[2074.120 --> 2075.200] What do you do in these kinds of problems?
+[2075.200 --> 2076.760] You do dimensionality reduction.
+[2076.760 --> 2079.880] So one kind of sleazy method of dimensionality
+[2079.880 --> 2082.240] reduction you can do, that I do not particularly like,
+[2082.240 --> 2083.320] is t-SNE.
+[2083.320 --> 2088.360] t-SNE depends a lot on the kernel that you start with,
+[2088.360 --> 2090.160] and it's very susceptible to noise.
+[2090.160 --> 2092.240] But it gives you a nice summary.
+[2092.240 --> 2095.160] So here we have the 34 feature spaces,
+[2095.160 --> 2096.520] all the feature spaces together,
+[2096.520 --> 2099.120] and we've just classified them here into five classes.
+[2099.120 --> 2102.760] And we've used t-SNE to produce a very low-dimensional 3D
+[2102.760 --> 2105.520] embedding that we can project onto the surface of the cortex,
+[2105.520 --> 2107.640] and then we've color-coded each of the feature spaces
+[2107.640 --> 2110.040] by the same scheme.
+[2110.040 --> 2111.400] So you can see where on the brain
+[2111.400 --> 2113.680] these different kinds of features are represented.
+[2113.680 --> 2117.080] Can you expand the acronym, t-SNE?
+[2117.080 --> 2119.240] God, temporal, no.
+[2119.240 --> 2121.800] I can't even remember what the heck it is now.
+[2121.800 --> 2122.600] Zhang, what is this?
+[2122.600 --> 2123.800] Do you remember?
+[2123.800 --> 2126.640] From a cross-section?
+[2126.640 --> 2127.640] So, can anybody remember?
+[2127.640 --> 2129.240] Does anybody remember what t-SNE is?
+[2129.240 --> 2131.280] It's stochastic neighbor embedding.
+[2131.280 --> 2132.920] Oh, look at that, one of my students knew it.
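For the curious, the PCA-to-RGB flatmap trick described above fits in a few lines, and the 34-space summary is the same idea with a nonlinear t-SNE embedding swapped in. This is a sketch with synthetic stand-in weights; the sizes are illustrative only:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
W = rng.standard_normal((5000, 16))   # per-voxel weights over 16 object categories

# First three principal components of the weight vectors, rescaled to
# [0, 1] and used as the R, G, B channels of the cortical flatmap.
pcs = PCA(n_components=3).fit_transform(W)
rgb = (pcs - pcs.min(0)) / (pcs.max(0) - pcs.min(0))

# The 34-feature-space summary swaps PCA for a (less stable) t-SNE
# embedding into 3D before the same color mapping.
xyz = TSNE(n_components=3, init="random", perplexity=30).fit_transform(W)
rgb_tsne = (xyz - xyz.min(0)) / (xyz.max(0) - xyz.min(0))
```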
+[2132.920 --> 2133.520] OK, good.
+[2133.520 --> 2134.040] There you go.
+[2134.040 --> 2136.640] There's one of us, one person in the room.
+[2136.640 --> 2139.880] t-distributed stochastic neighbor embedding.
+[2139.880 --> 2142.000] Which provides me no information whatsoever
+[2142.000 --> 2144.440] about what the thing actually does.
+[2144.440 --> 2147.480] All I remember from this is: don't use t-SNE.
+[2147.480 --> 2149.840] That's the rule I learned when I was exposed to t-SNE,
+[2149.840 --> 2152.520] because it's very unstable.
+[2152.520 --> 2155.400] All right, but that's what we used, because it's not the end result.
+[2155.400 --> 2157.440] This is just, we're just on the way
+[2157.440 --> 2158.920] to where we want to go.
+[2158.920 --> 2163.680] So anyway, the red places here are all representing visual stuff,
+[2163.680 --> 2165.480] and these are all in the visual system;
+[2165.480 --> 2170.000] this is retinotopic cortex here. The yellow stuff here is motor,
+[2170.000 --> 2172.000] and these are all, you know, all these motor variables
+[2172.000 --> 2173.760] are represented in the motor system.
+[2173.760 --> 2177.160] The navigation stuff, the past navigation, where you were,
+[2177.160 --> 2179.840] is represented in these purple patches,
+[2179.840 --> 2182.080] and those seem to be broadly distributed in the brain.
+[2182.080 --> 2184.840] And the future navigation is also
+[2184.840 --> 2187.000] represented in broadly distributed locations in the brain.
+[2187.000 --> 2190.200] So it seems like the navigational feature
+[2190.200 --> 2193.800] spaces are projecting onto the brain's subsystems writ large:
+[2193.800 --> 2196.920] prefrontal cortex, motor cortex, visual cortex,
+[2196.920 --> 2198.480] as we would expect, which is good.
+[2198.480 --> 2200.400] Because if this didn't work, you know,
+[2200.400 --> 2204.120] we would have to question our whole basis for doing this experiment.
+[2204.120 --> 2206.200] All right, but what you really want to do, you know,
+[2206.200 --> 2208.440] you don't really care about how these individual feature spaces
+[2208.440 --> 2209.040] are represented.
+[2209.040 --> 2211.920] What you want to know is: are there navigation networks in the brain?
+[2211.920 --> 2213.400] That's the real question we had here.
+[2213.400 --> 2215.280] So let's try to see if we can pull that out.
+[2215.280 --> 2217.520] To do this, and this is a really complicated slide
+[2217.520 --> 2220.680] that I'm not going to go into,
+[2220.680 --> 2222.760] two of my students, a student and a postdoc in the lab,
+[2222.760 --> 2224.560] Matteo's in the back,
+[2224.560 --> 2226.320] I don't see the other student,
+[2226.320 --> 2229.600] so Emily Meschke and Matteo Visconti
+[2229.600 --> 2233.200] both worked on this project to develop
+[2233.200 --> 2235.360] a new method called model connectivity.
+[2235.360 --> 2237.360] Connectivity is a word for correlation
+[2237.360 --> 2240.680] that is used in neuroscience, sadly.
+[2240.680 --> 2241.600] You guys have the same problem:
+[2241.600 --> 2242.400] it's like Granger causality;
+[2242.400 --> 2243.440] it has nothing to do with causality.
+[2243.440 --> 2246.520] So, you know, everybody's got their sins.
+[2246.520 --> 2248.320] Anyway, when you see model connectivity,
+[2248.320 --> 2249.760] think model correlation.
+[2249.760 --> 2251.880] All we're doing here is: every single voxel
+[2251.880 --> 2254.640] has a feature vector 2,500 long.
+[2254.640 --> 2256.640] And we're just going to basically take the angle between those
+[2256.640 --> 2259.640] vectors, or the correlation between those vectors,
+[2259.640 --> 2262.880] and use them in a cluster analysis to pull out networks.
+[2262.880 --> 2263.680] That's all we're doing.
+[2263.680 --> 2265.320] Pretty straightforward.
+[2265.320 --> 2266.040] OK.
+[2266.040 --> 2269.400] And then, of course, since we're using cluster analysis
+[2269.400 --> 2272.760] on the feature vectors, we're going to get a dendrogram.
+[2272.760 --> 2274.760] We're going to have different numbers of clusters
+[2274.760 --> 2276.360] that we could pick out of this.
+[2276.360 --> 2278.960] And we use cross validation across subjects
+[2278.960 --> 2281.400] to determine how many networks we can pull out of our data
+[2281.400 --> 2282.400] set.
+[2282.400 --> 2284.680] And that's going to be data limited.
+[2284.680 --> 2284.880] All right.
+[2284.880 --> 2287.360] So here's the number of clusters we're pulling out,
+[2287.360 --> 2289.840] and here's our held-out prediction.
+[2289.840 --> 2294.440] And you can see that the more clusters we pull out,
+[2294.440 --> 2295.920] the better our predictions are.
+[2295.920 --> 2298.520] But you can see that there's a knee here around 10 or 15
+[2298.520 --> 2299.440] networks.
+[2299.440 --> 2301.920] So because 10 or 15 networks is also
+[2301.920 --> 2304.080] a countable number that we can actually think about,
+[2304.080 --> 2306.760] that's probably where we're going to focus our attention here.
+[2306.760 --> 2311.320] So we're pulling 10 networks out of these 34 feature spaces,
+[2311.320 --> 2313.200] and now we can look at these networks.
+[2313.200 --> 2314.600] So this plot's a little complicated,
+[2314.600 --> 2316.520] but it should be straightforward what we're doing here,
+[2316.520 --> 2318.080] based on what I just said.
+[2318.080 --> 2320.640] We have here our 10 networks that we pulled out
+[2320.640 --> 2322.600] by cutting off our dendrogram.
+[2322.600 --> 2324.520] Now, remember, each one of these networks
+[2324.520 --> 2327.560] consists of some combination of these different feature
+[2327.560 --> 2329.880] spaces, these 34 different feature spaces.
+[2329.880 --> 2333.840] And so we can marginalize across the features in each feature
+[2333.840 --> 2338.280] space and use these circles to indicate
+[2338.280 --> 2342.480] the weight that that individual feature space has in each network.
+[2342.480 --> 2345.400] So you can see, for example, that network one
+[2345.400 --> 2348.520] has a large weight for this scene structure feature space
+[2348.520 --> 2351.920] and this attended visual semantics feature space.
+[2351.920 --> 2356.680] But it has a very low weight for this depth feature space.
+[2356.680 --> 2357.880] Excuse me, I don't think that's depth.
+[2357.880 --> 2361.800] That's actually retinotopic motion energy.
+[2361.800 --> 2365.840] So each one of these clusters has a different constellation
+[2365.840 --> 2368.960] of features that weight highly in that cluster.
+[2372.200 --> 2374.480] And rather than interrogate this map,
+[2374.480 --> 2377.480] it's easier to just project the clusters onto the brain
+[2377.480 --> 2378.640] and see what they do.
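A toy sketch of the model-connectivity clustering just described: correlate the voxels' fitted weight vectors, cluster them hierarchically, and cut the dendrogram at the chosen number of networks (the talk picks the number near the knee of the held-out prediction curve). Synthetic stand-in data with illustrative sizes:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
W = rng.standard_normal((2000, 250))  # one fitted weight vector per voxel

# "Model connectivity" = correlation between weight vectors, so cluster
# with correlation distance (1 - r) and average linkage.
Z = linkage(W, method="average", metric="correlation")

# Cut the dendrogram into k networks, here k = 10 as in the talk.
labels = fcluster(Z, t=10, criterion="maxclust")
print(np.bincount(labels)[1:])        # number of voxels in each network
```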
+[2378.640 --> 2380.600] So if you do this, you find out there's
+[2380.600 --> 2384.000] a low-level vision cluster where all the voxels
+[2384.000 --> 2386.680] are tuned for low-level visual features
+[2386.680 --> 2389.400] like motion energy, and those all end up
+[2389.400 --> 2391.680] being located in retinotopic visual cortex,
+[2391.680 --> 2394.680] where we know low-level features are represented.
+[2394.680 --> 2396.880] There's high-level vision, where the voxels
+[2396.880 --> 2399.720] (a voxel is a three-dimensional pixel,
+[2399.720 --> 2401.080] I didn't make that clear)
+[2401.080 --> 2404.640] represent the semantic category
+[2404.640 --> 2407.400] of the objects in the scene.
+[2407.400 --> 2411.360] And those semantically selective visual areas
+[2411.360 --> 2415.800] form a patchwork, a mosaic, that sits on the back
+[2415.800 --> 2417.880] of the brain and surrounds the retinotopic visual areas,
+[2417.880 --> 2419.320] the low-level visual areas.
+[2419.320 --> 2421.720] So that works just as it's supposed to.
+[2421.720 --> 2423.280] There's a visual attention network.
+[2423.280 --> 2425.880] This is loading in this thing called IPS.
+[2425.880 --> 2428.240] The IPS is the intraparietal sulcus.
+[2428.240 --> 2431.680] And it's a region of the brain that is heavily modulated
+[2431.680 --> 2435.560] by attention, because it's on the visual stream pathway
+[2435.560 --> 2437.600] that is involved with coordinate transformations
+[2437.600 --> 2439.000] between different coordinate systems.
+[2439.000 --> 2440.480] You can imagine that's going to be very important
+[2440.480 --> 2442.800] in navigation.
+[2442.800 --> 2444.800] Then there are several motor networks.
+[2444.800 --> 2446.720] There's a foot network that loads highly
+[2446.720 --> 2449.280] in the foot representation of your somatosensory
+[2449.280 --> 2450.560] and motor system.
+[2450.560 --> 2453.120] There's a hand network that loads highly
+[2453.120 --> 2455.160] in the hand representation of your motor
+[2455.160 --> 2456.600] and somatosensory system.
+[2456.600 --> 2459.080] And there's a supplementary motor network, which
+[2459.080 --> 2462.120] is a diffuse network distributed
+[2462.120 --> 2464.760] in these secondary motor areas.
+[2464.760 --> 2469.120] Remember that, to first order (it's not strictly true,
+[2469.120 --> 2471.160] but just when you're broadly thinking about it),
+[2471.160 --> 2473.480] the visual system is organized kind of like
+[2473.480 --> 2476.680] an AlexNet convolutional network, with successively deeper
+[2476.680 --> 2479.800] layers representing more complicated and abstract things.
+[2479.800 --> 2481.320] And the motor system is flipped.
+[2481.320 --> 2484.480] So the output of motor cortex is going down
+[2484.480 --> 2485.960] to the spinal cord nuclei.
+[2485.960 --> 2488.160] That's a pretty low-level motor code.
+[2488.160 --> 2491.080] But higher levels of the motor cortex,
+[2491.080 --> 2493.520] the supplementary motor areas, are representing more abstract
+[2493.520 --> 2494.400] motor variables.
+[2495.120 --> 2499.320] OK, so this is perception and this is motor.
+[2499.320 --> 2501.120] We'd expect to see that, fine.
+[2501.120 --> 2505.440] Again, this just shows us that what we were doing was not crazy.
+[2505.440 --> 2507.560] But what we want, did you have a question?
+[2507.560 --> 2509.800] But what we want is to pull out the navigation networks.
+[2509.800 --> 2511.240] That was the interesting thing.
+[2511.240 --> 2513.000] So in this 10-network solution, there are
+[2513.000 --> 2516.200] three navigation networks that we can pull out.
+[2516.200 --> 2519.440] And why do we say they're navigation networks?
+[2519.440 --> 2521.160] Because they load very, very highly
+[2521.160 --> 2524.440] on these navigation-related variables down there.
+[2524.440 --> 2527.640] And so now we can try to inspect each of those.
+[2527.640 --> 2530.520] That's going to be more difficult than you think.
+[2530.520 --> 2533.440] So here are the three navigation networks.
+[2533.440 --> 2536.440] And if you look at the features that these networks weigh
+[2536.440 --> 2539.560] heavily on, you'll see that one of these networks
+[2539.560 --> 2541.960] is predominantly visual.
+[2541.960 --> 2544.160] One of these networks is predominantly motor.
+[2544.160 --> 2547.840] And one of these networks is distributed across the navigation
+[2547.840 --> 2548.880] features.
+[2548.880 --> 2552.360] So this suggests that these navigation networks are
+[2552.360 --> 2555.720] divided into visually biased navigation networks, motor-
+[2555.720 --> 2560.880] biased navigation networks, and more abstract, purely navigational
+[2560.880 --> 2561.760] networks.
+[2561.760 --> 2564.360] I should mention, nobody asked me, and I
+[2564.360 --> 2566.640] was remiss not to have mentioned this:
+[2566.640 --> 2569.560] when we fit these 34 models, we're
+[2569.560 --> 2572.480] fitting all 34 models simultaneously.
+[2572.480 --> 2575.440] So since they're all fit simultaneously,
+[2575.440 --> 2577.320] variance is attributed to each network
+[2577.320 --> 2579.680] according to where it needs to be, and what the regularization
+[2579.680 --> 2581.520] parameter was for that.
+[2581.520 --> 2582.840] It's attributed to each feature space
+[2582.840 --> 2584.800] according to what the regularization parameter is
+[2584.800 --> 2586.000] for that feature space.
+[2586.000 --> 2589.040] And then when we do this, although I'm only
+[2589.040 --> 2594.440] pulling out three networks, remember all those other networks:
+[2594.440 --> 2596.680] I'm only pulling out three networks here,
+[2596.680 --> 2598.880] but all the other networks, the vision network and the motor
+[2598.880 --> 2601.200] network, they're still in the model.
+[2601.200 --> 2603.200] I'm just taking a slice out of the model here.
+[2604.200 --> 2610.200] OK, so we have these three kinds of navigation networks.
+[2610.200 --> 2612.760] And if you look at where these networks are represented,
+[2612.760 --> 2615.960] the motor-biased network is represented more
+[2615.960 --> 2617.520] by the motor system.
+[2617.520 --> 2619.440] Where are we here?
+[2619.440 --> 2624.520] Oh, I've mislabeled this, and I
+[2624.520 --> 2626.760] can no longer remember what the labels are.
+[2626.760 --> 2628.080] And I can't go back.
+[2628.080 --> 2628.920] There we go.
+[2628.920 --> 2632.760] So abstract is blue, motor is green, and sensory is red.
+[2632.760 --> 2637.880] So the red network is the sensory network, the green is motor,
+[2637.880 --> 2640.480] and the blue is abstract.
+[2640.480 --> 2645.160] So these are kind of lining up the way you'd expect.
+[2645.160 --> 2647.360] Now, are these three discrete networks?
+[2647.360 --> 2648.680] No, these are gradients.
+[2648.680 --> 2651.640] So there's essentially one navigation network.
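Since fitting all 34 feature spaces simultaneously, each with its own regularization, is central to how variance gets attributed, here is a minimal sketch of that kind of banded ridge regression. It relies on the standard trick that rescaling each feature band by 1/sqrt(alpha_band) and fitting a single ridge with unit penalty is equivalent to giving each band its own penalty. The three toy bands and all numbers are invented, and the real analysis also cross-validates the per-band alphas rather than fixing them.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_time = 2000
    # Three stand-in feature spaces (the talk uses 34), with different widths.
    bands = [rng.standard_normal((n_time, d)) for d in (40, 25, 10)]
    # Toy voxel response driven by the first band only, plus noise.
    y = bands[0] @ rng.standard_normal(40) + 0.1 * rng.standard_normal(n_time)

    alphas = np.array([1.0, 10.0, 100.0])    # one penalty per feature space
    # Scaling each band by 1/sqrt(alpha) makes one unit-alpha ridge
    # equivalent to a per-band penalty (banded ridge).
    X = np.hstack([b / np.sqrt(a) for b, a in zip(bands, alphas)])
    model = Ridge(alpha=1.0, fit_intercept=False).fit(X, y)

    # Undo the scaling so the weights live in the original feature spaces;
    # variance is then attributed band by band, as described in the talk.
    splits = np.cumsum([b.shape[1] for b in bands])[:-1]
    for i, wb in enumerate(np.split(model.coef_, splits)):
        print(f"band {i}: ||w|| = {np.linalg.norm(wb / np.sqrt(alphas[i])):.2f}")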
+[2651.640 --> 2654.240] But it's distributed, remember, in this physical substrate.
+[2654.240 --> 2657.680] And certain locations in this physical substrate that
+[2657.680 --> 2660.040] contain this navigation network are more heavily
+[2660.040 --> 2661.280] weighted toward motor.
+[2661.280 --> 2664.440] Certain locations are more heavily weighted toward vision.
+[2664.440 --> 2665.960] And certain locations are more heavily
+[2665.960 --> 2668.200] weighted toward abstract navigation.
+[2668.200 --> 2672.240] But these are gradients, not discrete networks.
+[2672.240 --> 2675.800] OK, so that's all kind of abstract,
+[2675.800 --> 2677.040] and that's still a work in progress.
+[2677.040 --> 2680.280] It's notoriously hard to interpret these complicated kinds
+[2680.280 --> 2682.200] of networks, not only in this experiment,
+[2682.200 --> 2683.840] but in all the navigation experiments
+[2683.840 --> 2685.480] people do in rodents.
+[2685.480 --> 2687.080] It's fairly difficult to figure out
+[2687.080 --> 2689.680] what these very abstract brain areas are doing.
+[2689.680 --> 2691.880] And for those of you who have tried to interpret a deep neural
+[2691.880 --> 2696.400] network in engineering, you know that you have that same problem.
+[2696.400 --> 2699.400] Interpreting these networks is notoriously difficult.
+[2699.400 --> 2701.160] But there are some simpler things we can do.
+[2701.160 --> 2703.240] So let's look at attention.
+[2703.240 --> 2707.720] Attention is a huge variable in human thought,
+[2707.720 --> 2709.200] in human brain function.
+[2709.200 --> 2713.400] And I think the reason for this, the reason any psychologist
+[2713.400 --> 2716.040] will tell you, is that the brain has very limited processing
+[2716.040 --> 2716.760] power.
+[2716.760 --> 2720.960] And so what happens is the brain networks
+[2720.960 --> 2727.680] are reallocated, using attention, to whatever task is currently being demanded.
+[2727.680 --> 2730.440] And there's a lot of data to show this, that
+[2730.440 --> 2733.920] has been collected both in neurophysiology in animals and also in MRI.
+[2733.920 --> 2737.000] So this is just a simple MRI experiment.
+[2737.000 --> 2740.320] We have people watching movies in this experiment.
+[2740.320 --> 2741.680] This is an old experiment.
+[2741.680 --> 2744.720] And in one condition, we have them attend to humans.
+[2744.720 --> 2746.800] We just say: whenever you see a human, hit the button.
+[2746.800 --> 2748.920] In another condition, we have them attend to vehicles.
+[2748.920 --> 2750.440] Whenever you see a vehicle, you hit the button.
+[2750.440 --> 2752.360] They're just watching naturalistic videos.
+[2752.360 --> 2754.760] And what you see is, when they're attending to humans,
+[2754.760 --> 2758.240] and human in this map is mostly green and yellow,
+[2758.240 --> 2761.440] you can see that the map is largely biased toward humans.
+[2761.440 --> 2763.680] And when they're attending to vehicles, which is purple in this map,
+[2763.680 --> 2766.680] you see that the map becomes much more purple.
+[2766.680 --> 2769.400] And when they're passively viewing, it's somewhere in between.
+[2769.400 --> 2771.800] So what ends up happening is, when you attend,
+[2771.800 --> 2773.440] when you're going out in your daily life and you
+[2773.440 --> 2776.480] attend to that person walking up the sidewalk,
+[2776.480 --> 2781.400] then your brain tries to become a giant person evaluator or person detector.
+[2781.400 --> 2783.840] And it can't do this perfectly.
+[2783.840 --> 2788.240] It's not like every neuron in your brain becomes a person detector.
+[2788.240 --> 2790.320] Neurons in the peripheral visual system,
+[2790.320 --> 2793.600] and those at the periphery of the motor system,
+[2793.600 --> 2795.320] they don't change their tuning much.
+[2795.320 --> 2799.080] But neurons in prefrontal cortex, which is a very abstract part of the brain,
+[2799.080 --> 2802.080] will completely change their tuning depending on the task.
+[2802.080 --> 2807.400] And this may seem weird to those of you who have worked with neural networks.
+[2807.400 --> 2811.280] But if you kind of think about neural networks a bit differently, it makes sense.
+[2811.280 --> 2815.800] When you're training a neural network, say you decided in a class,
+[2815.800 --> 2817.200] I've got AlexNet,
+[2817.200 --> 2822.040] I'm just going to train AlexNet to do discrimination between dogs and cats.
+[2822.040 --> 2825.520] While you're training AlexNet, the weights of that network
+[2825.520 --> 2829.320] are constantly being updated on every single iteration through that network.
+[2829.320 --> 2831.320] That's how that network learns.
+[2831.320 --> 2835.760] So the way to think about attention in brain networks is that it's a very short-term
+[2835.760 --> 2838.920] updating of the weights through the whole system.
+[2838.920 --> 2841.320] You can think of it as short-term learning.
+[2841.320 --> 2845.000] This is because the human brain, unlike artificial neural networks,
+[2845.000 --> 2848.920] where we usually train and then deploy, the human brain is constantly learning,
+[2848.920 --> 2850.920] all the time, at all timescales.
+[2850.920 --> 2853.680] And attention is the very shortest timescale of that.
+[2853.680 --> 2858.920] So attention is the way that your brain tries to update weights to solve a specific problem,
+[2858.920 --> 2865.280] by essentially reallocating, re-engineering,
+[2865.280 --> 2872.200] the information flow through the network to make apparent, or to make explicit,
+[2872.200 --> 2877.640] the representation of the information that's most relevant to the task.
+[2877.640 --> 2878.960] So we know this happens in humans.
+[2878.960 --> 2880.920] Does it happen during driving?
+[2880.920 --> 2885.320] So here, again, this is this gaze semantics model.
+[2885.320 --> 2887.560] These are the 16 categories of things you could look at.
+[2887.560 --> 2892.000] And these are the weights of the features for those 16 categories during the active navigation
+[2892.000 --> 2893.000] task.
+[2893.000 --> 2898.840] So you can see that when you're actively navigating in the world, there is a large representation
+[2898.840 --> 2901.960] of buildings and fields and vehicles.
+[2901.960 --> 2906.520] Pedestrians are represented, and also traffic signs seem to be represented,
+[2906.520 --> 2910.120] because they're, of course, very important for this task.
+[2910.120 --> 2915.600] If we compare this map to the map we get when you simply passively watch random movies,
+[2915.600 --> 2919.000] random videos, you see that this map is very, very different.
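Here is a toy illustration (invented numbers, not a brain model) of the claim just made: a small, feature-specific gain change at an early stage of a fixed network percolates up and changes the representation at a higher, pooled stage, not just its amplitude.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(50)             # fixed "stimulus"
    W1 = rng.standard_normal((100, 50))     # early weights, never retrained
    W2 = rng.standard_normal((20, 100))     # higher-level pooling weights, fixed

    def high_level_response(gain):
        h = np.maximum(0.0, W1 @ x) * gain  # gain = volume knob on early units
        return W2 @ h                       # pooled, more abstract representation

    baseline_gain = np.ones(100)
    attend_gain = baseline_gain.copy()
    attend_gain[:30] *= 1.3                 # modest boost to units tuned to the
                                            # attended category (assumed subset)

    r0 = high_level_response(baseline_gain)
    r1 = high_level_response(attend_gain)

    # Because the gain change is non-uniform, the high-level pattern changes
    # direction, not just scale (cosine similarity drops below 1).
    cos = r0 @ r1 / (np.linalg.norm(r0) * np.linalg.norm(r1))
    print(f"cosine between unattended and attended high-level patterns: {cos:.3f}")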
+[2919.000 --> 2922.600] So if you're not doing an active navigation task, you're just looking at random videos
+[2922.600 --> 2927.640] of people and cars and buildings, you see that the representations are predominantly oriented
+[2927.640 --> 2932.560] toward, essentially, people, and not so much these other kinds of factors.
+[2932.560 --> 2935.200] So this is an attention difference.
+[2935.200 --> 2938.440] It's not shown in this slide, but we can show that this isn't due to just the difference
+[2938.440 --> 2939.840] in stimulus statistics.
+[2939.840 --> 2941.680] This is actually due to attention.
+[2941.680 --> 2946.480] It reorients the representation from a passive viewing situation, where your brain is predominantly
+[2946.480 --> 2951.120] representing people, those are the most important thing, to an active representation system
+[2951.120 --> 2954.120] where you're representing the navigational variables.
+[2954.120 --> 2955.600] And you can see, this is the difference map.
+[2955.600 --> 2961.200] You can see there's this huge bias toward navigation-related stuff being represented
+[2961.200 --> 2963.240] when you're doing navigation.
+[2963.240 --> 2970.200] Now, if you go through the MRI literature, you'll find that there are kind of two subsets
+[2970.200 --> 2973.080] of networks in the visual system.
+[2973.080 --> 2978.240] One is a person-oriented, animate subset of networks.
+[2978.240 --> 2983.520] These consist of brain areas called the parahippocampal, excuse me, the fusiform
+[2983.520 --> 2988.600] face area, the extrastriate body area, and several other parts of the visual system
+[2988.600 --> 2991.200] that seem to respond to animate stuff.
+[2991.200 --> 2994.520] And then there's a separate network for inanimate stuff.
+[2994.520 --> 2999.600] This consists of areas called the parahippocampal place area, the occipital place area, and the
+[2999.600 --> 3001.240] retrosplenial cortex.
+[3001.240 --> 3004.000] Was there a question, or no, just stretching?
+[3004.000 --> 3005.840] Okay, good.
+[3005.840 --> 3012.680] So there are different subnetworks for animate and inanimate objects in the brain.
+[3012.680 --> 3020.360] So the cool thing, right here for example, we're showing what happens when you are
+[3020.360 --> 3027.820] passively viewing vehicles and you're not actually actively engaged in
+[3027.820 --> 3028.820] active navigation.
+[3028.820 --> 3031.580] And you can see that vehicles are represented, for example, in the parahippocampal place
+[3031.580 --> 3036.220] area, this occipital place area, and up here in the retrosplenial cortex.
+[3036.220 --> 3041.580] Now, the weird thing and the cool thing is, when you actively engage in a
+[3041.580 --> 3046.660] navigation task, vehicles now become very, very important, because you can't run into them
+[3046.660 --> 3048.140] and you have to avoid them.
+[3048.140 --> 3053.580] So they end up being represented as animate objects.
+[3053.580 --> 3057.220] They get represented in the fusiform face area, the occipital face area, and they're
+[3057.220 --> 3060.180] no longer represented in the object network.
+[3060.180 --> 3065.820] So the whole system completely reorients the way it views this class of inanimate objects
+[3065.820 --> 3066.820] based on the task.
+[3066.820 --> 3068.740] You can see the difference here.
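The "difference map" logic is simple enough to sketch: for each voxel, subtract the fitted weight for a given semantic category under passive viewing from its weight under active driving, and paint the sign of the difference on the cortex. Everything below is invented stand-in data; the category index is purely hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    n_voxels, n_categories = 10000, 16       # 16 gaze-semantics categories
    w_active  = rng.standard_normal((n_voxels, n_categories))  # driving model
    w_passive = rng.standard_normal((n_voxels, n_categories))  # passive model

    VEHICLE = 3                              # hypothetical category index
    diff = w_active[:, VEHICLE] - w_passive[:, VEHICLE]

    # Positive voxels represent vehicles more during driving (red in the talk's
    # map), negative ones more during passive viewing (blue); near-zero voxels
    # would be the white overlap that the real map almost entirely lacks.
    print("driving-biased voxels:", int((diff > 0).sum()),
          "passive-biased voxels:", int((diff < 0).sum()))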
+[3068.740 --> 3074.860] These blue areas are where vehicles are represented during passive
+[3074.860 --> 3077.900] viewing, and the red areas are where they're represented during driving.
+[3077.900 --> 3080.980] You know, there's no white, which indicates that this is a complete shift.
+[3080.980 --> 3084.860] This is actually the biggest attention effect I've ever seen.
+[3084.860 --> 3089.340] It's a complete reorientation of the system according to the task demands.
+[3089.340 --> 3093.140] When you're driving, you're treating vehicles basically like other people, which makes
+[3093.140 --> 3096.540] sense, because when you're driving, your concern about this other vehicle is: what
+[3096.540 --> 3098.660] is the person driving the vehicle going to do?
+[3098.660 --> 3104.820] So you engage in this theory-of-mind behavior that is all part of the social negotiation
+[3104.820 --> 3107.900] of active navigation.
+[3107.900 --> 3109.780] And we're very interested in this topic.
+[3109.780 --> 3114.380] So we've looked a lot at, sorry, we're beginning to look a lot at multi-agent interactions,
+[3114.380 --> 3116.780] and we're doing this with Claire Tomlin's lab.
+[3116.780 --> 3118.700] So I see Chris in the back of the room there.
+[3118.700 --> 3119.700] Yes, question.
+[3119.700 --> 3122.100] Oh, you can just yell at me.
+[3122.100 --> 3126.300] Do you know the timescale of that transition from one mode to the other?
+[3126.300 --> 3131.500] Oh, this is going to be a very quick transition, on the order of hundreds of milliseconds
+[3131.500 --> 3132.500] at the most.
+[3132.500 --> 3133.500] Yeah.
+[3133.500 --> 3139.500] So we don't have that directly, but that's based on other attention data in the literature.
+[3139.500 --> 3140.500] Yeah.
+[3140.500 --> 3143.740] Would you imagine you'd find the same thing for, say, cartoons?
+[3143.740 --> 3144.740] Yeah.
+[3144.740 --> 3145.740] Yeah.
+[3145.740 --> 3150.260] If somebody's looking at a robot and they're representing it as an agent
+[3150.260 --> 3152.580] that they interact with, it becomes represented like people,
+[3152.580 --> 3153.580] I would expect.
+[3153.580 --> 3155.340] Even if there's a machine behind it?
+[3155.340 --> 3156.340] Right.
+[3156.340 --> 3157.340] Probably.
+[3157.340 --> 3159.740] We haven't done that experiment, but that would be my guess.
+[3159.740 --> 3160.740] All right.
+[3160.740 --> 3163.420] So this is being done with a group in Claire Tomlin's lab.
+[3163.420 --> 3166.220] Claire, as you know, her group studies active navigation.
+[3166.220 --> 3169.020] So this is just preliminary data.
+[3169.020 --> 3170.420] I just want to mention where we're going.
+[3170.420 --> 3174.380] Everything I told you about is, like, static features that we regressed onto the
+[3174.380 --> 3175.380] brain.
+[3175.380 --> 3179.660] It's not really a particularly interesting way to model the brain.
+[3179.660 --> 3183.420] What we would like is something that's more dynamic, that has a plant and, you know,
+[3183.420 --> 3187.620] a policy, and something that feels more like a cognitive process.
+[3187.620 --> 3193.580] And so Claire's group has been implementing a model predictive control framework to try
+[3193.580 --> 3200.660] to see if there's a part of the brain that's particularly involved in negotiating vehicle-
+[3200.660 --> 3203.220] vehicle interactions during driving.
+[3203.220 --> 3207.780] And so this is a standard model predictive control loop, where the driver is constantly
+[3207.780 --> 3213.380] trying to estimate what the next car is going to do and then adjust their behavior for
+[3213.380 --> 3214.380] that.
+[3214.380 --> 3218.180] So we have a model predictive control set of equations.
+[3218.180 --> 3224.260] We then use these parameters as features that we fit to the brain.
+[3224.260 --> 3228.900] So the first stage of that is basically optimizing this network so that it simulates the behavior
+[3228.900 --> 3230.700] of the actual car in the experiment.
+[3230.700 --> 3232.380] We do this just from the stimulus.
+[3232.380 --> 3237.380] So basically, we set these model predictive control parameters so that the simulated behavior
+[3237.380 --> 3242.380] matches the behavior of the actual vehicle the person was driving in the experiment.
+[3242.380 --> 3243.700] Now we have our features.
+[3243.700 --> 3248.020] It's basically the behavior of the vehicle projected into this model predictive control framework.
+[3248.020 --> 3252.660] And now we can use those parameters to regress onto the brain, to discover where in the brain
+[3252.660 --> 3255.180] these MPC features are represented.
+[3255.180 --> 3258.500] And this model is fit along with all the other 34 models.
+[3258.500 --> 3262.540] So what we're discovering here is unique variance that is attributed to this model predictive
+[3262.540 --> 3266.500] control framework and not to any of the other highly correlated variables that we looked
+[3266.500 --> 3267.500] at.
+[3267.500 --> 3271.500] And you can see that there are a lot of locations in the brain that this MPC model fits
+[3271.500 --> 3273.100] well.
+[3273.100 --> 3277.500] Up here in the motor system, this is probably variance shared with the controls.
+[3277.500 --> 3279.500] But there are locations that have unique variance, for example.
+[3279.500 --> 3281.500] This is Broca's area, which is actually a speech area.
+[3281.500 --> 3285.500] It's a classic speech area that goes back 150 years in neuroscience and psychology.
+[3285.500 --> 3290.500] And on both sides of it, you see these little punctate bright spots that are predicted by the
+[3290.500 --> 3295.500] model predictive control model, but by no other model that we have fit to these data.
+[3295.500 --> 3300.500] So we're very excited about this, because this is a much more interesting way to model cognitive
+[3300.500 --> 3302.500] variables than we've been using.
+[3302.500 --> 3305.500] And we think it has good legs for the future.
+[3305.500 --> 3306.500] All right.
+[3306.500 --> 3309.500] So in summary, I told you that active navigation is supported by distributed
+[3309.500 --> 3310.500] networks.
+[3310.500 --> 3317.500] Many findings in the rodent navigation literature end up being validated in this experiment.
+[3317.500 --> 3322.500] There are a lot of known navigation-related regions of interest, like the parahippocampal place
+[3322.500 --> 3328.500] area, that have been known in this literature.
+[3328.500 --> 3331.500] But now we can see,
+[3331.500 --> 3336.500] in this more sensitive data set, that it actually consists of several
+[3336.500 --> 3340.500] substructures or sub-areas.
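Stepping back to the model predictive control pipeline described above: here is a highly simplified sketch of the idea. This is not the Tomlin-lab code; it is a 1-D toy where an invented proportional controller with a short look-ahead stands in for the real MPC optimization. The point is the two-stage logic: fit the controller parameters so a simulated vehicle reproduces the observed trajectory, then treat those fitted parameters (and the simulated internal states) as features for the brain encoding model.

    import numpy as np
    from scipy.optimize import minimize

    dt, T = 0.1, 50
    observed = np.cumsum(np.full(T, 1.0)) * dt       # observed vehicle positions

    def simulate(params):
        k_p, k_v = params                            # controller gains (the "MPC" parameters)
        pos, vel, traj = 0.0, 0.0, []
        for t in range(T):
            target = observed[min(t + 5, T - 1)]     # short receding horizon
            accel = k_p * (target - pos) - k_v * vel # toy controller
            vel += accel * dt
            pos += vel * dt
            traj.append(pos)
        return np.array(traj)

    def loss(params):
        # Stage 1: make the simulation match the actual driving behavior.
        return np.mean((simulate(params) - observed) ** 2)

    fit = minimize(loss, x0=[1.0, 1.0], method='Nelder-Mead')
    # Stage 2 (not shown): regress these fitted features onto brain activity,
    # jointly with the other feature spaces, to find unique MPC variance.
    print("fitted controller gains:", fit.x)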
+[3340.500 --> 3346.500] We see that navigation leads to widespread shifts in semantic representation due to attention,
+[3346.500 --> 3350.500] which we would have expected based on other attention experiments, but this is the first time it's
+[3350.500 --> 3352.500] been shown in a naturalistic task.
+[3352.500 --> 3357.500] And there are probably brain representations mediating multi-agent interactions, using the
+[3357.500 --> 3360.500] model predictive control framework.
+[3360.500 --> 3362.500] But that's really preliminary data.
+[3362.500 --> 3368.500] The next person you should listen to is Chris, who hopefully next year will be able to talk about this in more detail.
+[3368.500 --> 3374.500] So for future directions, we are working hard to obtain a more fine-grained understanding of exactly
+[3374.500 --> 3377.500] what is being represented in these navigation networks.
+[3377.500 --> 3378.500] It's a very hard problem.
+[3378.500 --> 3384.500] One promising future direction is to do exactly what Chris is doing, and we're working on that.
+[3384.500 --> 3388.500] We're also going to look at navigation in open areas, like open fields.
+[3388.500 --> 3392.500] And the reason for that is that a large fraction of the rodent literature on navigation, which is where the best
+[3392.500 --> 3398.500] data come from, is all done in open arenas, not in a maze.
+[3398.500 --> 3402.500] But I do want to mention that this approach can be used for any video game environment.
+[3402.500 --> 3407.500] In fact, originally, the experiments that we started doing this with 10 years ago used Counter-Strike.
+[3407.500 --> 3412.500] And personally, I've always wanted to do this with Grand Theft Auto, because it just seems like that's the most open-world game you can have.
+[3412.500 --> 3417.500] So this is a generalizable framework, and all our tools are open source.
+[3417.500 --> 3419.500] So that's about it.
+[3419.500 --> 3425.500] I'm not going to talk about medical things because we don't have time, so I'm just going to skip to the end.
+[3425.500 --> 3427.500] But that's it.
+[3427.500 --> 3428.500] Thanks very much for your time.
+[3429.500 --> 3430.500] Wow.
+[3430.500 --> 3440.500] Questions. I'm going to go to students first.
+[3440.500 --> 3443.500] I always like to go to, here we go.
+[3443.500 --> 3445.500] This is our microphone.
+[3445.500 --> 3446.500] Wow.
+[3446.500 --> 3456.500] Wait, because the driver is in a simulation and not the real world, is there any possibility that the data are different than if the driver was actually driving in a real car?
+[3456.500 --> 3457.500] Totally.
+[3457.500 --> 3462.500] You should think about the difference between a controlled experiment and the real world as a continuum.
+[3462.500 --> 3470.500] And we've moved as far down that continuum as we can in MRI, but there are things left.
+[3470.500 --> 3474.500] Unreal Engine doesn't look like the real world.
+[3474.500 --> 3476.500] There's no vestibular input.
+[3476.500 --> 3481.500] So when you're moving around the world, you're constantly getting vestibular input about your acceleration and your orientation.
+[3481.500 --> 3482.500] We have none of that.
+[3482.500 --> 3489.500] In fact, the person is lying down on their back, which is completely different from driving, unless you're really one of those relaxed drivers.
+[3489.500 --> 3492.500] So there are going to be differences, right?
+[3492.500 --> 3496.500] And we don't know what they are, and they're going to be very hard to sort out.
+[3496.500 --> 3503.500] Because, you know, I could collect brain data while people are driving in a real car, but to do that I would have to use EEG.
+[3503.500 --> 3507.500] And EEG is a really low-information method.
+[3507.500 --> 3514.500] There are very, very few bits of information coming through EEG, so you're probably not going to be able to draw conclusions about how those data relate to these data.
+[3514.500 --> 3516.500] Back here.
+[3516.500 --> 3517.500] Yeah.
+[3517.500 --> 3518.500] You're up.
+[3518.500 --> 3520.500] Talk to the box.
+[3520.500 --> 3521.500] Talk to the box.
+[3521.500 --> 3530.500] So you mentioned at the beginning that artificial neural networks, and particularly transformer networks, have nothing to do with the brain.
+[3530.500 --> 3532.500] Well, yeah, but that was a Jack statement.
+[3532.500 --> 3533.500] Nothing.
+[3533.500 --> 3538.500] I was just struck by your description of how attention works in the brain.
+[3538.500 --> 3544.500] It sounded remarkably similar to how the attention mechanism works in transformers.
+[3544.500 --> 3547.500] That's what the transformer people would like you to think.
+[3547.500 --> 3548.500] Well, that.
+[3548.500 --> 3550.500] Transformer.
+[3550.500 --> 3556.500] You know, it was a classic Jack kind of overstatement, a generalization for rhetorical purposes.
+[3556.500 --> 3564.500] With transformer networks, there's been a bit of recent work trying to understand the relationship between transformer attention and attention attention.
+[3564.500 --> 3568.500] And transformer attention does seem to be implementing some sort of grouping process.
+[3568.500 --> 3577.500] And grouping processes are actually the purpose of attention in brains, right?
+[3577.500 --> 3583.500] So all of intermediate vision in human, in mammalian brains, is involved in segmentation and grouping:
+[3583.500 --> 3587.500] grouping the pieces together that need to be grouped, and segmenting figure from ground.
+[3587.500 --> 3589.500] That's all intermediate vision.
+[3589.500 --> 3592.500] And that's clearly a very attentionally driven process.
+[3592.500 --> 3596.500] So at that level, they are related.
+[3596.500 --> 3606.500] But attention, I think, you know, again, if you think about attention as the learning component of training a network, I think they're very, very similar.
+[3606.500 --> 3611.500] Because you can imagine, in a system like the human brain, if you want to implement attention:
+[3611.500 --> 3633.500] if you just change the gain, just change the gain of the weights at a peripheral level, say in primary visual cortex, then as those small gain changes percolate up the system, and you pool at successively higher levels of processing, what's going to happen is these small gain changes, which are just like turning the volume control on different neurons, are going to lead to representation changes at the higher level.
+[3633.500 --> 3641.500] And to the extent that attention in artificial neural networks implements that same kind of thing, then yeah, they're analogous.
+[3641.500 --> 3645.500] Thank you.
+[3645.500 --> 3649.500] I think the box is going to go over there. You can yell at me right now.
+[3649.500 --> 3651.500] You want to just yell? Then I'll repeat it.
+[3651.500 --> 3654.500] No, no, no, we've got the microphone. It's being recorded.
+[3654.500 --> 3657.500] I'm not taking off. I'm not going to take over your job, Jeff.
+[3657.500 --> 3666.500] Okay.
When you mention attention changes weights, you meant, like, attention changes synaptic weights.
+[3666.500 --> 3667.500] Ah.
+[3667.500 --> 3670.500] And if that's the case, where's the evidence for it?
+[3670.500 --> 3674.500] Yeah, nobody knows. Nobody knows what attention does or how it works.
+[3674.500 --> 3679.500] So there are multiple theories for how attention works in the brain.
+[3679.500 --> 3684.500] One is that it somehow or other changes the synaptic efficacy in neurons.
+[3684.500 --> 3698.500] Another is that, essentially, there's a whole set of modulatory channels that come in and essentially multiply fixed weights with variable weights and change the computation that way.
+[3698.500 --> 3703.500] Nobody, nobody knows. So when I said attention changes the weights, I meant purely in this model space.
+[3703.500 --> 3705.500] Oh, okay. Yeah.
+[3706.500 --> 3715.500] So you mentioned that, well, we lack vestibular input with the current experiment.
+[3715.500 --> 3716.500] Yeah.
+[3716.500 --> 3722.500] Would EEG over a long duration of time not suffice? How does that work?
+[3722.500 --> 3726.500] So, I like long durations of time, right?
+[3726.500 --> 3733.500] If you have a lousy method of measurement, kind of the best thing you can do is collect a really large data set.
+[3733.500 --> 3738.500] So EEG over a longer period of time would be way better than a short EEG experiment.
+[3738.500 --> 3744.500] But you're still always going to be limited, because, remember, fMRI is a volumetric measure.
+[3744.500 --> 3750.500] It's basically measuring the bulk tissue.
+[3750.500 --> 3761.500] And the bulk tissue has been spatially encoded by applying these gradients that allow you to use the Fourier transform to infer the spatial position of the different signals.
+[3762.500 --> 3768.500] So MRI is a two-way street. You put in a coded signal.
+[3768.500 --> 3771.500] And the coded signal is multiplexed with what's already going on in the brain.
+[3771.500 --> 3775.500] And then you can decode the signal and recover a lot of information.
+[3775.500 --> 3779.500] EEG is a one-way street. It's purely passive. You're not putting anything in.
+[3779.500 --> 3782.500] You're purely measuring things. So there's no coding that goes on.
+[3782.500 --> 3788.500] And EEG is essentially a two-dimensional sheet overlying a three-dimensional volume.
+[3788.500 --> 3795.500] So not only do you not code anything, but you now have this problem of making surface measurements of a volume.
+[3795.500 --> 3799.500] And the skull acts like a big low-pass filter and filters out most of the EEG signals.
+[3799.500 --> 3803.500] So EEG has loss and loss and loss at every level.
+[3803.500 --> 3808.500] And the bits of information per unit time that you get from EEG are vanishingly small.
+[3808.500 --> 3817.500] Pretty much the main thing you see in EEG signals is giant sets of these brain networks being switched in and out as the tasks change.
+[3818.500 --> 3823.500] It's really exciting to see more naturalistic behaviors being brought into the scanner.
+[3823.500 --> 3830.500] I cannot imagine the kinds of engineering problems that you all had to solve to make that work, with motion, etc.
+[3830.500 --> 3837.500] So it's very nice to see that kind of work, and really nice to see how task modulates representations in only some ways.
+[3837.500 --> 3841.500] That's so positive. I feel like the next thing is going to be really horrible.
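The point above about gradients and the Fourier transform is worth a tiny demonstration. In an idealized, noise-free 1-D setting, the frequency-encoded signal the scanner records is the Fourier transform of the object, so an inverse FFT recovers where the signal came from. This is a cartoon of MRI spatial encoding, not a full reconstruction pipeline (no relaxation, no noise, no multi-dimensional k-space trajectory).

    import numpy as np

    n = 128
    rho = np.zeros(n)
    rho[40:60] = 1.0          # toy 1-D "spin density" (the object)
    rho[90:95] = 0.5

    k_signal = np.fft.fft(rho)         # what the scanner measures (k-space)
    recovered = np.fft.ifft(k_signal)  # image reconstruction

    # Position information is fully recovered in this idealized case.
    print(np.allclose(recovered.real, rho))  # True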
+[3841.500 --> 3842.500] I mean, you're not wrong.
+[3843.500 --> 3851.500] Well, the point is, I just want to ask you to kind of follow through on the promise in the title.
+[3851.500 --> 3858.500] Reverse engineering. I haven't seen anything that would lead me to believe that you would be able to reverse engineer anything with the system from the data you showed today.
+[3858.500 --> 3863.500] I love it. I do want to point out, it's TOWARD reverse engineering.
+[3863.500 --> 3866.500] So, specifically.
+[3867.500 --> 3872.500] So all I have to do is just make sure the vector is pointing in that direction.
+[3872.500 --> 3875.500] Yeah, it's really hard.
+[3875.500 --> 3881.500] You guys already know that if you have just any neural network, you know, GPT.
+[3881.500 --> 3884.500] How does GPT actually work? Try to reverse engineer GPT.
+[3884.500 --> 3886.500] Good luck. It's really, really hard.
+[3886.500 --> 3890.500] We have that problem, but we also don't have any data.
+[3890.500 --> 3895.500] At least with GPT, you essentially, you know, have an infinite amount of time to look at that network.
+[3895.500 --> 3898.500] You could do whatever you want. We don't have that.
+[3898.500 --> 3900.500] We've got like an hour's worth of data from this stupid thing.
+[3900.500 --> 3903.500] It's really, really hard. Please don't tell my funders.
+[3903.500 --> 3907.500] This is a fundamentally impossible problem.
+[3907.500 --> 3915.500] So when I was an undergraduate, we had a famous researcher, Dr. Wilder Penfield,
+[3915.500 --> 3920.500] who was a surgeon, but he was also a psychologist.
+[3920.500 --> 3925.500] And he cut open people's brains and he played music.
+[3925.500 --> 3932.500] And different electrical signals would appear in different parts of the brain as the music played.
+[3932.500 --> 3936.500] And that was almost 100 years ago. That was a long time ago.
+[3936.500 --> 3940.500] So it seems like you're doing the most modern version.
+[3940.500 --> 3946.500] Yes. And you don't have to open anyone's brain up, because you have MRI.
+[3946.500 --> 3948.500] Yes. Is that where we are?
+[3948.500 --> 3953.500] Yes. So one of my joke names for fMRI is functional hemo-phrenology.
+[3953.500 --> 3956.500] Fundamentally, for those of you who don't remember phrenology,
+[3956.500 --> 3960.500] phrenology is this widely, and deservedly, discredited method from the 19th century,
+[3960.500 --> 3966.500] where people thought you could basically look at the bumps on people's heads to infer
+[3966.500 --> 3969.500] what their brain was good at and what they were bad at.
+[3969.500 --> 3973.500] And that meant, if that were true, if you were a baseball player and had really good vision,
+[3973.500 --> 3975.500] you'd have a big bump over the visual system.
+[3975.500 --> 3977.500] I'll give you the last one.
+[3977.500 --> 3983.500] This is really, you know, that idea was crazy in one sense and not crazy in another.
+[3983.500 --> 3986.500] It's crazy that the bumps on your head are going to tell you anything about the brain.
+[3986.500 --> 3991.500] But the fact that the brain is localized, that there are structures that represent certain kinds of information,
+[3991.500 --> 3996.500] that is clear from Penfield and all the subsequent work, right?
+[3996.500 --> 4001.500] If you get a brain lesion in certain brain areas, you will lose that function.
+[4001.500 --> 4005.500] If you have a stroke and it affects your visual cortex, you will go blind.
+[4005.500 --> 4010.500] In other brain areas, you have a stroke there, you just kind of get worse
+[4010.500 --> 4013.500] at everything. Why? Because it's a hugely connected network.
+[4013.500 --> 4019.500] And if a little piece gets taken out, there are other things that can compensate for it, right?
+[4019.500 --> 4022.500] And there are other pathways for the information flowing through the network.
+[4022.500 --> 4026.500] So some brain areas are very specialized, some brain areas are not at all specialized,
+[4026.500 --> 4033.500] some brain areas are not affected by attention at all, some brain areas are completely affected by attention.
+[4033.500 --> 4038.500] It's a motley bag. But yeah, we're essentially just enumerating here, right?
+[4038.500 --> 4043.500] If there are, say, 500 brain areas, and that's probably more than we need,
+[4043.500 --> 4046.500] and each brain area is representing, you know, 100 dimensions,
+[4046.500 --> 4050.500] well, okay, now I know how many dimensions I need to recover: 50,000 dimensions, right?
+[4050.500 --> 4054.500] It's an enumerable problem.
+[4054.500 --> 4061.500] Ah, yes. Is there any work in turning EEG into fMRI?
+[4061.500 --> 4068.500] fMR-EEG, by inputting currents into the brain?
+[4068.500 --> 4069.500] Into the brain.
+[4069.500 --> 4075.500] Into the brain. I know there have been a lot of amateur experiments with putting, like,
+[4075.500 --> 4076.500] 1.1.
+[4076.500 --> 4078.500] Yes, yes, yes, yes.
+[4078.500 --> 4079.500] Okay, that's an old question.
+[4079.500 --> 4081.500] Putting in the signal on the B-B-B.
+[4081.500 --> 4084.500] Yeah, yeah. So, can you put signals into the brain?
+[4084.500 --> 4088.500] The answer is yes. You know, you could do that if you want.
+[4088.500 --> 4091.500] The question is, can you control anything, right?
+[4091.500 --> 4093.500] So, can you interrogate in that way?
+[4093.500 --> 4098.500] Yeah, so, think of it this way; I like to talk about it this way.
+[4098.500 --> 4101.500] Imagine you guys are all engineers, so, you know, probably when you were five years old,
+[4101.500 --> 4105.500] you, like, took apart a TV or radio and started looking inside it,
+[4105.500 --> 4107.500] trying to figure out what the circuits were.
+[4107.500 --> 4111.500] And you can imagine you might find, like, a circuit, if you just have a voltmeter, that's, you know,
+[4111.500 --> 4114.500] correlated with, like, the brightness of the TV. Okay, fine.
+[4114.500 --> 4116.500] But imagine now you said, I'm going to make the TV really bright.
+[4116.500 --> 4119.500] I'm going to put a bunch of current into the circuit and see what happens.
+[4119.500 --> 4121.500] The TV is probably going to blow up.
+[4121.500 --> 4124.500] And that's mostly what happens when you put signal into the brain.
+[4124.500 --> 4127.500] So there's a method called transcranial magnetic stimulation,
+[4127.500 --> 4130.500] which essentially causes temporary brain lesions.
+[4130.500 --> 4134.500] And there are old methods, like electroconvulsive therapy,
+[4134.500 --> 4137.500] which, you've heard about it, you've probably seen it in, you know,
+[4137.500 --> 4140.500] One Flew Over the Cuckoo's Nest, which is putting a giant voltage into the brain.
+[4140.500 --> 4143.500] And that's the human analog of "turn it off and turn it on again,"
+[4143.500 --> 4145.500] which we all know is the only way to fix a computer.
+[4145.500 --> 4148.500] And so, it works for humans, too.
+[4148.500 --> 4152.500] And there are other things you can do, which is putting in more subtle currents.
+[4152.500 --> 4157.500] So there have been a lot of attempts over the past 20 years to put in just subtle currents,
+[4157.500 --> 4161.500] say between, you know, prefrontal cortex, maybe, and parietal cortex,
+[4161.500 --> 4165.500] with the idea being that there's already a recurrent loop there.
+[4165.500 --> 4169.500] And if you can lower the membrane potential of the circuits in this recurrent loop,
+[4169.500 --> 4173.500] you can actually increase the amount of activity in that recurrent loop.
+[4173.500 --> 4176.500] And if that recurrent loop is transmitting information or modulating information
+[4176.500 --> 4180.500] that you need to do a task, it might improve the task.
+[4180.500 --> 4184.500] I am agnostic about whether this stuff works or not.
+[4184.500 --> 4188.500] I think there's a lot of evidence that it probably does something,
+[4188.500 --> 4192.500] tDCS, but it's really a work in progress, because the voltages are very, very low,
+[4192.500 --> 4196.500] the variability of the effects is very high, and it's still a work in progress at this point.
+[4196.500 --> 4200.500] So you can put stuff in, but it's not something you want to do.
+[4200.500 --> 4204.500] Okay, I think we can stop here. Thanks very much for your time.
+[4204.500 --> 4205.500] Thanks for your time.
+[4214.500 --> 4216.500] Thank you.
diff --git a/transcript/allocentric_csaYYpXBCZg.txt b/transcript/allocentric_csaYYpXBCZg.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f9f9fe92520e4a9048551353192827ea29e19625
--- /dev/null
+++ b/transcript/allocentric_csaYYpXBCZg.txt
@@ -0,0 +1,63 @@
+[0.000 --> 4.880] Hi, I'm Jacob Taxis for About.com.
+[4.880 --> 8.760] In this video, you will learn 8 types of nonverbal communication.
+[8.760 --> 12.400] This is information from About.com's Psychology site.
+[12.400 --> 13.720] Number 1.
+[13.720 --> 18.920] Facial Expression. Facial expression is one type of nonverbal communication that is nearly
+[18.920 --> 21.040] universal in meaning.
+[21.040 --> 25.560] Though different cultures generally ascribe different meanings to various types of nonverbal
+[25.560 --> 29.960] communication, the meanings attributed to certain facial expressions, like this one,
+[29.960 --> 34.480] a smile or a frown, remain quite similar throughout the world.
+[34.480 --> 38.720] For example, a downcast look in New York will be a downcast look in Moscow.
+[38.720 --> 43.680] A smile in Belize will signal happiness or joy just as it would in Barcelona.
+[43.680 --> 44.840] Number 2.
+[44.840 --> 45.840] Gestures.
+[45.840 --> 50.320] Hand gestures are a vitally important type of nonverbal communication that take on various
+[50.320 --> 53.960] meanings as you navigate the world's cultures.
+[53.960 --> 58.840] One might immediately think of waving, giving a peace sign, or a thumbs up.
+[58.840 --> 64.280] One might see a raised index finger to signal that a person's team is number 1.
+[64.280 --> 67.640] Politicians will use specially designed gestures to emphasize points.
+[67.640 --> 68.640] Number 3.
+[68.640 --> 69.960] Paralinguistics.
+[69.960 --> 75.480] Paralinguistics simply means a type of vocal communication without the use of language.
+[75.480 --> 79.920] This includes voice inflection, pitch, rhythm, loudness, and tone.
+[79.920 --> 85.400] A slow rhythm and hushed tone might signify gentleness or concern, while a heavy pitch and
+[85.400 --> 89.720] rising inflection might be attributed to anger or enthusiasm.
+[89.720 --> 90.720] Number 4.
+[90.720 --> 92.040] Body language.
+[92.040 --> 96.320] Though body language and posture can be quite subtle, they can have an enormous impact
+[96.320 --> 97.920] on communication.
+[97.920 --> 101.800] Crossed arms might signify a closed-off or defensive attitude.
+[101.800 --> 105.360] Slumped shoulders and excessive leaning might signify boredom.
+[105.360 --> 108.680] Again, these cues are subtle but powerful.
+[108.680 --> 109.680] Number 5.
+[109.680 --> 110.760] Proxemics.
+[110.760 --> 113.600] Proxemics refers to personal space.
+[113.600 --> 118.160] Different individuals prefer different distances when it comes to speaking with others.
+[118.160 --> 122.400] Obviously, standing too close to someone while she or he is talking might bring about
+[122.400 --> 125.240] feelings of discomfort or annoyance.
+[125.240 --> 129.600] When speaking to groups, individuals tend to need larger distances in order to feel
+[129.600 --> 130.600] heard.
+[130.600 --> 131.600] Number 6.
+[131.600 --> 132.600] Eye gaze.
+[132.600 --> 136.880] Eye gazing is a fascinating type of nonverbal communication.
+[136.880 --> 141.120] For example, the rate of blinking might actually increase and the pupils dilate when
+[141.120 --> 143.520] friends or loved ones are encountered.
+[143.520 --> 145.920] This goes for interesting objects as well.
+[145.920 --> 151.360] The eyes react very differently to outside stimuli depending on personal interpretation.
+[151.360 --> 152.640] Number 7.
+[152.640 --> 154.160] Haptics.
+[154.160 --> 157.640] Haptics simply refers to communicating through touch.
+[157.640 --> 161.320] Touching is used to signify love, affection, and familiarity.
+[161.320 --> 165.920] It might also be employed in times of stress or sadness when comfort is needed.
+[165.920 --> 170.960] The force of a handshake might signify extra enthusiasm between close friends, while
+[170.960 --> 176.040] a firm, standard grip might be more appropriate for a professional introduction.
+[176.040 --> 177.040] Number 8.
+[177.040 --> 178.040] Appearance.
+[178.040 --> 182.000] Appearance is a very important type of nonverbal communication.
+[182.000 --> 186.480] Physical appearance, including clothing style and neatness, is the first thing people see
+[186.480 --> 188.840] when encountering one another.
+[188.840 --> 192.920] Studies in the area of color psychology suggest that the colors of clothing can have big
+[192.920 --> 195.600] effects on mood and attitude.
+[195.600 --> 199.480] People make quick judgments of character according to dress and appearance.
+[199.480 --> 200.720] Thank you for watching.
+[200.720 --> 202.600] For more, visit About.com.
diff --git a/transcript/allocentric_d_J9UxKBl7o.txt b/transcript/allocentric_d_J9UxKBl7o.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f171c1e18d0a7fa68f0fbb1c10f7a83e3bd0a9f1
--- /dev/null
+++ b/transcript/allocentric_d_J9UxKBl7o.txt
@@ -0,0 +1,72 @@
+[0.000 --> 2.240] Welcome to this module, guys.
+[2.240 --> 7.120] And in this module, we're going to explore the concept of proxemics,
+[7.120 --> 11.360] which was developed by anthropologist Edward T. Hall.
+[11.360 --> 13.480] As we discussed earlier in the course,
+[13.480 --> 19.760] proxemics actually focuses on the use of space and distance when it comes to nonverbal communication.
+[19.760 --> 24.760] And it looks into how it influences our interactions with those around us.
+[24.760 --> 29.040] And by actually better understanding the principles of proxemics
+[29.320 --> 33.000] through Hall's proxemics model,
+[33.000 --> 37.840] we can actually create an environment that's conducive to effective communication,
+[37.840 --> 39.800] trust building, and collaboration.
+[39.800 --> 43.480] And we can also learn, in different types of situations,
+[43.480 --> 48.240] how we can use the nonverbal communication aspect of space
+[48.240 --> 53.240] to actually optimize and maximize our communication for success.
+[53.240 --> 58.840] So let's go into the four zones of personal space that Hall identified.
+[58.840 --> 63.840] So Hall has identified four distinct zones of personal space
+[63.840 --> 67.240] that people maintain in their interactions with others.
+[67.240 --> 73.240] We have the intimate zone, the personal zone, the social zone, and the public zone.
+[73.240 --> 79.040] Now, the intimate zone is around 0 to 50 centimeters from you.
+[79.040 --> 82.840] And this zone is only reserved for close relationships,
+[82.840 --> 85.720] such as family members, romantic partners,
+[85.720 --> 87.800] or in some cases close friends.
+[87.800 --> 92.600] So entering someone's intimate zone without permission can cause discomfort.
+[92.600 --> 94.200] So what does that tell us?
+[94.200 --> 97.800] This means that when we're talking to someone we've met for the first time,
+[97.800 --> 104.400] or we're talking to a team member or a professional who we don't have an intimate relationship with,
+[104.400 --> 107.600] this is the zone that we don't want to get into.
+[107.600 --> 112.200] Being 0 to 50 centimeters close to someone can often feel intrusive,
+[112.200 --> 114.000] like you're going into their personal space.
+[114.000 --> 120.400] And we never want to get this close to someone unless we have the right relationship.
+[120.400 --> 123.600] So this is a zone we probably want to stay away from.
+[123.600 --> 126.400] Next, we move on to the personal zone,
+[126.400 --> 131.600] which is around 0.5 to 1 meter away from you.
+[131.600 --> 138.400] Now, this zone is for interactions with friends, acquaintances, and professional colleagues.
+[138.400 --> 142.600] It allows for casual conversations and personal connections
+[142.600 --> 145.400] without invading someone's intimate space.
+[145.400 --> 149.200] So when you're having a conversation with someone,
+[149.200 --> 151.400] whether it's a casual conversation,
+[151.400 --> 155.000] whether it's like a lunch conversation,
+[155.000 --> 158.600] or you're just having a casual conversation around work,
+[158.600 --> 162.200] whether it's with a work colleague or an acquaintance or a friend,
+[162.200 --> 164.200] this is the zone that you want to be in.
+[164.200 --> 168.400] You want to be around 0.5 to 1 meter away from them,
+[168.400 --> 173.200] because this is probably the optimal zone where someone will feel like
+[173.200 --> 177.000] you're not invading their personal space, which is a good thing.
+[177.000 --> 184.400] Next, guys, we have our social zone, and our social zone is around 1 to 4 meters away from you.
+[184.400 --> 187.800] Now, this zone is used for more formal interactions,
+[187.800 --> 192.000] such as business meetings, presentations, or group discussions.
+[192.000 --> 196.800] It allows for clear communication while maintaining a sense of professionalism.
+[196.800 --> 199.600] So guys, if you're doing a team presentation,
+[199.600 --> 202.600] or you want to facilitate a bit of a group discussion,
+[202.600 --> 206.600] or you're having a business meeting with someone you're meeting for the first time,
+[206.600 --> 210.600] this is probably the distance that you want to keep from them,
+[210.600 --> 213.400] whether it's through a cleverly arranged meeting room,
+[213.400 --> 217.400] or sitting across the table, or keeping a little bit of distance
+[217.400 --> 223.000] so you can project to everybody and not just feel like you're talking to just one person.
+[223.000 --> 225.400] This is the zone that you want to stay in,
+[225.400 --> 229.200] this distance that is 1 to 4 meters away from you.
+[229.200 --> 234.200] And finally, guys, we have the public zone, which is 4 meters or more.
+[234.200 --> 239.000] Now, this zone is for public speaking, lectures, or performances,
+[239.000 --> 242.400] because it actually creates a sense of detachment,
+[242.400 --> 245.200] which is useful when addressing large audiences.
+[245.200 --> 249.000] Because if you get any closer, it's a little bit uncomfortable for the audience,
+[249.000 --> 252.800] seeing someone speak from such a close distance for so long.
+[252.800 --> 257.400] Often, what will happen is the people at the back might not even see you,
+[257.400 --> 260.200] or they might just think that you're talking to a few people.
+[260.200 --> 263.600] But actually, if you want to address a large group,
+[263.600 --> 266.800] whether it's for public speaking, lectures, or performances,
+[266.800 --> 271.800] you want to be in this public zone of 4 meters or more.
+[271.800 --> 276.600] So as we mentioned, understanding the
+[276.600 --> 279.600] appropriate proxemic zones for different types of interactions
+[279.600 --> 284.000] can really help you communicate more effectively with your team members,
+[284.000 --> 288.200] your clients, your stakeholders, and in different communication settings.
diff --git a/transcript/allocentric_eK3T5UIwr3E.txt b/transcript/allocentric_eK3T5UIwr3E.txt
new file mode 100644
index 0000000000000000000000000000000000000000..c81623e0713f47c59a373e72488d367d7717d04b
--- /dev/null
+++ b/transcript/allocentric_eK3T5UIwr3E.txt
@@ -0,0 +1,1084 @@
+[0.000 --> 5.000] This program is presented by University of California Television.
+[5.000 --> 13.000] Like what you learn? Visit our website or follow us on Facebook and Twitter to keep up with the latest UCTV programs.
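To pull the four zones of the proxemics module above together in one place, here is a small self-contained Python sketch. The boundary values (0.5 m, 1 m, 4 m) are the ones given in the module; the function name and the sample distances are just illustrative.

    def hall_zone(distance_m):
        """Map an interpersonal distance in meters to Hall's proxemic zone,
        using the boundaries given in the module: 0.5 m, 1 m, 4 m."""
        if distance_m < 0.5:
            return "intimate"   # reserved for close relationships
        elif distance_m < 1.0:
            return "personal"   # friends, acquaintances, colleagues
        elif distance_m < 4.0:
            return "social"     # meetings, presentations, group discussions
        else:
            return "public"     # lectures, performances, large audiences

    for d in (0.3, 0.8, 2.5, 6.0):
        print(d, "m ->", hall_zone(d))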
+[76.000 --> 79.540] Today, you're going to hear three talks
+[79.900 --> 82.400] from three members of the Memory and Aging Center.
+[82.520 --> 85.320] Thank you for coming today!
+[85.680 --> 87.880] Thank you.
+[87.880 --> 95.120] So first I'm going to talk about brain games that capture brain circuits, specifically how
+[95.120 --> 100.280] to use brain games to make inferences about memory systems.
+[100.280 --> 102.680] And after me, you'll hear from Breen Betcher.
+[102.680 --> 107.120] She's also a neuropsychologist at the Memory and Aging Center, as I am.
+[107.120 --> 112.360] And she's going to talk about the evidence for using brain games to improve your cognitive
+[112.360 --> 114.200] function.
+[114.200 --> 118.880] And lastly, you'll hear from one of our neurology fellows, Winston Chung, who's going to talk
+[118.880 --> 121.480] about neuroscience and philosophy.
+[121.480 --> 124.440] So I think it'll be an interesting evening.
+[124.440 --> 129.400] So in my talk, I hope that you'll learn that there are multiple distinct memory systems
+[129.400 --> 131.640] in the brain.
+[131.640 --> 137.320] And by using carefully designed cognitive tests, we can measure separately how well each
+[137.320 --> 142.840] of these systems is functioning.
+[142.840 --> 148.680] During the first half of my talk, I'll focus on the distinction between working memory
+[148.680 --> 152.520] and long-term memory consolidation.
+[152.520 --> 158.040] I'll start with the story of a famous patient known as HM, who taught us that there are
+[158.040 --> 162.000] multiple memory systems in the brain.
+[162.000 --> 166.320] Then we'll try out some tests of working memory and long-term memory, like the ones that
+[166.320 --> 169.800] we use at the Memory and Aging Center.
+[169.800 --> 174.880] And I'll end that section with some tips about how you can maximize your memory
+[174.880 --> 179.920] function using these insights from neuroscience.
+[179.920 --> 185.560] During the second half of my talk, I'll focus on the distinction between allocentric and
+[185.560 --> 189.360] egocentric navigation memory strategies.
+[189.360 --> 194.240] There are two major ways that we can navigate, that we can find our way around without getting
+[194.240 --> 196.560] lost.
+[196.560 --> 202.400] And I'll ask you: which strategy do you prefer to use?
+[202.400 --> 208.680] At the Memory and Aging Center, the most common reason why new patients come to us is because
+[208.680 --> 212.240] they have a memory problem.
+[212.240 --> 218.600] When a patient tells us that they have a memory problem, we ask them to give us some examples.
+[218.600 --> 222.680] And when we ask this question, we get very different answers.
+[222.680 --> 227.280] So here are some of the most common answers that we get.
+[227.280 --> 232.640] I have trouble finding words or names when I need them.
+[232.640 --> 238.440] Sometimes I can't remember why I walked into a room,
+[238.440 --> 242.440] especially if I get distracted on the way.
+[242.440 --> 248.080] I forget where I put my keys or parked my car.
+[248.080 --> 256.680] I can't remember the meanings of words, or even what objects are used for.
+[256.680 --> 263.320] I sometimes forget what I did yesterday or last week, and even when I am reminded, I sometimes
+[263.320 --> 265.760] can't remember.
+[265.760 --> 271.280] These are all very different memory problems and in fact they rely on different memory
+[271.280 --> 276.000] circuits in the brain.
+[276.000 --> 280.640] We learned that there are different memory systems from a famous patient who is known
+[280.640 --> 282.640] as HM.
+[282.640 --> 287.800] HM had a seizure disorder that was not well treated with medications.
+[287.800 --> 294.680] And so his surgeon, Dr. William Scoville, performed a bilateral medial temporal lobe resection,
+[294.680 --> 300.800] cutting out the middle parts of his temporal lobes including the hippocampus on each side.
+[300.800 --> 306.720] You can see in the figure there on the left, in HM's brain, there is a big chunk of brain
+[306.720 --> 310.240] that is missing.
+[310.240 --> 315.800] So the good thing about this surgery was that it cured his seizures but it had a horrible
+[315.800 --> 317.880] side effect.
+[317.880 --> 322.640] He could no longer commit new events to his long-term memory.
+[322.640 --> 327.680] He actually lived a long life and he would see the same doctors sometimes day after day
+[327.680 --> 332.560] and it was like he was meeting them for the first time.
+[332.560 --> 341.120] So HM was impaired at laying down new memories, long-term memory consolidation.
+[341.120 --> 345.560] This is the type of memory that we often mean when we talk about memory.
+[345.560 --> 349.600] It's what we use when we are a student and we study a subject so that we'll remember
+[349.600 --> 351.800] the information later.
+[351.800 --> 354.720] It's memory for facts and events.
+[354.720 --> 361.480] It seems to have almost an unlimited capacity.
+[361.480 --> 366.160] One of the important findings with HM was that there were memory functions that were
+[366.160 --> 368.040] spared.
+[368.040 --> 373.120] So we know that the medial temporal lobe is critical for long-term memory consolidation
+[373.120 --> 379.280] from HM, but we know that it's not critical for some other memory functions.
+[379.280 --> 383.320] For example, HM was able to learn new skills.
+[383.320 --> 389.320] We call this type of memory procedural memory, like learning to dance the salsa or learning
+[389.320 --> 390.720] to ride a bike.
+[390.720 --> 392.560] It becomes a habit after a while.
+[392.560 --> 394.200] You don't even really have to think about it.
+[394.200 --> 398.240] You just remember how to do it almost effortlessly.
+[398.240 --> 403.080] So this is called procedural memory and it's subserved by a very different brain circuit
+[403.080 --> 407.560] than long-term memory consolidation.
+[407.560 --> 411.280] Short-term memory was also relatively preserved in HM.
+[411.280 --> 416.680] It's also called working memory because we use this kind of memory to hold small
+[416.680 --> 421.600] amounts of information in our mind so that we can work with the information.
+[421.600 --> 426.600] This type of memory is very temporary and has a very small capacity.
+[426.600 --> 432.320] So again, these two types of memory were preserved in HM despite the fact that he had those
+[432.320 --> 435.800] big chunks of his medial temporal lobe removed.
+[435.800 --> 442.360] So HM taught us that there are multiple distinct memory systems.
+[442.360 --> 446.160] So I'll talk a bit more now about this short-term or working memory.
+[446.160 --> 450.040] It's the type of memory you're using right now to listen to this talk and process it in
+[450.040 --> 455.440] your mind and think about how the stuff you're learning may apply to you or people you know.
+[455.440 --> 459.760] You're processing or working with this memory as you listen.
+[459.760 --> 465.120] So working memory holds information in conscious awareness so we can use it.
+[465.160 --> 470.160] The information can come from our senses, like right now you're listening to me talk and
+[470.160 --> 473.280] that information is going into your working memory.
+[473.280 --> 477.960] The information can also come from your long-term memory stores.
+[477.960 --> 480.400] The duration is seconds.
+[480.400 --> 486.200] It only lasts up to maybe 20 or 30 seconds unless you keep rehearsing the information over
+[486.200 --> 488.880] and over again in your mind.
+[488.880 --> 494.400] For example, if someone gave a phone number to you and then you walked over to the phone
+[494.440 --> 499.400] to dial it, you would hold that phone number in your working memory so that you could remember
+[499.400 --> 501.960] it when you need to dial the phone.
+[501.960 --> 506.960] But if someone distracts you when you're on your way to the phone you're likely to lose
+[506.960 --> 512.200] it, and that's because this working memory has a very limited capacity, and so distracting
+[512.200 --> 517.720] information can compete with the information you want to pay attention to and then you
+[517.720 --> 520.440] can lose it.
+[520.440 --> 525.000] So it can only hold about five to seven items in your mind at a time, which is perfect because
+[525.000 --> 528.840] phone numbers are about seven digits long.
+[528.840 --> 533.440] If you can chunk the information you can hold onto it longer.
+[533.440 --> 539.120] So for example, if you recognize an area code in the phone number you can chunk that and
+[539.120 --> 545.000] it becomes one unit, and then you only have to remember the other seven digits.
+[545.000 --> 550.560] So the better you inhibit irrelevant information, the more information you can hold in your
+[550.560 --> 553.160] working memory.
+[553.160 --> 558.720] Now I think this is why people who are under a lot of stress have trouble with their memory.
+[558.720 --> 563.960] They may have a lot of distressing thoughts that are interfering with their working memory.
+[563.960 --> 570.920] So there's not enough room in their working memory for what they want to pay attention to.
+[570.920 --> 577.440] So good strategies for improving your working memory are to reduce your stress and also
+[577.440 --> 581.000] just try to reduce distracting information.
+[581.000 --> 585.320] If you need to concentrate on something or concentrate on an important conversation, try to
+[585.320 --> 590.160] do it in a quiet place with fewer distractions.
+[590.160 --> 593.240] So let's try a working memory test now.
+[593.240 --> 599.960] I'm going to administer to you a very popular neuropsychological test of working memory.
+[599.960 --> 603.480] I'm going to say some letters and numbers to you.
+[603.480 --> 610.680] All jumbled up, and I want you to say them back to me with the letters in order first, followed
+[610.680 --> 613.320] by the numbers in order.
+[613.320 --> 615.120] Okay, are you ready?
+[615.120 --> 618.120] All right.
+[618.120 --> 622.160] F, 3, A, 8.
+[622.160 --> 629.160] All right, good, let's try a longer one now.
+[629.160 --> 632.160] All right.
+[632.160 --> 639.160] K, W, 9, 2, P.
+[639.160 --> 645.160] All right, good.
+[645.160 --> 649.960] So that's a test of what we would call verbal working memory, where you have to hold on-line
+[649.960 --> 656.840] those letters and numbers and manipulate them, reorder them in your mind.
+[656.840 --> 661.240] So now let's try another test of working memory that we're using in our research right
+[661.240 --> 662.240] now.
+[662.240 --> 686.480] I want you to remember the last three locations that are shown.
+[686.480 --> 689.600] So this is a test of spatial working memory.
+[689.600 --> 694.640] It turns out that spatial working memory and verbal working memory, the test we just did,
+[694.640 --> 699.680] have some similar neural underpinnings but also have some separate neural underpinnings.
+[699.680 --> 704.600] So for example, some patients might be impaired in spatial working memory but not verbal working
+[704.600 --> 708.960] memory, or vice versa.
+[708.960 --> 715.040] In Alzheimer's disease, early on, working memory is actually pretty good.
+[715.040 --> 719.920] But the type of memory that they have problems with is the same type that HM had problems
+[719.920 --> 724.360] with: long-term memory consolidation.
+[724.360 --> 725.560] Why is that?
+[725.560 --> 731.440] Well, you can see in this healthy control brain, this is the hippocampus; it's nice and tight
+[731.440 --> 735.600] and plump, and lots of neurons there.
+[735.600 --> 741.680] But in the Alzheimer's brain there's a lot of black, which is the cerebrospinal fluid
+[741.760 --> 746.360] that has come in to fill in where the neurons have died.
+[746.360 --> 752.560] So this is an early target of Alzheimer's disease, and this is why, early in the disease, many
+[752.560 --> 758.840] patients have trouble laying down new information into long-term memory stores.
+[758.840 --> 764.560] They may have trouble telling you what movie they saw last week, for example.
+[765.560 --> 772.560] Well, let's try a test of memory, a test of long-term consolidation.
+[772.560 --> 777.200] So I'm going to read a list of words to you and I want you to listen carefully, and when
+[777.200 --> 778.200] I'm through,
+[778.200 --> 782.520] I want you to say them back in your mind in any order.
+[782.520 --> 786.920] And if you want, you can try to keep track of how many you're remembering on your fingers
+[786.920 --> 790.520] or by tallying, but don't write down the words.
+[790.520 --> 794.480] So I'll read the list of words to you and then you can repeat them back to yourself when
+[794.480 --> 797.400] I'm done, in your mind.
+[797.400 --> 811.920] Arugula, paperclip, apple, stapler, telephone, gorgonzola, scissors, red onion.
+[811.920 --> 812.920] I'm finished.
+[812.920 --> 814.920] Repeat them back in your mind.
+[814.920 --> 815.920] Okay.
+[815.920 --> 816.920] Let's try it again.
+[817.480 --> 819.520] Let's see if you can remember more this time.
+[819.520 --> 821.880] It'll be the same list.
+[821.880 --> 834.880] Arugula, paperclip, apple, stapler, telephone, gorgonzola, scissors, red onion.
+[834.880 --> 837.880] Okay.
+[837.880 --> 844.240] We administer a test like that one to all the patients who come into our clinic.
+[844.240 --> 850.700] And what we find is that the first time we read the list of words, the Alzheimer's patients
+[850.700 --> 852.960] perform pretty similarly to controls.
+[852.960 --> 855.560] So this test actually I think has 16 words.
+[855.560 --> 858.880] It's a different test than the one I just gave you, but it's similar.
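To make the letter-number sequencing demonstration concrete, here is a minimal Python sketch of the reordering rule described in the talk (letters in alphabetical order first, then digits in ascending order). The function names and the one-point scoring scheme are illustrative only, not the actual clinical instrument.

```python
def expected_response(item):
    """Reorder a jumbled span item: letters in alphabetical order
    first, then digits in ascending order (the rule from the talk)."""
    letters = sorted(ch for ch in item if ch.isalpha())
    digits = sorted(ch for ch in item if ch.isdigit())
    return letters + digits

def score(item, response):
    """1 point if the response matches the expected reordering."""
    return int(response == expected_response(item))

# The two items administered in the talk: "F 3 A 8" and "K W 9 2 P".
print(expected_response(list("F3A8")))   # ['A', 'F', '3', '8']
print(expected_response(list("KW92P")))  # ['K', 'P', 'W', '2', '9']
print(score(list("KW92P"), ['K', 'P', 'W', '2', '9']))  # 1
```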
+[858.880 --> 864.440] And at trial one, they repeat back a similar number of words.
+[864.440 --> 869.040] But then over the learning trials, we actually administer five learning trials,
+[869.040 --> 872.880] you can see the controls get better every time.
+[872.880 --> 875.360] Every time they remember more words.
+[875.360 --> 879.920] And this is because their hippocampus is helping them to consolidate the information.
+[879.920 --> 884.720] But in Alzheimer's disease, they don't show as much improvement over the learning trials,
+[884.720 --> 888.040] because their hippocampus is not as effective at this.
+[888.040 --> 893.480] And importantly, over the long delay, which is 20 minutes, we see that the Alzheimer's
+[893.480 --> 896.760] patients remember almost none of the words.
+[896.760 --> 901.440] In fact, many of the patients don't remember that a list had been read to them.
+[901.440 --> 908.800] So this is a problem with long-term memory consolidation.
+[908.800 --> 916.000] How does the hippocampus consolidate new information into long-term memory stores?
+[916.000 --> 923.000] Well, it consolidates the memories in a widely distributed network of brain regions
+[923.000 --> 925.920] in neocortex.
+[925.920 --> 931.120] So for example, let's say you went to an important family wedding several years ago.
+[931.120 --> 936.520] Well, the brain doesn't just consolidate your memory of that wedding into one node in
+[936.520 --> 939.320] the brain and its connection to hippocampus.
+[939.320 --> 945.600] Rather, it consolidates the memory in a widely distributed network of brain regions, the
+[945.600 --> 951.600] same brain regions that you used when you processed the information at the wedding.
+[951.600 --> 957.040] So the same brain regions that processed the sights of the wedding, the taste of the
+[957.040 --> 962.440] cake, the sound of the music, the conversations that you had there, the emotions that you
+[962.440 --> 969.000] felt there, those same brain regions are involved in the memory for the event.
+[969.000 --> 974.320] Emotion in particular seems to be a really important organizing force for these memories.
+[974.320 --> 979.600] So these nodes in your brain that represent the event are all interconnected functionally
+[979.600 --> 983.400] for this memory and connected with the hippocampus.
+[983.400 --> 989.720] And every time you recall that wedding over the years, these same regions are active and
+[989.720 --> 991.040] interact.
+[991.040 --> 996.480] The hippocampus is critically important for bringing up that memory.
+[996.480 --> 1002.200] Over time, however, the hippocampus becomes less and less important for bringing up that
+[1002.200 --> 1004.000] memory.
+[1004.000 --> 1009.400] So many years after the wedding, the hippocampus may hardly be important at all for
+[1009.400 --> 1012.080] bringing up that memory.
+[1012.080 --> 1017.840] This is why patients with Alzheimer's disease can remember events from earlier in
+[1017.840 --> 1021.840] their life better than the movie they saw last week.
+[1021.840 --> 1026.840] They may be able to tell you stories from their childhood, but they can't remember that
+[1026.840 --> 1029.440] you went to a party with them last week.
+[1029.440 --> 1034.480] And this is because the hippocampus is less important for recalling memories from earlier
+[1034.480 --> 1040.920] in your life than for more recent events.
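As a small aside, the list-learning procedure just described (repeated learning trials plus a 20-minute delayed recall) can be sketched in a few lines of Python. The word list is the one read aloud in the talk; the recalled words and the retention metric below are hypothetical, purely to show how such a test might be scored.

```python
# Hypothetical scoring sketch for a list-learning test like the one
# described in the talk: learning trials, then delayed recall.
WORD_LIST = {"arugula", "paperclip", "apple", "stapler",
             "telephone", "gorgonzola", "scissors", "red onion"}

def trial_score(recalled):
    """Number of list words correctly recalled on one trial."""
    return len(WORD_LIST & {w.lower() for w in recalled})

# Invented recall data, just to show the computation.
learning_trials = [
    ["apple", "stapler", "telephone"],                        # trial 1
    ["apple", "stapler", "telephone", "scissors"],            # trial 2
    ["apple", "stapler", "telephone", "scissors", "arugula"], # trial 3
]
delayed = ["apple", "scissors"]  # after the 20-minute delay

curve = [trial_score(t) for t in learning_trials]
print("learning curve:", curve)  # [3, 4, 5]: improvement across trials
# Retention: delayed recall relative to the best learning trial.
print("retention:", trial_score(delayed) / max(curve))  # 0.4
```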
+[1040.920 --> 1046.440] So we've talked about the brain's circuits important for memory and the differences between
+[1046.440 --> 1050.120] short-term memory and long-term memory consolidation.
+[1050.120 --> 1053.680] What can we take from all of this to maximize our memories?
+[1053.680 --> 1057.520] Well, I'm going to leave you with two tips here.
+[1057.520 --> 1061.440] The first is we remember when we pay attention.
+[1061.440 --> 1065.280] So when you focus and you reduce distractions.
+[1065.280 --> 1070.880] And the second is we remember when we make it meaningful.
+[1070.880 --> 1076.720] So when you make associations that give new information context or significance in terms
+[1076.720 --> 1082.600] of all the other things you have in your mind. The reason this works is because memories
+[1082.600 --> 1090.120] are stored based on their associations to other events or memories.
+[1090.120 --> 1094.840] So let's try this technique out.
+[1094.840 --> 1097.040] So Bree is someone you're going to meet in a few moments.
+[1097.040 --> 1099.040] She's going to give the next talk.
+[1099.040 --> 1103.320] And I don't know about all of you, but sometimes when someone introduces themselves to me, I hear
+[1103.320 --> 1106.720] the name and then a second later it's gone.
+[1106.720 --> 1112.440] So I encourage you, when someone tells you their name, to stop a moment and focus and make
+[1112.440 --> 1115.000] associations.
+[1115.000 --> 1120.840] So for Bree, you might imagine a plate of brie cheese.
+[1120.840 --> 1125.760] And just think about how delicious that cheese is and imagine Bree eating that big plate
+[1125.760 --> 1126.760] of brie cheese.
+[1126.760 --> 1129.920] You'll probably never forget her name again.
+[1129.920 --> 1133.360] And if that doesn't work, I have an even better strategy for you.
+[1133.360 --> 1137.800] You can think of someone else you knew by the name of Bree.
+[1137.800 --> 1141.840] Maybe there was a girl back in high school with the name of Bree.
+[1141.840 --> 1145.680] So even better, let's say that she stole your boyfriend.
+[1145.680 --> 1149.400] So you just remember that girl, Bree, who stole your boyfriend.
+[1149.400 --> 1154.440] Remember, emotions are a very powerful organizing force for memories.
+[1154.440 --> 1158.280] So if you can activate your emotions while you're trying to remember something, you're
+[1158.280 --> 1160.880] much more likely to remember it.
+[1160.880 --> 1163.560] All right, let's try another one.
+[1163.560 --> 1164.560] So Winston.
+[1164.560 --> 1170.880] Winston's going to be giving a talk on philosophy and neuroscience later this evening.
+[1170.880 --> 1172.520] And I think it's going to be a really good talk.
+[1172.520 --> 1176.560] So you could remember Winston, he's a real winner.
+[1176.560 --> 1179.760] Or you might think of Winston Churchill.
+[1179.760 --> 1182.520] Winston Churchill was always smoking cigars.
+[1182.520 --> 1186.280] So you might visualize Winston smoking a cigar.
+[1186.280 --> 1191.800] So the more you engage your different senses, and I find visualization in particular to be helpful,
+[1191.800 --> 1197.600] the more likely you're going to be able to remember new information.
+[1197.600 --> 1202.640] So we've talked about short-term memory and long-term memory and how to transition information
+[1202.640 --> 1204.960] into long-term memory.
+[1204.960 --> 1213.600] And again, the tips I have for you are one, stop and pay attention, and two, make associations.
+[1213.600 --> 1218.160] Because we consolidate long-term memories in terms of their associations to other memories
+[1218.160 --> 1219.720] or concepts.
+[1219.720 --> 1226.240] The most effective associations are original, even absurd.
+[1226.240 --> 1228.640] They engage multiple senses.
+[1228.640 --> 1230.920] They engage emotions.
+[1230.920 --> 1234.200] Or they're personally salient.
+[1234.200 --> 1239.000] So before we shift to the second half of the talk, I'll just review the brain basis of
+[1239.000 --> 1241.280] these two memory systems.
+[1241.280 --> 1246.080] So the short-term memory or the working memory relies principally on the frontal lobes
+[1246.080 --> 1249.080] and frontal-parietal circuits.
+[1249.080 --> 1253.920] But the long-term memory consolidation relies critically on the hippocampus.
+[1253.920 --> 1261.160] And over time, the hippocampus lays down memory throughout neocortex.
+[1261.160 --> 1267.360] And after many years, the hippocampus isn't even really that critical to recall the memory.
+[1267.360 --> 1271.840] So now I'm going to move to the little talk on navigation memory.
+[1271.840 --> 1278.880] So I want you to think for a moment, how will you find your way home after this talk?
+[1278.880 --> 1285.680] If your GPS isn't working.
+[1285.680 --> 1292.080] There are two primary strategies that we use to find our way around.
+[1292.080 --> 1298.840] The first that I'll talk about is the allocentric system, which means other-centered.
+[1298.840 --> 1305.360] When we use this system, we represent where locations are relative to major landmarks
+[1305.360 --> 1307.840] in three-dimensional space.
+[1307.840 --> 1313.760] We often anchor our allocentric cognitive maps in cardinal directions, north, south,
+[1313.760 --> 1316.080] east, west.
+[1316.080 --> 1321.640] For example, if you are using the allocentric navigation system, you might think, my house
+[1321.640 --> 1327.240] is north of UCSF, between Coit Tower and Fisherman's Wharf.
+[1327.240 --> 1332.480] So you're appreciating the relationship between these major landmarks in space.
+[1332.480 --> 1338.880] Your allocentric cognitive map of San Francisco does not change if you are at UCSF, if you're
+[1338.880 --> 1341.800] at the Golden Gate Bridge, if you're in New York City.
+[1341.800 --> 1343.000] It's the same map.
+[1343.000 --> 1347.880] It doesn't depend on your position in space.
+[1347.880 --> 1353.440] This system relies critically on the hippocampus, more so on the right hippocampus in the posterior
+[1353.440 --> 1356.040] portion.
+[1356.040 --> 1363.520] So in contrast to the allocentric system, the egocentric system is self-centered.
+[1363.520 --> 1369.600] When we use this system, we chain responses with local cues.
+[1369.600 --> 1375.640] For example, you might think, to get to my house, I take a left on Third Street, I take
+[1375.640 --> 1379.720] a right on King Street, and follow along the water.
+[1379.720 --> 1383.120] After I pass the Ferry Building, I take a left.
+[1383.120 --> 1387.120] You can see with this system, you don't have to appreciate the relationship between
+[1387.120 --> 1389.440] these locations in three-dimensional space.
+[1389.440 --> 1393.880] You just need to know when you get to the Ferry Building you take a left.
+[1393.880 --> 1398.960] This type of system is very efficient when you've navigated along the same route so
+[1398.960 --> 1403.440] many times that it becomes routine.
+[1403.440 --> 1407.640] But let's say you're going to work on the same route that you take every day, and there's
+[1407.640 --> 1408.840] a detour.
+[1408.840 --> 1413.040] Well, your egocentric system isn't going to work anymore, and you need to pull up your
+[1413.040 --> 1418.680] allocentric cognitive map to come up with another way to get there.
+[1418.680 --> 1423.960] So this system, this habit learning system, relies critically on the caudate nucleus,
+[1423.960 --> 1429.560] which is a structure in the basal ganglia deep inside your brain.
+[1429.560 --> 1434.440] The reason we know so much about the neural circuits important for navigation learning
+[1434.440 --> 1439.520] is because if you want to know how a rodent's cognition is working, you put them in a maze
+[1439.520 --> 1444.240] and you see if they can find their way out or get some food.
+[1444.240 --> 1450.240] So when people are looking at rodent models of Alzheimer's disease, for example, they
+[1450.240 --> 1455.040] evaluate how well the treatment's working by seeing how well the rodents can find their
+[1455.040 --> 1458.680] way out of a maze.
+[1458.680 --> 1462.400] So this is the most popular cognitive test for rodents.
+[1462.400 --> 1464.200] It's the Morris Water Maze.
+[1464.200 --> 1470.040] On this task, the mouse is put into a cloudy, cold pool, and the mouse is swimming around
+[1470.040 --> 1475.560] trying to find the hidden submerged platform so he can escape.
+[1475.560 --> 1481.480] He does this over many trials, and the platform is always hidden in the same place.
+[1481.480 --> 1486.000] However, the rodent starts from a different position on every trial.
+[1486.000 --> 1490.000] So the only way to get better and better at finding that hidden platform, which he's
+[1490.000 --> 1495.840] sitting on right now, is to learn the relationship between the hidden platform and the cues that
+[1495.840 --> 1502.040] surround the pool in three-dimensional space, just like you know the relationship between
+[1502.040 --> 1506.600] Coit Tower and the Golden Gate Bridge and UCSF when you pull up a map of San Francisco
+[1506.600 --> 1509.600] in your mind.
+[1509.600 --> 1515.120] So we've developed some virtual reality tests of those two navigation strategies that
+[1515.120 --> 1519.400] we're using in our lab because we think that they're sensitive to different brain circuits
+[1519.440 --> 1523.440] and are disrupted by different diseases.
+[1523.440 --> 1526.120] So I'll show you a couple examples of these.
+[1526.120 --> 1529.960] So this is the human version of the test I just showed you.
+[1529.960 --> 1534.800] We actually have a version of this test outside and I invite you to try it after the talks.
+[1534.800 --> 1541.280] So on this test, you drive around in a circular field looking for the buried treasure.
+[1541.280 --> 1544.840] When you drive over it, it will appear.
+[1544.840 --> 1549.640] So you have many trials to find it, and for you to get faster and faster at finding it,
+[1549.640 --> 1555.240] you need to appreciate the relationship between the external cues, the houses and water tower
+[1555.240 --> 1561.080] and mountains and so forth, and the location of the buried treasure.
+[1561.080 --> 1566.440] Just like the mouse had to learn where the hidden platform was relative to the cues around
+[1566.440 --> 1567.440] the pool.
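To contrast the two strategies in code: an allocentric navigator keeps one world-centered map and can compute a heading to the goal from any starting position, while an egocentric navigator just chains local cues to turns. A minimal Python sketch, with invented coordinates and cue names:

```python
import math

# Allocentric: one world-centered map; a goal's bearing can be
# recomputed from any starting position (illustrative coordinates).
landmarks = {"water_tower": (0.0, 5.0), "treasure": (3.0, 4.0)}

def bearing_to(goal, position):
    """Compass bearing (degrees) from any position to a mapped goal."""
    gx, gy = landmarks[goal]
    px, py = position
    return math.degrees(math.atan2(gx - px, gy - py)) % 360

# Egocentric: no map, just responses chained to local cues,
# like "when I get to the cactus, I turn right."
route_habit = {"cactus": "right", "ferry_building": "left"}

print(bearing_to("treasure", (0.0, 0.0)))  # works from any start
print(bearing_to("treasure", (5.0, 5.0)))  # same map, new start
print(route_habit.get("cactus"))           # "right"
print(route_habit.get("detour_sign"))      # None: the habit breaks on detours
```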
+[1567.440 --> 1574.600] So we think this treasure-hunt test is very sensitive to hippocampal system dysfunction, and we're finding that
+[1574.600 --> 1579.440] it's particularly impaired in the earliest stages of Alzheimer's disease, which targets
+[1579.440 --> 1581.600] that system.
+[1581.600 --> 1590.000] I'm going to show you now another test that we're using.
+[1590.000 --> 1595.120] This one measures specifically the egocentric navigation strategy.
+[1595.120 --> 1599.720] On this test, the subject navigates through a long route through a neighborhood.
+[1599.720 --> 1604.120] It's always the same route and you learn it by trial and error.
+[1604.120 --> 1608.880] Each time you get to an intersection, you take a guess about which way you think it goes,
+[1608.880 --> 1613.360] and if you get it wrong, you're prompted to guess again until you get it right.
+[1613.360 --> 1618.000] Over time, subjects get much more accurate at this test and it becomes almost a habit
+[1618.000 --> 1620.560] for them.
+[1620.560 --> 1625.280] So to do this test well, you just have to chain responses with local cues.
+[1625.280 --> 1630.400] When I get to this cactus, I turn right, for example.
+[1630.400 --> 1634.540] So we think these two types of navigation memory are really tapping different brain
+[1634.540 --> 1639.760] circuits in our brains and that they're affected by different diseases.
+[1639.760 --> 1644.540] I think we all use both of these strategies, but I think some of us tend to use one more
+[1644.540 --> 1645.860] than the other.
+[1645.860 --> 1649.840] So think to yourself, which strategy do you tend to use?
+[1649.840 --> 1655.960] There are actually some sex differences on these tasks as well, and men tend to be a little
+[1655.960 --> 1661.440] bit better on average on the allocentric navigation paradigm.
+[1661.440 --> 1667.360] Although I've definitely had some women volunteers who have done amazingly well. One explanation
+[1667.360 --> 1670.800] for this comes from evolutionary psychology.
+[1670.800 --> 1676.280] If you think about hunters back in prehistoric days, they had to wander long distances through
+[1676.280 --> 1680.440] winding paths to try to search for prey and find their way home.
+[1680.440 --> 1685.920] They really needed to rely on the allocentric memory system.
+[1685.920 --> 1688.840] So I'm going to finish now with some take-home points.
+[1688.840 --> 1690.880] There are several types of memory.
+[1690.880 --> 1696.720] We've focused on the distinction between working memory and long-term consolidation, as well
+[1696.720 --> 1702.280] as the distinction between allocentric and egocentric navigation memory.
+[1702.280 --> 1708.000] Each type of memory relies on a set of brain regions and circuits.
+[1708.000 --> 1713.800] By measuring the function of different types of memory, neuropsychologists can make inferences
+[1713.800 --> 1720.080] about the integrity of the different underlying brain circuits.
+[1720.080 --> 1721.360] Why is this important?
+[1721.360 --> 1727.280] Why do we need to understand the links between memory and brain circuits?
+[1727.280 --> 1731.320] Well, memory disorders tend to target specific circuits.
+[1731.320 --> 1737.360] And so to treat these diseases, we need to understand how these memory systems work and
+[1737.360 --> 1740.960] why they fail.
+[1740.960 --> 1745.280] So even healthy people can benefit from this understanding.
+[1745.280 --> 1752.760] They can maximize their memories by understanding how memory systems work.
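The trial-and-error route test described above can also be sketched briefly: the learner guesses at each intersection, is corrected when wrong, and stores the cue-to-turn association so that later traversals become habit. The route and cue names below are invented for illustration.

```python
import random

# Illustrative route: each intersection cue has one correct turn.
ROUTE = [("cactus", "right"), ("gas_station", "left"), ("mailbox", "left")]
TURNS = ["left", "right", "straight"]

def traverse(habit):
    """One pass through the route: guess when no habit is stored,
    get corrected on errors, and return the number of first-try hits."""
    correct = 0
    for cue, answer in ROUTE:
        guess = habit.get(cue, random.choice(TURNS))
        if guess == answer:
            correct += 1
        habit[cue] = answer  # corrected until right, so the association sticks
    return correct

habit = {}
for trial in range(1, 4):
    print(f"trial {trial}: {traverse(habit)}/{len(ROUTE)} first-try correct")
# By trial 2 the habit table is complete and performance is at ceiling.
```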
+[1752.760 --> 1753.760] Thank you.
+[1753.760 --> 1762.440] All right, good evening everyone.
+[1762.440 --> 1768.320] As Kate mentioned, my name is Bree Bettcher, and I also often introduce myself to patients
+[1768.320 --> 1771.600] by saying that it's like the cheese.
+[1771.600 --> 1775.440] And I feel very fortunate that I wasn't named after Gouda.
+[1775.440 --> 1778.800] So I want to be talking to you tonight about something I think is really salient to all
+[1778.800 --> 1785.120] of us, which is forestalling cognitive decline, so preventing any decline over time in
+[1785.120 --> 1786.360] our thinking.
+[1786.360 --> 1793.080] All right, so just to begin, I think one of the main questions that dominates our field
+[1793.080 --> 1797.280] is how do we slow the cognitive aging process?
+[1797.280 --> 1803.040] And by cognitive aging, what I mean is the typically gradual decline in our ability
+[1803.040 --> 1805.880] to process and manipulate information quickly.
+[1805.880 --> 1809.720] And this isn't restricted to middle or older age.
+[1809.720 --> 1814.120] We actually start to experience declines in how quickly we process things pretty early,
+[1814.120 --> 1816.800] even after our 20s.
+[1816.800 --> 1821.360] And so in terms of the research landscape, I think what has been most remarkable in the
+[1821.360 --> 1824.240] past few years is the transition in focus.
+[1824.240 --> 1828.840] So for quite a while, we've had an anchor in looking at preventing dementia.
+[1828.840 --> 1832.160] And this still is a very important focus of our work.
+[1832.160 --> 1836.680] But over the years, particularly the last decade, there's been even more focus on staving
+[1836.680 --> 1842.320] off decline, so not even necessarily dementia, but just preventing any cognitive decline.
+[1842.320 --> 1846.640] And in addition to that, I think in the last couple of years, we've seen a lot more information,
+[1846.640 --> 1852.640] a lot more media buzz around remaining cognitively robust throughout our life and maybe even improving
+[1852.640 --> 1854.520] our cognition.
+[1854.520 --> 1859.800] I think this Newsweek article actually sort of personifies this interest that's developed
+[1859.800 --> 1864.800] over the past few years of how do we maintain our abilities and can we even get smarter
+[1864.800 --> 1868.680] over time.
+[1868.680 --> 1875.400] So transitioning from Dr. Possin's talk on spatial cognition and verbal memory, I plan
+[1875.400 --> 1880.920] to talk a little bit tonight about cognitive plasticity and brain games.
+[1880.920 --> 1884.680] And I'm also going to follow it up with a brief discussion of physical exercise.
+[1884.680 --> 1889.960] So how physical activity is related to brain health and what are the mechanisms by which
+[1889.960 --> 1896.560] physical exercise might actually impact our thinking.
+[1896.560 --> 1903.280] So just to provide some context for how this evolution in aging research has transpired,
+[1903.280 --> 1908.160] I think it's really important to examine early studies of plasticity and cognitive reserve.
+[1908.560 --> 1913.360] I think one of really the most striking examples of this comes from the early Nun Study,
+[1913.360 --> 1915.640] which I think some of you are probably familiar with.
+[1915.640 --> 1918.600] We talked about it a little bit last year.
+[1918.600 --> 1922.600] The Nun Study refers to this longitudinal study of Catholic sisters.
+[1922.600 --> 1927.600] They were members of the School Sisters of Notre Dame congregation.
+[1927.600 --> 1931.520] There's actually a book on this topic.
+[1931.520 --> 1938.520] And this included approximately, it was a little bit over 650, between 650 and 700 Catholic
+[1938.520 --> 1940.080] sisters were enrolled.
+[1940.080 --> 1947.960] And their ages ranged from 75 to 102 years old when the study began in 1991.
+[1947.960 --> 1952.400] And what was great about this study is that the sisters received annual examinations and
+[1952.400 --> 1957.160] they all agreed to donate their brains upon autopsy.
+[1957.160 --> 1961.440] That is, they agreed to donate their brains for autopsy upon death.
+[1961.440 --> 1965.160] Probably an important distinction there.
+[1965.160 --> 1970.560] So the Nun Study provided this really controlled means of evaluating predictors of cognitive
+[1970.560 --> 1975.960] resilience and also cognitive decline in a group of individuals who clearly had very similar
+[1975.960 --> 1976.960] lifestyles.
+[1976.960 --> 1984.040] So we didn't have to worry about multiple partners across the lifespan, or exposure
+[1984.040 --> 1985.040] to particular diseases.
+[1985.040 --> 1990.840] It's a pretty clean sample that they got to look at.
+[1990.840 --> 1996.200] And from this study, the researchers led by Dr. Snowdon at the University of Kentucky reported
+[1996.200 --> 2000.440] several important findings that I think have really changed how we think about cognition
+[2000.440 --> 2002.840] over the lifetime.
+[2002.840 --> 2007.960] And one of these findings includes the observation that some nuns had brains that were riddled
+[2007.960 --> 2015.360] with Alzheimer's disease pathology but did not show any manifestations of dementia.
+[2015.360 --> 2018.240] And Dr. Snowdon reported several case examples,
+[2018.240 --> 2023.520] including one of Sister Matthia, shown there, to illustrate the individual differences
+[2023.520 --> 2027.240] he noted in pathology and clinical manifestation.
+[2027.240 --> 2033.760] So Sister Matthia reportedly died at 104 years of age, relatively healthy, dementia-free.
+[2033.760 --> 2038.960] And upon autopsy, they noted that the severity of Alzheimer's disease pathology in her brain
+[2038.960 --> 2043.920] was at around stage four, suggesting that there was moderate spread of the disease in
+[2043.920 --> 2048.960] her brain, including the areas that Dr. Possin mentioned that are very important for memory,
+[2048.960 --> 2053.960] namely your hippocampus.
+[2053.960 --> 2060.080] So stemming from this research is the question of how there can be such heterogeneity in clinical
+[2060.080 --> 2065.720] outcome among individuals who have a pretty similar degree of pathology in their brain.
+[2065.720 --> 2071.160] So importantly, I think it's an important fact to highlight that what we see under a microscope
+[2071.160 --> 2075.320] does not always reflect what we see in everyday life.
+[2075.320 --> 2079.880] It's not necessarily a one-to-one correspondence.
+[2079.880 --> 2085.360] So when you examine individuals with the same severity of Alzheimer's disease in their brain,
+[2085.360 --> 2091.880] some may show Alzheimer's disease-related dementia, and some may be clinically normal
+[2091.880 --> 2094.360] with no dementia.
+[2094.360 --> 2096.520] And so the question really is, why is this?
+[2096.520 --> 2102.000] And how can we tip the scales towards clinically normal with no dementia?
+[2102.000 --> 2105.120] All right.
+[2105.120 --> 2110.720] And one theory that has led to an influx of research on cognitive exercise and training is
+[2110.720 --> 2113.520] the theory of cognitive reserve.
+[2113.520 --> 2119.200] And cognitive reserve was propagated by Dr. Yaakov Stern, he's at Columbia University,
+[2119.200 --> 2124.520] and he developed this idea to account for the disparity between the degree of pathology
+[2124.520 --> 2127.960] someone has in their brain and their clinical presentation.
+[2127.960 --> 2129.760] So what is cognitive reserve?
+[2129.760 --> 2135.160] It really relies on the idea that there are individual differences in how tasks are processed
+[2135.160 --> 2141.720] that permit some people to cope better than others with brain changes, brain pathology,
+[2141.720 --> 2144.000] damage or degeneration.
+[2144.000 --> 2149.920] So in the face of aging, or even Alzheimer's disease pathology, a brain with higher cognitive
+[2149.920 --> 2156.480] reserve may try to cope with impending changes by using pre-existing cognitive strategies
+[2156.480 --> 2163.040] more efficiently, or it may flexibly use different strategies for the same task.
+[2163.040 --> 2167.960] So cognitive reserve is really hard to measure because in many ways it's a theoretical construct.
+[2167.960 --> 2173.520] So we can't measure it the same way that we measure plaques and tangles in the brain.
+[2173.520 --> 2179.400] Because of that, researchers often rely on proxy measures to assess cognitive reserve.
+[2179.400 --> 2184.360] So this would be something like educational attainment, how far you went in school, your
+[2184.360 --> 2192.960] occupation, your mental activities, which is sort of a nebulous term, and your IQ.
+[2192.960 --> 2198.240] So consistent with what we saw in the Nun Study, this also suggests that individuals with more
+[2198.240 --> 2203.600] cognitive reserve may be able to tolerate or handle greater amounts of damage to the
+[2203.600 --> 2206.920] brain before clinical impairment is evident.
+[2206.920 --> 2212.520] So I think the figure here illustrates this model nicely, as it shows that at the same level
+[2212.520 --> 2217.720] of brain pathology, individuals with higher cognitive reserve are performing much better
+[2217.720 --> 2219.720] on the same tasks.
+[2219.720 --> 2223.600] So an alternative way to look at this is that individuals with higher cognitive reserve
+[2223.600 --> 2228.720] only start to approximate the lower levels of performance when they have more pathology
+[2228.720 --> 2231.120] in their brains.
+[2231.120 --> 2235.440] And there's been a tremendous amount of support for the benefits of high cognitive reserve.
+[2235.440 --> 2240.760] And I think what's nice about this conceptualization is that it's an active model.
+[2240.760 --> 2245.720] So it doesn't assume that you need a certain amount of change to your brain before you start
+[2245.720 --> 2248.120] to show difficulties in everyday life.
+[2248.120 --> 2253.280] And instead it focuses on the processes that actually allow individuals to experience
+[2253.280 --> 2258.840] these changes and still maintain a similar level of function.
+[2258.840 --> 2263.000] What's also helpful I think about the recent data is that even late-stage interventions
+[2263.000 --> 2266.720] to improve cognitive reserve look promising.
+[2266.720 --> 2270.000] So that could ultimately delay or even prevent dementia.
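One way to make the reserve figure concrete is a toy calculation: assume performance starts higher with more reserve but is eroded by pathology at the same rate for everyone, so higher-reserve individuals cross the impairment threshold only at a greater pathology load. All numbers in this Python sketch are invented purely to illustrate the shape of the model, not taken from any study.

```python
def performance(pathology, reserve):
    """Toy linear reserve model: higher reserve raises the starting
    level; pathology erodes performance at the same rate for everyone.
    All coefficients are illustrative, not fitted to data."""
    return 100 + 10 * reserve - 20 * pathology

THRESHOLD = 60  # hypothetical level where impairment becomes evident

def pathology_at_impairment(reserve):
    """Pathology load at which performance falls to the threshold."""
    return (100 + 10 * reserve - THRESHOLD) / 20

print(pathology_at_impairment(reserve=0))  # 2.0 units of pathology
print(pathology_at_impairment(reserve=4))  # 4.0: tolerates more damage
```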
+[2270.000 --> 2273.960] And this is, I think, really intimately related to that concept of plasticity, which
+[2273.960 --> 2278.520] really relates to the brain's ability to modify its structure and its function in light
+[2278.520 --> 2281.720] of new experiences that we have.
+[2281.720 --> 2284.240] All right.
+[2284.240 --> 2289.560] And I think a natural extension of this topic is that of cognitive exercise and brain
+[2289.560 --> 2291.080] games.
+[2291.080 --> 2295.640] So translating the cognitive reserve and plasticity research into interventions has been
+[2295.640 --> 2298.240] kind of a difficult process, I would say.
+[2298.240 --> 2303.840] And there is an extensive scientific literature that is messy and difficult to interpret.
+[2303.840 --> 2310.100] So brain games, Sudoku and intellectual engagement have been heavily promoted in the media as this
+[2310.100 --> 2312.920] sort of ultimate panacea for cognitive decline.
+[2312.920 --> 2317.760] I'm sure most people here have seen these in the mainstream media before.
+[2317.760 --> 2323.600] And I should say that some of this has occurred without research supporting it.
+[2323.600 --> 2326.240] So what do we know about these things?
+[2326.240 --> 2330.120] There have been some encouraging results in terms of leisure activities that were reported
+[2330.120 --> 2332.000] in the last couple of years.
+[2332.000 --> 2337.520] And they've shown that people who have high rates of intellectual leisure activity, which
+[2337.520 --> 2346.200] they define as things like reading books, going out to operas, playing games at home,
+[2346.200 --> 2351.880] playing cards, taking a new class, that all of these were protective.
+[2351.880 --> 2356.840] And that individuals who did this had cognitive decline that started much later in life than
+[2356.840 --> 2361.080] individuals who did not report doing these activities.
+[2361.080 --> 2365.560] And so we're still a little unclear about the mechanisms of how this works.
+[2365.560 --> 2372.240] But there has been some promising evidence to suggest that even intellectual leisure activities
+[2372.240 --> 2374.320] might be helpful.
+[2374.320 --> 2379.880] Similarly, there's been a lot of buzz, and rightfully so, about cognitive interventions.
+[2379.880 --> 2381.800] These findings have also been mixed,
+[2381.800 --> 2386.440] with some studies demonstrating a lot of benefit and then some studies showing absolutely
+[2386.440 --> 2387.440] no benefit.
+[2387.440 --> 2392.400] And I think this is where being an educated consumer is critically important, particularly
+[2392.400 --> 2397.520] given the sheer volume of brain games that are being marketed to the mainstream public.
+[2397.520 --> 2402.120] So on one hand, we have studies that have shown no benefit from brain games.
+[2402.120 --> 2406.840] And by no benefit, I mean that when individuals are trained on these tasks, they do get better
+[2406.840 --> 2407.840] on these tasks.
+[2407.840 --> 2412.720] But it's not generalizing to other things, to other important activities in someone's
+[2412.720 --> 2413.720] life.
+[2413.720 --> 2420.360] So for example, in a study reported in Nature back in 2010, researchers randomly assigned
+[2420.360 --> 2425.520] over 4,000 people to two different experimental groups where they were being trained on things
+[2425.520 --> 2430.440] like memory tasks or reasoning tasks, and then a control group.
+[2430.440 --> 2435.200] And they completed training sessions over a period of six weeks.
+[2435.200 --> 2439.960] And while again, like I said, they showed significant improvement in the tasks that they trained
+[2439.960 --> 2444.440] on, they did not show any transfer of benefit from that.
+[2444.440 --> 2446.000] And that's really what you want.
+[2446.000 --> 2449.440] You really want to have transfer of benefits to other things in your life for these to be
+[2449.440 --> 2450.760] most meaningful.
+[2450.760 --> 2452.320] So that's one side of the coin.
+[2452.320 --> 2456.840] I think on the other side, what we're seeing is that in the last couple of years, there have
+[2456.840 --> 2459.680] been very encouraging studies that have been coming out.
+[2459.680 --> 2465.040] And I think what's different about them is that they are training people on very targeted
+[2465.040 --> 2466.440] cognitive processes.
+[2466.440 --> 2472.040] So they're very specific about what they're training the individual on.
+[2472.040 --> 2479.400] And it seems like that's probably most important in terms of reaping the cognitive benefits.
+[2479.400 --> 2481.160] So the findings are promising but mixed.
+[2481.160 --> 2485.440] And on the positive side, I want to give you an example from some studies that are
+[2485.440 --> 2487.000] occurring at UCSF.
+[2487.000 --> 2491.880] And these are going to be talked about a little bit later in the interactive portion of
+[2491.880 --> 2493.280] the night.
+[2493.280 --> 2499.200] So this is Dr. Adam Gazzaley's lab, and Dr. Anguera, who also works with him and has really
+[2499.200 --> 2503.160] spearheaded some of these studies, will be here in the interactive portion to talk with you
+[2503.160 --> 2505.120] about these.
+[2505.120 --> 2510.840] So for these studies, the Gazzaley lab, in collaboration with colleagues at LucasArts,
+[2510.840 --> 2514.840] have developed this game called NeuroRacer.
+[2514.840 --> 2520.120] And as you can see here, I mean, I think it even just looks really exciting when you see
+[2520.120 --> 2521.120] it.
+[2521.520 --> 2526.400] So this is really thinking about cognitive training in the context of single task versus
+[2526.400 --> 2527.400] multitasking.
+[2527.400 --> 2531.600] I think multitasking is something that comes up a lot for people, something that can be
+[2531.600 --> 2533.760] difficult over time.
+[2533.760 --> 2539.960] So in this study, they are doing some pre-testing where they have people come in and they do
+[2539.960 --> 2545.920] some tests on them, and then they have people go home with these laptops and they do trainings
+[2545.920 --> 2547.720] at home on this task.
+[2547.720 --> 2554.120] And then they come back in later and do some more testing in the laboratory.
+[2554.120 --> 2559.240] So these tests are really designed to emulate multitasking in everyday life while
+[2559.240 --> 2562.240] controlling for specific cognitive processes.
+[2562.240 --> 2566.760] And so you can see here what we have: there's a single-task version and there's
+[2566.760 --> 2567.760] a multitask version.
+[2567.760 --> 2573.720] With the single task, there is a sign that the participant will have to respond
+[2573.720 --> 2574.720] to.
+[2574.720 --> 2578.920] In the multitask version, they still have to respond to that, but they're also driving a car.
+[2578.920 --> 2583.160] And so again, it's to really emulate the kinds of things that we're dealing with on an everyday
+[2583.160 --> 2584.400] basis.
+[2584.400 --> 2587.840] And from this, they calculate a multitasking cost.
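The talk doesn't spell out the formula, but a multitasking cost of this kind is commonly computed as the percentage change in performance when the second task is added. A minimal Python sketch, assuming that percentage-change definition and using hypothetical scores:

```python
def multitasking_cost(single_task_score, multitask_score):
    """Percentage change in performance when a second task is added.
    Assumes the common percentage-change definition; the talk only
    calls it 'a fairly basic calculation'."""
    return 100 * (multitask_score - single_task_score) / single_task_score

# Hypothetical performance indices for one participant:
print(multitasking_cost(2.0, 1.4))  # -30.0: a 30% cost from multitasking
```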
+[2587.840 --> 2591.080] So what is the cost to your performance just by multitasking?
+[2591.080 --> 2596.120] And it's a fairly basic calculation that they use there.
+[2596.120 --> 2601.520] So using the index shown before, you can see the cost of multitasking increases across
+[2601.520 --> 2603.400] the lifespan.
+[2603.400 --> 2610.240] So in other words, the ability to efficiently handle and respond to multiple sources of information
+[2610.240 --> 2612.120] worsens over an individual's life.
+[2612.120 --> 2617.360] And you can see this actually starts even in your 20s.
+[2617.360 --> 2623.920] Now, these results start to look very different upon providing multitasking training.
+[2623.920 --> 2631.200] So specifically, individuals who did not receive any training remained at around the same
+[2631.200 --> 2634.480] level a month later.
+[2634.480 --> 2639.520] Individuals who obtained the single-task training at home, so again, they were trained on just
+[2639.520 --> 2642.960] responding to those signs without actually driving the car,
+[2642.960 --> 2645.200] their performance looks a little bit better.
+[2645.200 --> 2648.320] This is not statistically significant.
+[2648.320 --> 2653.600] And then for those who were trained on the multitasking component, you see this striking difference,
+[2653.600 --> 2657.720] a striking improvement in terms of how much cost there is.
+[2657.720 --> 2663.960] And most importantly, what we see is that these gains actually hold over time.
+[2663.960 --> 2670.200] And so after a period of about six months, you're still seeing much better performance in
+[2670.200 --> 2674.800] individuals that were trained on the multitasking component.
+[2674.800 --> 2679.600] So again, Dr. Anguera will be demonstrating the latest version of these games on an iPad
+[2679.600 --> 2683.200] during the interactive portion of the night.
+[2683.200 --> 2687.000] All right.
+[2687.720 --> 2696.000] Okay, so just to briefly review the cognitive engagement and brain games section of this,
+[2696.000 --> 2700.560] I just want to say that in terms of plasticity and cognitive reserve, I think there's really
+[2700.560 --> 2704.800] strong evidence that our brains continue to change and adapt.
+[2704.800 --> 2707.120] That's part of what plasticity is.
+[2707.120 --> 2712.040] And research, I think, has really uncovered a lot of protective and risk factors for this.
+[2712.040 --> 2716.120] And in terms of the actual brain games, I think, as I said, there's a lot of new research
+[2716.120 --> 2721.880] suggesting that if it's targeting specific cognitive processes, that's the most helpful.
+[2721.880 --> 2726.440] And this is, I think, a really promising area of research, but it also requires a critical
+[2726.440 --> 2732.960] eye, and thinking about the fact that not all of these games have a lot of research behind
+[2732.960 --> 2733.960] them.
+[2733.960 --> 2737.080] So being an educated consumer about this, I think, is one of the most important facets
+[2737.080 --> 2739.440] of it.
+[2739.440 --> 2747.440] So something that I am increasingly excited about is the role of physical exercise in brain
+[2747.440 --> 2748.440] health.
+[2748.440 --> 2754.280] And in particular, how exercising might actually improve cognition and potentially delay
+[2754.280 --> 2757.800] or even prevent dementia.
+[2757.800 --> 2760.720] So what have studies shown?
+[2760.720 --> 2765.960] In general, what we've shown is that what's good for your heart is good for your brain.
+[2765.960 --> 2771.960] So for individuals who participate in physical activity, particularly aerobic activity,
+[2771.960 --> 2777.760] various studies have shown that you could have up to a 30% reduction in the risk of
+[2777.760 --> 2783.000] cognitive decline and dementia, which I think is a very striking and exciting finding.
+[2783.000 --> 2786.440] Because it's something that we can do something about at any stage.
+[2786.440 --> 2795.480] And in particular, just to answer that question, Kristine Yaffe and Dr. Middleton at both UCSF
+[2795.480 --> 2800.200] and the San Francisco VA have conducted studies trying to answer the question, does it matter
+[2800.200 --> 2802.080] when you become physically active?
+[2802.080 --> 2808.160] So is it too late to start if I wasn't doing any sort of activity as a teenager? Is it
+[2808.160 --> 2810.960] too late to start in middle or late life?
+[2810.960 --> 2817.080] So what they found was that women who reported that they had been physically active, particularly
+[2817.080 --> 2821.600] during their teenage years, showed the lowest likelihood of cognitive impairment.
+[2821.600 --> 2823.920] So they seem to be the most protected.
+[2823.920 --> 2830.120] However, individuals who became active later in life also showed a reduced risk of developing
+[2830.120 --> 2831.640] cognitive impairment.
+[2831.640 --> 2836.160] So even though it seems like a lifetime of physical activity is most helpful, people are
+[2836.160 --> 2841.440] reaping benefits from this even if they start late in life to become physically active.
+[2841.440 --> 2847.400] So it seems to be a really critical component of brain health.
+[2847.400 --> 2852.280] So in addition, that was more of what we call an epidemiological study of cognitive decline.
+[2852.280 --> 2856.360] So what if we turn our attention to what the brain looks like in those who are physically
+[2856.360 --> 2857.680] active?
+[2857.680 --> 2862.960] So research with animal models has shown that a molecule in the brain called brain-derived
+[2862.960 --> 2869.440] neurotrophic factor, or BDNF, is critical for neuron health and is really important for
+[2869.440 --> 2872.240] the plasticity of synapses.
+[2872.240 --> 2878.360] And exercise has been shown to have a really robust effect on BDNF levels in the brain.
+[2878.360 --> 2884.480] So in this case, if you have rats run on a wheel for as little as a week, what you can
+[2884.480 --> 2891.040] see is that they have nearly a one-and-a-half-fold increase in BDNF expression in their hippocampus.
+[2891.040 --> 2893.440] And these effects were also still noted;
+[2893.440 --> 2897.360] levels were still raised three months later in these animals.
+[2897.360 --> 2902.960] So you can see here that there's this induction of BDNF in various parts of the hippocampus.
+[2902.960 --> 2904.440] So that's the dentate gyrus.
+[2904.440 --> 2908.880] These are the CA3 and CA1 regions.
+[2908.880 --> 2913.400] If we kind of try to take that literature from animals and apply it to humans, we're also
+[2913.400 --> 2915.480] starting to see some really exciting results.
+[2915.480 --> 2918.320] And this study came out in the last year.
+[2918.320 --> 2925.580] And this was looking at 120 adults who were randomized to either a walking group or a
+[2925.580 --> 2928.000] stretching-and-toning group.
+[2928.000 --> 2932.180] And these groups were completely identical except that the walking group participated
+[2932.180 --> 2938.460] in moderate-intensity walking for about 30 to 45 minutes per day, three times per week.
+[2938.460 --> 2943.180] So both groups received the same amount of social interaction and health instruction.
+[2943.180 --> 2946.780] So they really controlled for a lot of variables here.
+[2946.780 --> 2952.540] And then brain MRI scans were conducted before randomization, after six months, and again
+[2952.540 --> 2955.500] after the completion of the one-year trial.
+[2955.500 --> 2959.100] So you can see here that what they were really focusing on was the hippocampus.
+[2959.100 --> 2963.940] The hippocampus is a very metabolically active area that seems to be very sensitive to plasticity
+[2963.940 --> 2966.980] and where there's been probably the most research in terms of plasticity.
+[2966.980 --> 2970.620] So that's where they were really focusing.
+[2970.620 --> 2975.100] The caudate and the thalamus were also regions that they looked at, more as control areas.
+[2975.100 --> 2979.740] So with the hippocampus, what they noted was that the individuals who were in the
+[2979.740 --> 2985.940] stretching-and-toning group had about a 1.5% decline in their hippocampal volume over
+[2985.940 --> 2986.940] the one year.
+[2986.940 --> 2990.820] And this is very consistent with normal aging research.
+[2990.820 --> 2995.180] So this is something that we often see when we're following adults over time.
+[2995.180 --> 3002.020] But in contrast, what they found was that individuals who were in this more aerobically active group
+[3002.020 --> 3007.460] actually had a 2% increase in the size of the hippocampus, particularly the anterior
+[3007.460 --> 3010.260] part of the hippocampus, over one year.
+[3010.260 --> 3013.260] And this was a significant difference between the two.
+[3013.260 --> 3017.660] So this is one of the first studies to really robustly show this
+[3017.660 --> 3023.500] in a regimented way.
+[3023.500 --> 3027.860] So these observational studies, along with others, provide considerable support for the
+[3027.860 --> 3034.740] hypothesis that physical activity may reduce the risk of cognitive decline and dementia.
+[3034.740 --> 3035.740] But how does this actually happen?
+[3035.740 --> 3041.140] And I think this is an important question to ask any time we're reading literature about
+[3041.140 --> 3042.140] something new.
+[3042.140 --> 3044.860] What is the possible mechanism behind this?
+[3044.860 --> 3046.620] How could this possibly happen?
+[3046.620 --> 3049.380] How does this confer benefit?
+[3049.380 --> 3053.940] And as you might guess, physical activity is related to lower rates of obesity.
+[3053.940 --> 3057.500] Like I mentioned before, what's good for your heart is good for your brain.
+[3057.500 --> 3062.100] So obesity, particularly in middle age, has been shown to be significantly associated with dementia
+[3062.100 --> 3064.260] in later life.
+[3064.260 --> 3068.700] Physical activity is also linked to reduced vascular risks.
+[3068.700 --> 3072.980] So again, anything having to do with your cardiovascular system, blood being delivered
+[3072.980 --> 3078.100] up to your brain, has significant benefit for any sort of vascular risk factors that
+[3078.100 --> 3079.100] someone might have.
+[3079.100 --> 3085.540] So this could be diabetes, hypertension, cardiovascular disease.
+[3085.540 --> 3090.260] And as I just mentioned before, it also seems to induce BDNF, which is incredibly important
+[3090.260 --> 3092.740] for neuronal function.
+[3092.740 --> 3097.420] And something that's more near and dear to my heart is its relationship to inflammation,
+[3097.420 --> 3100.300] which is something I study in healthy, older adults.
+[3100.300 --> 3105.700] And people who are very physically active seem to have lower levels of inflammation in their
+[3105.700 --> 3106.700] bodies.
+[3106.700 --> 3111.260] And inflammation has been shown to be related to your brain structure.
+[3111.260 --> 3117.060] In particular, what we have shown is that people who have higher levels of
+[3117.060 --> 3120.860] inflammation, so these are just healthy people who do not have cognitive impairment,
+[3120.860 --> 3125.740] but if they have higher levels of inflammation, they seem to have lower integrity in the
+[3125.740 --> 3128.140] white matter areas of the brain.
+[3128.140 --> 3133.860] And you can see here the white parts here and the green tracts; these are
+[3133.860 --> 3138.020] not exactly what your tracts look like, but they are color coded here.
+[3138.020 --> 3142.300] And you actually have lower integrity in something in particular called the corpus callosum
+[3142.300 --> 3145.340] that connects the two hemispheres of your brain together.
+[3145.340 --> 3147.900] And this seems to be highly related to inflammation.
+[3147.900 --> 3153.220] So people who are physically active seem to have lower levels of inflammation.
+[3153.220 --> 3158.740] So in terms of lower integrity, we use something called diffusion tensor imaging, which basically
+[3158.740 --> 3163.100] looks to see how well water molecules move along a tract.
+[3163.100 --> 3167.420] And if something is really intact, like if you think about
+[3167.420 --> 3171.900] a fiber or anything that's a really intact tract, things should move along very easily.
+[3171.900 --> 3177.620] If it's starting to degrade at all, you will have lower directionality of the water.
+[3177.620 --> 3179.860] You can think about the water just starting to spread out.
+[3179.860 --> 3181.260] And so that's how we measure that.
+[3181.260 --> 3186.740] So it's unclear exactly what's degrading, whether it's the outer
+[3186.740 --> 3194.260] sheath of it, but it seems like the structure of it doesn't seem to be
+[3194.260 --> 3197.180] quite as intact as it was before.
+[3197.180 --> 3201.140] These white matter tracts are really important for processing information quickly.
+[3201.140 --> 3205.780] So they connect all these different parts in your brain, the gray areas; they connect
+[3205.780 --> 3208.740] these so that you can think more efficiently.
+[3208.740 --> 3210.420] All right.
+[3210.420 --> 3216.460] And so just to start to conclude a little bit, what I want to really highlight here is
+[3216.460 --> 3222.160] that based on this evidence with physical activity, I think it's pretty clear that there's
+[3222.160 --> 3226.060] considerable evidence that you can reap the benefits of physical exercise at any age.
+[3226.060 --> 3231.580] And it's actually what we tell our patients the most often, I would say, in our clinics:
+[3231.580 --> 3236.100] this is something that you can do at any point in time and it will really benefit
+[3236.100 --> 3240.260] your neuronal health, as well as benefiting cardiovascular health.
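For reference, the diffusion tensor measure described a few lines above is usually summarized as fractional anisotropy: how directional water diffusion is along a tract. Here is the standard formula in a short Python sketch; the eigenvalues are made up for illustration, with intact tracts giving values near 1 and degraded ones drifting toward 0.

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Fractional anisotropy from the three diffusion-tensor
    eigenvalues: 0 = water spreads equally in all directions,
    values near 1 = strongly directional diffusion along the tract."""
    mean = (l1 + l2 + l3) / 3
    num = math.sqrt((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(1.5) * num / den

# Illustrative eigenvalues (units of 10^-3 mm^2/s):
print(fractional_anisotropy(1.6, 0.3, 0.3))  # intact tract: high FA (~0.79)
print(fractional_anisotropy(0.9, 0.7, 0.7))  # degraded: diffusion spreads out (~0.15)
```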
+[3240.260 --> 3244.660] And I think there's also ample evidence to suggest that exercise really reduces vascular
+[3244.660 --> 3251.020] risk factors, obesity, and inflammatory markers, and may alter brain structure as well.
+[3251.020 --> 3255.300] So I think the combination of these, cognitive training, cognitive exercise, and
+[3255.300 --> 3261.860] physical exercise, are two very tightly interwoven facets of how we can improve our
+[3261.860 --> 3265.140] cognitive health over time and hopefully stave off dementia.
+[3265.140 --> 3266.140] All right.
+[3267.140 --> 3268.140] All right.
+[3268.140 --> 3271.900] So I just want to thank my colleagues, and I really appreciate everyone's attention and
+[3271.900 --> 3275.180] letting me talk to you about this topic, and I really look forward to speaking with you
+[3275.180 --> 3277.180] afterwards in the A3M there.
+[3277.180 --> 3283.820] So moving in a slightly different direction, my name is Winston Chong.
+[3283.820 --> 3287.780] I'm a neurologist and neuroscientist at the Memory and Aging Center.
+[3287.780 --> 3293.380] And I bring a sort of an interdisciplinary perspective in that my PhD was actually in philosophy.
+[3293.380 --> 3298.020] One of my areas of interest is in the points of contact between philosophy
+[3298.020 --> 3302.380] and other more humanistic disciplines and clinical medicine and neuroscience.
+[3302.380 --> 3307.060] And so what I'll be talking about today is a little bit more speculative, but I'm really
+[3307.060 --> 3311.620] trying to take a look at some points of contact, some recent findings in neuroscience, and
+[3311.620 --> 3317.660] how we might use these in connection with some older ideas to think a little bit more about
+[3317.660 --> 3322.060] what makes us uniquely human and what contributes to our sense of self.
+[3322.060 --> 3325.620] So I hope you'll bear with me on that.
+[3325.620 --> 3329.660] So before I talk about the self, though, I wanted to start by talking about a
+[3329.660 --> 3334.300] more general principle, which is the idea that brain diseases tell us about how the healthy
+[3334.300 --> 3339.300] brain is organized: that when we pay attention to what goes wrong when something happens
+[3339.300 --> 3344.100] in the brain, that gives us important clues about how things are connected
+[3344.100 --> 3347.020] in normal function.
+[3347.020 --> 3351.340] And one of my favorite examples of this actually comes from a passage from the Bible, which
+[3351.420 --> 3355.180] many of you will already be familiar with, but I hope after you leave tonight
+[3355.180 --> 3357.620] you'll think about it in a slightly different way.
+[3357.620 --> 3362.740] So this is Psalm 137 from the King James Version, and this is after the conquest of Jerusalem
+[3362.740 --> 3364.620] by the Babylonians.
+[3364.620 --> 3368.740] And what's interesting about the Psalm is that it describes two kind of divine punishments
+[3368.740 --> 3374.260] that the speaker would wish upon himself if he were to forget about Jerusalem.
+[3374.260 --> 3380.020] And the two punishments are, let my right hand forget her cunning, and let my tongue
+[3380.020 --> 3382.340] cleave to the roof of my mouth.
+[3382.340 --> 3387.460] And so if you think about this, either of these in its own right would be a very severe punishment.
+[3387.460 --> 3392.100] So for the first one, we're talking about essentially losing the use of the hand that
+[3392.100 --> 3396.780] 90% of us use to do pretty much everything, and then we're talking about the loss of the ability
+[3396.780 --> 3398.180] to speak.
+[3398.180 --> 3401.940] And so you might think originally, well, this seems like a bit much.
+[3401.940 --> 3405.180] Why should they both, why should they happen at the same time?
+[3405.180 --> 3409.380] But I think what's very striking as a neurologist when you read this is actually
+[3409.380 --> 3411.700] that these two problems often come together.
+[3411.700 --> 3415.740] We actually do tend to see people with both of these problems at the same time.
+[3415.740 --> 3420.020] And I'm assuming that the ancient Israelites observed this also.
+[3420.020 --> 3423.340] So to understand why, it helps to take a look at the brain.
+[3423.340 --> 3424.980] So this is a picture of the brain from the left side.
+[3424.980 --> 3429.020] So if you were looking at my left ear, if you could see through my skull, this is what
+[3429.020 --> 3430.540] you'd see.
+[3430.540 --> 3434.220] And I wanted to call your attention to a couple brain regions.
+[3434.220 --> 3439.340] So this region here in yellow is what we might call a motor speech area.
+[3439.340 --> 3444.220] And among other things, what's done by this area is basically, it helps us go from words
+[3444.220 --> 3448.460] to the actual movements that you have to make, again, with your lips, your tongue, and
+[3448.460 --> 3449.460] so forth.
+[3449.460 --> 3453.580] And one thing that we don't think about, because we're all fluent speakers of a language,
+[3453.580 --> 3457.620] is what a skillful and coordinated action it is to speak.
+[3457.620 --> 3461.820] Because basically, you're talking about coordinating the movements again of your jaw, your lips,
+[3461.820 --> 3465.580] your tongue, your vocal cords, your breathing to produce each word.
+[3465.580 --> 3468.740] And ordinarily, you don't have to think about how to do that.
+[3468.740 --> 3473.100] And that's partly because the sort of motor program for how to perform all of those actions
+[3473.100 --> 3478.980] correctly is kind of stored on the left side in this sort of yellow region.
+[3478.980 --> 3483.460] Then close by, along this red strip, there's another region.
+[3483.460 --> 3486.620] So you may know that the left side of the brain controls the right side of the body, and the
+[3486.620 --> 3488.980] right side of the brain controls the left side of the body.
+[3488.980 --> 3493.700] And so along this red strip is a region that controls basically the movements of the
+[3493.700 --> 3494.700] right hand.
+[3494.700 --> 3499.460] So the neurons in this region send signals down to the spinal cord that in turn send
+[3499.460 --> 3502.900] other signals down to the hand and basically control those movements.
+[3502.900 --> 3507.460] And so you can imagine that if something happens to the brain here, it's likely also
+[3507.460 --> 3509.820] to affect this region, and vice versa.
+[3509.820 --> 3513.420] And in fact, if you take a look at the map of the blood supply to the brain, there's
+[3513.420 --> 3517.060] a very important blood vessel that comes up through the neck, comes into the skull, and
+[3517.060 --> 3520.420] basically gives off this branch that supplies this whole region.
+[3520.420 --> 3524.820] So if something were to happen, like a blood clot were to migrate or develop here, you
+[3524.820 --> 3528.620] can easily see how it would affect the blood supply to this region of brain.
+[3528.620 --> 3532.660] So this region of brain would be permanently injured, leading to loss of the ability to
+[3532.660 --> 3536.900] speak, as well as loss of the movement of the right hand.
+[3536.900 --> 3541.820] And it's sort of fitting, I think, that we talk about this as a divine punishment
+[3541.820 --> 3547.020] or in a theological context, because our English word stroke, which is the modern
+[3547.020 --> 3551.580] term we use for this disease when you have a blood clot that blocks this vessel, comes
+[3551.580 --> 3553.620] from the phrase "the stroke of God's hand."
+[3553.620 --> 3554.620] Right?
+[3554.620 --> 3557.900] So this expressed again the idea that this is a sudden, devastating loss of neurological
+[3557.900 --> 3558.900] function.
+[3558.900 --> 3562.860] And while the ancient Israelites probably did not know that this is the way things were
+[3562.860 --> 3567.940] connected, we can learn from this observation that has been made for a long period of time
+[3567.940 --> 3570.820] that this is how these parts of the brain are connected.
+[3570.820 --> 3575.900] So that's just an illustration that I like about how we can learn from these brain diseases
+[3575.900 --> 3582.380] about how these things come together, even if we didn't know about the brain itself.
+[3582.380 --> 3587.620] So this is the way we've learned a lot about how particular parts of the brain
+[3587.620 --> 3588.620] work.
+[3588.620 --> 3592.460] We take what we call these focal lesions, these diseases that affect particular parts
+[3592.460 --> 3598.180] of the brain, so strokes, tumors, Dr. Prasin talked about side effects of brain surgery.
+[3598.180 --> 3602.780] And so we've learned in this way about how these particular parts of the brain are important
+[3602.780 --> 3607.540] for functions like vision, language, memory, our control of movement, our sense of touch,
+[3607.540 --> 3609.140] and so forth.
+[3609.140 --> 3613.540] What is kind of a new frontier in neuroscience, though, and this is what I want to talk to you about
+[3613.540 --> 3616.900] today, is a little bit more distributed in the brain.
+[3616.900 --> 3617.900] Right?
+[3617.900 --> 3620.340] And that's the question about how do these parts all work together?
+[3620.340 --> 3621.340] Right?
+[3621.340 --> 3623.620] How are these different functions brought together to make us who we are?
+[3623.620 --> 3624.620] Okay?
+[3624.620 --> 3627.780] Because we're not just language, we're not just vision, we're all of these things brought
+[3627.780 --> 3628.780] together.
+[3628.780 --> 3633.260] And the suggestion I'm going to try to present today is the idea that it's really
+[3633.260 --> 3637.180] the coordinated activity of multiple parts of the brain working together, and there's
+[3637.180 --> 3642.220] something we can really learn about how the brain is organized in this way.
+[3642.220 --> 3647.140] So here we're getting, again, from sort of more hard clinical neuroscience to something
+[3647.140 --> 3652.220] that's a little bit more ineffable, a little bit more intellectual and philosophical maybe,
+[3652.220 --> 3656.380] and starting to talk about, again, the topic for tonight, which is neuroscience
+[3656.380 --> 3657.780] and the self.
+[3657.780 --> 3662.020] And I think one of the problems that we have as a starting point is just the observation
+[3662.020 --> 3666.980] that when we talk about the self, people use this language in a lot of different ways,
+[3666.980 --> 3672.060] and seem to refer to many different things that might be related, but it's helpful to
+[3672.060 --> 3673.980] think about these differences.
+[3673.980 --> 3677.340] And so one thing that philosophers like to do is to kind of catalog the ways that people
+[3677.340 --> 3679.820] use natural language.
+[3679.820 --> 3684.860] And so if we think about different ways that people speak about their selves, we can identify
+[3684.860 --> 3686.980] hopefully a few themes.
+[3686.980 --> 3691.540] So obviously the idea of the self is very related to ideas of individuality.
+[3691.540 --> 3697.220] So myself, yourself, and also this difference between self and others, right?
+[3697.220 --> 3700.700] There's a sort of boundary where I end and the rest of the world begins.
+[3700.700 --> 3704.300] We also see this, you know, in immunology when we talk about our immune system
+[3704.300 --> 3708.100] recognizing self versus other.
+[3708.100 --> 3712.620] Another sense that's related to this is the idea of reflexivity or reflectiveness, that
+[3712.620 --> 3718.300] we think about self-awareness, our ability to take ourselves as the object of
+[3718.300 --> 3720.740] our thought or perception.
+[3720.740 --> 3722.340] And then there are a lot of ethical ideas, right?
+[3722.340 --> 3725.820] So there's an idea of personhood, that there's something special about beings that are
+[3725.820 --> 3727.580] selves, that have a self.
+[3727.580 --> 3732.300] There's a relationship to identity, and this is the problem that most philosophers
+[3732.300 --> 3734.380] would think about in terms of the problem of the self.
+[3734.380 --> 3738.700] And that's almost the problem of what makes you the same person over time, right?
+[3738.700 --> 3745.580] So there's a you today, and there's also a you, you know, five days ago or five years ago,
+[3745.580 --> 3749.500] and what's the connection between them that makes it all a continuous shared life, a shared
+[3749.500 --> 3750.500] self?
+[3750.500 --> 3753.900] And finally, you know, there's this ethical idea of autonomy, which, if we go back
+[3753.900 --> 3759.100] to the Greek roots, is really giving a law to yourself, being the kind of being that,
+[3759.100 --> 3762.020] you know, can self-legislate in this way.
+[3762.020 --> 3765.460] And so what I'm going to suggest today, when I try to translate this language into
+[3765.460 --> 3770.180] more the language of neuroscience, is that these different senses are related to,
+[3770.180 --> 3774.220] again, more global processes, not things that one particular part of the brain does, but
+[3774.220 --> 3778.260] rather these more general processes that integrate the activity of these different parts
+[3778.260 --> 3780.260] of the brain.
+[3780.260 --> 3785.420] So if we take again our lesson that brain diseases tell us about how the brain is organized,
+[3785.420 --> 3789.980] then one thing that we might look to for inspiration is to think about brain diseases that are
+[3789.980 --> 3794.780] not like strokes or tumors, but brain diseases that affect many different parts of the brain
+[3794.780 --> 3796.460] at the same time.
+[3796.460 --> 3800.380] And I'm going to get into a slightly controversial area, and I hope I don't get myself into
+[3800.380 --> 3806.860] too much trouble, but there is an idea that dementia is a disease that's very threatening
+[3806.860 --> 3809.460] to people's self.
+[3809.460 --> 3811.580] And I say it's controversial.
+[3811.580 --> 3814.660] So you know, there are some people that say that this is something that happens.
+[3814.660 --> 3818.660] So here's a popular book for family members of patients with Alzheimer's disease, and
+[3818.660 --> 3822.100] the title of the book would suggest that yes, this is something that happens, this is
+[3822.100 --> 3826.220] something we see that family members need to know about, that patients with Alzheimer's
+[3826.220 --> 3831.060] disease can lose their self in the course of the disease.
+[3831.060 --> 3834.820] But at the same time, you'll have other people who say, no, how could you say that?
+[3834.820 --> 3840.060] The self isn't lost in Alzheimer's disease; the self endures in Alzheimer's disease.
+[3840.060 --> 3844.220] And what I'm going to suggest in part is that some of this controversy reflects
+[3844.220 --> 3850.300] different neurobiological and neuroscientific aspects that are related to the self that
+[3850.300 --> 3856.020] might be preserved or might be lost in these different diseases.
+[3856.020 --> 3859.460] So I'm hoping to broker a compromise of sorts in this.
+[3859.460 --> 3864.460] So I'm going to, unfortunately, introduce a little bit of philosophical jargon, but I hope
+[3864.460 --> 3866.500] it'll be helpful.
+[3866.500 --> 3872.340] Thinking about us as people, as agents that have to move around and be effective in the world
+[3872.340 --> 3877.540] and make sense of the world, there are two kinds of problems of integration.
+[3877.540 --> 3881.580] So we talked already about the different functions that these different parts of our brain
+[3881.580 --> 3886.380] do, but they've got to be brought together in a coherent way that allows us to deal with
+[3886.380 --> 3889.100] the world and to be effective in the world.
+[3889.100 --> 3894.100] And two problems in particular that I want to focus on: the first I'll call a problem
+[3894.100 --> 3895.860] of synchronic unity.
+[3895.860 --> 3901.820] And by this, I mean unification of your activity at a given point in time.
+[3901.820 --> 3905.940] And the first observation to make is that at any point in time, there are hundreds of
+[3905.940 --> 3909.100] different things that are all competing for your attention.
+[3909.100 --> 3913.100] So you might be trying to pay attention to what I'm saying, but you might find your mind
+[3913.100 --> 3917.500] wandering to think about what kind of cheese they're going to be serving at the reception
+[3917.500 --> 3923.620] afterwards, or how can I get my hands on one of those brain games.
+[3923.620 --> 3927.020] And in addition, there's also sensory information.
+[3927.020 --> 3933.100] So you're listening to my voice, you're looking at the slides, but you might also be distracted
+[3933.100 --> 3938.140] by an itchy feeling on your leg or the way the tag of your shirt is rubbing against
+[3938.140 --> 3939.980] your neck, things like that.
+[3939.980 --> 3943.500] And your brain is being bombarded by this information all the time.
+[3943.500 --> 3945.700] All of these things are actually being represented in your brain.
+[3945.700 --> 3951.580] These things are happening, but you can't be responding to all of those at the same time.
+[3951.580 --> 3955.820] And similarly, from the point of view of motivation, we all have conflicting aims and desires
+[3955.820 --> 3957.860] that can't all be satisfied at once.
+[3957.860 --> 3962.780] So you came here to learn about the brain, to learn about brain games and so forth, but
+[3962.780 --> 3967.140] you might have also hoped to go to a movie or meet some friends for dinner and so forth.
+[3967.140 --> 3970.780] And we know that we can't satisfy all these different aims and desires at once.
+[3970.780 --> 3974.900] And again, so in both cases, you've got to focus, you have to prioritize and allocate
+[3974.900 --> 3975.900] attention.
+[3975.900 --> 3979.060] And that's just in any given moment.
+[3979.060 --> 3982.940] Then in addition, there are problems that I'll call the problem of diachronic unity.
+[3982.940 --> 3985.860] And that's kind of being coherent, in a sense, across time, right?
+[3985.860 --> 3991.060] Because your life extends far beyond this moment, far beyond this room.
+[3991.060 --> 3993.180] It extends forward and backward in time.
+[3993.180 --> 3997.820] And we all have important plans and projects that extend over the course of our lifetimes,
+[3997.820 --> 4002.340] or when we think about things for the sake of our children or important causes we have,
+[4002.340 --> 4007.140] we actually have important projects and plans that extend beyond our own lifetimes.
+[4007.140 --> 4014.020] And so part of being human is the ability to think about yourself extended beyond
+[4014.020 --> 4015.020] the present moment.
+[4015.020 --> 4016.660] You plan for the future.
+[4016.660 --> 4020.620] And then in order to do this, you've also got to be able to recall prior intentions.
+[4020.620 --> 4024.620] So you signed up for this course maybe a week or two ago, and then you had to remember
+[4024.620 --> 4027.380] that today was the day you were going to come.
+[4027.380 --> 4030.620] You've also got to be able to keep track of different things that you do in order to
+[4030.620 --> 4032.180] realize these long-term goals.
+[4032.180 --> 4036.020] So yes, I already did step one and step two, and now I have to think about step three
+[4036.020 --> 4037.620] and step four.
+[4037.620 --> 4042.860] And some psychologists have suggested that one way of thinking about this task that we
+[4042.860 --> 4048.580] all have as human beings is in terms of a faculty they call mental time travel, right?
+[4048.580 --> 4054.940] And so that's the ability to project your perspective and to imagine yourself
+[4054.940 --> 4056.860] in the future or in the past.
+[4056.860 --> 4063.660] And we use this when we recall old experiences, when we think about particularly moving experiences
+[4063.660 --> 4065.540] that we had in life.
+[4065.540 --> 4068.020] But we also use it when we think about the future.
+[4068.020 --> 4072.900] So maybe you've never been to Barcelona before, but you'd like to go, and you can imagine
+[4072.900 --> 4077.060] yourself walking along Las Ramblas or standing at the base of La Sagrada Família
+[4077.060 --> 4080.620] and looking up at the spires, right?
+[4080.620 --> 4086.500] So it's helpful that I have two problems of the self, because there are also two different
+[4086.500 --> 4092.180] forms of dementia that I think might be relevant as disease models that tell us about the
+[4092.180 --> 4095.660] way that, you know, again, we think about the way the brain is organized in health, and
+[4095.660 --> 4100.260] we also think about the way that things can go wrong in the case of disease.
+[4100.260 --> 4104.460] And so one of them, Alzheimer's disease, is one that you've already heard a lot about.
+[4104.460 --> 4107.780] One that may be less familiar is this disease called frontotemporal dementia.
+[4107.780 --> 4112.820] I guess those of you who were here last week would have heard a lot about it as well.
+[4112.820 --> 4117.260] But you know, these diseases tell us actually about different brain systems that seem to
+[4117.260 --> 4118.260] be involved,
+[4118.260 --> 4122.020] I would suggest, in these different aspects of self-integration, these different kinds
+[4122.020 --> 4124.620] of integrative problems of the self.
+[4124.620 --> 4129.540] So patients with frontotemporal dementia, these patients are very, very unique.
+[4129.540 --> 4134.860] They're very tragic in that they really have an inability to make their actions coherent,
+[4134.860 --> 4137.260] particularly, you know, just even in the moment.
+[4137.260 --> 4139.220] So these patients are often disinhibited.
+[4139.220 --> 4143.940] So these patients are prone to do things like, you know, they might see people in the supermarket,
+[4143.940 --> 4148.180] complete strangers, and say that they're fat or that they would like to have sex with
+[4148.180 --> 4149.180] them.
+[4149.180 --> 4152.980] And, you know, one thing I would say is that these are thoughts that even in
+[4152.980 --> 4157.620] normal people might occur to somebody in the course of their interactions, you know,
+[4157.620 --> 4158.700] kind of being out in the world.
+[4158.700 --> 4161.500] But, you know, we know not to say these things.
+[4161.500 --> 4165.540] And, sadly, these patients don't have that ability anymore.
+[4165.540 --> 4167.260] These patients can be very distractible.
+[4167.260 --> 4171.620] So even when they're focused on a particular goal, they can be easily distracted, and so
+[4171.620 --> 4173.300] they wind up doing something else.
+[4173.300 --> 4177.620] They have a certain loss of concern for other people, a loss of empathy that might be connected
+[4177.620 --> 4182.860] more broadly to a loss of a sense of the importance of other people.
+[4182.860 --> 4185.220] They tend to perform a lot of compulsive and repetitive movements.
+[4185.220 --> 4189.020] They might tap their leg in a certain way, or we've seen patients that rub their skin
+[4189.020 --> 4194.540] raw because they just have a tendency to rub in a certain way, or they might make repetitive
+[4194.540 --> 4200.820] kinds of vocalizations, in a certain way, that can be very inappropriate to
+[4200.820 --> 4202.980] the setting.
+[4202.980 --> 4207.060] They kind of overeat, so if there's food in front of them, they're likely to eat it, especially
+[4207.060 --> 4208.060] sweets.
+[4208.060 --> 4210.980] Something I put in gray, because it's not part of our formal criteria anymore, but it's
+[4210.980 --> 4213.180] something that we use clinically, is a loss of insight.
+[4213.180 --> 4217.340] So these patients seem especially unable to reflect upon the changes in their
+[4217.340 --> 4223.460] personality and the ways that their behavior affects other people.
+[4223.460 --> 4227.740] So that's frontotemporal dementia, and then we've also talked about Alzheimer's disease.
+[4227.740 --> 4232.740] And two of the things that we already talked about are that they forget these episodic
+[4232.740 --> 4237.100] memories, so they lose the ability to lay down these memory traces and refer
+[4237.100 --> 4238.420] back to them.
+[4238.420 --> 4242.940] And as Dr. Prasin pointed out, they also have trouble even in learning and acquiring
+[4243.020 --> 4244.300] these memories.
+[4244.300 --> 4248.940] These patients are often disoriented in time, so they lose track of the day of the week,
+[4248.940 --> 4251.060] the month, even the year.
+[4251.060 --> 4252.780] And they also have difficulties in navigation, right?
+[4252.780 --> 4255.580] So these patients don't just get lost in time, they get lost in space.
+[4255.580 --> 4260.660] So these patients tend to wander, or they tend to lose track of where they are.
+[4260.660 --> 4266.180] So one of the things that's important to know about these diseases is that they
+[4266.180 --> 4268.420] don't just strike sort of randomly.
+[4268.420 --> 4270.180] They actually tend to occur in patterns.
+[4271.140 --> 4276.540] They affect different parts of the brain, but they're quite repeatable in terms of which
+[4276.540 --> 4279.060] parts of the brain these particular diseases affect.
+[4279.060 --> 4282.380] And so here I have a map in blue.
+[4282.380 --> 4285.940] It might be familiar to those of you who were at Dr. Celie's talk last week.
+[4285.940 --> 4289.100] But there are certain parts of the brain that are affected and certain parts that are
+[4289.100 --> 4293.060] spared in frontotemporal dementia, and similarly for Alzheimer's disease.
+[4293.060 --> 4297.340] And then what's quite interesting is that when we go back and look in the healthy brain,
+[4297.340 --> 4300.420] we've done a lot of research that looks at these networks that are distributed
+[4300.420 --> 4301.420] across the brain.
+[4301.420 --> 4306.420] So again, not asking about what one particular part of the brain does on its own, but more
+[4306.420 --> 4309.900] research that's devoted to how these different parts of the brain are connected.
+[4309.900 --> 4311.780] And so we've identified these networks.
+[4311.780 --> 4316.580] And when you look at them, there's a significant amount of correspondence between the areas
+[4316.580 --> 4321.620] of the brain that are affected, that are atrophied and lost, in these dementia syndromes
+[4321.620 --> 4325.140] and these networks that we see even in the healthy brain.
+[4325.140 --> 4329.060] And we call these the salience network and the default network.
+[4329.060 --> 4333.860] So I should say that while Alzheimer's disease affects the hippocampus, as was mentioned,
+[4333.860 --> 4337.260] there is actually this broader constellation of brain regions that's also affected in
+[4337.260 --> 4338.380] this disease.
+[4338.380 --> 4343.500] So for the salience network, when we look at this network that's affected in frontotemporal
+[4343.500 --> 4349.540] dementia and we ask, what are we learning about what this network does in healthy people?
+[4349.540 --> 4354.580] We're finding that it's related to a lot of the functions that you might guess just based
+[4354.660 --> 4356.820] upon our knowledge of the disease.
+[4356.820 --> 4361.060] So we know that regions of this network are very important for things like value, what
+[4361.060 --> 4367.140] value we attach to things, even the value of things like money or relationships, and emotion.
+[4367.140 --> 4371.340] There are nodes of this that are very closely associated with motivation and drive, with
+[4371.340 --> 4375.140] kind of the will to get up and do things.
+[4375.140 --> 4380.020] And then even very basic cognitive processes like paying attention and being alert or staying
+[4380.020 --> 4384.180] on task are all related to this network and are all very closely related to deficits
+[4384.260 --> 4387.860] that we see in patients with these diseases.
+[4387.860 --> 4393.140] On the other hand, we know from, again, studies of healthy people that this default network
+[4393.140 --> 4396.900] is important for things like autobiographical memory and envisioning the future.
+[4396.900 --> 4400.740] So these are things that we would relate again to this idea of mental time travel.
+[4400.740 --> 4405.260] There are a couple other things that the default network seems to be involved with that may be related.
+[4405.260 --> 4410.060] One has to do again with navigation, with certain kinds of tasks where we have to orient
+[4410.060 --> 4411.380] ourselves in space.
+[4411.940 --> 4414.740] One that's also interesting is adopting other perspectives.
+[4414.740 --> 4419.180] So imagining myself in your shoes, knowing the things that you know, which might be
+[4419.180 --> 4422.580] different from the things that I know, this seems to be involved.
+[4422.580 --> 4428.380] And also mind wandering, which might actually be tapping into some of the memory and envisioning
+[4428.380 --> 4429.380] the future, right?
+[4429.380 --> 4433.140] So when your mind wanders, you're often likely to think about maybe something that you were
+[4433.140 --> 4436.980] doing or a conversation you had yesterday, or you might find your mind wandering to
+[4436.980 --> 4439.540] something that you'd like to do in the future.
+[4439.540 --> 4443.820] And kind of a broader picture that includes the idea of mental time travel that some people
+[4443.820 --> 4448.700] have proposed is that this default network is involved in engaging in these sort of dynamic
+[4448.700 --> 4451.540] simulations of possible states of affairs.
+[4451.540 --> 4455.860] So that when we recall a memory, one thing that we do when we reconstruct that
+[4455.860 --> 4460.700] experience is that we draw upon things that we've stored in the brain to recreate the
+[4460.700 --> 4462.580] experience that we had of that memory.
+[4462.580 --> 4466.300] And we might use a very similar system when we think about something we're going
+[4466.300 --> 4467.300] to do in the future.
+[4467.300 --> 4472.100] And there we're not drawing upon memory per se, but we use a similar system, along with
+[4472.100 --> 4477.100] information that we already know about some future event, that allows us to simulate it
+[4477.100 --> 4478.860] in a similar way.
+[4478.860 --> 4484.020] So in conclusion, I've talked about two distributed networks, right?
+[4484.020 --> 4487.580] So again, we're moving beyond thinking about things that any particular region of the
+[4487.580 --> 4493.020] brain does in isolation, and instead thinking about what the coordinated activity of these
+[4493.020 --> 4495.660] distributed parts of the brain does together.
+[4495.660 --> 4500.460] And these networks seem to be very central to the activity of other parts of the brain.
+[4500.460 --> 4506.060] And I think that when we think about what the upshot of this is for us as human beings,
+[4506.060 --> 4509.700] the function of these networks seems to be to give some coherence to our thoughts,
+[4509.700 --> 4511.900] our motivations, and our actions.
+[4511.900 --> 4516.780] And my suggestion is that this problem that I've mentioned of synchronic unity, of
+[4516.780 --> 4522.500] being a unified agent at a particular point in time, able to deal with all of
+[4522.500 --> 4527.060] the potentially distracting information, the conflicting desires that we have, and so
+[4527.060 --> 4531.740] forth, is really something that is served by this salience network.
+[4531.740 --> 4536.340] And meanwhile, this problem of diachronic unity, of being an agent that's extended over
+[4536.340 --> 4541.260] time, whose agency can go back and forth beyond the present moment, is something
+[4541.260 --> 4545.140] that's served in part by this default network.
+[4545.140 --> 4549.900] And then getting back to the controversy that I mentioned before, if people were to ask
+[4549.900 --> 4556.860] the question, is the self lost in dementia, my suggestion would be that to answer this
+[4556.860 --> 4560.740] question, we really have to distinguish between different kinds of unity that are important
+[4560.740 --> 4563.620] to being a coherent, coordinated self.
+[4563.620 --> 4567.860] So I think that one thing that we definitely do see in Alzheimer's disease is a loss of
+[4567.860 --> 4570.380] unity, a loss of self across time.
+[4570.380 --> 4575.140] So these are patients who have trouble linking one moment to the next.
+[4575.140 --> 4580.260] And so it can be very, very difficult for these patients to make plans or to rely upon
+[4580.260 --> 4584.260] their knowledge of past events in being effective in the present and future.
+[4584.260 --> 4588.460] But we also know that these patients can be very, very present in the moment.
+[4588.460 --> 4593.220] These patients can be very sensitive to other people's needs and emotions.
+[4593.220 --> 4598.620] They can respond in very socially appropriate, very graceful ways to all kinds of challenging
+[4598.620 --> 4599.940] situations.
+[4599.940 --> 4604.420] And this is, I think, what a lot of people refer to when they say that this is the preserved
+[4604.420 --> 4607.380] part of the self in Alzheimer's disease.
+[4607.380 --> 4610.740] Meanwhile, when we see patients with frontotemporal dementia, I think one of the things
+[4610.740 --> 4616.020] that's very striking about them is this loss of unity and coherence even at a given time.
+[4616.020 --> 4619.820] So just in a single interaction with one of these patients, you might find that they're
+[4619.820 --> 4625.100] distracted, that they're emotionally disengaged, that they act in somewhat bizarre ways.
+[4625.100 --> 4629.420] But if they're paying attention, their memory of these past events and their ability to
+[4629.420 --> 4631.780] project forward and backward can be preserved.
+[4631.780 --> 4638.060] And so I think that overall, I'd say that when we think about these diseases, we might
+[4638.060 --> 4643.260] think about different aspects, different tasks of self-integration, and see ways that they
+[4643.260 --> 4645.260] can stay together or come apart.
+[4645.260 --> 4646.260] Thanks. diff --git a/transcript/allocentric_ePP0G7FJGPI.txt b/transcript/allocentric_ePP0G7FJGPI.txt new file mode 100644 index 0000000000000000000000000000000000000000..47234fecaec43818af6a03ff78c4d756283d8d71 --- /dev/null +++ b/transcript/allocentric_ePP0G7FJGPI.txt @@ -0,0 +1,317 @@ +[0.000 --> 3.680] The famous work of Hubel and Wiesel, mostly done with cats.
+[3.680 --> 5.920] Sorry, I realize it's going to upset some people.
+[5.920 --> 8.160] That's just how it was done.
+[8.160 --> 10.640] Cat is lying on a table looking at a stimulus.
+[10.640 --> 15.560] There's an electrode in the cat's visual cortex.
+[15.560 --> 18.440] And so here's the view looking on at the cat.
+[18.440 --> 19.680] He's in an apparatus here.
+[19.680 --> 21.080] There are electrodes there.
+[21.080 --> 22.720] And out here in front of the cat,
+[22.720 --> 24.280] Hubel and Wiesel and their colleagues
+[24.280 --> 27.600] are flashing up light in different shapes
+[27.600 --> 30.800] in different positions in the cat's visual field.
+[30.800 --> 33.200] And recording from neurons.
+[33.200 --> 36.080] OK, so here is an example.
+[36.080 --> 38.360] So this is one of Hubel and Wiesel's movies.
+[38.360 --> 39.680] This is what the cat is seeing.
+[43.960 --> 46.880] So they're flashing a bar of light in front of the cat.
+[46.880 --> 48.800] And what you hear are the action potentials
+[48.800 --> 52.680] of a single neuron in the cat's visual cortex.
+[52.680 --> 55.240] And so they're marking on a piece of paper
+[55.240 --> 58.320] what the receptive field of that cell is.
+[61.240 --> 65.160] See, they keep moving it around.
+[65.160 --> 67.640] And once it gets to that edge, there's a response.
+[76.040 --> 77.160] OK, everybody got the idea.
+[77.160 --> 80.520] That's how you map out a receptive field.
+[80.520 --> 82.920] Now, there's another movie, but I'm running low on time.
+[82.920 --> 84.960] So I think I'll just post it on the Stellar site
+[84.960 --> 86.160] and you can look at it offline.
+[86.160 --> 87.640] Where this goes on for minutes.
+[87.640 --> 89.480] And they change the orientation of that bar
+[89.480 --> 91.480] and they do all kinds of stuff and it's pretty cool.
+[91.480 --> 94.200] OK, but I'm not going to take the time to run through it.
+[94.200 --> 98.560] The upshot of that is that what you find in primary visual
+[98.560 --> 101.200] cortex is that neurons have a property
+[101.200 --> 103.640] called orientation selectivity.
+[103.640 --> 106.720] And that means that each neuron likes bars
+[106.720 --> 108.880] of a certain orientation.
+[108.880 --> 111.800] That neuron we just saw liked, what was it, like this.
+[111.800 --> 115.560] Later on in that movie, they present lines like this.
+[115.560 --> 116.320] Bars of light.
+[116.320 --> 117.800] It doesn't like those.
+[117.800 --> 120.080] It likes this.
+[120.080 --> 121.080] OK.
+[121.080 --> 123.280] When you say it doesn't like them, it means
+[123.280 --> 124.760] it doesn't fire as much.
+[124.760 --> 125.280] Yeah.
+[125.280 --> 128.040] Yeah.
+[128.040 --> 130.040] OK, so here is a depiction of that.
+[130.040 --> 131.600] This is all one neuron.
+[131.600 --> 133.480] The dotted bars are the receptive field,
+[133.480 --> 136.920] where you have to put stuff to make that neuron fire.
+[136.920 --> 140.280] And here's the firing over time when you put stuff in there.
+[140.280 --> 142.800] And what you see is that this neuron responds
+[142.800 --> 146.720] to bars tilted like this, not bars tilted like that or like that.
+[146.720 --> 149.000] Everybody see that that's orientation selectivity
+[149.000 --> 151.480] of that single neuron?
+[151.480 --> 152.360] OK.
+[152.360 --> 157.160] If you plot that, you smoothly vary the orientation of that bar
+[157.160 --> 158.160] in the receptive field,
+[158.160 --> 160.600] and you measure the firing rate of that neuron,
+[160.600 --> 164.560] you get a curve like this, also showing orientation selectivity.
+[164.560 --> 165.280] Everybody got that?
+[168.080 --> 169.800] Why would a brain, why would a visual system
+[169.800 --> 171.680] have orientation selectivity?
+[171.680 --> 172.520] What could it be?
+[175.000 --> 178.000] If you were writing up the code to do object recognition,
+[178.000 --> 179.160] would you do this at the front end?
+[182.240 --> 183.440] Why?
+[183.440 --> 185.440] So one way to look at it is this is sort
+[185.440 --> 189.600] of the very primitive beginning alphabet out
+[189.600 --> 191.560] of which we will build shape.
+[191.560 --> 193.360] Remember, what the visual system is doing
+[193.360 --> 195.200] is trying to tell you what's out there.
+[195.200 --> 197.880] It's got to somehow go from spots of light hitting
+[197.880 --> 202.880] your retina to Miguel, right, or Stephanie.
+[202.880 --> 203.880] Now, how's it going to do that?
+[203.880 --> 205.960] We don't know, but it's going to have
+[205.960 --> 209.760] to extract information with some kind of building blocks out
+[209.760 --> 213.120] of which it's going to construct some perceptual representation.
+[213.120 --> 213.320] OK.
+[213.320 --> 216.480] So what we're seeing here is one of the very early stages
+[216.480 --> 219.240] of building up a perceptual representation.
+[219.240 --> 222.480] First, finding the bits that matter in the visual field.
+[222.480 --> 224.080] That's what you do with retinal ganglion cells.
+[224.080 --> 226.320] Find things that change over space and time.
+[226.320 --> 229.360] Next, start getting primitives of shape.
+[229.360 --> 229.600] Right?
+[229.600 --> 231.680] If you're going to describe the shape of an object,
+[231.680 --> 235.680] you need to know how the edges are oriented.
+[235.680 --> 237.200] That's what you see right here.
+[237.200 --> 239.400] This one fires a lot like that.
+[239.400 --> 242.960] It fires a little bit like that, and not at all like that.
+[242.960 --> 244.720] "Not at all" often in the nervous system
+[244.720 --> 246.080] means the background firing rate.
+[246.080 --> 248.760] The occasional spike that just happens now and then,
+[248.760 --> 251.760] but a much lower firing rate.
+[251.760 --> 252.960] That's what this represents.
+[252.960 --> 256.280] Much more firing to the preferred orientation
+[256.280 --> 259.720] than the non-preferred orientation.
+[259.720 --> 263.720] Now, all of this is sticking electrodes right in, in this case,
+[263.720 --> 267.840] cat V1, and recording the firing rate
+[267.840 --> 271.400] as a function of the orientation of that bar.
+[271.400 --> 272.320] That's cool.
+[272.320 --> 275.200] That seems like a sensible way to do it.
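As a hedged sketch of that "curve like this": a standard way to model an orientation tuning curve is a circular Gaussian bump on the 180-degree orientation circle. All parameter values below are invented for illustration, not taken from the recordings.

    import numpy as np

    def tuning_curve(theta_deg, pref_deg=45.0, baseline=2.0, gain=50.0, kappa=4.0):
        # Orientation repeats every 180 degrees, hence the doubled angle.
        # `baseline` is the low background firing rate mentioned above.
        d = np.deg2rad(theta_deg - pref_deg)
        return baseline + gain * np.exp(kappa * (np.cos(2 * d) - 1))

    for theta in (0, 22.5, 45, 67.5, 90, 135):
        print(f"{theta:6.1f} deg -> {tuning_curve(theta):5.1f} spikes/s")

Firing peaks at the preferred orientation (45 degrees here) and falls toward the background rate for bars tilted far from it, which is the shape of the measured curve.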
+[275.200 --> 278.360] Is there any way to detect orientation selectivity
+[278.360 --> 281.240] just with behavior?
+[281.240 --> 282.720] It seems like, how the hell would you do that?
+[282.720 --> 284.400] We're looking in the middle of the system.
+[284.400 --> 285.960] We're recording from neurons.
+[285.960 --> 287.920] What could we do with behavior that would tell us
+[287.920 --> 292.800] about the orientation selectivity or lack thereof in the brain?
+[292.800 --> 293.440] But there's a way.
+[293.440 --> 297.200] In fact, this was hypothesized way before
+[297.200 --> 298.080] Hubel and Wiesel.
+[298.080 --> 300.680] Question is, could we discover the same thing?
+[300.680 --> 302.200] Could we discover the idea that there
+[302.200 --> 305.600] are neurons in your visual system tuned to orientations,
+[305.600 --> 307.600] to specific orientations?
+[307.600 --> 310.320] Could we discover that without making a measurement
+[310.320 --> 313.720] from neurons, just measuring behavior?
+[313.720 --> 315.120] We're going to discover it right now.
+[315.120 --> 318.640] It's a slightly weak demo, but I hope it will work.
+[318.640 --> 320.000] OK, so now here's what you need to do.
+[320.000 --> 321.680] First, look here.
+[321.680 --> 323.400] Everybody see nice vertical lines?
+[323.400 --> 324.800] Got that?
+[324.800 --> 329.400] OK, now, your job is to fixate right on that horizontal bar.
+[329.400 --> 331.640] You can move back and forth along the width of the bar,
+[331.640 --> 332.920] but you can't leave the bar.
+[332.920 --> 336.400] And you have to keep fixating for a pretty good while,
+[336.400 --> 337.320] as it's a subtle effect.
+[337.320 --> 339.680] So you're going to have to keep doing this for another 20 seconds
+[339.680 --> 343.120] or so while I fill in airtime.
+[343.120 --> 347.520] And what you're doing now, as you stare at that, hopefully,
+[347.520 --> 351.760] is tiring out those orientation selective neurons
+[351.760 --> 353.960] above and below your visual field.
+[353.960 --> 355.200] Keep fixating there.
+[355.200 --> 358.720] Keep tiring out those neurons.
+[358.720 --> 361.800] And the idea is, if you do that long enough,
+[361.800 --> 364.920] the signal that your brain will be sending up
+[364.920 --> 368.640] to you, the conscious perceiver, wherever that is,
+[368.640 --> 373.520] will be a code in which the representation of those orientations
+[373.520 --> 376.080] has been diminished, because you burned them out.
+[376.080 --> 378.280] You adapted them out.
+[378.280 --> 380.120] OK, keep looking for another few seconds.
+[380.120 --> 383.480] Don't do this yet, but when I say so, what you'll do
+[383.480 --> 388.120] is you'll shift your gaze over to the horizontal bar
+[388.120 --> 388.880] to the left.
+[388.880 --> 391.400] And it's pretty subtle, but you can tell me if you see anything.
+[391.400 --> 394.720] OK, try shifting now.
+[394.720 --> 396.520] Did it work?
+[396.520 --> 397.120] Did you see?
+[397.120 --> 399.560] Did these guys tilt a little bit, more like that?
+[399.560 --> 400.800] Awesome.
+[400.800 --> 403.800] OK, this is a tilt aftereffect.
+[403.800 --> 406.480] And isn't that cool, like, right here in this class
+[406.480 --> 409.360] with a projector and a bunch of people,
+[409.360 --> 412.920] we discovered the properties of neurons in your visual system
+[412.920 --> 416.800] just by looking at what you see after you stare at this.
+[416.800 --> 420.000] Does everybody get the gist of why that would happen?
+[420.000 --> 423.960] Think of these pools of neurons in your primary visual cortex
+[423.960 --> 426.200] tuned to each of these different orientations.
+[426.200 --> 428.400] And now what we did was we made you really
+[428.400 --> 430.920] tire out the neurons that like this, or whatever it was.
+[430.920 --> 432.040] Yeah.
+[432.040 --> 433.040] Look at that long enough.
+[433.040 --> 435.920] They adapt, just like retinal ganglion cells adapt.
+[435.920 --> 437.280] OK, those neurons adapt.
+[437.280 --> 438.160] They tire out.
+[438.160 --> 440.480] They're less interested in firing, just like when you run a marathon,
+[440.480 --> 441.320] you don't want to run anymore.
+[441.320 --> 443.200] They're done, right?
+[443.200 --> 444.680] And so they are firing less.
+[444.680 --> 447.400] And so the net average orientation
+[447.400 --> 449.200] indicated by the whole pool of neurons
+[449.200 --> 452.000] is shifted in the direction of the other ones,
+[452.000 --> 455.080] because they're kind of taken out of your representation.
+[455.080 --> 456.600] Does that make sense?
+[456.600 --> 458.680] And it gives you an opposite aftereffect.
+[458.680 --> 461.560] OK, I mentioned that just to say that it's kind of cheating
+[461.560 --> 463.120] to record from neurons.
+[463.120 --> 466.240] The really hip thing is to infer what the neurons are doing
+[466.240 --> 468.040] with a nice low-tech method.
+[468.040 --> 470.760] But it's pretty cool to be able to do this without actually
+[470.760 --> 471.720] recording.
+[471.720 --> 473.760] The coolest thing actually is having both,
+[473.760 --> 476.440] to really make it a strong argument.
+[476.440 --> 478.200] OK.
+[478.200 --> 480.480] All right, so this adaptation method is sometimes
+[480.480 --> 483.360] called the psychophysicist's microelectrode.
+[483.360 --> 485.720] Psychophysicists are people who do just this.
+[485.720 --> 488.240] They present visual or sensory stimuli
+[488.240 --> 489.840] and measure behavioral responses.
+[489.840 --> 493.040] And from that, they try to infer how the system works.
+[493.040 --> 495.240] And in this case, they infer the properties of neurons
+[495.240 --> 498.200] just from behavior.
+[498.200 --> 500.680] And there are like a million variations of this.
+[500.680 --> 503.640] OK, so now we know that there are neurons in your visual
+[503.640 --> 504.840] cortex.
+[504.840 --> 507.360] The tilt aftereffect doesn't tell you where
+[507.360 --> 508.800] in the brain those neurons are.
+[508.800 --> 510.640] It just says somewhere along your processing chain,
+[510.640 --> 513.200] you have neurons that do that and that adapt out.
+[514.000 --> 516.640] You need physiology to tell you where.
+[516.640 --> 519.560] OK, so now we know that there are neurons in your primary
+[519.560 --> 522.560] visual cortex that have orientation selectivity.
+[522.560 --> 525.760] OK, how do you compute that?
+[525.760 --> 528.840] I keep making all this loose talk about how vision is visual
+[528.840 --> 531.560] information processing and you're computing things
+[531.560 --> 533.280] on representations.
+[533.280 --> 535.080] This is actually one of the few cases where
+[535.080 --> 537.920] there's a pretty good idea of how that's actually computed
+[537.920 --> 540.320] in a simple neural circuit.
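That pooled-average story can be simulated directly. Below is a minimal sketch of my own construction, not the lecture's: a pool of orientation-tuned neurons read out with a population vector. Adapting the neurons tuned near vertical shifts the decoded orientation of a nearly vertical test bar away from the adaptor, which is the tilt aftereffect. The tuning widths and adaptation strength are assumptions.

    import numpy as np

    prefs = np.arange(0.0, 180.0, 15.0)        # preferred orientations of the pool

    def rates(stim_deg, gains):
        # Same tuning shape for every cell, scaled by per-cell gain.
        d = np.deg2rad(stim_deg - prefs)
        return gains * np.exp(4 * (np.cos(2 * d) - 1))

    def decode(r):
        # Population vector on the doubled-angle circle
        # (orientation is 180-degree periodic).
        z = np.sum(r * np.exp(1j * np.deg2rad(2 * prefs)))
        return (np.rad2deg(np.angle(z)) / 2) % 180

    fresh = np.ones_like(prefs)
    # Adapting to vertical (90 deg) "tires out" the neurons tuned near 90:
    adapted = 1 - 0.5 * np.exp(4 * (np.cos(2 * np.deg2rad(90 - prefs)) - 1))

    test = 85.0                                # a nearly vertical test bar
    print(decode(rates(test, fresh)))          # ~85.0: veridical
    print(decode(rates(test, adapted)))        # ~83.6: repelled away from the adaptor

The adapted pool reports the test bar as tilted further from vertical than it really is, the opposite aftereffect described above.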
+[540.400 --> 544.520] So remember that we're going to try to derive this property
+[544.520 --> 548.120] from a simple circuit starting with the properties of retinal
+[548.120 --> 549.520] ganglion cells.
+[549.520 --> 551.040] It's true there's an LGN in between,
+[551.040 --> 554.200] but the LGN responds much like the retinal ganglion cells.
+[554.200 --> 559.240] OK, so this is what Hubel and Wiesel proposed, for which
+[559.240 --> 560.920] there's some evidence and still some dispute
+[560.920 --> 562.960] about exactly how this works.
+[562.960 --> 567.720] But imagine just taking a bunch of those retinal ganglion cells,
+[567.720 --> 570.280] or I'm sorry, lateral geniculate cells
+[570.280 --> 572.080] that behave like retinal ganglion cells.
+[572.080 --> 573.280] Here are four of them.
+[573.280 --> 580.200] Each of them is an on-center, off-surround spot detector.
+[580.200 --> 582.680] And if you have them aligned in a row in space,
+[582.680 --> 585.920] that is, the receptive fields are aligned, not the cells,
+[585.920 --> 588.920] they respond to different parts of space like this.
+[588.920 --> 593.400] And now you have all of them feed into a V1 cell.
+[593.400 --> 597.320] If it functions as a kind of AND gate, which neurons can do,
+[597.320 --> 601.800] more or less, then this neuron is going to detect bars
+[601.800 --> 603.840] of that orientation.
+[603.840 --> 605.800] Everybody see how that works?
+[605.800 --> 607.800] Nice and simple and low-tech.
+[607.800 --> 611.240] So here's this basic building block in your visual system
+[611.240 --> 613.520] that you can detect indirectly with adaptation
+[613.520 --> 616.320] behaviorally, that you can measure neurally.
+[616.320 --> 621.160] And here we have an idea of how that simple thing is computed.
+[621.160 --> 623.320] We won't be able to do this for, say, face recognition.
+[623.320 --> 624.720] We don't have the circuit for that.
+[624.720 --> 626.800] But for these simple early building blocks,
+[626.800 --> 628.120] there are very sensible circuits that
+[628.120 --> 632.640] can do these first few computations.
+[632.640 --> 635.200] All right, so how's this thing going to behave?
+[635.200 --> 639.640] Let's imagine a row of these, just the same thing.
+[639.640 --> 643.600] But what happens here is if you add up the on-centers
+[643.600 --> 647.280] and the off-surrounds across those neurons aligned like this,
+[647.280 --> 651.680] you will get a receptive field of the primary visual cortex
+[651.680 --> 653.160] neuron that looks like this.
+[653.160 --> 656.240] Everybody see that if you average that, you get this?
+[656.960 --> 660.280] So it has orientation sensitivity, as we just described.
+[660.280 --> 663.640] But it's also got these flanking fields here,
+[663.640 --> 666.360] these inhibitory flanking fields here.
+[666.360 --> 671.320] So if you put stimulus A right in the center like that,
+[671.320 --> 675.840] it'll turn on like that; with it turning on in the middle
+[675.840 --> 679.440] right there, you get an activation.
+[679.440 --> 685.080] If you put in a bar right here, right on top of the inhibitory
+[685.080 --> 690.000] flanker, you're going to get an inhibition in that neuron.
+[690.000 --> 693.960] And if you put it diagonally like C, there's no change,
+[693.960 --> 696.080] because the excitation from the center of the field
+[696.080 --> 699.680] is canceled by the inhibition from the flankers.
+[699.680 --> 701.560] Everybody get that?
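Here is a minimal sketch of that circuit, assuming made-up receptive field sizes and spacings: four on-center, off-surround subunits (differences of Gaussians) aligned vertically are summed, and a firing threshold makes the downstream unit behave roughly like the AND gate described above, so it responds to a vertical bar but not a horizontal one.

    import numpy as np

    y, x = np.mgrid[-20:21, -20:21]            # a small patch of visual field

    def dog(cy, sigma_c=1.5, sigma_s=3.0):
        # On-center, off-surround subunit centered at (x=0, y=cy),
        # built as a balanced difference of Gaussians.
        d2 = x**2 + (y - cy)**2
        return (np.exp(-d2 / (2 * sigma_c**2)) / sigma_c**2
                - np.exp(-d2 / (2 * sigma_s**2)) / sigma_s**2)

    rf = sum(dog(cy) for cy in (-9, -3, 3, 9)) # four aligned LGN-like subunits

    def bar(angle_deg, half_width=1.5):
        # A bright bar through the center at the given orientation
        # (90 deg = vertical).
        a = np.deg2rad(angle_deg)
        dist = np.abs(-x * np.sin(a) + y * np.cos(a))
        return (dist < half_width).astype(float)

    def v1_response(stim, threshold=2.0):
        # Thresholding the summed drive approximates the AND gate:
        # the cell fires only when several subunits are driven at once.
        return max(0.0, np.sum(rf * stim) - threshold)

    print(v1_response(bar(90)))   # vertical bar spans all four subfields: strong response
    print(v1_response(bar(0)))    # horizontal bar mostly hits the surrounds: 0.0

Summing `rf` across the subunits also reproduces the elongated excitatory center with inhibitory flanks that explains stimuli A, B, and C in the slide.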
+[701.560 --> 704.400] So these are called simple cells,
+[704.400 --> 707.560] basic orientation selective cells in primary visual cortex.
+[707.560 --> 710.920] That's how they behave and how they're computed
+[710.920 --> 715.920] from the properties of LGN input.
+[715.920 --> 717.360] Makes sense?
+[717.360 --> 718.520] There's much more to V1.
+[718.520 --> 720.600] There are all other kinds of selectivities,
+[720.600 --> 721.440] and we'll skip all that.
+[721.440 --> 723.440] Here's the basic idea.
+[723.440 --> 726.240] So that's one neuron.
+[726.240 --> 729.560] How are these guys organized spatially across the brain?
+[729.560 --> 732.200] I'm going to go rather quickly through a few slides here
+[732.200 --> 735.480] and then get to some more basic facts.
+[735.480 --> 738.960] Turns out that they're clustered together;
+[739.800 --> 742.080] they progress systematically across the cortex.
+[742.080 --> 744.160] So here's a piece of cortex, taken outside the head,
+[744.160 --> 746.640] a slab of cortex.
+[746.640 --> 748.640] And what you see is if you send an electrode
+[748.640 --> 752.760] along the length of cortex, you see this even, smooth progression
+[752.760 --> 756.280] in orientation selectivity.
+[756.280 --> 758.040] So it's not like random,
+[758.040 --> 759.480] that right next to a cell that likes this,
+[759.480 --> 761.120] there's a cell that likes that.
+[761.120 --> 765.440] No, they progress smoothly and evenly across the cortex.
+[765.440 --> 768.480] So there's like a little map, a little fine-scale map
+[768.480 --> 770.680] of orientation selectivity spatially
+[770.680 --> 773.520] across primary visual cortex.
+[773.520 --> 777.960] These are sometimes called orientation columns.
+[777.960 --> 780.800] And it's another kind of functional organization
+[780.800 --> 784.760] on top of retinotopy, all in the same chunk of cortex.
+[784.760 --> 786.880] So primary visual cortex is getting complicated.
+[786.880 --> 788.080] It isn't just a map.
+[788.080 --> 789.160] It's a map.
+[789.160 --> 791.440] And then on top of that map is a smooth progression
+[791.440 --> 794.520] of orientation happening all over the place.
+[794.520 --> 795.560] OK?
+[795.560 --> 797.520] All right.
+[797.520 --> 798.840] Can we see this in humans?
+[798.840 --> 800.080] OK, I'll do this really fast.
+[800.080 --> 802.680] So here's another study with 7 Tesla,
+[802.680 --> 804.680] super fancy high resolution.
+[804.680 --> 806.960] Here's a little slice through the back of the brain.
+[806.960 --> 809.560] Here's the sulcus between the two hemispheres.
+[809.560 --> 813.760] Here's a piece of V1 in a human subject,
+[813.760 --> 816.000] scanned at 7 Tesla.
+[816.000 --> 818.480] And in fact, it's claimed that you
+[818.480 --> 820.720] can see orientation columns like that
+[820.720 --> 822.120] across the cortex in humans.
+[822.120 --> 823.520] If you have high enough resolution,
+[823.520 --> 826.640] it needs to be down to around a millimeter or less.
+[826.640 --> 829.400] Each of those colors is a preferential response
+[829.400 --> 833.240] to a different orientation.
+[833.240 --> 835.080] This can be shown much better in animals,
+[835.080 --> 838.400] but you can see it here even in humans.
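As a toy illustration of that "smooth progression" (all numbers invented, not from any of the studies shown): preferred orientation drifting linearly along a tangential electrode penetration and wrapping around at 180 degrees, which is the signature of an orientation map.

    import numpy as np

    positions_um = np.arange(0, 1001, 100)   # hypothetical positions along a tangential penetration
    pref_deg = (0.2 * positions_um) % 180    # assumed drift of 0.2 deg per micron, wrapping at 180

    for p, o in zip(positions_um, pref_deg):
        print(f"{p:5d} um -> prefers {o:5.1f} deg")

A random arrangement would instead give unrelated preferred orientations at neighboring positions; the smooth, wrapping progression is what the electrode penetrations and the 7 Tesla maps both show.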
diff --git a/transcript/allocentric_fLaslONQAKM.txt b/transcript/allocentric_fLaslONQAKM.txt new file mode 100644 index 0000000000000000000000000000000000000000..2cb2fe61cafc39548c97627614da5ddea03f98e3 --- /dev/null +++ b/transcript/allocentric_fLaslONQAKM.txt @@ -0,0 +1,194 @@ +[0.000 --> 5.220] That is trulyigious.
+[5.280 --> 9.780] That, as allflower in the world,
+[9.860 --> 12.500] has been my passion for Uni,
+[12.580 --> 16.520] you followed the message of God,
+[17.580 --> 24.460] and become everything my God has done to us,
+[25.420 --> 27.460] but the power I stand in this beauty and the powerove
+[27.460 --> 29.880] The things that you attach to yourself,
+[31.640 --> 36.640] a purse, a pen, a fancy car, all these things are communicating.
+[38.200 --> 41.640] How you look at others communicates.
+[43.180 --> 47.680] And all day long, we are communicating non-verbally.
+[49.760 --> 50.660] All day long.
+[52.240 --> 54.940] You can look in on your child as they sleep
+[54.940 --> 57.180] and you can tell if they're having a nightmare
+[57.180 --> 58.980] or they're sleeping soundly.
+[60.480 --> 65.160] As you sit there, and now I'm starting to see some of you,
+[66.980 --> 69.280] you're giving information up,
+[70.560 --> 73.500] even as I'm giving information up.
+[73.500 --> 74.820] You're assessing me.
+[76.960 --> 81.080] If I can speak to you from an anthropological standpoint,
+[81.780 --> 86.080] I am transmitting information about myself,
+[86.420 --> 91.420] my beliefs, the things that I value, even as you are.
+[95.940 --> 98.260] Now that I can see you a little clearer,
+[98.260 --> 101.460] how many of you were dressed by your parents today?
+[101.460 --> 102.780] Raise your hand.
+[103.780 --> 104.600] Wow.
+[108.780 --> 111.780] Spouses, that's okay, your spouse is gonna draw.
+[113.780 --> 117.780] So you chose to dress the way you did,
+[118.780 --> 121.780] even as I chose to dress the way I did.
+[121.780 --> 124.780] They said, well, it's TED Talks, you can dress down.
+[125.780 --> 128.780] I said, you know, I was in the FBI for 25 years.
+[128.780 --> 130.780] I don't know how else to dress.
+[131.780 --> 133.780] It would be such a disappointment.
+[133.780 --> 137.780] It's like on TV, they always have suits,
+[137.780 --> 140.780] even when they're walking through the marsh.
+[141.780 --> 143.780] It's true.
+[143.780 --> 146.780] I can't tell you how many crime scenes I went through
+[146.780 --> 150.780] that ruined really inexpensive suits.
+[152.780 --> 153.780] But we look good.
+[153.780 --> 155.780] We look good.
+[161.780 --> 163.780] I guess humor is allowed.
+[164.780 --> 170.780] And so all day long, we're making choices.
+[171.780 --> 172.780] We're making choices.
+[172.780 --> 175.780] They're based on culture.
+[177.780 --> 181.780] They're based on peer pressure, on personal preferences.
+[183.780 --> 187.780] And so the things we wear and attach to ourselves,
+[187.780 --> 191.780] our bodies, are transmitting information.
+[193.780 --> 197.780] And the question that I'm often asked is, well, how authentic is it?
+[201.780 --> 203.780] How authentic is it?
+[203.780 --> 208.780] And as I pondered this, I said, you know what?
+[208.780 --> 214.780] What do we think of the power of nonverbal communication?
+[218.780 --> 222.780] But let's do it by taking the myths out of it
+[222.780 --> 226.780] and plugging in what really has value.
+[226.780 --> 230.780] What really is of value when it comes to nonverbals?
+[231.780 --> 234.780] How many of you have had a bad handshake?
+[237.780 --> 242.780] And normally, of course, now we have the coronavirus.
+[242.780 --> 246.780] I would have you turn to each other and give each other
+[246.780 --> 249.780] a handshake that's really bad.
+[249.780 --> 251.780] But I'm not going to do that.
+[251.780 --> 254.780] I want you to just put your hand in front of you
+[254.780 --> 257.780] and pretend to give someone a bad handshake.
+[257.780 --> 259.780] Ready? Let's do it.
+[259.780 --> 261.780] Let's do it.
+[261.780 --> 263.780] Yeah.
+[263.780 --> 264.780] Good.
+[264.780 --> 267.780] Do you realize the funny faces you make?
+[267.780 --> 270.780] It's like, I didn't ask you to make a funny face.
+[270.780 --> 272.780] And yet you did.
+[272.780 --> 275.780] Why is that?
+[275.780 --> 278.780] Because you're human.
+[278.780 --> 284.780] And humans betray what we feel, what we think,
+[284.780 --> 288.780] what we desire, what we intend,
+[288.780 --> 293.780] what makes us anxious and what we fear.
+[293.780 --> 296.780] And we do it in real time.
+[296.780 --> 299.780] We don't have to wait 20 minutes.
+[299.780 --> 302.780] It happens now.
+[302.780 --> 306.780] And our body language, in a way, it's exquisite
+[306.780 --> 310.780] because there's an area of the brain that is elegant.
+[310.780 --> 314.780] And it's elegant because it takes shortcuts.
+[314.780 --> 317.780] It doesn't think.
+[317.780 --> 322.780] If I bring in a Bengal tiger here and walk it around,
+[322.780 --> 325.780] nobody sits around and waves at it.
+[325.780 --> 330.780] That's like, you know, eat me.
+[330.780 --> 333.780] No. Everybody freezes.
+[333.780 --> 336.780] And that's because of the limbic system,
+[336.780 --> 341.780] this rather primitive area of the brain that reacts to the world,
+[341.780 --> 344.780] that doesn't have to think about the world.
+[344.780 --> 350.780] And everything that comes from the limbic brain is so authentic.
+[350.780 --> 354.780] You hear a loud noise and you freeze.
+[354.780 --> 355.780] Right?
+[355.780 --> 358.780] What was that?
+[358.780 --> 361.780] You see bad news or you see something on TV
+[361.780 --> 364.780] and you cover your mouth.
+[364.780 --> 366.780] Why is that?
+[366.780 --> 371.780] When the conquistadores arrived in the New World,
+[371.780 --> 377.780] they didn't have any problem finding out who was in authority.
+[377.780 --> 384.780] The same behaviors that they had just left in Queen Isabella's court,
+[384.780 --> 387.780] they saw in the New World.
+[387.780 --> 391.780] They had better clothing and an entourage.
+[391.780 --> 398.780] They didn't have their own show on television, but pretty close.
+[398.780 --> 406.780] All these behaviors are very authentic because the limbic system
+[406.780 --> 409.780] resides within that human brain.
+[409.780 --> 412.780] It's part of our paleocircuits.
+[412.780 --> 419.780] So when we see the furrowed forehead on a baby that's three weeks old,
+[419.780 --> 423.780] we know, from this little area called the glabella,
+[423.780 --> 427.780] that something is wrong. There's an issue.
+[427.780 --> 429.780] When we see the bunny nose, right?
+[429.780 --> 431.780] When you wrinkle the nose.
+[431.780 --> 433.780] Yeah, we know what that means.
+[433.780 --> 435.780] Ooh, I don't like that.
+[435.780 --> 437.780] I don't want that.
+[437.780 --> 440.780] Ooh. Right?
+[440.780 --> 446.780] Did I just say that in public?
+[446.780 --> 452.780] When we squint, we're focusing, but we have concerns.
+[452.780 --> 458.780] Ah, when the eyelids close, you want me to do what?
+[458.780 --> 468.780] And if things are really bad, you want me to talk for 15 minutes.
+[468.780 --> 470.780] Here's what's interesting.
+[470.780 --> 476.780] Children who are born blind, when they hear things they don't like,
+[476.780 --> 478.780] things they don't like,
+[478.780 --> 480.780] they don't cover their ears.
+[480.780 --> 484.780] They cover their eyes. Eyes that have never seen.
+[484.780 --> 491.780] This is millions of years old.
+[491.780 --> 495.780] Smiles are important.
+[495.780 --> 502.780] Smiles. The lips begin to disappear when we're stressed.
+[502.780 --> 507.780] Most politicians look something like that.
+[507.780 --> 511.780] Right before they're indicted, they look like that.
+[511.780 --> 516.780] Dramatic lip pulls, jaw shifting.
+[516.780 --> 519.780] Covering of the neck.
+[519.780 --> 523.780] You've seen that, clutching the pearls.
+[523.780 --> 529.780] Where's that creep? Oh, he's gone now. He's back.
+[529.780 --> 532.780] But did you know why?
+[532.780 --> 535.780] Large felines.
+[535.780 --> 546.780] We have seen large felines for so long taking down prey that we immediately cover our neck.
+[546.780 --> 555.780] How many of you have been told that you can detect deception by the use of non-verbals?
+[555.780 --> 559.780] I'm here to clear that up.
+[559.780 --> 563.780] When you leave here today, you say, well, I heard that Navarro fellow.
+[563.780 --> 568.780] And he did about 13,000 interviews in the FBI.
+[568.780 --> 572.780] He said there is no Pinocchio effect.
+[572.780 --> 577.780] Not one single behavior indicative of deception.
+[577.780 --> 580.780] Not one.
+[580.780 --> 583.780] And we mustn't propagate that.
+[583.780 --> 588.780] We must not tell people that we can detect that they're lying because of behaviors.
+[588.780 --> 590.780] They may be anxious.
+[590.780 --> 592.780] They may be stressed.
+[592.780 --> 595.780] But not deceptive.
+[595.780 --> 598.780] How many of you have been told that if you cross your arms,
+[598.780 --> 601.780] you're blocking people away?
+[601.780 --> 603.780] And you say that.
+[603.780 --> 605.780] There's a clinical term for that.
+[605.780 --> 608.780] It's called crap.
+[608.780 --> 612.780] Yeah, I said it.
+[612.780 --> 615.780] Get over it.
+[615.780 --> 618.780] It's crap. It's a self-hug.
+[618.780 --> 620.780] You're comfortable?
+[620.780 --> 621.780] Yeah.
+[621.780 --> 626.780] Where does this nonsense come from?
+[626.780 --> 629.780] I'm asked the question often:
+[629.780 --> 632.780] you were a spy catcher,
+[632.780 --> 635.780] you used nonverbals every day.
+[635.780 --> 637.780] What did you use them for?
+[637.780 --> 640.780] To make sure people are comfortable.
+[640.780 --> 644.780] To make sure that we are empathetic.
+[644.780 --> 651.780] The only way to be truly empathetic is by understanding nonverbals.
+[651.780 --> 658.780] Carl Sagan, the famous cosmologist, said, who are we?
+[658.780 --> 660.780] What are we?
+[660.780 --> 662.780] You think about that.
+[662.780 --> 667.780] It really takes a smart person to ask that question.
+[667.780 --> 670.780] What are we in this universe?
+[670.780 --> 673.780] And he summed it up this way.
+[673.780 --> 676.780] And I think it's rather exquisite.
+[676.780 --> 678.780] He said, all we are
+[678.780 --> 684.780] is the sum total of our influence on others.
+[684.780 --> 686.780] That's all we are.
+[686.780 --> 689.780] It's not how much you earn.
+[689.780 --> 691.780] It's not how many cars you have.
+[691.780 --> 694.780] It's our influence on each other.
+[694.780 --> 700.780] And what's interesting is that the primary way that we influence each other
+[700.780 --> 703.780] is through nonverbals.
+[703.780 --> 706.780] It's that nice handshake,
+[706.780 --> 708.780] it's a pat on the shoulder,
+[708.780 --> 711.780] it's that touch of the hand,
+[711.780 --> 715.780] it is that behavior that communicates love
+[715.780 --> 721.780] in a way that words simply can't do.
+[721.780 --> 725.780] When you leave here, you're going to have choices.
+[725.780 --> 727.780] You always have choices.
+[727.780 --> 731.780] You have free agency.
+[731.780 --> 735.780] And one of the things that you should think about is,
+[735.780 --> 739.780] how do I change my nonverbals?
+[739.780 --> 744.780] How do I become that person of influence?
+[744.780 --> 748.780] Because if there's one thing we need in this world,
+[748.780 --> 752.780] it's truly to be more empathetic.
+[752.780 --> 757.780] And so when I see this, it says it all.
+[757.780 --> 760.780] That's why we use nonverbals.
+[760.780 --> 763.780] Because they're powerful.
+[763.780 --> 764.780] Thank you. diff --git a/transcript/allocentric_gLUcuv2PxuU.txt b/transcript/allocentric_gLUcuv2PxuU.txt new file mode 100644 index 0000000000000000000000000000000000000000..b6436d09c99bd614ed853c317ba29935496fb3db --- /dev/null +++ b/transcript/allocentric_gLUcuv2PxuU.txt @@ -0,0 +1,7 @@
+[0.000 --> 15.000] Do you mind?
+[15.000 --> 16.000] Who?
+[16.000 --> 17.000] Me?
+[17.000 --> 18.000] Yes, you.
+[18.000 --> 19.000] Do you mind?
+[19.000 --> 20.000] Mind what?
+[20.000 --> 24.000] Learn more at www.9thplanet.org. diff --git a/transcript/allocentric_h-2vuT1fRbw.txt b/transcript/allocentric_h-2vuT1fRbw.txt new file mode 100644 index 0000000000000000000000000000000000000000..6e2c9bdbcf79101470cdb5579fe0e43309cd9e45 --- /dev/null +++ b/transcript/allocentric_h-2vuT1fRbw.txt @@ -0,0 +1,452 @@
+[0.000 --> 2.760] I'm going to be reading out 15 words,
+[2.880 --> 5.840] and what I'd like you to do is to write those words down in that space there...
+[5.920 --> 8.760] ...so they're fairly legible, so people can see them later, okay?
+[8.840 --> 11.160] And, everybody, I'm going to be reading you out 15 words,
+[11.240 --> 13.320] and I want you to try to remember these words.
+[13.400 --> 14.920] There may be a test.
+[14.920 --> 181.940] [unintelligible: this stretch of the recording is garbled in transcription]
+[181.940 --> 187.940] ...new experience, as is shown in people with amnesia.
+[190.940 --> 193.940] The link between memory and awareness and consciousness
+[193.940 --> 197.940] and subjective experience is strikingly apparent in Clive Wearing,
+[197.940 --> 200.940] who's a man with very profound amnesia.
+[200.940 --> 204.940] Clive was a noted musician and musicologist and conductor
+[204.940 --> 209.940] who contracted a virus that attacked a region in the middle of the temporal lobe
+[209.940 --> 212.940] of the brain, a region called the hippocampus.
+[212.940 --> 216.940] He lacks the ability to form new memories as a result of this
+[216.940 --> 218.940] and has only moment-to-moment consciousness,
+[218.940 --> 222.940] believing every few seconds that he's just woken up from a coma.
+[222.940 --> 227.940] And yet memory and conscious awareness don't always go together.
+[227.940 --> 230.940] Clive appears to have an intact unconscious memory.
+[230.940 --> 233.940] He retains the ability to play and conduct music
+[233.940 --> 237.940] to a very high level indeed, as this short video shows.
+[240.940 --> 244.940] So evidence from people with amnesia like Clive
+[244.940 --> 248.940] shows how medial temporal lobe structures like the hippocampus
+[248.940 --> 251.940] are critical for conscious long-term memory,
+[251.940 --> 254.940] but not for unconscious long-term memory.
+[254.940 --> 256.940] Clive was still able to play music to a very high level.
+[256.940 --> 259.940] He still remembered his wife and had a very firm,
+[259.940 --> 262.940] obviously very strong, emotional attachment to his wife.
+[262.940 --> 266.940] But the hippocampal damage that he suffered seems to have completely destroyed
+[267.940 --> 270.940] his long-term memory abilities, his conscious long-term memory abilities,
+[270.940 --> 272.940] more or less completely.
+[272.940 --> 276.940] So Clive has no objective measurable memory
+[276.940 --> 280.940] and also lacks any subjective experience of remembering.
+[280.940 --> 283.940] As Oliver Sacks once wrote in a New Yorker article on Clive,
+[283.940 --> 287.940] memories just disappear into the abyss.
+[287.940 --> 291.940] So one thing this means is that studying the hippocampus
+[291.940 --> 295.940] is unlikely to reveal much about the brain mechanisms responsible
+[295.940 --> 298.940] for subjective aspects of remembering.
+[298.940 --> 302.940] But of course we know that no brain region really works in isolation.
+[302.940 --> 306.940] Functional brain imaging reveals that networks of brain regions
+[306.940 --> 310.940] support all of our cognitive abilities, and memory is no exception.
+[310.940 --> 313.940] And recent evidence suggests that two other regions
+[313.940 --> 316.940] that are part of the memory network might be important
+[316.940 --> 319.940] for subjective aspects of remembering.
+[319.940 --> 322.940] I'm going to show you, so you can see up on the screen these regions,
+[322.940 --> 325.940] I'm also going to show you with a real live brain that I just recently removed
+[325.940 --> 328.940] from a willing audience member.
+[328.940 --> 333.940] So if I show you, we're going to try something here by putting this on.
+[333.940 --> 336.940] Let's see if this works.
+[336.940 --> 339.940] The technology doesn't defeat me. There we go.
+[339.940 --> 342.940] Okay, so hopefully you can see that, right?
+[342.940 --> 346.940] So the regions that we're very interested in here are the frontal cortex,
+[346.940 --> 349.940] which is this region here, the front of the brain,
+[349.940 --> 352.940] particularly this region at the front, the very front,
+[352.940 --> 357.940] this region just behind your forehead, which is called the anterior prefrontal cortex.
+[357.940 --> 361.940] And another region that we're going to be talking about is a region further towards the back,
+[361.940 --> 364.940] this region here, which is called the parietal lobe,
+[364.940 --> 369.940] a region just behind and above your ear.
+[369.940 --> 375.940] And then the region that was damaged in Clive is a region of the temporal lobe,
+[375.940 --> 378.940] right in the middle here, the medial part of the temporal lobe,
+[378.940 --> 380.940] called the hippocampus.
+[380.940 --> 384.940] And that's the region that tends to be most damaged in patients with amnesia.
+[384.940 --> 386.940] So these are the three regions that we're going to be talking about,
+[386.940 --> 390.940] and particularly the lateral parietal cortex, this region up behind the ear,
+[390.940 --> 395.940] and then the anterior prefrontal cortex, this region just behind the forehead.
+[395.940 --> 398.940] And to investigate the role of these regions and the contribution they might make
+[398.940 --> 404.940] to subjective aspects of remembering, we first need to think about what information
+[404.940 --> 412.940] processing operations might be necessary for generating our subjective experience.
+[412.940 --> 418.940] So what exactly is the subjective experience of remembering and how might we begin to understand it?
+[418.940 --> 422.940] The study of subjective experience, of consciousness and awareness,
+[422.940 --> 425.940] dates back to Freud and beyond.
+[425.940 --> 430.940] And yet, its scientific investigation has largely been neglected.
+[430.940 --> 434.940] Psychologists and cognitive neuroscientists have discovered a great deal
+[434.940 --> 440.940] about the cognitive and brain mechanisms that enable us to recall a shopping list, for example,
+[440.940 --> 444.940] but considerably less is known about the processes underlying the subjective experience
+[444.940 --> 449.940] of reliving a previous event, as if it's happening in front of us again.
+[449.940 --> 451.940] So why is that? Why the neglect?
+[451.940 --> 458.940] In part, it's because it's really hard to gain insight into what somebody's experiencing inside their head;
+[458.940 --> 463.940] even our most sophisticated functional imaging technology doesn't allow us to look at the brain
+[463.940 --> 467.940] and understand what that person is experiencing.
+[467.940 --> 475.940] Indeed, scientists have struggled even to know what questions to ask in order to be able to tap into that experience.
+[475.940 --> 482.940] And one strategy that I've begun to explore with my colleagues Charles Fernyhough and Raphael Lyon, among others,
+[482.940 --> 487.940] is to look for inspiration from novelists and poets and philosophers,
+[487.940 --> 493.940] many of whom have spent their whole lives thinking deeply about the nature of subjective experience.
+[493.940 --> 501.940] And this interdisciplinary approach has proven to be quite valuable in highlighting some key characteristics of remembering
+[501.940 --> 506.940] that we can then test empirically using psychological methods or cognitive neuroscience methods,
+[506.940 --> 513.940] and begin to shed light on what the cognitive processes and the brain mechanisms are that are responsible
+[513.940 --> 517.940] for that subjective experience of remembering.
+[517.940 --> 526.940] So one key characteristic is the realization that recalling a memory is not like picking a DVD off the shelf and just playing it,
+[526.940 --> 530.940] but rather is an act of reconstruction.
+[530.940 --> 536.940] Quoting from Marcel Proust is a bit of a cliché in a talk on memory, so this is the only time I'm going to do it,
+[536.940 --> 539.940] but it does give me an excuse to show one of my favorite photos of all time,
+[539.940 --> 544.940] which is Proust playing air guitar with a tennis racket.
+[544.940 --> 552.940] Proust was also one of the first to describe the reconstructive nature of memory in his epic 3,000-page treatise on remembering,
+[552.940 --> 554.940] which is called In Search of Lost Time.
+[554.940 --> 558.940] I'm not going to read from the whole of the 3,000 pages, but I've just picked this quote here:
+[558.940 --> 563.940] "Remembrance of things past is not necessarily the remembrance of things as they were,"
+[563.940 --> 569.940] which captures the essence of this reconstructive nature of remembering.
+[569.940 --> 579.940] This is also captured by this quote from A.S. Byatt, who describes memory as something that's influenced not just by how we might encode it into memory,
+[579.940 --> 584.940] but also how it can be changed and distorted by the way it's remembered.
+[584.940 --> 590.940] She characterizes remembering as an act of reconstruction, as storytelling.
+[590.940 --> 597.940] What is remembering?
I know that I have added to this memory every time I've thought about it, or brought it out to look at it;
+[597.940 --> 604.940] it has got both further away and brighter, more and less real.
+[604.940 --> 612.940] In this way, the subjective experience of remembering can be thought of as our way of telling ourselves stories about the events we've experienced.
+[612.940 --> 617.940] Stories that allow us to think not just about what happened, but how and why it happened,
+[617.940 --> 627.940] and to learn from that in a way that enables us to better understand ourselves and those around us and the world that we live in.
+[627.940 --> 634.940] So Frederic Bartlett, who was the first professor of experimental psychology in the University of Cambridge department that I now work in,
+[634.940 --> 641.940] published one of the most important books in the history of memory research in 1932, entitled Remembering.
+[641.940 --> 650.940] He was one of the first to consider how memory retrieval can be a reconstructive process, as opposed to a reproductive process.
+[650.940 --> 659.940] In one of his experiments, Bartlett asked his 1920s Cambridge undergraduates to read a Native American folk tale,
+[659.940 --> 666.940] and then over hours and days and weeks to repeatedly rewrite the tale from memory.
+[667.940 --> 674.940] And what he found was that over the repetitions, over the weeks and months, the story would get shorter,
+[674.940 --> 679.940] with the details starting to be lost, but the gist of the story tended to be retained.
+[679.940 --> 688.940] The details either were lost completely or were altered, distorted, to fit the students' Edwardian culture and social environment,
+[688.940 --> 693.940] with details added from stories that were more familiar to them.
+[693.940 --> 696.940] Hard-to-interpret items would be omitted altogether.
+[696.940 --> 705.940] And so in this way, our memory is distorted by our thoughts and our expectations at the time of retrieval.
+[705.940 --> 714.940] This is another quite striking example from Bartlett's book where he describes this experiment in which people were asked to repeatedly reproduce an abstract drawing from memory.
+[714.940 --> 719.940] Over repetitions, you can see how the details are lost, but the gist is retained,
+[719.940 --> 726.940] with the drawing gradually becoming more consistent with the generic schema of a human head.
+[726.940 --> 733.940] Okay, we're now going to do a bit of a demonstration, hopefully, of this effect.
+[733.940 --> 738.940] This is going to be some live science here, where hopefully we're going to see the effect,
+[738.940 --> 741.940] but given that it's live science, it may not work; but that doesn't matter.
+[741.940 --> 746.940] So what we're going to do is, first of all, I'm going to ask for a volunteer, someone who has legible handwriting.
+[746.940 --> 751.940] Is anyone willing to own up to having legible handwriting who'd like to volunteer to come down and write for me?
+[751.940 --> 755.940] Thank you very much, do come down.
+[755.940 --> 756.940] What's your name?
+[756.940 --> 757.940] Melinda.
+[757.940 --> 758.940] Hi, Melinda, nice to meet you.
+[758.940 --> 764.940] I'm going to be reading out 15 words, and what I'd like you to do is to write those words down sort of in that space there,
+[764.940 --> 767.940] so they're fairly legible, so people can see them later, okay?
+[767.940 --> 771.940] And everybody, I'm going to be reading you out 15 words, and I want you to try to remember these words.
+[771.940 --> 774.940] There may be a test.
+[774.940 --> 802.940] So the words are: door, glass, pane, shade, ledge, sill, yeah, house, open, curtain,
+[802.940 --> 811.940] fantastic, frame, view (you can maybe [unclear] now, yeah),
+[811.940 --> 825.940] breeze, sash, blind, shutter.
+[825.940 --> 828.940] Okay, thank you very much indeed, that's fantastic, beautiful handwriting.
+[828.940 --> 830.940] Thank you very much, thank you.
+[830.940 --> 845.940] Okay, now what we're going to do, so I'm going to ask you some questions about the words that you studied a little bit earlier,
+[845.940 --> 851.940] and a word is going to come up, and I want you to select yes or no if you think that word was in the original list.
+[851.940 --> 853.940] Okay, are we ready?
+[853.940 --> 856.940] No, okay, I'll wait a little bit for the technology.
+[856.940 --> 865.940] So the first word: was door on the list?
+[865.940 --> 871.940] So everybody voted, okay, James, you reveal the answers, is it correct?
+[871.940 --> 876.940] Well, we got 93% yes, and that is the correct answer, well done, fantastic.
+[876.940 --> 879.940] That's a good start, well done everybody.
+[879.940 --> 884.940] Okay, let's move on to the next one.
+[884.940 --> 889.940] Was banana on the list?
+[889.940 --> 893.940] Okay, James, let's reveal.
+[893.940 --> 898.940] Good, oh, well done everybody.
+[898.940 --> 901.940] Marvelous, you've been paying attention, well done.
+[901.940 --> 905.940] Okay, let's move on to the next one.
+[905.940 --> 911.940] Was window on the list?
+[911.940 --> 914.940] Okay, James, let's reveal the answer.
+[914.940 --> 924.940] Excellent, okay, so let's just have a look and prove whether or not you're correct.
+[924.940 --> 928.940] So those were the words on the list; was window on there?
+[928.940 --> 933.940] No, it wasn't, ladies and gentlemen.
+[933.940 --> 939.940] Science, isn't it wonderful?
+[939.940 --> 944.940] So a significant proportion of you thought that window was on the list
+[944.940 --> 949.940] because memory typically works by remembering the gist of a situation,
+[949.940 --> 952.940] rather than its details.
+[952.940 --> 957.940] So we tell ourselves a story that we heard a list of words that were kind of vaguely related to windows,
+[957.940 --> 962.940] so that we falsely think that window must have been on the list when in fact it wasn't.
+[962.940 --> 966.940] And that sort of gives this example of how memory is this reconstructive phenomenon,
+[966.940 --> 970.940] on the fly, as we're retrieving a memory.
+[970.940 --> 974.940] Another key feature of remembering is exemplified in these extracts.
+[974.940 --> 979.940] So for many people, memories can be vivid sensory experiences,
+[979.940 --> 983.940] whereas others report much less mental imagery.
+[983.940 --> 986.940] There's a great deal of individual variation in that.
+[986.940 --> 992.940] But among those who do relive past events, memories tend to be primarily visual.
+[992.940 --> 998.940] But T.S. Eliot here emphasises how our most important memories can often be multi-sensory,
+[998.940 --> 1004.940] combining sights, sounds, smells, etc.
+[1004.940 --> 1009.940] Why, for all of us, out of all we have heard, seen, felt in a lifetime,
+[1009.940 --> 1014.940] do certain images recur, charged with emotion rather than others?
+[1014.940 --> 1018.940] The song of one bird, the leap of one fish, in a particular place and time,
+[1018.940 --> 1022.940] the scent of one flower.
+[1022.940 --> 1027.940] Similarly, Virginia Woolf captures how a multi-sensory experience, often remembered very much
+[1027.940 --> 1033.940] from a first-person perspective, tends to characterise many people's most vivid,
+[1033.940 --> 1039.940] most emotionally salient memories, those that we often have the most confidence in.
+[1039.940 --> 1042.940] Here's Woolf describing the first memory that she can remember:
+[1042.940 --> 1047.940] it is of lying half asleep, half awake in bed in the nursery at St. Ives,
+[1047.940 --> 1055.940] hearing the waves breaking, one, two, one, two, lying and hearing this splash and seeing this light
+[1055.940 --> 1059.940] and feeling it's almost impossible that I should be here,
+[1059.940 --> 1063.940] of feeling the purest ecstasy I can conceive.
+[1063.940 --> 1066.940] Very evocative.
+[1066.940 --> 1071.940] As Virginia Woolf highlighted, the self-referential nature of our most important memories
+[1071.940 --> 1074.940] is another key characteristic.
+[1074.940 --> 1077.940] As the great philosopher-psychologist William James wrote,
+[1077.940 --> 1081.940] memory requires more than mere dating of a fact in the past.
+[1081.940 --> 1084.940] It must be dated in my past.
+[1084.940 --> 1089.940] In other words, I must think that I directly experienced its occurrence.
+[1089.940 --> 1094.940] And this extract from Wordsworth illustrates how our memories are tied closely to us,
+[1094.940 --> 1097.940] the person who originally experienced the event,
+[1097.940 --> 1103.940] and we typically relive them from our original point of view, a first-person perspective.
+[1103.940 --> 1108.940] Oh, many a time have I, a five years' child, a naked boy, in one delightful rill,
+[1108.940 --> 1111.940] a little mill-race severed from his stream,
+[1111.940 --> 1114.940] made one long bathing of a summer's day,
+[1114.940 --> 1118.940] basked in the sun and plunged and basked again,
+[1118.940 --> 1120.940] alternate, all a summer's day,
+[1120.940 --> 1123.940] or scoured the sandy fields,
+[1123.940 --> 1126.940] leaping through groves of yellow ragwort.
+[1126.940 --> 1131.940] Part of the reason I think that this is such a powerful depiction is the use of the first person,
+[1131.940 --> 1136.940] so we can see it through the poet's own eyes.
+[1136.940 --> 1140.940] We're going to demonstrate the importance of the self to memory.
+[1140.940 --> 1143.940] I need you to get your phones out again.
+[1143.940 --> 1146.940] Perhaps we should have stopped at the first one, given that that worked;
+[1146.940 --> 1149.940] we're now asking for trouble.
+[1149.940 --> 1152.940] If we can, James, we can go to the screen.
+[1152.940 --> 1154.940] Thank you very much.
+[1154.940 --> 1159.940] I'm going to be showing you some words and asking you some questions about each of the words.
+[1159.940 --> 1162.940] Okay, and then we'll be testing your memory for them again.
+[1162.940 --> 1164.940] This is a memory talk after all.
+[1164.940 --> 1166.940] Everyone ready?
+[1166.940 --> 1169.940] Okay, here we go.
+[1169.940 --> 1176.940] Does gentle contain the letter E?
+[1176.940 --> 1179.940] Who was that?
+[1179.940 --> 1182.940] Sorry, you can show them.
+[1182.940 --> 1184.940] You can show these ones.
+[1184.940 --> 1187.940] Okay, very good.
+[1187.940 --> 1194.940] One person who said no, come and see me afterwards.
+[1194.940 --> 1199.940] Does wise describe you?
+[1199.940 --> 1202.940] I'm not sure about that one.
+[1202.940 --> 1204.940] Okay.
+[1204.940 --> 1214.940] Does artistic contain the letter E?
+[1214.940 --> 1218.940] Very good.
+[1218.940 --> 1221.940] Does lazy describe you?
+[1221.940 --> 1224.940] Okay.
+[1224.940 --> 1227.940] Good.
+[1227.940 --> 1234.940] Does rude contain the letter E?
+[1234.940 --> 1236.940] Okay, there's still somebody, isn't there?
+[1236.940 --> 1239.940] One person.
+[1239.940 --> 1243.940] Does reckless describe you?
+[1243.940 --> 1250.940] Okay, excellent.
+[1250.940 --> 1252.940] Right, well done, everybody.
+[1252.940 --> 1255.940] Now we have the memory test.
+[1255.940 --> 1259.940] Oh, everyone's heart sinks.
+[1259.940 --> 1261.940] Are we ready?
+[1261.940 --> 1268.940] Okay, so what I'd like you to do is to select on your screen the words that you think appeared in the list just now.
+[1268.940 --> 1272.940] So select all the ones that you think were present.
+[1272.940 --> 1275.940] Everyone's made the responses they wished to?
+[1275.940 --> 1278.940] Okay, James, we can reveal the answers.
+[1278.940 --> 1280.940] Okay, that's very interesting.
+[1280.940 --> 1285.940] So if you could show the correct answer, James, you could just select that box in the middle.
+[1285.940 --> 1287.940] The correct answer.
+[1287.940 --> 1288.940] Thank you.
+[1288.940 --> 1293.940] So the ones that are highlighted in bold there are the ones that you referred to yourself,
+[1293.940 --> 1296.940] the ones you did the self-referential processing on.
+[1296.940 --> 1302.940] And the other ones are the ones where you just decided whether or not the word contained the letter E.
+[1302.940 --> 1304.940] So actually it's worked.
+[1304.940 --> 1306.940] What are the chances of that? Wonderful.
+[1306.940 --> 1309.940] Thank you very much, everybody.
+[1309.940 --> 1313.940] Give yourselves a round of applause, I think, for that.
+[1313.940 --> 1324.940] Now, we remember events that are related to ourselves better because we encode them into memory more deeply, by relating them to our highly developed self-concept,
+[1324.940 --> 1331.940] compared to the relatively shallow processing that's associated with deciding whether a word contains an E or not.
+[1331.940 --> 1338.940] Now, this self-reference advantage can also be seen in terms of the perspective with which we remember events.
+[1338.940 --> 1345.940] So if you think back to your most important memory, it might be your first day at school or your wedding day or the birth of your first child perhaps:
+[1345.940 --> 1353.940] do you envision the scene from a first-person perspective, from the same point of view as you originally experienced the event?
+[1353.940 --> 1364.940] Or do you see yourself in the scene, viewing it as a detached observer, from the third person or perhaps from above, as in this figure?
+[1364.940 --> 1377.940] The chances are that for your really important memories you re-experience them from a first-person egocentric perspective rather than a third-person allocentric perspective.
+[1377.940 --> 1393.940] And Sutton and Robbins found that first-person autobiographical memories tend to rate higher on things like vividness, the coherence of the event, sensory detail, the amount of detail involved, and the amount of emotional intensity associated with the memory.
+[1394.940 --> 1408.940] Whereas remembering the objective circumstances of an event tends to lead to relatively more third-person memories, a focus on feelings and emotion tends to lead to more first-person memories.
+[1408.940 --> 1421.940] Interestingly, two types of memories that researchers have often found to be experienced from a detached third-person perspective are giving an individual public presentation and walking or running from threatening situations,
+[1421.940 --> 1430.940] both of which I'm currently experiencing: an out-of-body experience.
+[1430.940 --> 1439.940] So to isolate the brain mechanisms of subjective remembering, we're looking for regions where damage does not cause total amnesia as in Clive Wearing,
+[1439.940 --> 1456.940] but which support processes that imbue our reconstructed memories with vividness and precision and confidence, integrating event features, perhaps across different sensory modalities, into a first-person perspective representation.
+[1456.940 --> 1463.940] So which brain regions might be important for those processes?
+[1463.940 --> 1477.940] With the advent of neuroimaging, a lot was learned about how regions beyond the hippocampus, the region damaged in Clive and in other people with amnesia, are involved in and contribute to memory.
+[1477.940 --> 1489.940] And in particular, something was learned about the parietal lobe, the region just behind and above the ear, and in particular a region of the parietal lobe known as the angular gyrus.
+[1490.940 --> 1498.940] This region came up very, very frequently in functional brain imaging studies of memory, even more frequently than the hippocampus, indeed, for a long time.
+[1498.940 --> 1508.940] And this was a surprise, because patients who have parietal lobe damage are not generally thought of as being amnesic; but perhaps memory was never properly tested in such patients,
+[1508.940 --> 1515.940] and damage to this region might cause memory impairments that were missed before.
+[1515.940 --> 1533.940] We tackled this question a few years ago by first asking healthy volunteers to come into the functional brain imaging scanner and perform a task that involved recollecting details about the context in which they'd studied words or studied famous faces.
+[1533.940 --> 1543.940] And when we did that, we found patterns of activity; recollecting words was associated with the activity shown here, these blobs of activity in the healthy volunteers.
+[1543.940 --> 1550.940] So activity here, in the left hemisphere of the brain, in the left parietal lobe around the angular gyrus.
+[1550.940 --> 1562.940] So activity in the left parietal lobe when people were recollecting words, and bilateral activity in both of the hemispheres, both of the parietal lobes, the left and the right, when people were recollecting famous faces.
+[1562.940 --> 1573.940] So then what we did was we went to a database that we have of patients who have suffered strokes and other kinds of brain damage that affects these exact same regions.
+[1573.940 --> 1581.940] And we selected those patients whose lesions closely overlap the patterns of activity that were elicited by the healthy volunteers.
+[1581.940 --> 1594.940] You can see these areas in purple and green here that closely overlap with the left region of activity, and regions that overlap with the right area of activity.
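[Editor's note: the lesion-selection step just described (picking patients whose lesions overlap the group activation map) can be made concrete with a small sketch. This is an illustration with toy numpy volumes, not the study's actual pipeline; in practice the masks would come from registered structural and functional images, and the Dice threshold here is arbitrary.]

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice coefficient between two binary 3-D masks (1 = perfect overlap)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 0.0

# Toy 3-D volumes standing in for real NIfTI images in a common space.
activation = np.zeros((20, 20, 20), dtype=bool)
activation[5:10, 8:14, 6:12] = True              # group fMRI recollection "blob"

patients = {name: np.zeros_like(activation) for name in ("p01", "p02", "p03")}
patients["p01"][6:11, 9:13, 7:11] = True         # lesion overlapping the blob
patients["p02"][14:18, 2:6, 3:8] = True          # lesion elsewhere
patients["p03"][4:8, 10:15, 8:13] = True         # partial overlap

threshold = 0.25  # arbitrary cut-off for this illustration
for name, lesion in patients.items():
    d = dice_overlap(lesion, activation)
    print(f"{name}: Dice = {d:.2f}{'  -> selected' if d >= threshold else ''}")
```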
+[1594.940 --> 1610.940] So if those specific parietal lobe areas are necessary for recollection, then patients with left parietal lobe lesions should show amnesia for words, and patients with right parietal lesions should show amnesia for faces.
+[1611.940 --> 1627.940] As you can see, however, that's far from what we found. So the performance of healthy control volunteers is shown in blue, the performance of patients with left parietal lobe damage in red, and the performance of patients with right parietal lobe damage in green.
+[1627.940 --> 1636.940] And you don't really need statistics to see that the patients were not only not amnesic, but their recollection was pretty much as good as that of healthy control participants,
+[1636.940 --> 1646.940] despite their brain damage closely overlapping the areas that healthy volunteers were engaging when they were doing the exact same task.
+[1646.940 --> 1653.940] So these patients are definitely not amnesic, but it's also not the case that their memory is completely normal.
+[1653.940 --> 1658.940] Talking to them (and many of the best experiment ideas come from just talking with patients),
+[1659.940 --> 1667.940] they say, well, I can remember whether something was on the left or the right of the screen, or if it was a male or a female voice that read a word out, if you ask me.
+[1667.940 --> 1675.940] But I'm not very confident about it, and I can't see that event unfolding in my mind's eye in the way that you describe.
+[1676.940 --> 1688.940] And indeed, when asked to rate how confident they were in each recollection response, patients tend to show reduced subjective confidence in the vividness and the precision of each of their accurate memories.
+[1688.940 --> 1695.940] So their accuracy is just as good as healthy controls', but their confidence in those accurate responses is significantly reduced.
+[1696.940 --> 1709.940] We recently sought to isolate these effects using a technique called transcranial magnetic stimulation, or TMS, which is a non-invasive method of temporarily disrupting brain function in specific regions.
+[1709.940 --> 1723.940] So Yasemin Yazar disrupted the function of that angular gyrus region of the parietal lobe, and compared the effect on memory with that of disrupting a vertex control site, somewhere up towards the top of the head.
+[1723.940 --> 1733.940] And just as with the patients, disruption of this angular gyrus region, in red here, made no difference to accuracy compared with disrupting the control region: no significant difference.
+[1733.940 --> 1742.940] But there was a significant reduction in these healthy volunteers' confidence in their accurate recollections, just as in the patients.
+[1742.940 --> 1751.940] Similarly, Marian Berryhill and colleagues found that patients with parietal lesions are impaired when they're asked to freely recall autobiographical events from their lifetimes.
+[1751.940 --> 1758.940] So they're just asked to come up with a memory from, I don't know, adolescence or something, and asked to recall as much as they can about that memory.
+[1758.940 --> 1770.940] Their free recall, compared with control participants', was massively reduced in terms of the amount of detail, in particular the amount of internal detail, the detail that is particularly relevant to that memory.
+[1771.940 --> 1782.940] So their memories contained reduced vividness, reduced amounts of detail. These patients appeared not to be subjectively reliving the events as they told them, in the way that many healthy control participants do.
+[1782.940 --> 1793.940] However, again, the patients were not amnesic, because when their recall was cued by asking specific questions about the memories, their memory was completely unimpaired, or statistically unimpaired.
+[1793.940 --> 1804.940] They were able to come up with the answers to questions about those memories, even though they couldn't spontaneously recall them, because they weren't reliving that event unfolding in front of them again.
+[1806.940 --> 1810.940] So what processes underlie this subjective aspect of remembering?
+[1811.940 --> 1823.940] As suggested by the literary insights discussed earlier, a key process involves the integration of multi-sensory memory features into a conscious representation that enables this reliving of the event.
+[1824.940 --> 1838.940] And anatomical connectivity data indicate very rich interactivity between the angular gyrus region of the parietal lobe, shown here, and sensory processing areas: regions that process visual information,
+[1838.940 --> 1844.940] regions that process auditory information, sound information, and other sensory modalities.
+[1844.940 --> 1863.940] And also rich interactivity with other memory regions, parts of the memory network, so frontal regions of the brain, hippocampus and medial temporal lobe regions of the brain, all ending up connecting in this kind of hub-like structure of the angular gyrus, where this converging information is perhaps combined and integrated.
+[1864.940 --> 1881.940] So to test this hypothesis in terms of memory, Heidi Bonnici and Franziska Richter asked volunteers to study short clips that were presented either auditorily, visually, or audiovisually.
+[1882.940 --> 1894.940] So for example, they might just hear an alarm going off, or they might watch a sun rising over the sea silently, or they might watch an ambulance zooming down a busy street with its siren blaring.
+[1895.940 --> 1908.940] Then, in the functional brain imaging scanner, participants were presented with one of the words and were asked to recall the clip, report the sensory modality in which it had been presented during the study phase,
+[1908.940 --> 1915.940] and rate the subjective vividness of their memory for that clip.
+[1916.940 --> 1932.940] What we found was that the brain areas involved in processing auditory information, shown here, were reactivated when recalling auditory memories, to a greater extent than when recalling visual memories, for example.
+[1932.940 --> 1943.940] And visual processing pathways in the brain, shown here, were reactivated during memory for visual features, more than for auditory features, for example.
+[1944.940 --> 1956.940] However, angular gyrus activity didn't differentiate between the auditory and the visual memories; instead, it showed greater activity during retrieval of integrated audiovisual information.
+[1957.940 --> 1969.940] And then we used a statistical technique called pattern classification, which tries to decode, if you like, the activity patterns in a brain region that are tied to specific memories.
+[1970.940 --> 1979.940] And we found that specific individual multi-sensory memories could be decoded quite accurately from the patterns of activity in the angular gyrus.
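[Editor's note: for readers unfamiliar with pattern classification, here is a minimal, self-contained sketch of the general technique, using simulated voxel patterns and scikit-learn rather than the study's data or code: a cross-validated linear classifier is trained to decode which of several memories a trial's activity pattern belongs to, and accuracy is compared against chance.]

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Simulated data: 60 retrieval trials x 200 voxels, 3 specific memories.
n_trials, n_voxels, n_memories = 60, 200, 3
labels = np.repeat(np.arange(n_memories), n_trials // n_memories)

# Each memory gets its own weak voxel pattern buried in noise.
memory_patterns = rng.normal(0, 1, (n_memories, n_voxels))
X = rng.normal(0, 1, (n_trials, n_voxels)) + 0.4 * memory_patterns[labels]

# Cross-validated decoding: can we tell which memory was retrieved?
clf = LinearSVC(max_iter=10000)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1 / n_memories:.2f})")
```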
+[1980.940 --> 1995.940] And what was even more exciting was that the accuracy with which that classifier was able to decode those memory representations tracked the subjective vividness with which participants rated their memories.
+[1996.940 --> 2005.940] So in other words, a detailed and precise memory representation in the angular gyrus was experienced by participants as a more vivid memory.
+[2006.940 --> 2014.940] However, like confidence ratings, this is based on self-report, on people reporting the qualities of their memory.
+[2015.940 --> 2020.940] And we know that self-report measures can be influenced by biases and expectations on the part of the rater.
+[2021.940 --> 2028.940] So is it possible to come up with a more objective measure of subjective experience? This, of course, is a very difficult thing to do.
+[2029.940 --> 2036.940] One possible objective measure that we've been exploring recently is to test the precision with which people can remember.
+[2037.940 --> 2049.940] So Franziska Richter and Rose Cooper recently developed a new task using continuous measures of memory to reveal not just whether people can remember something or not, but the precision with which they're able to remember.
+[2050.940 --> 2058.940] So people studied displays that each contained three everyday objects like this, and the objects were overlaid on a scene background.
+[2059.940 --> 2066.940] And each object was assigned a random location on the scene and a random orientation and a random color.
+[2067.940 --> 2072.940] So they could be anywhere on the screen, they could be any which way round, and any color of the rainbow.
+[2073.940 --> 2091.940] And then, after studying a set of such stimulus displays, people's memory is tested by asking them to recreate the features of studied objects by moving a 360-degree continuous dial until each feature matches their memory of it as precisely as possible.
+[2092.940 --> 2103.940] And we test memory precision for the three features that vary: for the orientation that people think the item had, its location, and the precise color of the object.
+[2104.940 --> 2115.940] When people did this task in the functional brain imaging scanner, we found that hippocampal activity determined whether or not they remembered anything at all about an object.
+[2116.940 --> 2125.940] When they did remember an object, activity in the angular gyrus, the region behind the ear, tracked the precision with which they remembered it.
+[2126.940 --> 2139.940] So we're currently excited about the potential of these continuous measures of memory precision for providing a much more sensitive measure of memory than traditional approaches that just differentiate between yes or no, whether someone was successful or not.
+[2140.940 --> 2156.940] This raises the prospect of perhaps being able to detect very early signs of age-related memory vulnerability, for example in middle age, which might mean that interventions could be employed that might maintain or even perhaps enhance memory abilities before they're lost.
+[2156.940 --> 2160.940] That's something we're exploring in research that's going on in the lab at the moment.
+[2161.940 --> 2169.940] What about the other element that's so crucial to subjective experience, which is this first-person perspective nature of memory?
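[Editor's note: a minimal sketch of how continuous-report data like this can be scored; this is my own toy example, not the task's actual analysis code. Responses and targets live on a 360-degree circle, so errors are wrapped, and precision can be summarized by the spread of those circular errors. Published analyses of such tasks often go further and fit mixture models, e.g. a von Mises component around the target plus a uniform guessing component.]

```python
import numpy as np

def circular_error(reported_deg, studied_deg):
    """Signed angular error in degrees, wrapped to (-180, 180]."""
    return (np.asarray(reported_deg) - np.asarray(studied_deg) + 180) % 360 - 180

# Toy continuous-report data: studied feature values (e.g., color angles on a
# 360-degree wheel) and a participant's dial settings at test.
rng = np.random.default_rng(1)
studied = rng.uniform(0, 360, size=200)
reported = (studied + rng.normal(0, 20, size=200)) % 360  # noisy but on-target

errors = circular_error(reported, studied)

# Two simple summaries: mean absolute error, and circular SD derived from the
# mean resultant vector length (smaller SD = higher precision).
rad = np.deg2rad(errors)
R = np.hypot(np.mean(np.cos(rad)), np.mean(np.sin(rad)))
circ_sd = np.rad2deg(np.sqrt(-2 * np.log(R)))
print(f"mean |error| = {np.mean(np.abs(errors)):.1f} deg, circular SD = {circ_sd:.1f} deg")
```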
+[2170.940 --> 2186.940] Evidence from the spatial navigation literature (people who study the brain's GPS) finds that allocentric, map-based navigation activates the hippocampus, as shown in this study by [name unclear].
+[2187.940 --> 2195.940] Whereas parietal regions are involved in egocentric, first-person perspective types of navigation.
+[2195.940 --> 2204.940] So does this mean that these parietal regions contribute the first-person perspective component of reminiscence that imbues our memories with such salience and vividness?
+[2205.940 --> 2210.940] This is some hot-off-the-press data from the lab suggesting that it just might.
+[2211.940 --> 2219.940] So Heidi Bonnici used transcranial magnetic stimulation to see what happened to people's autobiographical memories when the angular gyrus was disrupted.
+[2220.940 --> 2225.940] We were first able to replicate Marian Berryhill's findings from the patients that I talked about earlier:
+[2225.940 --> 2234.940] the free recall of autobiographical memories is reduced after stimulating the angular gyrus in healthy volunteers compared with stimulating a control region,
+[2234.940 --> 2240.940] but the memories are intact when recall is cued with specific questions about the events.
+[2240.940 --> 2255.940] What was fascinating, though, as the graph here shows, is that fewer autobiographical memories are reported as being experienced from a first-person perspective after angular gyrus disruption compared with control region disruption.
+[2255.940 --> 2262.940] So that suggests that the angular gyrus is indeed critical for tying our memories to us, the person who originally experienced the event.
+[2266.940 --> 2279.940] So the lateral parietal lobe, and the angular gyrus in particular, seems to help integrate event features into a vivid first-person perspective representation that enables this subjective experience of remembering.
+[2280.940 --> 2286.940] But why does our brain bother to conjure up a subjective experience of remembering? What is its adaptive value?
+[2287.940 --> 2302.940] The great memory theorist Endel Tulving proposed that subjective experience, what he called autonoetic awareness, allows us to reflect on the content of our memories, to understand and make judgments about the things that we remember,
+[2303.940 --> 2311.940] such as the critical ability of distinguishing events that actually occurred from those that we might have imagined or been told about by somebody else.
+[2312.940 --> 2324.940] And we've been studying the processes involved in distinguishing real from imagined events, and finding a role for the anterior prefrontal cortex, this region just behind your forehead, in that ability.
+[2325.940 --> 2333.940] This quote from Byron captures the dilemma of not being sure whether a memory is real or imagined:
+[2334.940 --> 2339.940] It is singular how soon we lose the impression of what ceases to be constantly before us.
+[2340.940 --> 2349.940] There is little distinct left without an effort of memory; then indeed the lights are rekindled for a moment, but who can be sure that imagination is not the torch-bearer?
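[Editor's note: the allocentric/egocentric distinction in the navigation work above has a simple geometric reading, sketched here; this is an illustration of the coordinate transform, not anything from the talk. A landmark's world-centered (allocentric) coordinates can be re-expressed in viewer-centered (egocentric) coordinates given the observer's position and heading.]

```python
import math

def allocentric_to_egocentric(landmark_xy, observer_xy, heading_deg):
    """Express a world-centered (allocentric) point in viewer-centered
    (egocentric) coordinates: x = right of the observer, y = ahead."""
    dx = landmark_xy[0] - observer_xy[0]
    dy = landmark_xy[1] - observer_xy[1]
    h = math.radians(heading_deg)  # heading measured clockwise from north (+y)
    right = dx * math.cos(h) - dy * math.sin(h)
    ahead = dx * math.sin(h) + dy * math.cos(h)
    return right, ahead

# A landmark 10 m north of the origin, observer at the origin facing east (90 deg):
# egocentrically the landmark is 10 m to the observer's left.
print(allocentric_to_egocentric((0, 10), (0, 0), 90))  # -> (-10.0, ~0.0)
```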
+[2350.940 --> 2370.940] Although we are normally pretty good at distinguishing real and imagined events, we can all, from time to time, be prone to that moment of confusion, where we think that something we dreamt about actually happened, like me opening the bowling for England for example, or where we are unable to remember whether we locked the front door when we left home or just thought about locking it.
+[2371.940 --> 2385.940] Such confusion can be particularly debilitating in psychiatric conditions such as schizophrenia, in which a person's relation to reality can be altered in ways that can of course be very disruptive to their everyday functioning.
+[2388.940 --> 2398.940] The doyenne of memory research, Marcia Johnson, coined the term reality monitoring to refer to the processes involved in distinguishing real and imagined events from one another.
+[2399.940 --> 2416.940] She has proposed that mental experiences don't come with tags that specify them as being real or imagined, but that we make judgments about their origin at the time of retrieving a memory, by considering the features that we are retrieving in the light of what we expect real and imagined experiences to be like.
+[2417.940 --> 2427.940] We might expect memories of real events to be full of externally generated perceptual details, such as where and when an event occurred, who else was there at the time, who said what to whom, etc.
+[2428.940 --> 2437.940] Whereas imagined events might comprise traces of internally generated thoughts and feelings and mental operations to a greater degree.
+[2438.940 --> 2452.940] Now, work I was involved in with Paul Burgess and colleagues at UCL a while ago identified that anterior prefrontal cortex might be important for switching attention between thoughts and perceptions when performing difficult problem-solving tasks, for example.
+[2453.940 --> 2460.940] Perhaps this region is also recruited when distinguishing between internally and externally generated memories.
+[2461.940 --> 2466.940] Here is an example of one of the tasks we use to test reality monitoring.
+[2467.940 --> 2473.940] So in a study phase here, people study everyday word pairs: Laurel and Hardy, bacon and eggs, etc.
+[2474.940 --> 2479.940] Half of the time they are presented with the whole word pair and have to read the whole thing out loud.
+[2480.940 --> 2489.940] Half of the time they are presented with the first word and a question mark, in which case they have to imagine the second word that completes the word pair and then read the whole word pair out loud again.
+[2490.940 --> 2498.940] And on half the trials the subject does this, and on half the trials the experimenter does this, and both of those factors are obviously varied across the experiment.
+[2499.940 --> 2508.940] And then in a test phase, people are shown the first word of a studied word pair and asked to make some kind of internal-versus-external judgment about their memory of it.
+[2509.940 --> 2517.940] Did they see or imagine the second word of the word pair, or was it themselves or the experimenter who read that word pair out loud?
+[2518.940 --> 2521.940] So different kinds of internal-versus-external distinctions.
+[2522.940 --> 2532.940] And when they are asked these questions in the functional brain imaging scanner, the only region of the brain that consistently shows significant activity is the anterior prefrontal cortex, this region here.
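[Editor's note: as a concrete picture of that 2x2 design (perceived vs. imagined, crossed with self vs. experimenter), here is a small illustrative harness; the word pairs, trial counts, and function names are my own choices, and the dummy responses are random rather than real data.]

```python
import random

# Illustrative stimulus set; the real experiment used many everyday word pairs.
WORD_PAIRS = [("laurel", "hardy"), ("bacon", "eggs"), ("salt", "pepper"),
              ("cup", "saucer"), ("thunder", "lightning"), ("bread", "butter")]

def make_study_list(seed=0):
    """Cross the two factors from the task: perceive/imagine x self/experimenter."""
    rng = random.Random(seed)
    conditions = [(p, s) for p in ("perceived", "imagined")
                  for s in ("self", "experimenter")]
    trials = []
    for i, pair in enumerate(WORD_PAIRS):
        presentation, speaker = conditions[i % len(conditions)]
        trials.append({"pair": pair, "presentation": presentation, "speaker": speaker})
    rng.shuffle(trials)
    return trials

def score_source_memory(trials, judgments, factor):
    """Proportion of test trials whose source judgment (e.g. 'perceived' vs
    'imagined') matches the study condition on the queried factor."""
    correct = sum(j == t[factor] for t, j in zip(trials, judgments))
    return correct / len(trials)

study = make_study_list()
guesses = [random.choice(("perceived", "imagined")) for _ in study]  # dummy responses
print("reality-monitoring accuracy:", score_source_memory(study, guesses, "presentation"))
```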
+[2533.940 --> 2541.940] So this is the frontal lobe of the brain here, and then this is the anterior prefrontal cortex, the region just behind your forehead at the very front of the brain.
+[2542.940 --> 2557.940] Now, if you don't believe me (and you should indeed be skeptical, since we're all concerned about how reproducible scientific results are these days), this shows that the effect replicates across different labs and across different kinds of reality monitoring task.
+[2558.940 --> 2566.940] So this is when people are making lots of different internal-versus-external kinds of distinctions, but you can see how consistent that anterior prefrontal cortex activity is.
+[2567.940 --> 2579.940] Now, we can all confuse real and imagined events from time to time, but reality monitoring performance on the kinds of tasks that we carry out in the lab varies considerably, even in apparently healthy individuals.
+[2580.940 --> 2588.940] So this graph shows the performance on one or other of our reality monitoring tasks of around 150 random participants who came into the lab.
+[2589.940 --> 2595.940] These are all healthy volunteers who are contributing to our science, and we're very grateful to them for doing so.
+[2596.940 --> 2608.940] They all perform absolutely normally on every other task that we give them, tasks that are tapping all kinds of different cognitive abilities; and indeed on most memory tasks that we give them, these participants perform absolutely typically.
+[2609.940 --> 2618.940] But whereas some people find these reality monitoring tasks apparently very easy, others seem to find them far more difficult.
+[2618.940 --> 2626.940] And so we wondered, after seeing these sorts of data, this variability: could it be due to something that's going on in the anterior prefrontal cortex of these people?
+[2626.940 --> 2635.940] We know that people activate this region when they're performing the task; is there some variability in this region that might be associated with the individual differences that we see here?
+[2636.940 --> 2650.940] So Marie Buda studied a region called the paracingulate sulcus, which is a very well characterized brain-fold variation in the middle part of the anterior prefrontal cortex.
+[2651.940 --> 2664.940] It's what's called a tertiary sulcus, which means it develops very late in gestation, and it's very prominent in around half of the general population, with its variability perhaps due to a combination of genetic and environmental factors.
+[2666.940 --> 2686.940] So Marie classified healthy volunteers who came into the lab to perform our reality monitoring tasks, based on their structural brain scans, into groups with a prominent paracingulate sulcus in one or both hemispheres, or a group who had complete absence of the paracingulate sulcus in both hemispheres.
+[2687.940 --> 2705.940] And what she found was that people whose paracingulate sulcus was absent in both the left and the right hemispheres exhibited significantly reduced reality monitoring performance compared to all the other healthy volunteers, who had a prominent paracingulate sulcus in one or both of their hemispheres.
+[2706.940 --> 2721.940] These were all typical healthy volunteers who showed normal performance on the other tasks that we gave them; they certainly didn't think they had any memory difficulty, and indeed they actually didn't have any memory difficulty, apart from making these specific judgments about real versus imagined types of events.
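[Editor's note: the group comparison described above, reality-monitoring scores for paracingulate-present versus paracingulate-absent volunteers, boils down to a standard two-sample test. A toy sketch with simulated scores follows; the numbers are invented, not the study's data.]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Toy reality-monitoring accuracy scores (proportion correct) for two groups,
# split by presence/absence of the paracingulate sulcus on structural scans.
pcs_present = np.clip(rng.normal(0.85, 0.08, size=40), 0, 1)
pcs_absent = np.clip(rng.normal(0.75, 0.08, size=20), 0, 1)

# Welch's t-test (doesn't assume equal variances or equal group sizes).
t, p = stats.ttest_ind(pcs_present, pcs_absent, equal_var=False)
print(f"present: {pcs_present.mean():.2f}, absent: {pcs_absent.mean():.2f}, "
      f"t = {t:.2f}, p = {p:.4f}")
```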
+[2722.940 --> 2741.940] Now disturbed awareness of what's real may also underlie some of the symptoms of clinical conditions like schizophrenia, such as hallucinations, for example, which many patients with schizophrenia experience, sometimes on a very frequent basis, and which can be extremely disruptive to their lives.
+[2742.940 --> 2751.940] And it's possible to consider hallucinations as resulting from the misattribution of imagined information as having occurred in the outside world.
+[2751.940 --> 2761.940] So you imagine a voice conveying a message to you, but you misattribute it as being a real voice coming from a person or from the radio or from something in the outside world.
+[2762.940 --> 2787.940] And this image shows how there's a close overlap between the brain areas that tend to be dysfunctional in schizophrenia, shown as green spheres in this image, and the areas that show activity in healthy volunteers when they perform reality monitoring tasks, which are these yellow dots; hopefully you can see the very close overlap between those two independent kinds of data.
+[2788.940 --> 2801.940] And so this close overlap supports the idea that disrupted reality monitoring might be responsible for the experience of hallucinations that many people with schizophrenia are plagued by.
+[2803.940 --> 2811.940] Jane Garrison recently tested whether paracingulate sulcus length might predict the occurrence of hallucinations in schizophrenia.
+[2812.940 --> 2827.940] She compared people with a diagnosis of schizophrenia who had a history of hallucinations against people whose schizophrenia diagnosis was due to other diagnostic symptoms, such as thought disorder for example, but who had never actually experienced hallucinations.
+[2828.940 --> 2840.940] And what she found was that there was no difference, or no significant difference, in the length of the paracingulate sulcus between healthy matched control volunteers and patients with schizophrenia who had never hallucinated.
+[2840.940 --> 2853.940] There was no difference in terms of their paracingulate sulcus length, but the length of the paracingulate sulcus significantly differentiated patients with a diagnosis of schizophrenia who hadn't hallucinated from patients who had hallucinated.
+[2853.940 --> 2858.940] Patients with a history of hallucinations had a reduced length of the paracingulate sulcus.
+[2858.940 --> 2876.940] This is a fascinating finding: this is a tiny brain fold in the medial part of the anterior prefrontal cortex, and it seems to be the only region of the brain that matters here, because we did other analyses to look elsewhere in the brain, to see whether there were any other regions that differentiated between these two patient groups, and there weren't.
+[2876.940 --> 2885.940] This is the sole region that differentiated between those who hallucinated and those who didn't, and these patients were matched on every other factor that we were able to match them on.
+[2885.940 --> 2905.940] And so it seems that this region is not only crucial for those reality monitoring processes, the ability to differentiate real from imagined information, but that if this region is disrupted in some way then that ability is reduced, and people have a greater propensity to experience things they imagine as being real, and to experience a hallucination.
+[2907.940 --> 2915.940] So to summarize, I've talked about the contribution of three key components of the brain's memory network.
+[2915.940 --> 2929.940] So the hippocampus, we think, plays a role in constructing an allocentric representation that comprises the core features of a memory and that determines whether we remember something at all or not.
+[2929.940 --> 2947.940] The parietal lobe then integrates multisensory features within a first-person perspective, helping to create that subjective sense of reliving a past event, experiencing it all over again as if it's unfolding in front of us.
+[2948.940 --> 2956.940] And then the anterior prefrontal cortex is among regions that use those representations to make decisions about our memories.
+[2956.940 --> 2963.940] For example, distinguishing events that actually occurred from those we might have imagined or been told about.
+[2963.940 --> 2974.940] And together these regions generate that subjective experience of remembering, which for many people is such a rich and vivid part of our mental lives.
+[2975.940 --> 2982.940] So I'd just like to finish by thanking all the people in my lab, many of whom are here this evening.
+[2982.940 --> 2988.940] Thanks very much for coming along. They did all the great work and I couldn't have done any of this without them.
+[2988.940 --> 2993.940] Thank you very much also to the people who are kind enough to fund our research.
+[2993.940 --> 2998.940] I hope they continue to be kind enough to fund our research, and thank you very much for coming.
+[2998.940 --> 2999.940] Thank you.
+[3004.940 --> 3006.940] Thank you.
diff --git a/transcript/allocentric_h8ZJSAyJ_bQ.txt b/transcript/allocentric_h8ZJSAyJ_bQ.txt
new file mode 100644
index 0000000000000000000000000000000000000000..9b92af3b3a9ce73086c76eaf4cefd0309b205e50
--- /dev/null
+++ b/transcript/allocentric_h8ZJSAyJ_bQ.txt
@@ -0,0 +1,10 @@
+[0.000 --> 3.520] Here's a super simple addition to your language that will dramatically change the way
+[3.520 --> 6.440] you communicate with others and how they then respond to you.
+[6.440 --> 10.560] So say someone in your team excitedly makes a suggestion and asks, don't you think that
+[10.560 --> 11.560] would be great?
+[11.560 --> 14.320] And you default to, yes, but last time it didn't really work.
+[14.320 --> 17.960] Now, you've just unintentionally invalidated their idea and them.
+[17.960 --> 20.840] So instead, change "but" to "and".
+[20.840 --> 24.760] Yes, and we can learn from what didn't work last time to increase our chances of success
+[24.760 --> 25.760] this time.
+[25.760 --> 29.480] They'll feel validated and it puts everyone in a more collaborative mode of problem-solving.
diff --git a/transcript/allocentric_i_9CTMZ60zc.txt b/transcript/allocentric_i_9CTMZ60zc.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ad1373729bb72cb8723e456ce5e81cc70988209e
--- /dev/null
+++ b/transcript/allocentric_i_9CTMZ60zc.txt
@@ -0,0 +1,123 @@
+[0.000 --> 5.000] Consider the man in the orange box.
+[5.000 --> 9.800] Can you predict where he will move next?
+[9.800 --> 14.560] This prediction problem has been tackled in computer vision by modelling human behaviour
+[14.560 --> 16.760] from a third-person view.
+[16.760 --> 22.000] But the question is whether this prediction models how we perceive and behave.
+[22.000 --> 27.240] I argue that if we would like to understand his behaviour, we should put ourselves into his
+[27.240 --> 31.760] shoes; we should be able to answer the following question.
+[31.760 --> 36.120] If I were him, how would I move in the scene?
+[36.120 --> 40.240] But the third-person prediction cannot answer this question, because we cannot experience
+[40.240 --> 42.400] what he is experiencing.
+[42.400 --> 45.920] Then what is he experiencing visually?
+[45.920 --> 52.200] This might be the image he sees from his perspective, with depth of course.
+[52.200 --> 58.200] We perceive the scene such that strong emphasis is on the short-range area.
+[58.200 --> 60.880] Occlusion reasoning is done from my perspective.
+[60.880 --> 64.600] We perceive and behave in relation to me.
+[64.600 --> 67.640] For instance, what does the stop sign mean to me?
+[67.640 --> 71.720] Such questions cannot be answered by the third-person prediction.
+[71.720 --> 76.080] Therefore, in order to predict human behaviour, we should use the first-person view by putting
+[76.080 --> 79.400] ourselves into his shoes.
+[79.400 --> 84.200] In this paper, we present future localisation: predicting future ego-motion from a first-
+[84.200 --> 87.880] person RGBD image.
+[87.880 --> 93.880] The heatmap represents the likelihood of future locations.
+[93.880 --> 97.560] This shows the overall trajectories.
+[97.560 --> 101.800] Then why is future localisation challenging?
+[101.800 --> 104.880] Consider an image with a future trajectory.
+[104.880 --> 110.160] The testing scene is similar to the training image in terms of visual semantics and geometric
+[110.160 --> 111.160] configurations.
+[111.160 --> 117.720] Therefore, we can predict the trajectory by copying it from the training image.
+[117.720 --> 123.200] But this is not geometrically correct, because the two images have different head orientations:
+[123.200 --> 126.480] one looking down, one looking forward.
+[126.480 --> 131.720] Geometric correction cannot solve this problem, because now it is semantically wrong.
+[131.720 --> 139.240] In this paper, we use the EgoRetinal representation and preference learning to address these challenges.
+[139.240 --> 142.800] Consider a man with the first-person image I.
+[142.800 --> 147.800] He will navigate the scene by planning his trajectory on the ground plane, or configuration
+[147.800 --> 152.640] space, using the first-person image I.
+[152.640 --> 158.280] We construct the EgoRetinal configuration space by projecting the image onto the ground plane.
+[158.280 --> 165.520] The projected image is an allocentric representation measured by the first-person image.
+[165.520 --> 169.080] This representation is head-orientation invariant.
+[169.080 --> 175.880] This is very important for first person, because we move our head significantly.
+[175.880 --> 180.160] We encode two pieces of information in the configuration space.
+[180.160 --> 186.560] First, we encode the RGB values measured by the first-person image I.
+[186.560 --> 191.280] And we also encode the height of the occluding objects, which is an allocentric representation
+[191.280 --> 194.400] measured by the depth image.
+[194.400 --> 199.520] Inspired by proxemics, the configuration space is represented in log-polar coordinates,
+[199.520 --> 202.560] which produces a retinal representation.
+[202.560 --> 208.560] This representation is persistent in 3D because it is head-orientation invariant.
+[208.560 --> 214.920] And it is persistent in 2D because projection of a unit distance in configuration space doesn't
+[214.920 --> 219.840] introduce severe perspective distortion in the first-person image.
+[219.840 --> 223.480] This produces our EgoRetinal map, in the middle.
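As a concrete illustration of that log-polar construction, here is a small Python sketch (my own reconstruction under stated assumptions, not the authors' code) that bins ground-plane points into an EgoRetinal-style map: angle is sampled uniformly, range is sampled logarithmically, so near-field resolution is high, and the map depends only on the walker's ground frame, not on head orientation. The bin counts and range limits are made-up parameters:

    import numpy as np

    def ego_retinal_bin(points_xy, n_theta=64, n_rho=32, r_min=0.5, r_max=50.0):
        """points_xy: (N, 2) ground-plane points, x forward, y left, in meters.
        Returns an (n_rho, n_theta) occupancy histogram in log-polar coordinates."""
        x, y = points_xy[:, 0], points_xy[:, 1]
        r = np.hypot(x, y)
        theta = np.arctan2(y, x)                                  # in [-pi, pi]
        keep = (r >= r_min) & (r <= r_max)
        # Logarithmic range axis: equal bins cover exponentially growing distances.
        rho = np.log(r[keep] / r_min) / np.log(r_max / r_min)     # in [0, 1]
        t = (theta[keep] + np.pi) / (2 * np.pi)                   # in [0, 1]
        grid = np.zeros((n_rho, n_theta))
        ri = np.minimum((rho * n_rho).astype(int), n_rho - 1)
        ti = np.minimum((t * n_theta).astype(int), n_theta - 1)
        np.add.at(grid, (ri, ti), 1)                              # accumulate counts
        return grid

    # Example: three points at increasing range all land in distinct log-range bins.
    pts = np.array([[2.0, 0.0], [10.0, 1.0], [40.0, -5.0]])
    print(ego_retinal_bin(pts).sum())  # 3 points binned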
+[223.480 --> 228.320] In the same way, the EgoRetinal map for the depth image can be computed, which encodes the height
+[228.320 --> 233.160] of the occluding objects.
+[233.160 --> 235.920] This representation has three properties.
+[235.920 --> 243.480] First, it is head-orientation invariant, and it is persistent in 2D and 3D distance.
+[243.480 --> 246.880] And it reasons about occlusion.
+[246.880 --> 251.160] Here we compare the EgoRetinal map with other representations used in computer vision and
+[251.160 --> 253.160] robotics.
+[253.160 --> 257.800] Cartesian coordinates in an image produce severe collapsing of long-range pixels.
+[257.800 --> 261.640] More importantly, they are not head-orientation invariant.
+[261.640 --> 268.720] Therefore, they are not suitable as the representation for first-person prediction.
+[268.720 --> 273.320] Cartesian coordinates on the ground plane have been widely used in robotics because they produce an
+[273.320 --> 275.320] allocentric representation.
+[275.320 --> 279.960] However, they collapse short-range pixels severely.
+[279.960 --> 287.320] They can only model one destination, because there exists only one dimension for it.
+[287.320 --> 293.000] Our representation can model an infinite number of destinations, and it is persistent in 2D
+[293.000 --> 295.320] and 3D distance.
+[295.320 --> 301.280] We learn the EgoRetinal map using our dataset, captured by first-person stereo cameras which
+[301.280 --> 313.200] produce depth images.
+[313.200 --> 317.600] For each training image, we can associate a future trajectory, because the camera
+[317.600 --> 320.560] wearer has already been there.
+[320.560 --> 325.880] The trajectory is a label which can be computed by structure from motion.
+[325.880 --> 330.000] Note that no manual supervision is required.
+[330.000 --> 338.320] This dataset includes various scenes such as mall, commute, downtown, and cafe.
+[338.320 --> 343.640] We represent a future trajectory in the configuration space.
+[343.640 --> 349.080] In addition to the polar coordinates, we encode the spatial distribution of obstacles around
+[349.080 --> 350.800] the trajectory.
+[350.800 --> 356.720] Omega is a vector toward the nearby obstacles, computed from the depth map.
+[356.720 --> 360.440] This reflects the topological structure of the trajectories.
+[360.440 --> 368.360] For instance, trajectories A and B are spatially close but topologically different.
+[368.360 --> 373.720] Here we visualize the topological structure of trajectories in space.
+[373.720 --> 380.280] We learn walkable pixels by associating trajectories with images; on the top right, the transparency of
+[380.280 --> 384.280] the image indicates the walkable pixels.
+[384.280 --> 391.600] We also learn the height of the occluding objects from the depth image.
+[391.600 --> 398.240] In prediction, given a test image, we retrieve trajectories by matching EgoRetinal
+[398.240 --> 403.160] representations using k-nearest neighbour search.
+[403.160 --> 407.360] We predict the trajectory that minimizes the following cost.
+[407.360 --> 412.200] The data cost takes into account distance from the retrieved trajectories.
+[412.200 --> 417.000] We want the predicted trajectory to preserve the walking preference present in the training
+[417.000 --> 419.600] images.
+[419.600 --> 425.000] The depth cost accounts for the walking preference over occluding objects.
+[425.000 --> 429.520] And the RGB cost reflects the walking preference over pixels.
+[429.520 --> 436.200] This optimization allows us to adapt the retrieved trajectories onto the testing image.
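Here is a rough Python sketch of that prediction step as I understand it from the talk: retrieve the k nearest training examples by comparing EgoRetinal maps, then score a candidate trajectory with a weighted sum of the data, depth, and RGB costs. The feature functions, weights, and the assumption that all trajectories are resampled to a common length are mine, not the paper's actual formulation:

    import numpy as np

    def knn_retrieve(test_map, train_maps, k=5):
        """train_maps: (M, D) flattened EgoRetinal maps; returns the k closest."""
        d = np.linalg.norm(train_maps - test_map.ravel(), axis=1)
        return np.argsort(d)[:k]

    def trajectory_cost(traj, retrieved_trajs, height_map, walk_prob,
                        w=(1.0, 0.5, 0.5)):
        """traj: (T, 2) candidate path in map cells; lower cost is better.
        Assumes all trajectories are resampled to the same length T."""
        # Data cost: distance to the closest retrieved trajectory.
        data = min(np.mean(np.linalg.norm(traj - rt, axis=1))
                   for rt in retrieved_trajs)
        ij = traj.astype(int)
        # Depth cost: penalize walking over cells with tall occluding objects.
        depth = np.mean(height_map[ij[:, 0], ij[:, 1]])
        # RGB cost: prefer cells the walkability model considers likely walkable.
        rgb = -np.mean(np.log(walk_prob[ij[:, 0], ij[:, 1]] + 1e-6))
        return w[0] * data + w[1] * depth + w[2] * rgb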
+[436.200 --> 439.240] This results in the future localization.
+[439.240 --> 445.160] On the bottom right, we visualize the EgoRetinal map with the predicted trajectories.
+[445.160 --> 451.720] Note that we perform the perception and prediction tasks in the same EgoRetinal representation.
+[451.720 --> 456.120] We evaluate our prediction by comparing with other representations.
+[456.120 --> 463.120] Cartesian image coordinates produce an error rate of 0.7 m per second, which means 7 m of error after 10
+[464.120 --> 469.920] seconds. Cartesian coordinates on the ground plane produce 0.2 m per second.
+[469.920 --> 474.800] But this representation produces larger error for short-term prediction,
+[474.800 --> 481.320] because the short-range pixels are projected onto a few pixels in the configuration space.
+[481.320 --> 485.800] Our EgoRetinal representation produces about 0.1 m per second, which means
+[485.800 --> 488.760] one meter of error after 10 seconds.
+[488.760 --> 494.200] The prediction is not biased by the short- and long-range pixels.
+[494.200 --> 499.360] On the left, we show the image with ground-truth trajectories computed by structure from
+[499.360 --> 500.360] motion.
+[500.360 --> 505.560] In the middle, we visualize the EgoRetinal map with the predicted trajectories.
+[505.560 --> 511.840] On the right, we project back to the first-person image.
+[511.840 --> 518.840] Our predicted trajectories reflect human walking preference and avoid obstacles.
+[518.840 --> 528.120] Interestingly, we find the walkable space behind the occluding object, such as the cart.
+[528.120 --> 533.160] Geometrically, we cannot make any prediction behind the occluding object.
+[533.160 --> 538.080] But our walking preference tells us it is likely that there is a space behind the
+[538.080 --> 542.080] cart so that we can pass through.
+[542.080 --> 547.080] We can also predict in the highly dynamic scene where cars and people are passing by.
+[547.080 --> 557.520] This is a mall sequence for indoor navigation.
+[557.520 --> 562.800] Our method reliably finds a possible trajectory in the presence of dynamic obstacles such as
+[562.800 --> 572.720] humans.
+[572.720 --> 574.560] This is an IKEA scene.
+[574.560 --> 579.160] Our method finds the space around the furniture, although similar scenes are not present
+[579.160 --> 587.240] in the training data.
+[587.240 --> 591.840] In this paper, we present future localization from the first-person view by putting ourselves
+[591.840 --> 592.840] into others' shoes.
+[592.840 --> 598.720] We use the EgoRetinal map, which is allocentric but seen from the first-person view, to predict
+[598.720 --> 599.720] the trajectories.
+[599.720 --> 600.720] Thank you.
diff --git a/transcript/allocentric_iby0BGVy2ik.txt b/transcript/allocentric_iby0BGVy2ik.txt
new file mode 100644
index 0000000000000000000000000000000000000000..57b53e4cea25da57f7751355f688a19a02dfcce2
--- /dev/null
+++ b/transcript/allocentric_iby0BGVy2ik.txt
@@ -0,0 +1,128 @@
+[0.000 --> 2.600] Hi, I'm Dr. Dustin York.
+[2.600 --> 3.400] You're a doctor?
+[3.400 --> 4.600] My friend needs help.
+[4.600 --> 7.000] No, not a doctor.
+[7.000 --> 9.800] I'm a professor of communication.
+[9.800 --> 12.300] And what I love is nonverbal communication.
+[12.300 --> 14.600] Nonverbal communication really helps with leadership,
+[14.600 --> 18.400] negotiation, politicians, even public speaking in the classroom.
+[18.400 --> 21.800] Now some people tend to be better at nonverbal communication than others.
+[21.800 --> 24.800] For example, extroverts are really good at nonverbals.
+[24.800 --> 28.800] People that truly know empathy and can use empathy very well are really good.
+[28.800 --> 33.000] People who have people-oriented jobs, they know nonverbal communication really well.
+[33.000 --> 38.600] And finally, women tend to be better than men at picking up nonverbal communication and body language.
+[38.600 --> 41.800] But you know who else is really good at nonverbal communication?
+[41.800 --> 42.800] Politicians.
+[42.800 --> 46.600] Politicians are trained in nonverbal communication and body language.
+[46.600 --> 49.800] Now they're trained for debates and presentations,
+[49.800 --> 54.800] specifically by media trainers who tell them these tricks of body language and nonverbal communication.
+[55.000 --> 60.600] This ad brought to you by Dr. DustinYork.com for your media training needs.
+[60.600 --> 64.600] Here are a few tips that politicians use who are trained in nonverbal communication.
+[64.600 --> 68.800] One, when they come on stage, they actually point out two or three people in the audience.
+[68.800 --> 69.800] This is what they're going to do.
+[69.800 --> 71.600] They're pointing at a complete stranger and saying,
+[71.600 --> 73.500] Hey, how's it going?
+[73.500 --> 75.100] They don't know this person whatsoever.
+[75.100 --> 79.600] When polled, people in the audience and on TV think that politician spent time in this city
+[79.600 --> 83.000] and actually had one-on-one interactions: I like that politician.
+[83.000 --> 87.800] Actually, they just got off the bus, gave a presentation and went right back on the exact same bus.
+[87.800 --> 89.200] Some other quick things.
+[89.200 --> 92.800] Every politician has their own light bulb that makes them look the healthiest.
+[92.800 --> 96.300] So there's a team of people like myself who change out the light bulb on stage
+[96.300 --> 99.000] to make them look the best they can possibly be.
+[99.000 --> 104.600] And the last tip: if you ever see a politician roll up their sleeves or take their blazer off,
+[104.600 --> 105.600] it's all trained.
+[105.600 --> 108.500] It's all planned out, because I roll up my sleeves and I say,
+[108.500 --> 110.800] I grew up in a town just like this one.
+[110.900 --> 112.000] It's not because they're hot.
+[112.000 --> 114.500] It's because they seem like an everyday person.
+[114.500 --> 118.100] So politicians can obviously use nonverbal communication training.
+[118.100 --> 123.400] But you say, okay, Ginger, how can I use these tips for my class presentations?
+[123.400 --> 126.200] First, whoa, harsh.
+[126.200 --> 129.400] But second, here are some tips for your class presentations.
+[129.400 --> 133.800] First, get rid of any barriers; stand behind nothing whatsoever.
+[133.800 --> 136.100] Barriers hold back you and your message.
+[136.100 --> 140.100] Now second, pretend like there's an invisible box right in front of you.
+[140.200 --> 142.500] Your hands never leave this box.
+[142.500 --> 144.700] Do you know why people started giving handshakes?
+[144.700 --> 147.700] It was actually to show that you had no weapon on you.
+[147.700 --> 152.300] So we are trained as humans through hundreds of years of handshaking to trust people
+[152.300 --> 154.300] when we see the palms of their hands.
+[154.300 --> 156.300] So make sure when you're giving a presentation,
+[156.300 --> 161.300] keep your hands in this box and show the palms of your hands just like this.
+[161.300 --> 162.900] And here's your last tip.
+[162.900 --> 166.700] When giving a class presentation, pretend the room is split up into thirds:
+[166.700 --> 169.300] the left side, the middle, and the right side.
+[169.300 --> 172.700] Spend your time rotating between each of the three points.
+[172.700 --> 177.900] Now, what you're going to do is pretend to make eye contact with just one third of that class.
+[177.900 --> 179.700] You're only looking at the left side.
+[179.700 --> 182.100] This helps maintain eye contact with people.
+[182.100 --> 184.900] They'll think you're making eye contact with everyone in the room.
+[184.900 --> 186.900] Much easier this way.
+[186.900 --> 189.500] Yeah, I'm really sorry to hear about what happened to Tony Stark.
+[189.500 --> 191.700] I really feel for your pain.
+[191.700 --> 193.400] Good luck with that.
+[193.400 --> 197.100] You really need to know nonverbal communication to help with persuasion.
+[197.200 --> 200.900] Whether that be negotiating a sale for a new bicycle
+[200.900 --> 205.300] or coming up with the name for your next quidditch team for your city league squad
+[205.300 --> 209.500] or maybe you're trying to negotiate for that rare My Little Pony card.
+[209.500 --> 212.500] Use this next tip to help with negotiation.
+[212.500 --> 218.900] One research study found that compliance increased dramatically using one nonverbal communication, body language tip.
+[218.900 --> 221.100] So you take a phone booth, old-school phone booth,
+[221.100 --> 223.500] and what we're going to do is leave a quarter in that phone booth.
+[223.500 --> 225.900] We're going to wait for someone random to go into the phone booth,
+[225.900 --> 226.900] take the quarter.
+[226.900 --> 229.500] Now on average in the United States, when someone comes up and says,
+[229.500 --> 231.900] Hey, I think I left a quarter in this phone booth.
+[231.900 --> 240.300] Did you find one? Only 22% of people in the United States actually gave the person the quarter that they found. Using one nonverbal communication tip,
+[240.300 --> 244.500] they actually increased to 76% compliance of giving back the quarter.
+[244.500 --> 250.900] All that was, was simply an open palm slightly touching your elbow for one half of a second.
+[250.900 --> 253.300] So they just went up to them, touched the elbow and said, excuse me,
+[253.300 --> 257.500] did you by any chance find a quarter in here? Up to 76% compliance.
+[257.500 --> 263.100] The same thing works for waiters and waitresses: tips increase using this one body language gesture.
+[263.100 --> 265.300] So you can use this technique at work.
+[265.300 --> 267.900] Perhaps you're working with someone like Karen.
+[267.900 --> 276.700] Karen really needs to send you that 1084 that you've requested many times over the past few weeks by Thursday at 2pm.
+[276.700 --> 279.700] Karen can't get off her lazy.
+[280.100 --> 284.300] Did you know that nonverbal communication is also very impactful with colors?
+[284.300 --> 286.500] Cool colors actually have a calming effect.
+[286.500 --> 291.900] That's why you'll see a lot of times hospitals or even prisons will use cool colors to calm people down.
+[291.900 --> 294.700] Citrus colors actually increase your appetite.
+[294.700 --> 301.300] You'll see a lot of fast food places will use citrus colors around to hopefully get you to actually buy more and eat more food.
+[301.300 --> 304.900] So if you're looking to diet in the upcoming few months,
+[305.100 --> 310.100] try surrounding yourself with as many cool colors as possible to suppress your appetite.
+[314.700 --> 316.700] What do you need for your next date?
+[316.700 --> 318.000] Help!
+[318.000 --> 319.500] Yes, help.
+[319.500 --> 324.200] Nonverbal communication and body language can really help you get your second date.
+[324.200 --> 331.500] There are a lot of evolutionary cues that we pick up as humans so that we can find a mate more easily and successfully.
+[331.500 --> 333.100] For example, here are three tips.
+[333.100 --> 336.700] On your next date, don't sit on the opposite side of the table.
+[336.700 --> 338.500] Make sure that you sit to the side.
+[338.500 --> 341.400] Don't be those creepy people that sit on the same side of the booth.
+[341.400 --> 342.600] It's always to the side.
+[342.600 --> 343.900] Sit to the side of the person.
+[343.900 --> 345.900] Tip two is two T's.
+[345.900 --> 349.400] Your toes and torso should point toward the other person.
+[349.400 --> 352.400] Anytime you're pointing your two T's toward the other individual,
+[352.400 --> 356.900] they're going to feel more special and feel like you're giving them all of your attention.
+[356.900 --> 358.800] And the third is mirroring.
+[358.800 --> 362.300] Anytime that we're mirroring the body language of the other person,
+[362.300 --> 364.800] likability increases dramatically.
+[364.800 --> 369.200] Nonverbal communication and body language can also predict breakups.
+[369.200 --> 372.500] John Gottman's research found that using this tactic,
+[372.500 --> 377.000] you can predict breakups with 92% accuracy in just four minutes.
+[377.000 --> 380.500] One of those tactics: you're looking at microexpressions on the face.
+[380.500 --> 387.000] If you see that contempt and disgust are shown on your partner's face multiple times in a four-minute span,
+[387.000 --> 389.200] that relationship's not going to work out.
+[389.200 --> 390.700] Is this accurate?
+[390.700 --> 393.700] No, it's hella accurate.
+[393.700 --> 399.200] Now, nonverbal communication and body language is essential for things like leadership, persuasion,
+[399.200 --> 402.000] politicians, and even class presentations.
+[402.000 --> 405.100] If you're interested in more of these very tactical tips for communication,
+[405.100 --> 409.500] nonverbals, social media, branding, subscribe to this weekly video.
+[409.500 --> 414.100] Now, perhaps you're someone like Karen who watched this entire video
+[414.100 --> 416.200] and is not hitting subscribe.
+[416.200 --> 418.500] Thanks, Karen.
+[418.500 --> 420.000] Onward, James.
+[420.000 --> 422.000] Back here.
diff --git a/transcript/allocentric_lC7cNSB1ZWE.txt b/transcript/allocentric_lC7cNSB1ZWE.txt
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/transcript/allocentric_ppxK4R8XWfU.txt b/transcript/allocentric_ppxK4R8XWfU.txt
new file mode 100644
index 0000000000000000000000000000000000000000..e049c645e88a0df6344acd14a95e9d9d1a7c375b
--- /dev/null
+++ b/transcript/allocentric_ppxK4R8XWfU.txt
@@ -0,0 +1,1299 @@
+[0.000 --> 14.120] So we're talking about navigation, how you know where you are and how you can get from
+[14.120 --> 16.960] here to wherever else you want to go.
+[16.960 --> 22.080] And last time we talked about just the general problems that arise in navigation, and we talked
+[22.080 --> 28.640] about the parahippocampal place area and other parts of the brain that are involved in navigation.
+[28.640 --> 32.760] So today we're going to continue that, but we're going to talk more about the actual populations
+[32.760 --> 37.120] of neurons in your head that are involved in doing this, and we'll talk about a particular
+[37.120 --> 40.840] aspect of the problem of navigation which is called reorientation.
+[40.840 --> 44.680] That's what happens when you lose your bearings and you need to figure out where you are
+[44.680 --> 50.360] again, reset your internal map, your sense of where you are.
+[50.360 --> 54.400] And then we'll talk about the idea that this whole system for navigation, cool as it
+[54.400 --> 59.560] is and fascinating as navigation itself is, is even more interesting because there's
+[59.560 --> 64.520] increasing evidence that we use that same system for lots of other aspects of high-level
+[64.520 --> 67.200] cognition that have nothing to do with space per se.
+[67.200 --> 70.640] Okay, and then we'll have a quiz, a short quiz.
+[70.640 --> 72.160] That's the agenda, here we go.
+[72.160 --> 78.720] So the basic problems of navigation are, one, where am I, and two, how do I get from here
+[78.720 --> 81.000] to wherever else I want to go.
+[81.000 --> 85.240] Now as I mentioned last time, we can break down each of these into a bunch of different
+[85.240 --> 87.480] components and facets of that question.
+[87.480 --> 93.200] So when we want to know where we are, that can involve recognizing a familiar location.
+[93.200 --> 98.400] So if you saw a photograph or you were plunked down spontaneously in an environment someplace
+[98.400 --> 101.880] you know, you would visually recognize it, and that would be one way to know where you
+[101.880 --> 102.880] were.
+[102.880 --> 105.200] Like, this is my living room.
+[105.200 --> 108.880] Even if that location is unfamiliar and you're plunked down at random, you still have some
+[108.880 --> 110.840] idea of what kind of a place this is.
+[110.840 --> 117.560] Am I in a natural environment, an urban environment, am I inside, am I outside, etc.
+[117.560 --> 124.400] And finally you would have some sense of where you are with respect to the immediate bounding
+[124.400 --> 125.880] structures in your immediate environment.
+[125.880 --> 127.840] Like, for example, where you are in this room.
+[127.840 --> 132.760] Like as I'm talking to you right now, I'm aware that there's a wall behind me, right?
+[132.760 --> 136.960] That kind of immediate spatial location.
+[136.960 --> 140.800] In terms of questions that arise when we have to figure out how do we get from here to wherever
+[140.800 --> 144.160] else we want to go.
+[144.160 --> 149.320] If you can directly see or hear your destination, then you have the simplest possible kind of
+[149.320 --> 152.880] navigation strategy: you just go toward that thing.
+[152.880 --> 157.400] Okay, that's called beaconing, and it's the minimal case; it works great if you can
+[157.400 --> 159.880] see or hear your destination.
+[159.880 --> 166.760] But when you can't, you need to know, where am I in my broader understanding of the layout
+[166.760 --> 169.680] of my environment, and where is my goal.
+[169.680 --> 173.200] And for that you need a mental map of your environment, and we'll talk more about that
+[173.200 --> 174.200] today.
+[174.200 --> 175.840] That's why it's in red.
+[175.840 --> 178.760] You also need to know your current heading in that environment.
+[178.760 --> 183.440] It's not enough to know, in my map of the world I am here with a dot; you need to know which
+[183.440 --> 187.680] way you're facing in that map of the world in order to plan your navigation, and we'll
+[187.680 --> 189.920] talk about that too.
+[189.920 --> 194.220] We also need to know what routes are possible from here, so I may want to go over to
+[194.220 --> 198.040] Stata and get a cup of coffee, but I can't go this way, I've got to go around, because I can't
+[198.040 --> 199.540] go through that glass.
+[199.540 --> 204.860] Okay, and so finally, this whole magnificent system that enables us to process all this
+[204.860 --> 209.500] stuff works pretty impressively, but every once in a while something will go wrong and
+[209.500 --> 213.960] it will, you know, get the wrong signal, and then we're lost, and so then we need a way to
+[213.960 --> 217.300] regain our bearings, and we'll talk about that too.
+[217.300 --> 222.380] So last time I talked about a bunch of brain regions that are implicated in perceiving
+[222.420 --> 224.680] scenes and in navigation.
+[224.680 --> 229.020] We talked about the parahippocampal place area right here, and this region over here,
+[229.020 --> 233.420] formerly known as TOS, now known as OPA; you don't need to remember all that.
+[233.420 --> 237.580] It's the bit that's out on the lateral surface that we can zap because it's out there, and
+[237.580 --> 241.980] both of those regions seem to be involved broadly in perceiving the shape of space around
+[241.980 --> 244.020] you.
+[244.020 --> 248.580] We also talked a bit about retrosplenial cortex, that region that's hiding in the sulcus
+[248.580 --> 252.060] here that you can see better when you mathematically unfold the sulcus.
+[252.140 --> 257.140] There it is; it responds more to scenes than objects, and that region seems to be involved
+[257.140 --> 261.340] in something like getting your bearings, that is, the location and orientation of where you
+[261.340 --> 265.180] are with respect to your cognitive map of the environment.
+[265.180 --> 269.780] Okay, so to make that a little more vivid, I gave you one description of a patient before,
+[269.780 --> 272.380] but here's one from another study.
+[272.380 --> 276.380] Patients with damage to retrosplenial cortex.
+[276.380 --> 278.340] So here's from a recent article.
+[278.380 --> 283.660] In every case, the patient with this damage was able to recognize landmarks in their
+[283.660 --> 286.380] neighborhoods and retain the sense of familiarity.
+[286.380 --> 291.100] I know that place, that's the coffee shop five blocks from my house, right?
+[291.100 --> 296.500] But despite that, none of those patients were able to find their way in familiar environments,
+[296.500 --> 299.260] and all but one were unable to learn new routes.
+[299.260 --> 304.060] So they can recognize the visual form of a particular place, but they don't know how to
+[304.060 --> 309.500] relate that to their cognitive map of the world and therefore plan a route from there.
+[309.500 --> 315.900] Okay, so the part that I only alluded to at the end... yes, question?
+[315.900 --> 317.900] Just to be sure on the question.
+[317.900 --> 324.340] Okay, is the retrosplenial cortex the home of cognitive maps, or is it like a...
+[324.340 --> 326.300] Great question, we don't exactly know.
+[326.300 --> 331.180] The typical story is that the home of the cognitive map is the hippocampus, which we're
+[331.180 --> 334.020] about to talk about next, for reasons I will tell you.
+[334.060 --> 335.500] But all of this is kind of...
+[335.500 --> 337.820] This is a very active area of research.
+[337.820 --> 339.460] It kills me every time I do these lectures.
+[339.460 --> 343.660] I look at my old notes and I think, here are these 10 other awesome studies, and then I try
+[343.660 --> 345.740] to fit them in and then they just don't fit.
+[345.740 --> 350.060] So actually one question I want to ask you guys after this lecture is, should I in future,
+[350.060 --> 354.140] either later in this course or in future courses, allocate even more time, or do you guys
+[354.140 --> 356.100] feel like, okay, enough already with navigation?
+[356.100 --> 357.580] But I just think it's the coolest system.
+[357.580 --> 361.820] So there's lots of work exactly trying to answer that kind of question.
+[361.820 --> 368.180] I'll give you a current snapshot of the approximate state, but all of this is in flux and very
+[368.180 --> 372.180] much actively investigated.
+[372.180 --> 376.420] Okay, so cognitive map, what do we mean by that?
+[376.420 --> 382.300] Just to remind you of this classic study from the 1940s in rats, where the rats, when they
+[382.300 --> 385.940] learned this route and then went up here and found their goal box, the rat immediately
+[385.940 --> 388.380] comes out and goes straight toward the goal.
+[388.380 --> 392.100] So you see, they've learned something much more interesting than the series of left and
+[392.100 --> 393.740] right turns to get to the goal.
+[393.740 --> 398.980] They must have done something much more like actually learning the layout of space and the
+[398.980 --> 402.380] relative position of that goal, so they could come up with a new vector to get there when
+[402.380 --> 404.660] the original route was blocked.
+[404.660 --> 408.340] Okay, and you guys can do this too, right?
+[408.340 --> 412.340] When your route is blocked, you come up with a novel route, right?
+[412.340 --> 417.260] And you do that by having some knowledge of your environment, something tantamount to
+[417.260 --> 420.580] that in your head, some version of that.
+[420.580 --> 425.540] And further, you know where you are in that map; like right now, you know where you are.
+[425.540 --> 428.700] Okay, now here's the cool thing.
+[428.700 --> 434.020] Specific neurons in your hippocampus right now are firing, telling you that you are right
+[434.020 --> 436.020] there.
+[436.020 --> 440.740] Okay, so these neurons are called place cells, and this is what they do.
+[440.740 --> 446.300] Okay, so I'll be a place cell, or rather, what I'll do is I will act out the activity of
+[446.300 --> 448.300] a place cell by a series of clicks
+[448.300 --> 449.700] I will make as I walk around.
+[449.700 --> 454.940] So imagine there's an electrode in my hippocampus and you are hearing the activity of a single
+[454.940 --> 457.820] neuron in my hippocampus as I walk around.
+[457.820 --> 459.420] And here's what it would do.
+[459.420 --> 463.980] You'd hear background firing, so it's going to go click, click, click, click, click, click,
+[463.980 --> 471.500] noisy background firing, click, click, click, click, click, click, click, click,
+[471.500 --> 474.820] click, click, click.
+[474.820 --> 475.780] Click, click, click.
+[475.780 --> 483.380] click. OK.
Now as I get over there: click click click click click click click
+[483.380 --> 490.700] click click click click click click click. That's one neuron that fires only when I'm
+[490.700 --> 495.980] right over there. Up close. It's not where I'm facing over there. It's not what I'm
+[495.980 --> 503.020] seeing; crucially, it's when I'm right there. Okay, that's a place cell. And so there's lots
+[503.020 --> 507.620] of place cells in your hippocampus that do that, and they do it for different locations
+[507.620 --> 509.900] in your environment.
+[509.900 --> 514.580] And all of this was first worked out, of course, in rodents, who were running around, who had
+[514.580 --> 519.060] electrodes in their hippocampus, but where those electrodes were connected with a loose
+[519.060 --> 523.580] tether so that the rodent could move around in its environment while recording from individual
+[523.580 --> 525.500] neurons in the hippocampus.
+[525.500 --> 527.100] So that's the setup.
+[527.100 --> 533.300] So I'm going to show you a movie of an aerial view of a rodent moving around, a rat, moving
+[533.300 --> 534.300] around in its environment.
+[534.300 --> 536.940] Can you see the little rat there?
+[536.940 --> 542.460] And what's happening is this video is tracing out the rat's path in light gray.
+[542.460 --> 546.460] And it's recording from one neuron, and every time that neuron fires,
+[546.460 --> 548.460] it makes a red dot.
+[548.460 --> 550.660] And so this is obviously sped up.
+[550.660 --> 555.940] But as the rat moves around in his environment, you see an accumulation, like more firing, when
+[555.940 --> 557.900] the rat is right there.
+[557.900 --> 561.460] It's not which direction the rat is going when he goes there; just basically whenever
+[561.460 --> 567.580] he passes through that spot, in any direction, the neuron fires more than anywhere else.
+[567.580 --> 573.500] And then if we take that and blur it, as scientists like to do to make nice, idealized pictures,
+[573.500 --> 576.860] that is the place field for that neuron.
+[576.860 --> 583.460] That is the place in space that that animal has to be to make that neuron fire.
+[583.460 --> 585.460] Yeah, question?
+[585.460 --> 586.460] Is it one to one?
+[586.460 --> 591.700] I mean, can multiple places map to the same neuron?
+[591.700 --> 593.180] That's complicated.
+[593.180 --> 595.940] In an immediate environment like this, generally not.
+[595.940 --> 598.820] Okay, I'll show you some examples in a moment.
+[598.820 --> 603.300] It's more complicated if you follow that cell when the animal moves to new locations.
+[603.300 --> 606.300] Let me say a few more things, and then if it's not clear, I'll take questions.
+[606.300 --> 609.020] Oops, it's playing again.
+[609.020 --> 610.540] Okay, right.
+[610.540 --> 615.380] So in answer to Shosh's question, here are a bunch of place cells from a rodent exploring
+[615.380 --> 617.180] the same environment.
+[617.180 --> 621.700] So you might say, well, there's like a hotspot here and a little sub one there.
+[621.700 --> 627.580] But in general, most of these cells respond with a hotspot in a particular single location
+[627.580 --> 629.340] in this particular environment.
+[629.340 --> 633.100] Okay, did you have a different question about that?
+[633.100 --> 636.860] Yes, does that all depend on the rat?
+[636.860 --> 641.220] Is it contingent on the fact that it knows the place?
+[641.220 --> 647.020] If the rat was anesthetized, or if he was blindfolded and you passively moved him around in that
+[647.020 --> 651.420] space and he had no idea, no way to tell where he was, that wouldn't work.
+[651.420 --> 657.140] However, if the rat knows the environment and then you do this in a darkened room where
+[657.140 --> 661.940] he's actively locomoting around, these things will still work pretty well, because rats are
+[661.940 --> 666.300] very good at keeping track of where they are, even without visual cues, if they know
+[666.300 --> 667.300] the environment.
+[667.300 --> 670.940] They'll have other cues, like tactile cues, and like they will know how far they went in
+[670.940 --> 672.260] each direction.
+[672.260 --> 677.020] Remember, I talked briefly about the Tunisian ants doing dead reckoning, right?
+[677.020 --> 681.020] Keeping track of their vector and speed at each moment and integrating the whole thing
+[681.020 --> 684.860] to know where they are, that's called dead reckoning; rats are pretty good at that too.
+[684.860 --> 685.860] Another question over here?
+[685.860 --> 686.860] Yeah.
+[686.860 --> 690.340] For the place cells, are they organized like a map in the brain as well?
+[690.340 --> 692.340] Can I wait a bit to get to that?
+[692.340 --> 693.340] We'll get there.
+[693.340 --> 694.340] Great question.
+[694.340 --> 695.340] We'll get there.
+[695.340 --> 696.340] I'll just give you the answer.
+[696.340 --> 697.340] No, they aren't.
+[697.340 --> 699.340] It's too bad.
+[699.340 --> 700.340] They could have been.
+[700.340 --> 702.780] They could have been all organized, but it's actually a little complicated.
+[702.780 --> 703.940] How would you organize them?
+[703.940 --> 707.420] Then what if you, like, learn more stuff off of the edge of space?
+[707.420 --> 711.420] Well, then you'd need a whole other piece of your hippocampus; it would be inconvenient.
+[711.420 --> 713.100] So maybe that's why it doesn't work that way.
+[713.100 --> 717.780] Whereas with visual space, you know, your retinotopic information always stays the same.
+[717.780 --> 722.500] We don't have to suddenly add a whole new part of retinotopic space, thereby screwing up
+[722.500 --> 724.260] our retinotopic maps in the brain.
+[724.260 --> 726.420] I'm just making that up as a possible reason.
+[726.420 --> 727.420] I don't know if that's why.
+[727.420 --> 728.420] Yeah.
+[728.420 --> 729.420] Who's that behind you, David?
+[729.420 --> 730.420] Tell me your name.
+[730.420 --> 731.420] Justice.
+[731.420 --> 732.420] Yeah, right.
+[732.420 --> 733.420] Hi.
+[733.420 --> 739.420] So I was wondering, if you're in a smaller space comparatively, or like a bigger space, right?
+[739.420 --> 746.660] Like, what are the areas that these specific place cells are mapping to?
+[746.660 --> 748.660] Will they also scale up?
+[748.660 --> 749.660] Oh, yeah.
+[749.660 --> 750.660] It's a great question.
+[750.660 --> 751.660] I don't know the answer.
+[751.660 --> 752.660] My guess is they'll scale according to the space.
+[752.660 --> 753.660] Right?
+[753.660 --> 759.460] So if my fake place cell field that I just acted out over
+[759.460 --> 761.580] there is maybe five feet across,
+[761.580 --> 765.220] if I was then confined to a little space, you'd probably have smaller ones for that space,
+[765.220 --> 766.860] but I don't know.
+[766.860 --> 768.660] Let me say a little bit more about this.
+[768.660 --> 773.580] So, just to cash this out, the place field is the location in space the animal has to
+[773.580 --> 776.860] be to make that hippocampal cell fire.
+[776.860 --> 777.860] Okay?
+[777.860 --> 783.100] So let's distinguish that from a receptive field in visual cortex, which is a similar
+[783.100 --> 784.100] idea, but a different one.
+[784.100 --> 789.540] A receptive field in visual cortex is the location in the visual field where a stimulus
+[789.540 --> 792.780] has to be to make a visual neuron fire.
+[792.780 --> 795.780] Not where the animal itself has to be; where the stimulus has to be.
+[795.780 --> 796.780] Okay?
+[796.780 --> 797.940] So keep those ideas separate.
+[797.940 --> 799.780] They're related, but different.
+[799.780 --> 800.780] Okay.
+[800.780 --> 806.380] So what about... you know, we and rodents tend to go around mostly on a 2D plane.
+[806.380 --> 808.940] That is, we have buildings and trees and stuff.
+[808.940 --> 812.900] We sometimes go up in the z-axis, but mostly we live in a 2D plane.
+[813.580 --> 815.460] But that's not true of all animals.
+[815.460 --> 817.780] So recall the bat that I mentioned last time.
+[817.780 --> 824.380] These amazing flyers and navigators who fly complicated trajectories in 3D and yet
+[824.380 --> 830.340] have amazing abilities to keep track of where they are over the 30 to 50 miles that they fly
+[830.340 --> 834.820] at night, even as they change their orientation.
+[834.820 --> 839.300] Well, it turns out that in the hippocampus of bats, there's a bunch of work where people
+[839.300 --> 847.300] have put remote, what do you call these things, recording devices on bats, where you can
+[847.300 --> 853.700] remotely record neural activity in the hippocampus as the bat flies around.
+[853.700 --> 859.660] And it turns out that bats have place cells too. You
+[859.660 --> 862.980] also can do this in a lab environment where they're flying around and you keep track of their
+[862.980 --> 865.100] location with cameras.
+[865.100 --> 867.820] So you know exactly where they are in 3D space.
+[868.220 --> 873.420] And it turns out that place cells in bats are three-dimensional, because bats live in a 3D
+[873.420 --> 874.740] world.
+[874.740 --> 881.660] So whereas these would be a bunch of kind of schematized place fields for different hippocampal
+[881.660 --> 888.340] cells in a rodent, these are different place fields for different hippocampal cells in a
+[888.340 --> 889.340] bat.
+[889.340 --> 890.340] Makes sense?
+[890.340 --> 891.340] Bats need this.
+[891.340 --> 892.340] They need that third dimension.
+[892.340 --> 895.300] Okay, so the bat is moving around in 3D.
+[895.300 --> 897.660] This place field isn't just like the one I did there.
+[897.660 --> 900.140] I can't act this out because I can't fly.
+[900.140 --> 904.740] But that place cell might fire over in that location, but then if the bat flew directly
+[904.740 --> 906.580] above it, it wouldn't.
+[906.580 --> 909.660] So it's got three dimensions.
+[909.660 --> 911.660] Okay.
+[911.660 --> 913.660] Okay.
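For concreteness, here is a minimal Python sketch of how a place-field map like the ones just shown is typically computed from a tracked trajectory: bin the spike counts by position, divide by the time spent in each bin (the occupancy), then smooth, which is the "blurring" step mentioned earlier. The arena size, bin count, frame rate, and smoothing width are illustrative assumptions, not values from any particular study:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def place_field_map(pos, spike_idx, arena=1.0, n_bins=40, dt=1/30, sigma=1.5):
        """pos: (T, 2) tracked positions in meters, sampled every dt seconds;
        spike_idx: indices into pos at which the neuron fired.
        Returns an (n_bins, n_bins) firing-rate map in Hz."""
        bins = np.linspace(0, arena, n_bins + 1)
        # Time spent in each spatial bin (frames * dt = seconds).
        occupancy, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=[bins, bins])
        # Spike counts binned by where the animal was when each spike occurred.
        spikes, _, _ = np.histogram2d(pos[spike_idx, 0], pos[spike_idx, 1],
                                      bins=[bins, bins])
        # Rate = spikes per second in each bin; unvisited bins stay at zero.
        rate = np.divide(spikes, occupancy * dt,
                         out=np.zeros_like(spikes), where=occupancy > 0)
        return gaussian_filter(rate, sigma)  # the smoothing / "blurring" step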
+[930.100 --> 936.260] Nonetheless, as I've mentioned a few times there are occasional opportunities where a neurosurgeon +[936.260 --> 941.260] has stuck in electrode in an interesting part of the brain for clinical reasons and the +[941.260 --> 946.980] patient and the neurosurgeon are nice enough to let scientists collect data. +[946.980 --> 950.100] So I'm going to show you a really gross bloody picture if that's going to bother you, just +[950.100 --> 951.100] look away. +[951.100 --> 952.980] Okay. +[952.980 --> 955.340] So this is neurosurgery. +[955.340 --> 961.500] You take the skull off, you take the dura off, that's the direct surface of the brain, +[961.500 --> 965.940] the neurosurgeon stick electrodes right on top of there and in this case they put them +[965.940 --> 967.260] deep inside the brain. +[967.260 --> 968.260] Okay. +[968.260 --> 969.260] The gross pictures are gone. +[969.260 --> 973.540] We have just a nice clean x-ray here. +[973.540 --> 980.100] So in these cases, these are, this is a patient who's got an electrode sticking straight +[980.100 --> 984.420] into the brain from the surface straight down into the hippocampus. +[984.420 --> 986.420] Okay. +[986.420 --> 991.140] Kind of horrifying but sometimes clinically called for. +[991.140 --> 995.820] Seasures very often start in hippocampus so this is a common place for clinicians to put +[995.820 --> 997.940] electrodes. +[997.940 --> 1002.740] And so what would you do if you had a patient who was willing to do your short experiment +[1002.740 --> 1007.540] while hanging out in the hospital waiting to have a seizure with electrodes in their hippocampus? +[1007.540 --> 1013.420] Well duh, you'd have them play a little game in a virtual space and some kind of, you +[1013.420 --> 1018.340] don't even need VR, you can use a pretty cheesy little video game and I'm sure this one +[1018.340 --> 1021.980] was quite cheesy, this study was done back in 2003. +[1021.980 --> 1027.660] So they had patients navigate through a space, this is an aerial view of the space. +[1027.660 --> 1032.020] The patients didn't see that, they saw this kind of front view and they kind of navigated +[1032.060 --> 1038.180] around with the joystick in that space and there were three kind of visually recognizable +[1038.180 --> 1043.220] kind of locations in that space and they had to do things to go from one location to another. +[1043.220 --> 1045.380] Okay, details don't really matter. +[1045.380 --> 1052.020] So all the while, extra almond colleagues are recording from individual neurons in this +[1052.020 --> 1054.900] patient's hippocampus. +[1054.900 --> 1057.020] So here's an example of a place cell. +[1057.020 --> 1061.940] So this is a diagram of the space I just showed you, right, with those three recognizable +[1061.940 --> 1067.300] locations and other locations that the patient could virtually navigate through with the joystick. +[1067.300 --> 1072.020] The red lines are the patient's trajectory as they moved around in that space and the +[1072.020 --> 1079.740] colors within each square are the average firing rate when the patient navigated through +[1079.740 --> 1081.380] that location. +[1081.380 --> 1086.300] And so this is the place field of that individual cell in this patient's brain as they went +[1086.300 --> 1091.060] through this space because the firing rate there was around five hertz compared to three +[1091.060 --> 1094.540] hertz for some other locations and mostly lower than that. 
+[1094.540 --> 1096.220] Okay, does that make sense?
+[1096.220 --> 1100.540] So it's just like the rodent experiment, but it's a person with a joystick looking at
+[1100.540 --> 1105.680] this space as they go through this virtual environment, and we're mapping out their place
+[1105.680 --> 1107.660] fields like that.
+[1107.660 --> 1112.900] Okay, so that shows that humans have place fields in their hippocampus just as rodents and
+[1112.900 --> 1113.900] bats do.
+[1114.020 --> 1119.140] Is all this like independent of landmarks?
+[1119.140 --> 1121.180] That's a very complicated question.
+[1121.180 --> 1123.300] This patient had access to landmarks.
+[1123.300 --> 1125.540] They are seeing them as they go through.
+[1125.540 --> 1129.460] So one could ask, for example, if you did it with your eyes closed and you had to go by
+[1129.460 --> 1134.100] dead reckoning, remembering the left and right turns you had in a familiar environment,
+[1134.100 --> 1135.500] how well would these things go?
+[1135.500 --> 1137.580] They would go for at least a while.
+[1137.580 --> 1141.820] They'd probably go for longer in rodents, because rodents are more accustomed to navigating
+[1141.820 --> 1146.740] in the dark, and they rely less on visual cues and more on other cues.
+[1146.740 --> 1151.500] But yeah, these place cells aren't just visually responsive, right?
+[1151.500 --> 1158.580] So for example, if we set up a distinctive sound source in this corner
+[1158.580 --> 1163.460] of the room and a different one, you know, like say somebody was singing quietly over here
+[1163.460 --> 1167.980] and we tied up a dog over there who was barking, right?
+[1167.980 --> 1170.820] And you walked around in this room with your eyes closed.
+[1170.820 --> 1175.300] You'd have a good way to keep track of your bearings as you moved around, because you'd
+[1175.300 --> 1178.980] know that the singing was coming from here and the dog barking was coming from there.
+[1178.980 --> 1180.300] You wouldn't be seeing anything.
+[1180.300 --> 1183.660] Your eyes would be closed, but your place cells would work pretty well.
+[1183.660 --> 1188.220] Okay, so whenever you have some basis for knowing where you are, no matter what modality
+[1188.220 --> 1194.140] is telling you that, usually it's many modalities, those place cells will go.
+[1194.140 --> 1195.140] Okay.
+[1195.140 --> 1198.060] Okay, so humans have these things too.
+[1198.060 --> 1202.300] So you can think of the place cell system as the kind of "you are here" system, right?
+[1202.300 --> 1203.780] That is, the whole set of place cells.
+[1203.780 --> 1207.700] Any one place cell will only tell you, are you in this particular location or not?
+[1207.700 --> 1212.820] But if you have a whole array of them, then collectively, that whole representation across all of
+[1212.820 --> 1217.540] those neurons can tell you where you are in your familiar environment.
+[1217.540 --> 1218.540] Okay.
+[1218.540 --> 1224.420] But if you want to not just know where you are, but you want to go somewhere else, like
+[1224.500 --> 1230.100] there, you also need to know your current heading, as we discussed last time.
+[1230.100 --> 1231.900] Okay.
+[1231.900 --> 1237.060] So it turns out that there's a whole other batch of cells that tell you which way you're
+[1237.060 --> 1238.060] heading.
+[1238.060 --> 1243.620] Okay, these are called head direction cells, also first studied in rodents, and each head
+[1243.620 --> 1249.860] direction cell responds when that rodent is heading in a particular direction, not in
+[1249.860 --> 1251.420] another direction.
+[1251.420 --> 1252.420] Okay.
+[1252.420 --> 1258.620] So for example, if we're mapping along the x-axis different heading directions, so the
+[1258.620 --> 1263.260] rodent is facing in different directions in his environment, you map out the whole 360
+[1263.260 --> 1268.980] degrees; this would be the response of one cell as that rodent moves around.
+[1268.980 --> 1271.300] This one will be tuned to this particular direction.
+[1271.300 --> 1275.100] It would fire only when the rodent was facing this way, not when it was facing this way
+[1275.100 --> 1277.540] or this way or this way or this way.
+[1277.540 --> 1278.540] Okay.
+[1278.540 --> 1281.860] So does everybody get how where you are in space is different?
+[1281.860 --> 1285.660] That's not a very good way to show this.
+[1285.660 --> 1291.620] Where you are in space is different from where you're aimed and headed in that location.
+[1291.620 --> 1292.620] Okay.
+[1292.620 --> 1295.740] Two orthogonal axes relevant to your location.
+[1295.740 --> 1296.740] Yeah.
+[1296.740 --> 1302.180] So is this the angle of the head with respect to the body, or is it the entire...?
+[1302.180 --> 1306.940] I think I meant to look that up again, because this question always arises.
+[1306.940 --> 1310.780] I think that there's some muck about that in the literature, which is why I never remember
+[1310.780 --> 1312.420] a clear answer.
+[1312.420 --> 1317.660] Usually in a rodent, especially, they're the same, and whether, you know, because, you know,
+[1317.660 --> 1321.700] rodents can turn their heads a little bit, but, you know, mostly they're going to keep
+[1321.700 --> 1324.260] it aimed the way they're moving.
+[1324.260 --> 1327.980] So I don't know; this is a long complicated excuse, so I forget what the answer to that
+[1327.980 --> 1329.660] is.
+[1329.660 --> 1331.300] But send me an email and I'll look it up.
+[1331.300 --> 1335.060] I meant to before this lecture, just ran out of time.
+[1335.060 --> 1336.060] Okay.
+[1336.060 --> 1338.100] Most of the time they'll be the same.
+[1338.100 --> 1342.580] I actually am pretty sure it's which way your body's facing, because if I turn like this,
+[1342.580 --> 1344.980] well, anyway, I'm not going that way.
+[1344.980 --> 1345.980] Yeah.
+[1345.980 --> 1350.380] Have you found cells for all 360 degrees?
+[1350.380 --> 1353.260] I mean, are there cells for each direction?
+[1353.260 --> 1354.260] Yes.
+[1354.260 --> 1355.260] Yes.
+[1355.260 --> 1359.100] They pretty much evenly tile the 360 degrees around the animal.
+[1359.100 --> 1360.100] Yeah.
+[1360.100 --> 1365.620] So collectively, that whole set of cells, just as the collective set of place cells is sufficient
+[1365.620 --> 1366.900] to tell the animal where it is,
+[1366.900 --> 1371.460] a collective set of head direction cells is sufficient to tell the animal which way it's
+[1371.460 --> 1372.460] oriented.
+[1372.460 --> 1373.460] Okay.
+[1373.460 --> 1374.460] Okay.
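As a toy illustration of that idea, here is a Python sketch of an idealized head-direction population: each cell gets a bell-shaped (von Mises) tuning curve around its preferred direction, the preferred directions evenly tile the full 360 degrees, and a simple population-vector readout recovers the current heading. The tuning width, peak rate, and cell count are made-up parameters, not measurements:

    import numpy as np

    def hd_population(heading, n_cells=36, kappa=4.0, max_rate=40.0):
        """Firing rates (Hz) of n_cells head-direction cells for a heading in radians."""
        preferred = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)
        # Von Mises tuning: peak at the preferred direction, falls off smoothly.
        return max_rate * np.exp(kappa * (np.cos(heading - preferred) - 1))

    def decode_heading(rates, n_cells=36):
        """Population-vector readout: average the preferred directions, weighted by rate."""
        preferred = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)
        return np.arctan2((rates * np.sin(preferred)).sum(),
                          (rates * np.cos(preferred)).sum())

    rates = hd_population(np.deg2rad(90))
    print(np.rad2deg(decode_heading(rates)))  # ~90.0, the heading we encoded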
+[1383.540 --> 1386.100] You don't need to remember that.
+[1386.100 --> 1388.500] So they get input from lots of different sources of information.
+[1388.500 --> 1392.420] There's many different ways to know which way we're oriented.
+[1392.420 --> 1396.220] For example, well, to be honest, we don't have a rotating chair.
+[1396.220 --> 1398.380] If we did, I would have done the following ridiculous thing.
+[1398.380 --> 1401.820] I would have sat one of you in it and told you to close your eyes and I would suddenly
+[1401.820 --> 1403.220] turn it.
+[1403.220 --> 1405.300] And the person in the chair would notice that, right?
+[1405.300 --> 1409.260] That's your vestibular system that tells you if your body is being turned, even if
+[1409.260 --> 1413.260] you yourself don't decide to turn it, it will tell you if you get turned.
+[1413.260 --> 1418.020] That's another cue that provides input to the head direction cells, just as visual information
+[1418.020 --> 1422.580] does and potentially auditory information and lots of other kinds of information.
+[1422.580 --> 1426.420] So many different sources of information feed in to inform these head direction cells
+[1426.420 --> 1428.740] about the orientation of the animal.
+[1428.740 --> 1429.740] Okay.
+[1429.740 --> 1431.660] All right.
+[1431.660 --> 1434.900] So you can think of this as the brain's compass, right?
+[1434.900 --> 1437.780] Telling the organism which way they're facing.
+[1437.780 --> 1438.780] Okay?
+[1438.780 --> 1442.020] And lots of organisms have versions of this.
+[1442.020 --> 1446.700] In the fly, there's an amazing structure that was discovered just a couple years ago,
+[1446.700 --> 1450.500] where there's a whole kind of layout of this little neural structure, I forget what
+[1450.500 --> 1456.380] it's called, but actually spatially in that structure, there's a little array of direction
+[1456.380 --> 1457.380] cells.
+[1457.380 --> 1462.620] So actually you can see a little spatial map of direction in that little structure
+[1462.620 --> 1463.620] in the fly.
+[1463.620 --> 1464.620] Okay.
+[1464.620 --> 1469.740] In humans and primates and rodents, it's not organized spatially like a literal map of
+[1469.740 --> 1470.740] direction.
+[1470.740 --> 1471.740] Okay.
+[1471.740 --> 1475.260] So now we have where you are and which way you're facing.
+[1475.260 --> 1476.260] Okay?
+[1476.260 --> 1479.740] You know, one pool of cells, place cells, for where you are, another pool of cells for which
+[1479.740 --> 1482.460] way you're heading.
+[1482.460 --> 1489.220] But those are just, well, we're just getting going here; the coolest navigation-related cells are grid
+[1489.220 --> 1491.100] cells in entorhinal cortex.
+[1491.100 --> 1492.100] Okay?
+[1492.100 --> 1497.260] So this is a slice of the brain like this showing that the hippocampus is that folded up thing
+[1497.260 --> 1498.820] right here.
+[1498.820 --> 1503.700] And entorhinal cortex is just right next door.
+[1503.700 --> 1504.700] Okay?
+[1504.700 --> 1512.460] So in entorhinal cortex, these things were discovered a little around a dozen years
+[1512.460 --> 1515.260] ago, maybe 15 years ago.
+[1515.260 --> 1519.980] And I'm going to show you a video of a rodent moving around his environment, mapping out
+[1519.980 --> 1521.500] activity like we saw before.
+[1521.500 --> 1525.780] But now we're in entorhinal cortex and this neuron is going to be a grid cell and you'll
+[1525.780 --> 1531.780] see why as it moves around in its space.
+[1531.780 --> 1532.780] Okay?
+[1532.780 --> 1533.780] Okay.
+[1533.780 --> 1541.900] So there's a rodent, he's moving around, that's the tether taking the neural activity, the
+[1541.900 --> 1545.020] white dots are every time this one neuron fires.
+[1545.020 --> 1550.180] We're following one neuron this whole time and the rodent is moving around, sped up video
+[1550.180 --> 1551.820] so you can see this happening.
+[1551.820 --> 1554.460] And at first it looks completely random.
+[1554.460 --> 1559.460] But as the rodent keeps migrating around in his space there, you start to see that there's
+[1559.460 --> 1561.020] like blobs in there.
+[1561.020 --> 1566.620] It's not totally random, there are particular blobs that are clustered and oh my god those
+[1566.620 --> 1571.180] blobs are organized in a hexagonal grid.
+[1571.180 --> 1573.020] It's a hexagon.
+[1573.020 --> 1575.340] Isn't that awesome?
+[1575.340 --> 1578.900] That's a grid cell.
+[1578.900 --> 1582.900] And whoops, here we go, we don't need to see it again.
+[1582.900 --> 1588.260] So this is a picture of what you just saw, the trajectory of the animal and the hot spots
+[1588.260 --> 1593.700] in that array and here's a kind of smooth, mathy version of where the firing is significant
+[1593.700 --> 1598.420] in that space, both showing you hexagonal grid cells.
+[1598.420 --> 1599.420] Okay?
+[1599.420 --> 1603.340] So this is, at first glance, a very weird thing.
+[1603.340 --> 1609.780] Why would it help to have, essentially, a place field that has multiple
+[1609.780 --> 1611.780] different places that make it fire?
+[1611.780 --> 1612.780] Okay?
+[1612.780 --> 1615.340] And actually somebody asked before whether place cells can have two hot spots.
+[1615.340 --> 1621.180] Those cells generally have one, but grid cells, as you see, have many, organized in this grid.
+[1621.180 --> 1623.420] Okay?
+[1623.420 --> 1630.660] So the kind of circuitry and math of this whole system is mind blowing and super exciting
+[1630.660 --> 1635.000] and the talk that I mentioned yesterday was on this topic and many people are working
+[1635.000 --> 1639.300] on this and they're working out like really deep, interesting math about how you can take
+[1639.300 --> 1643.740] these cells, how they're arranged spatially in the brain at multiple scales and how you
+[1643.740 --> 1649.580] can use them to do path integration and keep track of how far an animal has gone along
+[1649.580 --> 1650.580] a trajectory.
+[1650.580 --> 1655.740] It's a little bit much for this course, but I'll just say the current thinking is that what
+[1655.740 --> 1660.460] these cells enable us to do is to keep track of how far we've gone in each direction and
+[1660.460 --> 1662.180] that's really crucial in navigation.
+[1662.180 --> 1666.540] We need to know where we are, not just by the landmarks we see.
+[1666.540 --> 1669.980] We need to know how far we've gone in a given direction and the thought is that that's
+[1669.980 --> 1675.220] the function that these grid cells primarily serve in navigation.
+[1675.220 --> 1680.060] And so that's especially important for dead reckoning, like integrating where you've
+[1680.060 --> 1682.660] gone according to your trajectories.
+[1682.660 --> 1683.660] Okay?
+[1683.660 --> 1685.460] So you also need head direction cells at each point.
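+
+The hexagonal pattern just shown is often sketched, in toy form, as the sum of three cosine plane waves whose directions are 60 degrees apart; rectifying that sum leaves bumps on a hexagonal lattice. A rough illustration, with the spacing and phase made up, not the published model:
+
+import numpy as np
+
+def grid_rate(x, y, spacing=0.5, phase=(0.0, 0.0)):
+    # Three plane waves at 0, 60, and 120 degrees; their sum peaks on a hex lattice.
+    k = 4 * np.pi / (np.sqrt(3) * spacing)     # wave number for the chosen field spacing
+    total = 0.0
+    for theta in (0.0, np.pi / 3, 2 * np.pi / 3):
+        kx, ky = k * np.cos(theta), k * np.sin(theta)
+        total = total + np.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
+    return np.maximum(total, 0.0)              # keep only the positive lobes
+
+# Firing-rate map over a 1 m x 1 m box, analogous to the hot-spot plots above.
+xs, ys = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
+rate_map = grid_rate(xs, ys)                   # peaks fall on a hexagonal grid
+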
+[1685.460 --> 1691.420] So you can think of the head direction cells as telling you the orientation of your vector
+[1691.420 --> 1695.740] and the grid cells as telling you the magnitude of the vector, of how far you went.
+[1695.740 --> 1699.220] And then you take a whole bunch of those and you integrate them and you know where you've
+[1699.220 --> 1701.500] gone from your starting point.
+[1701.500 --> 1704.540] And lots of animals do all that math in their head.
+[1704.540 --> 1707.340] Like it's pretty complicated integrals, right?
+[1707.340 --> 1710.300] But they all do that.
+[1710.300 --> 1711.620] Okay.
+[1711.620 --> 1713.580] So this is super awesome work.
+[1713.580 --> 1722.420] And fittingly the 2014 Nobel Prize was awarded to the Mosers, the then husband-and-wife team
+[1722.420 --> 1728.660] who discovered the grid cells, and also to John O'Keefe who discovered place cells decades
+[1728.660 --> 1730.460] earlier.
+[1730.460 --> 1737.060] And it's a super exciting line of work and continuing to be a very exciting one.
+[1737.060 --> 1738.060] Okay.
+[1738.060 --> 1743.260] So so far we've talked about place cells in the hippocampus, direction cells in the
+[1743.260 --> 1749.100] subiculum and lots of other places, and grid cells in entorhinal cortex.
+[1749.100 --> 1752.140] And this is just a schematic diagram of where those locations are.
+[1752.140 --> 1754.900] The anatomy is complicated and you don't need to know it.
+[1754.900 --> 1759.740] So they're all sort of in the hippocampus and its neighboring structures.
+[1759.740 --> 1763.460] That's good enough for here.
+[1763.460 --> 1764.460] Well, okay.
+[1764.460 --> 1768.180] Know that the grid cells are in entorhinal cortex and the place cells are in hippocampus.
+[1768.180 --> 1770.220] That's worth knowing.
+[1770.220 --> 1773.500] Direction cells are kind of all over.
+[1773.500 --> 1774.500] Okay.
+[1774.500 --> 1776.460] So that's cool.
+[1776.460 --> 1778.740] But there's one more cool kind of cell.
+[1778.740 --> 1779.740] Actually there's several more.
+[1779.740 --> 1783.940] The new one I never heard of was reported in this job talk yesterday but we won't go
+[1783.940 --> 1784.940] there.
+[1784.940 --> 1786.780] We'll try to keep it simple.
+[1786.780 --> 1790.260] Another well-established one is called a border cell.
+[1790.260 --> 1791.260] Okay.
+[1791.260 --> 1797.540] So these are the place fields of three different neurons from an animal moving
+[1797.540 --> 1799.260] around in this space.
+[1799.260 --> 1800.260] Okay.
+[1800.260 --> 1803.460] So you see how these are very interesting kinds of place fields.
+[1803.460 --> 1805.180] They're not just like a nice round blob.
+[1805.180 --> 1809.460] They stretch around a whole border of the animal's environment.
+[1809.460 --> 1810.460] Okay.
+[1810.460 --> 1812.620] So does that make you think of anything?
+[1812.620 --> 1817.220] Is that ringing any bells with other stuff we've talked about in here?
+[1817.220 --> 1819.260] Right.
+[1819.260 --> 1824.660] We've talked a bunch about how the parahippocampal place area cares about the shape of space around
+[1824.660 --> 1825.660] you.
+[1825.660 --> 1826.660] Right.
+[1826.660 --> 1830.100] Well you might think that you'd really want to have awareness of where you are with respect
+[1830.100 --> 1831.700] to navigational barriers.
+[1831.700 --> 1832.700] Right.
+[1832.700 --> 1835.820] Turns out border cells respond not just to walls.
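+
+Before going on, the dead-reckoning arithmetic described a moment ago, direction from the head direction system and distance from the grid system, can be sketched in a few lines; the headings and step lengths here are invented:
+
+import numpy as np
+
+# Each step is a vector: heading gives its direction, distance traveled its magnitude.
+headings_deg = np.array([0, 0, 90, 90, 180])        # assumed heading on each step
+step_lengths = np.array([1.0, 1.0, 0.5, 0.5, 1.0])  # assumed distance per step
+
+theta = np.deg2rad(headings_deg)
+displacement = np.array([(step_lengths * np.cos(theta)).sum(),
+                         (step_lengths * np.sin(theta)).sum()])
+print(displacement)   # net (x, y) offset from the start, here (1.0, 1.0)
+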
+[1835.820 --> 1839.660] So, border cells respond not just to walls: if you put a rodent in an environment where there's a cliff they can't go off,
+[1839.660 --> 1842.740] the border cells also respond to the edge of that cliff.
+[1842.740 --> 1843.740] Okay.
+[1843.740 --> 1848.540] So any navigational barrier, basically telling you where you are with respect to navigational
+[1848.540 --> 1849.540] barriers.
+[1849.540 --> 1850.540] Okay.
+[1850.540 --> 1851.540] Okay.
+[1851.540 --> 1852.540] All right.
+[1852.540 --> 1855.540] Blah blah blah.
+[1855.540 --> 1857.020] Okay.
+[1857.020 --> 1863.100] So as I mentioned in the last lecture when we talked about the parahippocampal place area,
+[1863.100 --> 1868.860] the shape of space around you has this kind of privileged role in many aspects of navigation.
+[1868.860 --> 1870.340] Okay.
+[1870.340 --> 1876.300] So now we're going to talk about this problem of reorienting, or regaining your sense of
+[1876.300 --> 1878.740] direction once you've been disoriented.
+[1878.740 --> 1879.740] Okay.
+[1879.740 --> 1885.300] And so again I mentioned this before but just to give you the intuition of what we're
+[1885.300 --> 1887.060] talking about here.
+[1887.060 --> 1892.420] You come up from the subway in Manhattan or any other environment that's rectilinear
+[1892.420 --> 1893.740] that you know.
+[1893.740 --> 1895.980] And you know which stop you're coming up at.
+[1895.980 --> 1900.780] So you kind of know where you are but you come out and you don't know which way to head.
+[1900.780 --> 1902.740] You don't know which way is which.
+[1902.740 --> 1903.740] All right.
+[1903.740 --> 1908.180] So that's a modern version of a classic problem that animals face in their environment.
+[1908.180 --> 1912.500] They may know where they are but that doesn't tell them which way they're facing.
+[1912.500 --> 1914.460] So just to be really concrete about this.
+[1914.460 --> 1916.140] So here's an aerial view of a person.
+[1916.140 --> 1918.180] You're standing here.
+[1918.180 --> 1925.020] You have a cognitive map in your mind and your place cells are telling you your location
+[1925.020 --> 1926.020] in that map.
+[1926.020 --> 1927.020] Okay.
+[1927.020 --> 1932.460] So you know where you are in that map, but you're looking down a street, so you know that you're
+[1932.460 --> 1940.180] oriented with respect to some external axis like this, but you don't know how your mental
+[1940.180 --> 1942.620] map should be aligned with that street.
+[1942.620 --> 1951.420] Are you facing like this, facing north in Manhattan, or are you facing south?
+[1951.420 --> 1952.420] Right.
+[1952.420 --> 1954.820] So that's the problem of reorientation.
+[1954.820 --> 1959.100] It's figuring out your particular orientation, not just your location but which way you're
+[1959.100 --> 1961.740] facing in a known environment.
+[1961.740 --> 1962.740] Okay.
+[1962.740 --> 1966.420] And we've all faced some version of this presumably at some point and it's annoying.
+[1966.420 --> 1970.340] It takes a while to figure out and then, I don't know if anybody's had this experience,
+[1970.340 --> 1973.620] I've had it only in Manhattan because that's where this arises for me, but I'm sure there
+[1973.620 --> 1976.980] are other locations, where you come up and you think you're going one way and then all
+[1976.980 --> 1981.060] of a sudden it's like your whole mental map goes kaboom.
+[1982.060 --> 1983.620] So how many people have had that experience?
+[1983.620 --> 1985.380] Like it's very sudden and punctate.
+[1985.380 --> 1986.380] Yeah.
+[1986.380 --> 1991.180] Turns out that when that happens all of your neurons flip together in unison.
+[1991.180 --> 1992.540] Like they're all in cahoots.
+[1992.540 --> 1994.380] We have one version of this.
+[1994.380 --> 1997.220] When you have that experience, it's because they're all flipping together, and I'll show you
+[1997.220 --> 1999.220] some data on that in a second.
+[1999.220 --> 2000.220] Okay.
+[2000.220 --> 2001.860] All right.
+[2001.860 --> 2007.500] So there's a very evolutionarily old system for solving just this problem and it's a wonderful
+[2007.500 --> 2011.180] little piece of the literature that I'm going to spend a couple minutes on because it's
+[2011.180 --> 2014.380] so classic and so cool.
+[2014.380 --> 2020.060] And this started with work by Randy Gallistel in the 1980s, and so what he did was he studied
+[2020.060 --> 2025.740] this problem of reorientation, that is, figuring out your orientation in a known environment
+[2025.740 --> 2027.220] once you've been disoriented.
+[2027.220 --> 2028.220] Okay.
+[2028.220 --> 2031.540] It's a very particular aspect of the problem of navigation.
+[2031.540 --> 2037.100] So he put rats in a rectangular environment and he had them explore the environment and
+[2037.100 --> 2042.580] then he hid some rat-relevant thing like a little piece of food, say a chocolate chip,
+[2042.580 --> 2043.580] in that corner.
+[2043.580 --> 2044.580] Okay.
+[2044.580 --> 2045.580] Rat sees that happen.
+[2045.580 --> 2046.580] Rat is interested.
+[2046.580 --> 2051.940] Take rat out of box before they get to go take the chocolate chip and then you disorient
+[2051.940 --> 2052.940] the rat.
+[2052.940 --> 2055.060] You don't grab them by the tail and swing them around.
+[2055.060 --> 2056.140] You do some slower version.
+[2056.140 --> 2057.340] You don't want to make them sick.
+[2057.340 --> 2061.780] You do some slower version of that so they've lost track of which way they're facing.
+[2061.780 --> 2062.780] Okay.
+[2062.780 --> 2065.140] Now you put them in a new box.
+[2065.140 --> 2068.380] New box because you don't want a smell to still be there.
+[2068.380 --> 2070.380] New box.
+[2070.380 --> 2072.940] And you see which way the rat goes.
+[2072.940 --> 2078.580] And you find that the rat goes 50-50 to those two corners.
+[2078.580 --> 2083.460] What does that mean the rat has encoded?
+[2083.460 --> 2086.220] He doesn't go randomly to any corner.
+[2086.220 --> 2087.220] He goes to corners.
+[2087.220 --> 2088.220] He knows it was in a corner.
+[2088.220 --> 2090.020] He doesn't go randomly to any corner.
+[2090.020 --> 2091.020] Yeah.
+[2091.020 --> 2092.020] I'm sorry.
+[2092.020 --> 2094.340] From the front of the rat, it's to the left.
+[2094.340 --> 2095.340] Say again?
+[2095.340 --> 2098.340] Like it's specifically in, like, one of these directions to the left.
+[2098.340 --> 2100.820] It's like he's facing the corner.
+[2100.820 --> 2102.020] You've got to say a little more than that.
+[2102.020 --> 2103.020] What's to the left?
+[2103.020 --> 2105.540] What's different about those two corners than the other two?
+[2105.540 --> 2106.540] Yeah?
+[2106.540 --> 2110.540] Well, if he's looking at the shape of the room, one wall is too long and one wall
+[2110.540 --> 2111.540] is too short.
+[2111.540 --> 2112.540] You'd have to check it.
+[2112.540 --> 2113.540] It's basically, like, a long wall.
+[2113.540 --> 2117.380] He has to go to what looks like the right.
+[2117.380 --> 2118.540] Exactly.
+[2118.540 --> 2126.740] He has to have encoded the fact that the room is longer in one axis
+[2126.740 --> 2128.140] than another.
+[2128.140 --> 2130.380] And he's essentially encoded:
+[2130.380 --> 2135.160] that chocolate chip was on the right side of the long wall, or the left side of the short
+[2135.160 --> 2136.160] wall.
+[2136.160 --> 2138.020] And both of those corners are consistent with that.
+[2138.020 --> 2140.500] That's why he goes 50-50 to them.
+[2140.500 --> 2144.300] He can't go 100% of the time to the right corner because he has no information that would
+[2144.300 --> 2146.380] tell him that in this experiment.
+[2146.380 --> 2147.380] Everybody clear?
+[2147.380 --> 2152.180] And that tells you he learned where the thing is with respect to the shape of the room.
+[2152.180 --> 2154.380] And its particular aspect ratio.
+[2154.380 --> 2155.380] OK.
+[2155.380 --> 2157.900] So now the plot thickens.
+[2157.900 --> 2160.060] And now they repeat the experiment.
+[2160.060 --> 2164.500] But this time they make some very rat-salient asymmetry over here.
+[2164.500 --> 2167.420] You make a color and a texture.
+[2167.420 --> 2173.020] And you make other things to make this wall very saliently different.
+[2173.020 --> 2176.420] So you would think the rat, motivated to find the chocolate chip,
+[2176.420 --> 2180.980] would now go 100% to that corner when we put him in the new box with the same landmark cue
+[2180.980 --> 2182.780] over there.
+[2182.780 --> 2187.900] But no, the rat goes 50-50 to the same two corners.
+[2187.900 --> 2193.100] And in control experiments, many control conditions, you can show, and I'll show you one in a moment,
+[2193.100 --> 2195.100] the rat absolutely knows about this wall.
+[2195.100 --> 2198.180] He's encoded the presence of that asymmetric wall.
+[2198.180 --> 2202.700] So he has the information that should enable him to break the symmetry, but he doesn't
+[2202.700 --> 2205.420] use it.
+[2205.420 --> 2206.420] It's weird.
+[2206.420 --> 2208.420] You should be surprised.
+[2208.420 --> 2210.060] Everybody get why that's weird?
+[2210.060 --> 2212.100] He could have solved this one perfectly this time.
+[2212.100 --> 2213.380] He has the information.
+[2213.380 --> 2216.220] He's not using that information.
+[2216.220 --> 2217.420] OK.
+[2217.420 --> 2220.500] So that's weird.
+[2220.500 --> 2225.380] But then, Liz Spelke and her colleagues came along 10 years later and said, let's try
+[2225.380 --> 2227.740] this with infants.
+[2227.740 --> 2234.180] And so they did the infant version where you put the infant in a rectangular
+[2234.180 --> 2239.540] room and you hide the doors so the infant doesn't have any cues other than the shape
+[2239.540 --> 2245.900] of the room, 18- to 24-month-old infants.
+[2245.900 --> 2251.980] And then you hide a toy in a corner and you see what the infant does.
+[2251.980 --> 2256.060] And oh, actually what you do with the infant is you make this wall really salient in all
+[2256.060 --> 2257.060] kinds of ways.
+[2257.060 --> 2261.300] In one case, it was red velvet, and they first showed the, and these aren't, like, these
+[2261.300 --> 2263.060] are, I guess, toddlers, right?
+[2263.060 --> 2267.180] They first showed them that when you knock on the red wall, music happens.
+[2267.180 --> 2269.740] Totally cool, riveting for a little kid.
+[2269.740 --> 2270.740] They totally get it.
+[2270.740 --> 2274.500] They know all about the music wall, very salient to them.
+[2274.500 --> 2278.940] Nonetheless, you put them in this experiment and they behave just like rodents.
+[2278.940 --> 2281.500] They go 50-50 to the two corners.
+[2281.500 --> 2285.580] Even though they noticed the red music wall and it could have solved the problem for them
+[2285.580 --> 2290.020] perfectly and they were motivated, they didn't use the information.
+[2290.020 --> 2295.260] Everybody get why that's kind of interesting and kind of surprising?
+[2295.260 --> 2297.260] Okay.
+[2297.260 --> 2302.220] Now you might say, okay, rodents, infants, they're dummies.
+[2302.220 --> 2303.620] We wouldn't do that.
+[2303.620 --> 2306.420] Us smart adult humans, would we?
+[2306.420 --> 2309.180] But oh yes, you would.
+[2309.180 --> 2311.060] Under certain circumstances.
+[2311.060 --> 2316.340] If we tied up your language system, and there's lots of ways of doing that, one way is called
+[2316.340 --> 2317.620] shadowing.
+[2317.620 --> 2320.780] So it's kind of like simultaneous translation, but you don't translate.
+[2320.780 --> 2321.780] Try this sometime.
+[2321.780 --> 2325.540] I do this occasionally when I'm bored, just because it's amusingly difficult.
+[2325.540 --> 2330.500] Turn on the radio, listen to somebody talking and just repeat everything they say after
+[2330.500 --> 2331.500] they say it.
+[2331.500 --> 2332.500] Right?
+[2332.500 --> 2333.500] You're not even translating.
+[2333.500 --> 2334.500] It's still demanding.
+[2334.500 --> 2335.500] Right?
+[2335.500 --> 2338.780] You have to be listening and producing, a continuously running thing.
+[2338.780 --> 2342.940] So that's called verbal shadowing and it's an established way to really tie up your
+[2342.940 --> 2347.140] language system and kind of take it offline so you can't really use it.
+[2347.140 --> 2350.780] Then you do this experiment on human adults.
+[2350.780 --> 2355.220] If they're verbally shadowing and their language system is tied up, they behave just like
+[2355.220 --> 2356.940] rodents and infants.
+[2356.940 --> 2362.060] That is, they use the shape of the space, but they don't use salient landmarks that could
+[2362.060 --> 2363.740] help them solve it perfectly.
+[2363.740 --> 2367.100] They go 50-50 to the two corners.
+[2367.100 --> 2370.100] They become rats and infants.
+[2370.100 --> 2373.460] We become rats and infants, okay?
+[2373.460 --> 2379.900] So Liz Spelke has spun a whole fascinating big theoretical story about what this really
+[2379.900 --> 2380.900] means.
+[2380.900 --> 2384.540] Well, let me just say a little bit more about this first before I do her whole big story.
+[2384.540 --> 2385.540] Okay, yeah.
+[2385.540 --> 2390.280] So the idea is, first of all, why would it make sense for rodents at least, let's just
+[2390.280 --> 2394.780] consider the rats, to use only the shape of space to reorient themselves when they're
+[2394.780 --> 2395.780] disoriented?
+[2395.780 --> 2398.540] At first glance, that seems really crazy.
+[2399.020 --> 2401.780] You think about rodents in natural environments.
+[2401.780 --> 2407.020] The idea is that actually in natural environments, features change.
+[2407.020 --> 2411.860] Snow comes and goes, plants come and go, odors change.
+[2411.860 --> 2417.620] All those kinds of features of the environment can change, but the shape of the environment,
+[2417.620 --> 2421.940] like that there's a slope like this and a barrier here and a cliff there, those are more
+[2421.940 --> 2424.420] stable features of the environment.
+[2424.420 --> 2431.140] So it actually makes evolutionary sense for disoriented rodents, at least, to use the shape of space
+[2431.140 --> 2436.780] more than the features, the colors and textures and odors of a space, as landmarks to reorient
+[2436.780 --> 2437.780] themselves.
+[2437.780 --> 2438.940] Does that make sense?
+[2438.940 --> 2445.180] And so the idea is that rodents have, through evolution, kind of evolved this system for
+[2445.180 --> 2449.620] reorienting themselves when they lose their bearings that relies only on the shape of
+[2449.620 --> 2450.820] space.
+[2450.820 --> 2455.860] So restrictively, that even if another cue becomes relevant and important, they don't
+[2455.860 --> 2457.500] use it.
+[2457.500 --> 2458.500] Okay?
+[2458.500 --> 2463.500] And the further idea is that we have some version of this system in our heads as well.
+[2463.500 --> 2468.060] And as smart adult humans, we learn all kinds of other strategies to get beyond this.
+[2468.060 --> 2471.340] We're not trapped with only being able to use this one system to solve it.
+[2471.340 --> 2476.340] We can use other systems, possibly language, to help us say things to ourselves, like it's
+[2476.340 --> 2478.500] on the left side of a short wall.
+[2478.500 --> 2479.500] Right?
+[2479.500 --> 2480.500] That's what Spelke thinks.
+[2480.500 --> 2483.700] There's some version in your head of, it's on the left side of the short wall.
+[2483.700 --> 2487.780] And that's why adults can do this when their language system isn't tied up.
+[2487.780 --> 2490.620] I don't think that's exactly right, but it's a beautiful story.
+[2490.620 --> 2492.420] And there's some evidence for it.
+[2492.420 --> 2493.420] Okay?
+[2493.420 --> 2497.940] Anyway, part of the reason I go through this whole thing, well, one, I think these experiments
+[2497.940 --> 2498.940] are cool.
+[2498.940 --> 2504.300] But it's also been the basis of a kind of core idea in cognitive science.
+[2504.300 --> 2507.220] And that idea is called informational encapsulation.
+[2507.220 --> 2510.700] It's just lots of syllables for a pretty simple idea.
+[2510.700 --> 2515.100] That you have this system for reorientation, and it is designed to use the shape of space
+[2515.100 --> 2519.940] around you as the cue that you use to reorient yourself when you're disoriented.
+[2519.940 --> 2523.540] That system is kind of hardwired to do just that.
+[2523.540 --> 2527.620] And if some other part of your brain has information that could solve the problem, like
+[2527.620 --> 2533.740] the presence of a relevant feature that you could use, your reorientation
+[2533.740 --> 2535.940] system doesn't have access to that information.
+[2535.940 --> 2538.260] It's informationally encapsulated.
+[2538.260 --> 2544.100] It only has access to the particular inputs that are kind of hardwired into it.
+[2544.100 --> 2550.020] And so 20 years ago, a lot of people kind of went wild with this and said that all the
+[2550.020 --> 2554.420] brain regions that I've talked about and cognitive systems that we're considering in this course
+[2554.420 --> 2556.380] are informationally encapsulated.
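+
+A toy illustration of informational encapsulation, under assumptions of my own rather than anything from the experiments: the reorientation routine below takes only the room's geometry as input, so a landmark known elsewhere in the system never reaches it, and two diagonally opposite corners remain indistinguishable.
+
+def reorient(room_width, room_length):
+    # Encapsulated: geometry is the only input; landmark features are not even
+    # part of this function's signature, so they can never break the tie.
+    # A rectangle looks identical after a 180-degree rotation, so the remembered
+    # corner and its diagonal opposite both match "right side of the long wall."
+    corner = (0.0, 0.0)
+    rotated_corner = (room_width, room_length)  # the same corner after a 180-degree flip
+    return [corner, rotated_corner]             # the agent picks between these 50-50
+
+red_wall = "north"            # known to other systems, but never passed in
+print(reorient(1.0, 2.0))     # [(0.0, 0.0), (1.0, 2.0)] - the ambiguity survives
+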
+[2556.380 --> 2561.380] It's kind of an extreme idea that goes far beyond functional specificity to say, like, the
+[2561.380 --> 2564.460] inputs are extremely restricted to each region.
+[2564.460 --> 2566.340] And that's probably not true.
+[2566.340 --> 2571.380] But there's some limitations on the information that each of these processors we're considering
+[2571.380 --> 2573.300] in this course has access to.
+[2573.300 --> 2578.260] And this is kind of the classic evidence, behavioral evidence, that some of those systems
+[2578.260 --> 2579.740] have very restricted inputs.
+[2579.740 --> 2582.700] Does that make sense, the idea of informational encapsulation?
+[2582.700 --> 2588.660] Not as an absolute truth about the brain, but as an idea that is interesting to consider
+[2588.660 --> 2592.020] individually for each of the systems we study.
+[2592.020 --> 2597.100] There's since been pushback about the extremeness of this claim that infants and rodents only
+[2597.100 --> 2598.460] use the shape of space.
+[2598.460 --> 2601.900] There are circumstances where you can get them to use other information.
+[2601.900 --> 2608.100] But it's definitely true that the shape of space is the dominant cue for reorienting
+[2608.100 --> 2610.820] in rodents and infants.
+[2610.820 --> 2611.860] All right.
+[2611.860 --> 2616.700] So when you're lost, as I've mentioned, there's two questions you need to answer.
+[2616.700 --> 2617.700] Where are you?
+[2617.700 --> 2618.980] And which way are you oriented?
+[2618.980 --> 2623.100] This last stuff we were talking about is about the which-way-you're-oriented question.
+[2623.100 --> 2627.620] And I just showed you some evidence for this general finding that the geometric cues,
+[2627.620 --> 2633.820] the shape of space, are the dominant cues you use to reorient yourself, to get your heading
+[2633.820 --> 2637.460] back when you're disoriented.
+[2637.460 --> 2643.900] But do we really know that those cues are different for place recognition and for
+[2643.900 --> 2644.900] heading direction?
+[2644.900 --> 2645.900] All right.
+[2645.900 --> 2648.060] So I've sort of said here are two different parts of the problem.
+[2648.140 --> 2650.180] But do they function differently?
+[2650.180 --> 2652.140] Do we really use different cues?
+[2652.140 --> 2657.460] Do we use the shape of space more for heading direction, and maybe other cues for place
+[2657.460 --> 2659.380] recognition, for knowing where we are?
+[2659.380 --> 2660.380] Okay.
+[2660.380 --> 2665.340] So I'm going to show you a very elegant behavioral experiment in mice that does this all at
+[2665.340 --> 2667.180] once in one experiment.
+[2667.180 --> 2671.100] So this is Josh Julian, a former lab tech in my lab.
+[2671.100 --> 2672.820] I get no credit for this whatsoever.
+[2672.820 --> 2674.620] I'm proud even though I shouldn't be proud.
+[2674.620 --> 2679.100] He was just an ingeniously smart guy who went on and did an awesome experiment after he
+[2679.100 --> 2681.220] left my lab and went off to grad school.
+[2681.220 --> 2683.180] And here's this awesome experiment.
+[2683.180 --> 2684.180] Okay.
+[2684.180 --> 2687.220] So he said, let's get mice to do both of these tasks.
+[2687.220 --> 2690.180] They have to know where they are and which way they're oriented.
+[2690.180 --> 2691.180] Okay.
+[2691.180 --> 2694.620] We're going to do the same disorientation thing, take them out, turn them around until they're
+[2694.620 --> 2695.620] disoriented.
+[2695.620 --> 2698.660] But these mice have to learn two different environments.
+[2698.660 --> 2699.660] Okay.
+[2699.660 --> 2704.620] So one environment has vertical stripes on the short wall, on one of the short walls.
+[2704.620 --> 2708.500] The other environment has horizontal stripes on the short wall.
+[2708.500 --> 2710.000] Okay.
+[2710.000 --> 2711.460] So you do the same experiment.
+[2711.460 --> 2714.340] You bait one corner and you see where the rodent goes.
+[2714.340 --> 2718.140] Does he go to the two opposite corners?
+[2718.140 --> 2723.020] It's exactly the same experiment, but he has to remember which room he's in: to solve the problem
+[2723.020 --> 2728.220] he has to recognize whether he's seeing the vertical stripes or the horizontal stripes
+[2728.220 --> 2729.780] and act accordingly.
+[2729.780 --> 2733.940] Because when he's in the vertical context, the thing gets hidden...
+[2733.940 --> 2736.420] He does this over repeated trials.
+[2736.420 --> 2743.540] The food gets hidden on the, let me get this right, long wall on the left.
+[2743.540 --> 2744.540] Yeah.
+[2744.540 --> 2745.540] Right.
+[2745.540 --> 2746.540] Yeah.
+[2746.540 --> 2753.180] So when the long wall is on the left of the rodent, okay, that corner, the long wall is
+[2753.180 --> 2754.180] on the left.
+[2754.180 --> 2760.540] Whereas when he's in the blue context, the reward is here, where the long wall
+[2760.540 --> 2761.540] is on the right.
+[2761.540 --> 2762.540] Okay.
+[2762.540 --> 2766.060] So he has to learn those two different environments, and that the relevant shape
+[2766.060 --> 2767.940] cues are opposite in each.
+[2767.940 --> 2768.940] Okay.
+[2768.940 --> 2769.940] Everybody got that?
+[2769.940 --> 2770.940] Okay.
+[2770.940 --> 2775.140] Now, what you find is that the rodent can learn that just fine.
+[2775.140 --> 2776.140] Okay.
+[2776.140 --> 2781.460] So this shows that when you put the rodent in the vertical context, in a room like this,
+[2781.460 --> 2786.580] they go more to these two corners than those two corners.
+[2786.580 --> 2791.380] Whereas when you put him in a horizontal context with horizontal stripes, he goes more
+[2791.380 --> 2794.620] to those two corners than these two corners.
+[2794.620 --> 2800.120] That tells you the rodent has used the orientation of the stripes to figure out which room he's
+[2800.120 --> 2803.900] in and hence which two corners are the right ones.
+[2803.900 --> 2806.860] Everybody got that?
+[2806.860 --> 2808.500] But here's the amazing thing.
+[2808.500 --> 2813.220] Even though in this experiment the very same animals in the very same trials are using
+[2813.220 --> 2819.180] those stripes to figure out which room they're in, they don't use those stripes at all
+[2819.180 --> 2825.700] to break the symmetry and to go only to the correct corner, which they could do but don't.
+[2825.700 --> 2826.700] Okay.
+[2826.700 --> 2834.660] So once you've trained the rodents on these two things, that the reward is here in the
+[2834.660 --> 2839.500] vertical context and there in the horizontal context, you disorient them, you put them back
+[2839.500 --> 2843.100] in, and you find that when there are vertical stripes, they go to these two corners, I'm just
+[2843.100 --> 2846.900] repeating the data, and when there are horizontal stripes, they go to those two corners.
+[2846.900 --> 2847.900] Okay.
+[2847.900 --> 2848.900] They've learned that.
+[2848.900 --> 2850.900] But why do they go to those two corners?
+[2850.900 --> 2852.460] They learned the damn stripes.
+[2852.460 --> 2857.700] They use them to know which room they're in, but they don't use them to break the symmetry
+[2857.700 --> 2860.460] and decide which is the correct corner.
+[2860.460 --> 2862.460] Okay.
+[2862.460 --> 2867.020] So this is like a microcosm of everything I've been saying so far, all in one experiment.
+[2867.020 --> 2871.820] The rodents are noticing those feature cues, using them to figure out which room they're
+[2871.820 --> 2878.060] in, where they are, but failing to use those features, the orientation of the stripes,
+[2878.060 --> 2881.500] to figure out which of the two corners is the correct one.
+[2881.500 --> 2884.660] They're not even encoding "food is near stripes."
+[2884.660 --> 2889.060] Like, duh, that should have been easy.
+[2889.060 --> 2892.820] So this is a beautiful, I mean, this is even more evidence for informational encapsulation
+[2892.820 --> 2897.300] of this system, because it shows us, on the very same trial, they use the stripe information
+[2897.300 --> 2901.260] to know which room they were in, and they failed to use it to figure out their orientation in
+[2901.260 --> 2902.260] that room.
+[2902.260 --> 2906.340] Is this sort of making sense?
+[2906.340 --> 2910.900] I realize it's kind of subtle; it's sort of simple and subtle at the same time.
+[2910.900 --> 2911.900] Yeah.
+[2911.900 --> 2917.620] So now, the things that you showed us at the very first, building maps, what are
+[2917.620 --> 2918.620] they doing here?
+[2918.620 --> 2919.620] Yeah.
+[2919.620 --> 2921.020] Great question.
+[2921.020 --> 2922.020] Let's look at that.
+[2922.020 --> 2922.860] That's what we're doing next.
+[2922.860 --> 2924.020] It's a great question.
+[2924.020 --> 2926.380] What are the damn place cells doing here?
+[2926.380 --> 2927.380] Great question.
+[2927.380 --> 2931.660] Okay, let's say a little bit more, and then we'll think about what the place cells are doing.
+[2931.660 --> 2936.220] Okay, so let me just, like, restate, cash out the findings here.
+[2936.220 --> 2939.940] The mice are using the features to figure out which place they're in.
+[2939.940 --> 2942.420] Are they in this one or that one?
+[2942.420 --> 2947.700] But they are failing to use those features to figure out which is the correct corner.
+[2947.700 --> 2949.940] They're still 50-50 for the two corners.
+[2949.940 --> 2953.500] Even though logically they have that information and they could use it and they should
+[2953.500 --> 2955.060] use it, they don't.
+[2955.060 --> 2957.060] Okay.
+[2957.060 --> 2962.620] So that means the mice are using features, in this case orientation, for place recognition,
+[2962.620 --> 2967.740] but not for regaining their orientation within that place.
+[2967.740 --> 2969.620] I'm just repeating what I said before.
+[2969.620 --> 2971.140] So, making sense?
+[2971.140 --> 2972.140] Okay.
+[2972.140 --> 2974.620] So now David's question.
+[2974.620 --> 2976.500] What are the place cells doing here?
+[2976.500 --> 2977.500] Great question.
+[2977.500 --> 2978.500] Let's look.
+[2978.500 --> 2980.060] It's mice, so we can do that.
+[2980.060 --> 2981.540] Or Keinath et al. can do that.
+[2981.540 --> 2986.500] And Josh Julian, my amazing former lab tech, for whom, again, I get no credit whatsoever.
+[2986.500 --> 2987.500] Okay.
+[2987.500 --> 2990.300] So, what do they do?
+[2990.300 --> 2997.220] They allow the mice to forage for crumbs in a box like this.
+[2997.220 --> 3001.500] They disorient the mouse before each trial, take him out, turn him around, so he doesn't
+[3001.500 --> 3004.620] know which way he's facing, put him in the box.
+[3004.620 --> 3006.620] Okay.
+[3006.620 --> 3011.820] And they find that place cells have a particular location in that box, not surprising, that's
+[3011.820 --> 3014.300] what place cells do.
+[3014.300 --> 3019.060] So here are two different trials, two different cells that were mapped out in a rodent doing
+[3019.060 --> 3020.060] this.
+[3020.060 --> 3023.700] This cell responds always in that corner.
+[3023.700 --> 3026.500] Another cell responds only in that corner.
+[3026.500 --> 3027.500] Okay.
+[3027.500 --> 3031.300] These are just place cells like we described before, doing what place cells do.
+[3031.300 --> 3034.060] Okay.
+[3034.060 --> 3041.900] And now, sometimes those place cells are off by 180 degrees, even though the stripes
+[3041.900 --> 3044.420] should resolve the ambiguity.
+[3044.420 --> 3045.420] Okay.
+[3045.420 --> 3054.820] So those same cells on other trials respond in the opposite corner.
+[3054.820 --> 3059.100] So the place cells are doing just what the rodent is doing.
+[3059.100 --> 3061.020] The place cells are confused.
+[3061.020 --> 3065.900] Am I facing, you know, am I oriented like this or am I oriented like
+[3065.900 --> 3066.900] that?
+[3066.900 --> 3070.940] The place cells don't know and the rodent doesn't know.
+[3070.940 --> 3074.900] And the coolest thing about this experiment is that these things are linked.
+[3074.900 --> 3079.380] On the trials where the rodent goes to the wrong corner, the place cells are also in the
+[3079.380 --> 3080.380] wrong corner.
+[3080.380 --> 3081.380] Okay.
+[3081.380 --> 3085.140] They systematically determine which way the animal will go.
+[3085.140 --> 3086.140] Okay.
+[3086.140 --> 3090.700] So, oh, and also, as I mentioned before, all those cells are in cahoots.
+[3090.700 --> 3093.340] They're all in sync, going the same way.
+[3093.340 --> 3098.420] So when one of the cells rotates to the opposite corner, all the other ones rotate to the opposite
+[3098.420 --> 3100.140] corner.
+[3100.140 --> 3106.060] So it's as though, somehow, on trial to trial, the rodent thinks he's oriented one way.
+[3106.060 --> 3108.380] He's actually 50-50 on which way he's oriented.
+[3108.380 --> 3110.980] He's not using the feature cues.
+[3110.980 --> 3115.740] And his behavior, according to where he looks for the food, exactly follows the way he's
+[3115.740 --> 3118.740] oriented, and so do all of his place cells.
+[3118.740 --> 3119.740] Okay.
+[3119.740 --> 3122.020] That whole system goes together.
+[3122.020 --> 3125.700] That tells you that those place cells are relevant behaviorally.
+[3125.700 --> 3129.980] They are the system that either directly determines or is tightly linked to the system
+[3129.980 --> 3133.940] that determines which way the animal thinks he's facing.
+[3133.940 --> 3134.940] Okay.
+[3134.940 --> 3138.780] I realize this is a little bit complicated.
+[3138.780 --> 3143.700] Does it make sense to you that, you know, as we've been talking about with reorientation,
+[3143.700 --> 3147.380] even though the animal should know from the stripes the difference between that corner
+[3147.380 --> 3149.900] and this corner, he doesn't know behaviorally?
+[3149.900 --> 3152.740] He's looking for food right there and yet he goes 50-50.
+[3152.740 --> 3154.740] Weird and stupid, right?
+[3154.740 --> 3156.900] Place cells do the same thing.
+[3156.900 --> 3157.900] Okay.
+[3157.900 --> 3161.140] And further, the place cells and the behavior go together.
+[3161.140 --> 3162.140] Yeah, sure.
+[3162.140 --> 3166.500] So if you're reading information off of place cells, can you zap it?
+[3166.500 --> 3167.500] Can you perturb it?
+[3167.500 --> 3170.900] Ah, wouldn't that be nice?
+[3170.900 --> 3178.020] Alas, you can't, for the reason someone over here asked about a while ago, I think,
+[3178.020 --> 3180.820] and that's because they're all interleaved together.
+[3180.820 --> 3183.900] And if you zap just one cell, you're not going to have an effect.
+[3183.900 --> 3186.260] And if you zap a whole region, you get all of them and you get muck.
+[3186.260 --> 3189.780] So you can't do that manipulation, unfortunately.
+[3189.780 --> 3193.100] You need some kind of topography to do the manipulation.
+[3193.100 --> 3194.100] Okay.
+[3194.100 --> 3195.100] Okay.
+[3195.100 --> 3200.060] So I just said how all this relates to behavior, I got ahead of myself there.
+[3200.060 --> 3201.740] Just to go through that quickly.
+[3201.740 --> 3205.900] So what they've done here is they've trained the mouse on this classic reorientation task.
+[3205.900 --> 3210.500] They disorient the mouse before each trial while recording from hippocampal place cells.
+[3210.500 --> 3216.540] As before, a given cell flips 180 degrees from trial to trial, despite the fact that the stripes
+[3216.540 --> 3219.940] should disambiguate it and tell him which way he's oriented.
+[3219.940 --> 3224.060] And by the way, the head direction cells and the grid cells also flip in the same way,
+[3224.060 --> 3227.820] in cahoots with the place cells.
+[3227.820 --> 3235.140] But you can tell which corner the animal will go to by looking at where the place cells respond.
+[3235.140 --> 3241.820] And so when this place cell represents that location, the animal searches
+[3241.820 --> 3242.820] first there.
+[3242.820 --> 3246.620] And when it flips around, they search in the opposite corner.
+[3246.620 --> 3247.620] Okay.
+[3247.620 --> 3252.020] So all of that just shows this really strong link between the place cells and behavior.
+[3252.020 --> 3253.020] Okay.
+[3253.700 --> 3260.020] So to recap, we have talked about four different kinds of cells involved in representing space
+[3260.020 --> 3262.380] and navigating around in it.
+[3262.380 --> 3266.860] Place cells that are like the "you are here": they respond when you're in a particular location.
+[3266.860 --> 3272.220] Direction cells that respond when you're heading in one direction, not in another direction.
+[3272.220 --> 3277.700] Border cells that fire when you're near a particular border in the environment; I have
+[3277.700 --> 3280.020] border cells going right now, throughout this whole lecture.
+[3280.020 --> 3286.500] I've got a batch of border cells that are going. And grid cells that do this amazing thing of
+[3286.500 --> 3290.700] firing when the animal is in multiple different locations, and those locations that make it
+[3290.700 --> 3293.140] fire are arranged in a hexagonal grid.
+[3293.140 --> 3297.500] You can think of it as a kind of ruler telling the rodent how far he's gone in his space;
+[3297.500 --> 3301.340] those grid cells are like the rulers.
+[3301.340 --> 3303.340] Yeah.
+[3303.340 --> 3304.340] Right.
+[3304.340 --> 3306.780] Those are the four kinds we've talked about.
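+
+To pull those four codes together, here is a small sketch, with every tuning shape and parameter invented for illustration, of how each cell type's firing can be written as a function of the animal's position and heading:
+
+import numpy as np
+
+def place_rate(pos, field_center, width=0.2):
+    # Place cell: a single bump at one location.
+    d2 = np.sum((np.array(pos) - np.array(field_center)) ** 2)
+    return np.exp(-d2 / (2 * width ** 2))
+
+def head_direction_rate(heading_deg, preferred_deg, kappa=4.0):
+    # Head direction cell: tuned to facing direction, regardless of location.
+    return np.exp(kappa * (np.cos(np.deg2rad(heading_deg - preferred_deg)) - 1))
+
+def border_rate(pos, box=(1.0, 1.0), margin=0.1):
+    # Border cell: fires near any boundary of the box.
+    x, y = pos
+    return 1.0 if min(x, y, box[0] - x, box[1] - y) < margin else 0.0
+
+def grid_rate(pos, spacing=0.5):
+    # Grid cell: many bumps on a hexagonal lattice (cf. the earlier sketch).
+    k = 4 * np.pi / (np.sqrt(3) * spacing)
+    x, y = pos
+    g = sum(np.cos(k * np.cos(t) * x + k * np.sin(t) * y)
+            for t in (0.0, np.pi / 3, 2 * np.pi / 3))
+    return max(g, 0.0)
+
+pos, heading = (0.3, 0.95), 120.0
+print(place_rate(pos, (0.3, 0.9)), head_direction_rate(heading, 120.0),
+      border_rate(pos), grid_rate(pos))
+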
+[3306.780 --> 3308.020] Okay.
+[3308.020 --> 3310.580] So now, here's the cool thing.
+[3310.580 --> 3311.580] All this stuff.
+[3311.580 --> 3313.180] Navigation is awesome.
+[3313.180 --> 3314.180] We need it.
+[3314.180 --> 3315.180] It's important.
+[3315.180 --> 3321.740] All, you know, all mobile animals need it for the reasons we've been talking about.
+[3321.740 --> 3325.500] But you can use this whole system for so much more than just navigation.
+[3325.500 --> 3329.580] Once you have this fancy system in your head to keep track of your location, to keep track
+[3329.580 --> 3333.940] of your direction, to keep track of where things are, how you're moving through that space,
+[3333.940 --> 3337.500] you can use that whole magnificent system in other ways.
+[3337.500 --> 3341.180] And in the last three or four years, there's just been a huge number of studies that are really
+[3341.180 --> 3346.060] starting to take this very seriously, particularly the grid cells, and thinking about how the grid
+[3346.060 --> 3351.820] cells, I mean, probably the whole system, but people have been focusing on the grid cells,
+[3351.820 --> 3355.460] how they're used in multiple different situations.
+[3355.460 --> 3356.460] So here's one.
+[3356.460 --> 3357.460] Okay.
+[3357.460 --> 3361.300] This is a cool study where what these guys did was they stuck a little device hanging around
+[3361.300 --> 3362.300] people's necks.
+[3362.300 --> 3365.020] Around the subject's neck there's a little camera aiming forward.
+[3365.020 --> 3370.940] It takes pictures at random intervals and records the person's GPS location.
+[3370.940 --> 3374.580] So you send them off for a few months with this little device and you do something to protect
+[3374.580 --> 3375.580] people's privacy.
+[3375.580 --> 3378.940] I don't know exactly how they maneuvered that, but I'm sure they found a way.
+[3378.940 --> 3383.580] And so then they get this set of photographs taken from this person's front view of wherever
+[3383.580 --> 3387.940] they were over several months as they went wherever they went in their lives.
+[3387.940 --> 3388.940] Okay.
+[3388.940 --> 3392.220] With a little GPS tag for each photograph.
+[3392.220 --> 3395.780] So then what they do is they bring the subjects in and pop them in the scanner and show them
+[3395.780 --> 3398.540] some of those pictures.
+[3398.540 --> 3400.540] Okay.
+[3400.540 --> 3404.900] And they ask people to relive the experience that they had when they were looking at that
+[3404.900 --> 3405.900] thing.
+[3405.900 --> 3406.900] Right?
+[3406.900 --> 3407.900] Put this on me?
+[3407.900 --> 3409.580] It'd be my monitor, like, all the time.
+[3409.580 --> 3414.020] And I wouldn't know which experience to relive, but I guess these people had, you know,
+[3414.020 --> 3415.020] richer lives.
+[3415.020 --> 3416.020] So.
+[3416.020 --> 3417.620] Okay.
+[3417.620 --> 3424.340] So now what they do is they use multi-voxel pattern analysis in the hippocampus while
+[3424.340 --> 3429.060] people are reliving those experiences in the scanner by looking at those images taken
+[3429.060 --> 3431.980] from their front-facing cameras.
+[3431.980 --> 3433.500] Okay.
+[3433.500 --> 3437.460] And then they asked, is the pattern of response in the hippocampus, like some bunch of
+[3437.460 --> 3443.180] voxels, here's some pattern, is it more similar for events the subject remembers
+[3443.180 --> 3446.060] that were nearby in space?
+[3446.060 --> 3447.060] Okay.
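+
+In code, the analysis being described might look roughly like this: correlate the hippocampal voxel patterns for every pair of remembered events and relate that similarity to the log geographic distance between where the photos were taken. The arrays below are random stand-ins, not the study's data:
+
+import numpy as np
+
+rng = np.random.default_rng(0)
+patterns = rng.normal(size=(40, 500))           # 40 events x 500 hippocampal voxels
+locations = rng.uniform(0, 5000, size=(40, 2))  # where each photo was taken (meters)
+
+sims, dists = [], []
+for i in range(len(patterns)):
+    for j in range(i + 1, len(patterns)):
+        sims.append(np.corrcoef(patterns[i], patterns[j])[0, 1])
+        dists.append(np.linalg.norm(locations[i] - locations[j]))
+
+# The reported effect: pattern similarity falls off with log spatial distance.
+r = np.corrcoef(np.log(dists), sims)[0, 1]
+print(r)   # with real data this correlation comes out negative
+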
+[3447.060 --> 3452.220] So you do this for me, it's like, yes, I occasionally go to the Stata cafeteria and I occasionally
+[3452.220 --> 3458.260] go to the Koch Center cafeteria and I spend a lot of time at home, and those two things
+[3458.260 --> 3460.140] are closer to each other than my home thing.
+[3460.140 --> 3465.140] Are the patterns more similar for nearby locations than for more distant locations?
+[3465.140 --> 3467.420] Okay.
+[3467.420 --> 3468.740] And they were.
+[3468.740 --> 3475.500] So this is the distance, on a log scale, between the two places where two different images
+[3475.500 --> 3482.540] were taken, and this is the similarity of the patterns in the hippocampus that result from looking at them.
+[3482.540 --> 3486.420] Now some of you might be wondering, in fact, I wonder this too, I think this is a cool
+[3486.420 --> 3490.060] study so I'm presenting it, but it sort of doesn't make sense to me because everything
+[3490.060 --> 3494.780] we know about the hippocampus is those place cells are pretty interleaved.
+[3494.780 --> 3500.220] So how you manage to get a pattern response reading out a systematic location out of the
+[3500.220 --> 3502.580] hippocampus is a mystery to me.
+[3502.580 --> 3506.940] So they can't be, like, fully interleaved; there must be some kind of structure in there to the layout
+[3506.940 --> 3511.300] of those cells to enable them to get this information.
+[3511.300 --> 3518.100] Okay, so everybody get how it's telling you that the hippocampus is remembering and
+[3518.100 --> 3524.980] reliving some representation of the locations where you had those experiences?
+[3524.980 --> 3527.500] Everybody get how this shows that?
+[3527.500 --> 3532.140] But then they asked another interesting question and they said, oh, does it also represent
+[3532.140 --> 3533.140] time?
+[3533.140 --> 3536.380] So we've been talking about space for the last two lectures, but now we're going straight
+[3536.380 --> 3541.260] off the deep end, and our first step, not even near the deep end yet: does it do not
+[3541.260 --> 3543.380] just space but time?
+[3543.380 --> 3547.140] So they can take all those photographs and say, okay, how far apart in time were these
+[3547.140 --> 3549.300] two photographs taken?
+[3549.300 --> 3551.260] And they can do the same graph.
+[3551.260 --> 3553.660] And yes, they get a relationship with time as well.
+[3553.660 --> 3558.980] The farther apart in time people saw those two scenes, the more different the patterns
+[3558.980 --> 3560.980] in the hippocampus.
+[3560.980 --> 3561.980] Isn't that cool?
+[3562.820 --> 3563.820] Okay.
+[3563.820 --> 3571.420] So that's one example showing that the hippocampus holds some kind of large-scale representation
+[3571.420 --> 3574.780] of not just space but also time.
+[3574.780 --> 3579.860] And so there's a lot of work on how this gives structure to our memories, for distances
+[3579.860 --> 3585.540] over the range of 100 meters and times between, you know, 15 hours and a month.
+[3585.540 --> 3586.540] I'm going to run out of time.
+[3586.540 --> 3589.220] So unless it's a clarification question, I'm going to keep going.
+[3589.220 --> 3590.220] Yeah, okay.
+[3590.220 --> 3597.300] So if you just did it like that, you'd have to do something to pick out time differences that
+[3597.300 --> 3598.300] aren't confounded with space.
+[3598.300 --> 3601.940] You have this big sample of pictures and you take a subset where you balance for it?
+[3601.940 --> 3602.940] Absolutely.
+[3602.940 --> 3604.180] They would have to do that.
+[3604.180 --> 3608.540] I can't actually remember, but they must have done that.
+[3608.540 --> 3612.020] Imperfect as peer review is, you'd never get through peer review if you didn't take care
+[3612.020 --> 3613.020] of that problem.
+[3613.020 --> 3614.020] Okay.
+[3614.020 --> 3616.340] Okay, so that's the first thing.
+[3616.340 --> 3619.220] Here's another even more radical example.
+[3619.220 --> 3620.220] Okay.
+[3620.220 --> 3624.980] So people have shown that grid-like representations, and I'm skipping over most of the details
+[3624.980 --> 3629.780] here to give you the gist because actually the details are a bit complicated.
+[3629.780 --> 3635.540] But they've shown that people seem to use their grid cell system when they are thinking
+[3635.540 --> 3639.500] about conceptual spaces, not just physical spaces.
+[3639.500 --> 3640.500] Okay.
+[3640.500 --> 3646.540] So there's one classic experiment in which these guys taught subjects a conceptual space.
+[3646.540 --> 3649.220] They taught them about different kinds of birds.
+[3649.220 --> 3650.220] Okay.
+[3650.220 --> 3652.540] And these birds differed on two dimensions.
+[3652.540 --> 3656.220] They could vary in neck length or in leg length.
+[3656.220 --> 3659.940] And these things were orthogonally varied, so they made some artificial birds that kind
+[3659.940 --> 3661.460] of filled up that space.
+[3661.460 --> 3662.460] Okay.
+[3662.460 --> 3663.940] And so here's some of the birds.
+[3663.940 --> 3670.300] This one has short legs and, okay, here's one with short legs and a long neck and here's
+[3670.300 --> 3672.380] one with a longer neck and shorter legs.
+[3672.380 --> 3673.380] Wait, let's see.
+[3673.380 --> 3675.380] Longer legs and shorter neck right there.
+[3675.380 --> 3676.380] Okay.
+[3676.380 --> 3678.220] So you've got every possible combination.
+[3678.220 --> 3680.300] They didn't show people a space like that.
+[3680.300 --> 3684.660] They just taught them things about these different birds.
+[3684.660 --> 3688.020] They had to remember their names and various facts about them.
+[3688.020 --> 3693.500] And so the idea is that when people learn about those birds, they mentally construct a 2D
+[3693.500 --> 3699.860] space, because in fact those birds were generated from a 2D space, varying neck length and leg
+[3699.860 --> 3700.860] length.
+[3700.860 --> 3702.900] Okay.
+[3702.900 --> 3710.500] And so then when they scanned subjects, they found essentially a neural signature of a grid
+[3710.500 --> 3715.020] system representing that 2D space.
+[3715.020 --> 3720.260] So even though the grid system presumably evolved to enable us to navigate around in a 2D
+[3720.260 --> 3725.620] space and keep track of where we are in that 2D space, it seems like it's now getting
+[3725.620 --> 3731.540] co-opted and being used for all kinds of representations of 2D spaces, including extremely
+[3731.540 --> 3737.500] abstract, artificial, learned 2D spaces that you weren't even taught explicitly as a 2D
+[3737.500 --> 3740.380] space, you were just taught these birds.
+[3740.380 --> 3742.660] Okay.
+[3742.660 --> 3744.900] So that's pretty amazing.
+[3744.900 --> 3752.700] In another recent study, they had subjects do a role-playing game while in the scanner.
+[3752.700 --> 3756.620] In the role-playing game, they're interacting with virtual characters.
+[3756.620 --> 3762.860] And those virtual characters had different kinds of social power and different affiliations
+[3762.860 --> 3765.100] with other individuals.
+[3765.100 --> 3773.380] So here's another kind of social space that was kind of invented by the experimenters.
+[3773.380 --> 3777.760] And the subjects are playing this game, interacting with other virtual individuals who
+[3777.760 --> 3785.060] vary in social dominance and their affiliation to others.
+[3785.060 --> 3793.660] And they find place-cell-like activity that seems to echo the position of another person in
+[3793.660 --> 3794.660] that social space.
+[3794.660 --> 3796.980] I mean, that's extremely abstract.
+[3796.980 --> 3802.580] And yet again, parts of the navigation spatial system are being co-opted to do this.
+[3802.580 --> 3804.580] I'm not giving you the details on how all this is done.
+[3804.580 --> 3808.540] I'm just telling you that these studies have shown that these systems are being co-opted
+[3808.540 --> 3811.140] for other uses.
+[3811.140 --> 3818.460] Here's another very charming, non-spatial, well, sort of spatial use of place cells.
+[3818.460 --> 3822.380] So bats, it turns out, are extremely social organisms.
+[3822.380 --> 3826.140] They have very sophisticated social structures and they care a lot about each other and who's
+[3826.140 --> 3829.340] related to whom and who's doing what to whom.
+[3829.340 --> 3833.860] And it turns out that there are social place cells in bats.
+[3833.860 --> 3839.860] That is, cells in this bat's brain, if I were a bat, that would be representing your location.
+[3840.860 --> 3845.180] So not the usual thing where my place cells are just saying, where am I?
+[3845.180 --> 3850.060] I'm watching you and my place cells are telling me, where are you?
+[3850.060 --> 3853.580] Something social organisms care a lot about, including bats.
+[3853.580 --> 3859.580] So they have an observer bat here hanging upside down and he's watching this bat fly
+[3859.580 --> 3861.060] over there and back.
+[3861.060 --> 3864.700] And then in this experiment, he subsequently flies that same path.
+[3864.700 --> 3869.220] That's how we know that he's watching that bat, because he has to mimic the bat's path
+[3869.220 --> 3870.780] that he just observed.
+[3870.780 --> 3878.700] But while he's watching that bat fly on that path, what you see is, here's a cell right here.
+[3878.700 --> 3884.780] Here is the path flown by the bat, like out and back.
+[3884.780 --> 3889.500] And this is when the bat is flying out and back himself and this is when the other bat
+[3889.500 --> 3891.540] is flying out and back.
+[3891.540 --> 3896.220] Blurred a little bit, here's the place field for self and the place field for other.
+[3896.220 --> 3897.220] They're not the same.
+[3897.220 --> 3902.100] A given cell doesn't represent the same location when it's me who's there and when it's the
+[3902.100 --> 3905.580] person or bat I'm watching who's there.
+[3905.580 --> 3911.020] But there are place fields in both cases.
+[3911.020 --> 3912.260] Social place cells.
+[3912.260 --> 3913.260] Okay.
+[3913.260 --> 3915.900] I'm going to keep going because otherwise I'm going to run out of time, but I'll hang around
+[3915.900 --> 3917.860] after.
+[3917.860 --> 3918.860] Okay.
+[3918.860 --> 3929.620] So this whole system is used not just for representing social status, or what kind of bird this is in
+[3929.620 --> 3935.660] this abstract bird space, but actually for making decisions, for thinking.
+[3935.660 --> 3943.580] So as rats run in mazes, you can record, we've shown this already, you can record from multiple
+[3943.580 --> 3945.460] hippocampal place cells.
+[3945.460 --> 3949.380] And you guys can imagine that if we were recording from several different hippocampal cells at
+[3949.380 --> 3953.580] the same time, we could read out those cells and make a guess about where the rat is in
+[3953.580 --> 3954.580] its environment.
+[3954.580 --> 3957.540] It's just like MVPA, but done across neurons.
+[3957.540 --> 3960.940] We have a pretty good sense of where the rat is.
+[3960.940 --> 3961.940] Okay.
+[3961.940 --> 3967.140] So now we have a rat navigating around in this maze, and what I'm going to show you: the white
+[3967.140 --> 3970.260] circle is where the rat actually is.
+[3970.260 --> 3974.740] And the little colored thing is telling you where the simultaneous readout from several
+[3974.740 --> 3980.060] place cells in that rat's hippocampus would predict the rat is.
+[3980.060 --> 3982.860] Like, can we tell where the rat is by looking at its place cells?
+[3982.860 --> 3983.860] Okay.
+[3983.860 --> 3984.860] So right here, they're in the same place.
+[3984.860 --> 3985.860] It makes sense.
+[3985.860 --> 3987.460] The rat is right there and we're reading it out.
+[3987.460 --> 3988.460] Okay.
+[3988.460 --> 3989.660] So far so good.
+[3989.660 --> 3995.460] But now what we're going to do is watch what that place cell location does as the rat moves
+[3995.460 --> 4000.420] around in his environment and makes decisions about where to go next.
+[4000.420 --> 4001.420] Okay.
+[4001.420 --> 4002.420] Okay.
+[4003.020 --> 4006.300] So what we're going to see is the rat is going to come up to an intersection of the maze.
+[4006.300 --> 4008.380] I think it's right here.
+[4008.380 --> 4010.220] And he's going to decide.
+[4010.220 --> 4011.220] Am I going to go this way?
+[4011.220 --> 4013.100] Am I going to go that way?
+[4013.100 --> 4016.220] And as the rat stays there, deciding which way to go,
+[4016.220 --> 4020.620] and the white dot stays there as he sits there thinking, huh, should I do this?
+[4020.620 --> 4021.820] Should I do that?
+[4021.820 --> 4024.180] You could call it neural deliberation.
+[4024.180 --> 4028.100] We will see what his place cell activity shows you.
+[4028.100 --> 4029.620] Okay.
+[4029.620 --> 4031.900] So here we go.
+[4031.900 --> 4032.900] Rat starts there.
+[4032.900 --> 4033.900] Whoops.
+[4033.900 --> 4035.900] How do I play this here?
+[4035.900 --> 4036.900] Okay.
+[4036.900 --> 4039.500] So the rat is heading up there, and so are his place cells.
+[4039.500 --> 4041.540] He comes up to the intersection.
+[4041.540 --> 4042.700] He stays in one place.
+[4042.700 --> 4044.300] But look what his place cells are doing.
+[4044.300 --> 4046.620] Should I go over there?
+[4046.620 --> 4048.580] I'm just interpreting what this means.
+[4048.580 --> 4051.300] But it sure looks like neural deliberation to me.
+[4051.300 --> 4053.700] And that's what he decided.
+[4053.700 --> 4056.220] Everybody get what we just saw?
+[4056.220 --> 4058.420] While he's standing there, he's in one place.
+[4058.420 --> 4060.940] But he's clearly deciding where to go next.
+[4060.940 --> 4066.460] And while he's deciding, those place cells are essentially, apparently, running simulations
+[4066.460 --> 4068.940] of where he might go next.
+[4068.940 --> 4070.700] Okay.
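The "MVPA, but done across neurons" idea above is easy to make concrete. Here is a minimal sketch with made-up Gaussian place fields and a standard Poisson maximum-likelihood readout; none of this is from the actual experiment, it just shows the kind of computation a place-cell decoder does.

    import numpy as np

    rng = np.random.default_rng(1)
    n_cells, n_pos = 50, 200

    # Made-up tuning curves: expected spike counts per time bin for each
    # cell at each position on a linear track from 0 to 1.
    centers = rng.uniform(0, 1, n_cells)
    pos = np.linspace(0, 1, n_pos)
    tuning = 0.1 + 5.0 * np.exp(-((pos[None, :] - centers[:, None]) ** 2) / (2 * 0.05**2))

    # Simulate one time bin of spiking with the rat actually at 0.7.
    true_idx = np.argmin(np.abs(pos - 0.7))
    spikes = rng.poisson(tuning[:, true_idx])

    # Poisson log-likelihood of that spike vector at every candidate
    # position; the argmax is the decoded position.
    log_like = spikes @ np.log(tuning) - tuning.sum(axis=0)
    print(pos[np.argmax(log_like)])   # close to 0.7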
+[4070.700 --> 4074.620] So we started with this big long list of things you need to know to navigate around in the
+[4074.620 --> 4075.620] world.
+[4075.620 --> 4078.660] And the neural basis of all this is really not understood yet.
+[4078.660 --> 4084.260] But I've shown you what I think are a bunch of tantalizing snippets, which lead to the
+[4084.260 --> 4088.820] idea that our best current guess about the kind of neural locus of these things, which
+[4088.820 --> 4094.300] is very far from an actual understanding of how they work, is that for the perception of
+[4094.300 --> 4099.140] the layout of space around us, the PPA and the occipital place area are very involved in
+[4099.140 --> 4100.660] that.
+[4100.660 --> 4104.700] Also in saying, for an unfamiliar place, what kind of place is this?
+[4104.700 --> 4108.180] I didn't show you those data, but you can, in fact, decode whether you're looking at
+[4108.180 --> 4113.300] a scene or a beach by looking at the pattern of response in the PPA.
+[4113.300 --> 4117.580] We talked about the idea that the retrosplenial cortex may be involved in recognizing
+[4117.580 --> 4118.580] familiar locations.
+[4118.580 --> 4121.900] That's a bit of a question mark.
+[4121.900 --> 4126.100] Then the idea that your map of the world is represented in your hippocampus by way of
+[4126.100 --> 4131.380] place cells, which also say where you are in that world, and then your heading direction.
+[4131.380 --> 4134.340] In humans, I didn't give you all the evidence for this, but in humans, there's quite a bit
+[4134.340 --> 4137.820] of evidence that retrosplenial cortex is very involved in heading direction.
+[4137.820 --> 4139.140] I guess I did give you evidence:
+[4139.140 --> 4143.700] patients who have had damage there and can recognize places, but not know how they're
+[4143.700 --> 4145.500] oriented there.
+[4145.500 --> 4149.540] That planning routes around boundaries in your environment involves the occipital
+[4149.540 --> 4155.340] place area and the parahippocampal place area, and that this business of reorientation seems
+[4155.340 --> 4162.060] to particularly involve heading direction cells in humans, most likely in retrosplenial
+[4162.060 --> 4163.060] cortex.
+[4163.060 --> 4165.060] So, you don't need to memorize all that.
+[4165.060 --> 4167.260] I mean, I don't care that much about the locations.
+[4167.260 --> 4171.180] What I want you guys to understand is, what are these problems that are involved in navigation,
+[4171.180 --> 4175.740] and what kinds of things can we learn with different kinds of behavioral and neural measures.
+[4175.740 --> 4179.660] You may have noticed in the last couple of lectures that I presented lots of behavioral data,
+[4179.660 --> 4184.820] because actually, so far, the richest insights about how the system actually works still come,
+[4184.820 --> 4188.260] or many of the rich ones come, from behavioral data.
+[4188.260 --> 4190.660] Okay, quiz is in two minutes.
+[4190.660 --> 4194.660] Does anybody want to ask me a question before the quiz?
+[4194.660 --> 4195.660] Yeah.
+[4196.660 --> 4197.660] Yeah.
+[4197.660 --> 4202.660] How do you know the decoded prediction is more than just noise?
+[4202.660 --> 4208.140] Yeah, so you have to do lots of controls to work that out, and I didn't show you any
+[4208.140 --> 4210.980] of the details of the data.
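"Lots of controls" here usually means a shuffle test: decode from the real spike vector, then from many shuffled versions that destroy the cell-to-field correspondence, and ask whether the real decoding beats that null distribution. A hedged sketch, reusing the toy Poisson decoder from the snippet above; the function names are mine, not from any published analysis.

    import numpy as np

    def decode(spikes, tuning, pos):
        # Maximum-likelihood position under a Poisson model.
        log_like = spikes @ np.log(tuning) - tuning.sum(axis=0)
        return pos[np.argmax(log_like)]

    def shuffle_test(spikes, tuning, pos, true_pos, n_shuffles=1000, seed=2):
        rng = np.random.default_rng(seed)
        observed = abs(decode(spikes, tuning, pos) - true_pos)
        # Null: permute which cell produced which spike count, which keeps
        # the overall activity but scrambles the place-field structure.
        null = np.array([
            abs(decode(rng.permutation(spikes), tuning, pos) - true_pos)
            for _ in range(n_shuffles)
        ])
        p_value = (null <= observed).mean()   # shuffles decoding at least as well
        return observed, p_value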
+[4210.980 --> 4215.660] But yeah, these guys are pretty careful, and there are many different ways in which people
+[4215.660 --> 4224.860] are watching hippocampal neurons and decoding trajectories from hippocampal neurons.
+[4224.860 --> 4229.260] You may have heard about replay, which is like a big thing in this department; the Tonegawa
+[4229.260 --> 4235.860] and Wilson labs study this, where you have an animal moving around on one trajectory during the
+[4235.860 --> 4240.180] day, and then you record from those neurons at night, and you see replay of the trajectories
+[4240.180 --> 4242.740] that the rodent went through in the previous day.
+[4242.740 --> 4246.180] And so there you have to be really careful to say, okay, there's a lot of data and a lot
+[4246.180 --> 4248.140] of noise, and is this really more than the noise?
+[4248.140 --> 4251.380] And it is, but it takes a lot of statistical work to show that.
+[4251.380 --> 4252.380] Yeah.
+[4252.380 --> 4255.380] So, say there's a place that I know.
+[4255.380 --> 4262.380] Is it the same neurons representing it whenever I go back to that
+[4262.380 --> 4265.300] environment, I assume?
+[4265.300 --> 4266.300] Yeah?
+[4266.300 --> 4267.300] Yeah.
+[4267.300 --> 4272.900] But then you can't really have enough neurons for all of those places.
+[4272.900 --> 4273.900] It's a good question.
+[4273.900 --> 4274.900] How do we have enough neurons?
+[4274.900 --> 4278.380] Yeah, especially for some place we go every six months, are they sitting around waiting
+[4278.380 --> 4279.380] for us to go back there?
+[4279.740 --> 4283.620] There's some recycling of neurons across very different locations.
+[4283.620 --> 4287.460] So within that location, they'll be consistent, but yes, you do recycle.
+[4287.460 --> 4291.460] So the same neuron will have one place field in this environment, and it may or may not
+[4291.460 --> 4293.500] have a place field in another environment.
+[4293.500 --> 4294.100] That's a good question.

diff --git a/transcript/allocentric_pw3FZ3xOBVo.txt b/transcript/allocentric_pw3FZ3xOBVo.txt
new file mode 100644
index 0000000000000000000000000000000000000000..e007fad3328c2b4ac7b9d4bbca1a7d91c84463a7
--- /dev/null
+++ b/transcript/allocentric_pw3FZ3xOBVo.txt
@@ -0,0 +1,75 @@
+[0.000 --> 7.000] This is a perfectly normal elevator ride.
+[7.000 --> 12.000] This elevator ride is incredibly uncomfortable.
+[12.000 --> 15.000] What if you could measure these awkward moments?
+[15.000 --> 18.000] What if you could transcribe them like a conversation?
+[18.000 --> 22.000] In the 1960s, an anthropologist did just that.
+[22.000 --> 27.000] Edward Twitchell Hall is known for conceptualizing the personal space bubble.
+[27.000 --> 34.000] He also created a whole system of notation to record how people navigate shared space.
+[34.000 --> 40.000] Hall had been around the world and taught thousands of foreign service personnel how to communicate in different cultures.
+[40.000 --> 44.000] He believed culture and communication were inseparable.
+[44.000 --> 49.000] That communication was as present in silence as in speech.
+[49.000 --> 54.000] He once wrote, "Man has developed his territoriality to an almost unbelievable extent.
+[54.000 --> 57.000] Yet we treat space somewhat as we treat sex.
+[57.000 --> 59.000] It is there, but we don't talk about it."
+[59.000 --> 62.000] Hall called his study proxemics.
+[62.000 --> 70.000] And it dissected personal interaction with eight key modes of analysis that each had their own code for recording.
+[70.000 --> 73.000] One, posture and sex.
+[73.000 --> 77.000] These drawings use simple lines to show if it was a man or a woman.
+[78.000 --> 85.000] And if they were standing, sitting, or lying down.
+[85.000 --> 90.000] Every symbol got a number, too, so each position could be clear in an instant.
+[90.000 --> 93.000] Two, how people interacted.
+[93.000 --> 98.000] Sociofugal relationships preserved an individual's privacy.
+[98.000 --> 101.000] Sociopetal ones encouraged interaction.
+[102.000 --> 106.000] Drawn as if from above, he could show if people were facing each other or not.
+[106.000 --> 109.000] Here's a couple side to side.
+[109.000 --> 113.000] Here's one back to back.
+[113.000 --> 117.000] And they could measure the effects of space on interaction.
+[117.000 --> 119.000] Three and four.
+[119.000 --> 121.000] Touch and space.
+[121.000 --> 124.000] He built a grid to describe every touch.
+[124.000 --> 127.000] Zero-zero was closest, with a caress.
+[127.000 --> 129.000] Six meant no contact at all.
+[129.000 --> 132.000] And in between was the nuance of human interaction.
+[132.000 --> 134.000] 22 might be a hug.
+[134.000 --> 136.000] 33, a high five.
+[136.000 --> 137.000] Five.
+[137.000 --> 138.000] A visual code.
+[138.000 --> 141.000] Even eye contact could be quantified.
+[141.000 --> 146.000] From the center of one retina to the center of another, it could be dazzlingly direct.
+[146.000 --> 150.000] Or it could be the peripheral vision that dodged real connection.
+[150.000 --> 151.000] Six.
+[151.000 --> 152.000] Body heat.
+[152.000 --> 156.000] Body heat could be recorded, too, as another way of measuring connection.
+[156.000 --> 161.000] Hall quoted one subject who said she could feel her dance partner's stomach heat up.
+[161.000 --> 162.000] Seven.
+[162.000 --> 163.000] Smell.
+[163.000 --> 166.000] He even monitored smell and breath.
+[166.000 --> 168.000] Giving it its own code.
+[168.000 --> 172.000] DBO means differentiated body odor.
+[172.000 --> 175.000] A wafting smell could be as loud as a word.
+[175.000 --> 178.000] This is the section about smell, isn't it?
+[178.000 --> 179.000] Shhh.
+[179.000 --> 180.000] Eight.
+[180.000 --> 181.000] Loudness.
+[181.000 --> 184.000] Now if somebody said, "Jeremy, I got you the documents,"
+[184.000 --> 187.000] it could be coded on a scale to measure the nuance.
+[187.000 --> 191.000] Jeremy, I got you the documents.
+[191.000 --> 197.000] Now observers could describe interactions, like a meeting, without needing to use words.
+[197.000 --> 203.000] Instead, they could show a man sitting in a group, touching no one, with indirect eye contact,
+[203.000 --> 205.000] no heat or smell, and a soft voice.
+[205.000 --> 213.000] And together, all of these precise measurements helped discover the personal space bubble we all know.
+[213.000 --> 219.000] Hall refined it in other papers and books, but his personal space bubble is the one we know well,
+[219.000 --> 222.000] as he defined it in his book, The Hidden Dimension.
+[222.000 --> 227.000] Surrounding a person, he found a one-foot bubble, split in two, for intimate space.
+[227.000 --> 230.000] A bubble of personal space followed, out to four feet.
+[230.000 --> 236.000] Beyond that was the social space of four to ten feet, and public space beyond that.
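Those zone boundaries are concrete enough to write down directly. A toy classifier using the distances as stated in this video; note that Hall's published figures differ slightly (for instance, intimate space out to about 18 inches).

    def proxemic_zone(distance_ft: float) -> str:
        """Hall's personal-space zones, using the boundaries quoted above."""
        if distance_ft < 1.0:
            return "intimate"
        if distance_ft < 4.0:
            return "personal"
        if distance_ft < 10.0:
            return "social"
        return "public"

    # Two strangers in an elevator, roughly 1.5 feet apart, are forced
    # into each other's personal zone, which is why it feels awkward:
    print(proxemic_zone(1.5))   # -> "personal"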
+[236.000 --> 241.000] It became how we think of space, just because one person bothered to observe it.
+[241.000 --> 246.000] Today, we still use proxemics to understand space and people.
+[246.000 --> 254.000] It's guided us, not as a rulebook, but as a theory, for everyone from theater directors to intercultural communicators to video game designers.
+[254.000 --> 256.000] He's nice, but a bit of a close talker.
+[256.000 --> 257.000] Oh, what?
+[257.000 --> 259.000] How long you folks in town?
+[259.000 --> 263.000] It won't make the elevator ride more comfortable.
+[263.000 --> 267.000] But now, at least you know how to describe it.
+[271.000 --> 279.000] So Hall had a lot of different inspirations for proxemics, but I wanted to talk about one that was kind of unexpected: an ornithologist.
+[279.000 --> 285.000] He was inspired by H. E. Howard, who wrote about territory in bird life.

diff --git a/transcript/allocentric_q7_TOQW8Jcg.txt b/transcript/allocentric_q7_TOQW8Jcg.txt
new file mode 100644
index 0000000000000000000000000000000000000000..db434513fc3236968fb0e5dffe8e8053fccc74cb
--- /dev/null
+++ b/transcript/allocentric_q7_TOQW8Jcg.txt
@@ -0,0 +1,4 @@
+[30.000 --> 128.220] [no intelligible speech; the transcriber emitted only repeated garbage characters]

diff --git a/transcript/allocentric_qDXo83OtzgE.txt b/transcript/allocentric_qDXo83OtzgE.txt
new file mode 100644
index 0000000000000000000000000000000000000000..d28a4002228cd2e32f04b590e5fe26e5803d09d9
--- /dev/null
+++ b/transcript/allocentric_qDXo83OtzgE.txt
@@ -0,0 +1,274 @@
+[0.000 --> 10.000] There's no such thing as completely normal.
+[10.000 --> 15.000] I mean, there's some people with holiday-itis,
+[15.000 --> 20.000] but I'm not sure if I'm right or wrong.
+[26.000 --> 32.000] I mean, there's some people with holiday-itis.
+[32.000 --> 37.000] That would mean somebody who's addicted to holidays,
+[37.000 --> 44.000] birthdays, death dates, and other anniversaries. Impersonitis:
+[44.000 --> 49.000] somebody who is addicted to different voice impersonations,
+[49.000 --> 52.000] and they cannot find their home voice,
+[52.000 --> 58.000] but people may be stuck with it.
+[58.000 --> 63.000] I mean, there's some people with disabilities who want to try to be normal and to fit in.
+[63.000 --> 66.000] But being a person with Asperger's,
+[66.000 --> 76.000] they may have some very clever ideas that may go unheard of in the normal world.
+[77.000 --> 79.000] Well, in terms of growing up,
+[79.000 --> 82.000] I preferred to mostly play on my own.
+[82.000 --> 84.000] When I was at kindergarten,
+[84.000 --> 87.000] I preferred to be rocking on the rocking horse,
+[87.000 --> 95.000] and the teachers were trying to encourage me to interact with the other children.
+[95.000 --> 98.000] The horse then was taken away,
+[98.000 --> 101.000] but I preferred, even without the horse,
+[101.000 --> 104.000] to sort of play or do things on my own.
+[119.000 --> 123.000] Autism affects my life in several ways.
+[123.000 --> 127.000] I have to sort of know, on a repetitious level,
+[127.000 --> 131.000] like, how to do things accordingly.
+[131.000 --> 137.000] Change is very difficult in a routine.
+[137.000 --> 140.000] It's hard to interact with people,
+[140.000 --> 144.000] even though I'd like to get to know people better.
+[144.000 --> 151.000] I try to listen very hard and try to become interested and gradually be friends.
+[151.000 --> 153.000] Sometimes it does work,
+[154.000 --> 160.000] but sometimes I know the subject matter is sort of limited with normal individuals.
+[160.000 --> 167.000] I've had cases where I felt turned down, but silently.
+[167.000 --> 170.000] I sometimes feel disappointed and hurt,
+[170.000 --> 177.000] then I retreat and go back into my own indifferent world.
+[177.000 --> 181.000] I may daydream and sometimes wish for certain fantasies,
+[181.000 --> 188.000] things I sort of make up as a way to try to hide from reality.
+[188.000 --> 193.000] Something to make me happy and to absorb into my head,
+[193.000 --> 198.000] such as music and the arts.
+[203.000 --> 207.000] I tell you, I'm not the Rain Man.
+[212.000 --> 217.000] I'm not the Rain Man.
+[218.000 --> 221.000] If I was scared with being,
+[221.000 --> 225.000] shoot pop baby, baby, pop, pop.
+[225.000 --> 232.000] Shoot me down baby, baby, pop, pop.
+[233.000 --> 237.000] Baby, baby, pop, pop.
+[237.000 --> 242.000] Baby, baby, baby, pop, pop.
+[249.000 --> 252.000] Boy, Jordan, that's really great how you play the piano.
+[252.000 --> 253.000] Thank you very much.
+[253.000 --> 255.000] You work very hard playing piano.
+[255.000 --> 256.000] Thank you.
+[256.000 --> 257.000] Am I a lousy girlfriend?
+[257.000 --> 258.000] Not at all.
+[258.000 --> 259.000] Don't put yourself down.
+[259.000 --> 261.000] You're the best, Jordan.
+[261.000 --> 262.000] You are.
+[267.000 --> 271.000] It is very special to really have Tony around,
+[271.000 --> 273.000] and at times I could not have gotten by,
+[273.000 --> 280.000] I know, without Tony; she's sort of brought reality into me.
+[280.000 --> 281.000] Jordan?
+[281.000 --> 284.000] She does bring structure into my life.
+[284.000 --> 288.000] I mean, cooperativeness is a very important point
+[288.000 --> 294.000] that I learned from her in dealing with relationships, in order to make it work.
+[294.000 --> 297.000] Jordan, can you come here?
+[297.000 --> 298.000] I love you.
+[298.000 --> 299.000] Love you too.
+[299.000 --> 300.000] Okay, Jordan?
+[300.000 --> 301.000] Yeah.
+[301.000 --> 302.000] I love you.
+[302.000 --> 303.000] Love you too.
+[303.000 --> 304.000] Jordan?
+[304.000 --> 305.000] I know.
+[305.000 --> 306.000] I love you.
+[306.000 --> 307.000] Jordan, when you come here?
+[307.000 --> 308.000] Yeah.
+[308.000 --> 309.000] I love you.
+[309.000 --> 310.000] I know your game.
+[310.000 --> 311.000] Come on.
+[311.000 --> 312.000] Jordan!
+[312.000 --> 313.000] I heard.
+[313.000 --> 314.000] Come on.
+[314.000 --> 315.000] Let's go on.
+[315.000 --> 316.000] I love you.
+[316.000 --> 320.000] Tony has Tourette syndrome, which involves her twitches,
+[320.000 --> 324.000] and she does take medications for them.
+[324.000 --> 328.000] She also has a learning disability.
+[328.000 --> 333.000] I learned, of course, to accept this thing and to accept who she herself is.
+[333.000 --> 337.000] I got sent to my room and I lost my TV too,
+[337.000 --> 340.000] so I couldn't watch Little House on the Prairie.
+[340.000 --> 341.000] So it's too bad.
+[341.000 --> 343.000] You really were a brat, I see that.
+[343.000 --> 346.000] You were a brat, and no sleep with me for two days.
+[346.000 --> 349.000] Yes, you mean to wake up for those two days, I guess?
+[349.000 --> 354.000] I try to find things that will help me with Tony.
+[354.000 --> 358.000] I mean, relief for both of us, and for me, as an example,
+[358.000 --> 363.000] the shelter is a place that does give Tony, and me, a relief.
+[363.000 --> 365.000] Mommy's here.
+[365.000 --> 367.000] Oh, yes.
+[367.000 --> 368.000] How about a little kiss?
+[368.000 --> 370.000] You, Bobby?
+[370.000 --> 372.000] Oh, here's Rochelle.
+[372.000 --> 374.000] He's a nice cat.
+[374.000 --> 376.000] She likes attention.
+[376.000 --> 378.000] Let me, Rochelle.
+[378.000 --> 383.000] Oh, I noticed when I've had another cat, she gets jealous.
+[383.000 --> 385.000] Well, we all need to know that.
+[385.000 --> 387.000] Very few get along with other breeds.
+[387.000 --> 389.000] Oh, yeah.
+[389.000 --> 392.000] These are kisses.
+[392.000 --> 394.000] On the lips.
+[394.000 --> 395.000] Yeah.
+[395.000 --> 399.000] Reality is an existence, and it is not fictional.
+[399.000 --> 402.000] Maybe a father and mother could have become one of those cats.
+[402.000 --> 404.000] No, no, no, no, no, no, no.
+[404.000 --> 405.000] I know.
+[405.000 --> 408.000] My mom was in the birds and my dad was in the woods.
+[408.000 --> 410.000] What it is, is format.
+[410.000 --> 412.000] Every time, Jordan, when I see it, I get what I want.
+[412.000 --> 415.000] It's hard to stay in reality.
+[415.000 --> 419.000] I think, watching over us.
+[419.000 --> 424.000] Sometimes I can make plans, but again, these promises and plans
+[424.000 --> 427.000] always go into chaos.
+[428.000 --> 431.000] It's like, I think, according to Nietzsche,
+[431.000 --> 435.000] that you think life has one circle, but no,
+[435.000 --> 438.000] there are circles added to extra added circles,
+[438.000 --> 440.000] which create chaos.
+[440.000 --> 444.000] And then you sort of look like you're drowning.
+[444.000 --> 445.000] Oh, yeah.
+[445.000 --> 449.000] I think you could have been the main con in the mind.
+[449.000 --> 451.000] Mom.
+[451.000 --> 453.000] You're like, you're like, you're not.
+[453.000 --> 457.000] Cream cheese or this cheese.
+[457.000 --> 459.000] Is that a cheesy person?
+[459.000 --> 465.000] Sometimes I just tread water like that, and then come back to reality
+[465.000 --> 469.000] and face it, instead of hiding from the present.
+[469.000 --> 474.000] You can't go back to the past or do the things that you enjoyed so much.
+[474.000 --> 478.000] You just have to keep going forward, always forward.
+[479.000 --> 484.000] So, I mean, so I already come to go over.
+[484.000 --> 487.000] Wow, I'm getting down.
+[493.000 --> 497.000] So in order to make ends meet, we have to put some things back;
+[497.000 --> 502.000] these are some of the things we may have to do without.
+[502.000 --> 503.000] Sorry.
+[503.000 --> 504.000] No, it's okay.
+[504.000 --> 505.000] It's okay.
+[505.000 --> 507.000] This goes back.
+[508.000 --> 510.000] It goes back.
+[510.000 --> 513.000] Let me see.
+[513.000 --> 515.000] And yet, but you can do it out.
+[515.000 --> 516.000] Maybe one of the lots.
+[516.000 --> 518.000] Maybe one of the things.
+[518.000 --> 519.000] Sorry.
+[519.000 --> 520.000] No, no, no.
+[520.000 --> 523.000] People would still say that we make it my choice,
+[523.000 --> 525.000] but then we'll make it to the chin.
+[525.000 --> 526.000] We have other parts.
+[526.000 --> 527.000] We have other parts.
+[527.000 --> 528.000] That's fine.
+[528.000 --> 530.000] One, we could take one out.
+[530.000 --> 531.000] That's fine.
+[531.000 --> 532.000] So that, no choice.
+[532.000 --> 533.000] Yes.
+[533.000 --> 535.000] And then that leaves us with
+[535.000 --> 537.000] what we can afford.
+[537.000 --> 538.000] Okay.
+[538.000 --> 539.000] That's a change.
+[539.000 --> 541.000] Where did everybody go?
+[541.000 --> 542.000] It's okay, left.
+[542.000 --> 543.000] Oh, yes, it is.
+[543.000 --> 544.000] That's the only, on one bag.
+[544.000 --> 545.000] Thank you.
+[545.000 --> 546.000] Thank you very much.
+[553.000 --> 555.000] With these types of disorders, I mean,
+[555.000 --> 558.000] and with that understanding of what she has,
+[558.000 --> 561.000] it has brought us close together.
+[561.000 --> 564.000] We learned, just like in Rudolph the Red-Nosed Reindeer,
+[564.000 --> 566.000] not to run away from our troubles,
+[566.000 --> 571.000] which can have a bad effect on a relationship.
+[571.000 --> 576.000] I know why I need to miss five in the morning.
+[576.000 --> 578.000] No, I'm not.
+[578.000 --> 580.000] Well, everybody does have something.
+[580.000 --> 583.000] Everybody's not completely normal,
+[583.000 --> 587.000] I mean, in their ways of life.
+[587.000 --> 592.000] Disabilities are secondary.
+[592.000 --> 595.000] Well, we are people first.
+[595.000 --> 597.000] You don't say a disabled person.
+[597.000 --> 599.000] You say a person with a disability.
+[599.000 --> 600.000] Right.
+[600.000 --> 602.000] All in there, someone there.
+[602.000 --> 605.000] Just like the song Imagine.
+[605.000 --> 607.000] It's like, "You may say I'm a dreamer,
+[607.000 --> 609.000] but I'm not the only one.
+[609.000 --> 611.000] I hope someday you'll join us,
+[611.000 --> 613.000] and the world will live as one,"
+[613.000 --> 618.000] which is kind of, anyway, my hope for the future for everybody,
+[618.000 --> 621.000] so that people with disabilities can be treated equally.
+[621.000 --> 624.000] Just like what we are.
+[639.000 --> 642.000] It's nice to hear the kids feeling it.
+[642.000 --> 645.000] Yes, the big band sounds, yeah.
+[645.000 --> 647.000] Yeah.
+[647.000 --> 648.000] Sure.
+[648.000 --> 651.000] It reminds me of you.
+[651.000 --> 654.000] Sure.
+[654.000 --> 656.000] One of you.
+[656.000 --> 658.000] Sure.
+[659.000 --> 662.000] So happy to see you.
+[662.000 --> 664.000] You know, happy, you know.
+[664.000 --> 666.000] So happy to see you.
+[666.000 --> 668.000] She's so happy to see.
+[668.000 --> 670.000] So happy to see.
+[670.000 --> 672.000] So happy to see you.
+[672.000 --> 673.000] Yeah.
+[673.000 --> 675.000] Come on.
+[675.000 --> 677.000] Let's go.
+[677.000 --> 679.000] Thank you.
+[679.000 --> 681.000] Oh, yeah.
+[681.000 --> 683.000] Thank you.
+[683.000 --> 695.000] Thank you. [repeated]
+[695.000 --> 744.000] Thank you. [repeated over the closing music to the end of the recording]

diff --git a/transcript/allocentric_qJOXoxAcB3E.txt b/transcript/allocentric_qJOXoxAcB3E.txt
new file mode 100644
index 0000000000000000000000000000000000000000..7debcd13a8e4b83bb9d475d33b7e10b08211ed92
--- /dev/null
+++ b/transcript/allocentric_qJOXoxAcB3E.txt
@@ -0,0 +1,50 @@
+[60.000 --> 62.000] I'm going to have a look at the
+[62.000 --> 117.160] the [repeated; no further intelligible speech was transcribed]

diff --git a/transcript/allocentric_qYYTOnevfrk.txt b/transcript/allocentric_qYYTOnevfrk.txt
new file mode 100644
index 0000000000000000000000000000000000000000..1c2e4921021fcf6c7159519225a523eaedb9edae
--- /dev/null
+++ b/transcript/allocentric_qYYTOnevfrk.txt
@@ -0,0 +1,1562 @@
+[0.000 --> 10.000] So I
guess I have one main topic, and then I also wrote something else on the board that we could talk about; it's kind of optional.
+[10.000 --> 17.000] Lucas does machine learning build specifically hard to help super time lowest to machine learning applications.
+[17.000 --> 25.000] So the first is kind of a self-contained question, of, well, I'll just go ahead and stand up and start.
+[25.000 --> 27.000] Does that sound better?
+[27.000 --> 39.000] So I started with the question of, I wanted to understand 3D orientation better.
+[39.000 --> 52.000] And I think a useful question for understanding 3D orientation is: if you just have a simple continuous attractor network that's representing 3D orientation, representing and updating it,
+[52.000 --> 63.000] what would it look like? In the sense that, like, we know that a 2D continuous attractor, like one representing 2D location, is just trivial.
+[63.000 --> 66.000] It's just a sheet of cells with bumps moving over it.
+[66.000 --> 79.000] The 3D location one, or at least a naive one, would just be a 3D volume of cells with the bump moving through it. The equivalent version for 3D orientation,
+[79.000 --> 93.000] I didn't know what it would look like, and I felt that I would come away smarter if I did know what it looks like, and it would just in general be something, maybe a tool, that we reach out and need to use somewhere in one of the places where we represent orientation.
+[93.000 --> 96.000] So we have a few of them.
+[96.000 --> 106.000] So, to kind of review from previous presentations and make sure I'm not blocking anything cool:
+[106.000 --> 114.000] we have talked about how 3D orientation can be visualized, and there are multiple ways to do this.
+[114.000 --> 124.000] There are multiple ways to visualize 3D orientation as a point on the surface of a sphere combined with a point on a ring.
+[124.000 --> 129.000] So a 2D value plus a 1D value.
+[129.000 --> 143.000] For example, if you think of a rat running over the surface of the sphere, then this dot is just where the rat is, and this dot is which way it is pointed.
+[143.000 --> 154.000] You could also, we haven't talked about this much, at least not in this form, but you could also think of this point on the sphere as being the direction of gravity in the rat's reference frame, the agent's reference frame.
+[154.000 --> 167.000] And this being, we haven't really put this language on it, but strictly speaking, this would be relative to that gravity vector: like, if gravity is that way, what direction is north relative to the gravity vector?
+[167.000 --> 176.000] And the ring, strictly speaking, would be representing that. We basically talked about that, but not in that language.
+[176.000 --> 187.000] But basically the idea that orientation can be represented as a point on a sphere, the direction of gravity, and a point on a ring, say head direction cells, is something we talked about.
+[187.000 --> 198.000] And it's a valid way, but it doesn't tell you much about what a continuous attractor would look like. It doesn't tell you much about what that volume of cells would be.
+[198.000 --> 216.000] So a second way to think of orientation space is you can imagine it as being the volume of a sphere, where the sphere's radius is 180 degrees, or pi.
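A quick way to make this solid-ball picture concrete in code. This sketch is an editor's addition, not from the talk: it maps a rotation given as a unit quaternion (w, x, y, z) onto a point in the ball of radius pi, with the direction of the point being the rotation axis and its distance from the center being the rotation angle, the axis-times-amount idea the speaker recaps next. The function name is made up for illustration.

    import numpy as np

    def rotation_to_ball_point(q):
        """Map a unit quaternion (w, x, y, z) to a point in the solid ball
        of radius pi (axis-angle form)."""
        w, xyz = q[0], np.asarray(q[1:], dtype=float)
        # q and -q are the same rotation; flipping the sign when w < 0
        # keeps the angle in [0, pi], i.e. inside the ball.
        if w < 0:
            w, xyz = -w, -xyz
        angle = 2 * np.arccos(np.clip(w, -1.0, 1.0))
        norm = np.linalg.norm(xyz)
        axis = xyz / norm if norm > 1e-12 else np.array([1.0, 0.0, 0.0])
        return angle * axis

    # A 180-degree turn sits exactly on the surface (w = 0), where the sign
    # flip does not apply, so the two quaternion encodings of that single
    # rotation land on antipodal surface points, which the talk identifies:
    print(rotation_to_ball_point([0.0, 0.0, 0.0, 1.0]))    # [0, 0,  pi]
    print(rotation_to_ball_point([0.0, 0.0, 0.0, -1.0]))   # [0, 0, -pi]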
+[217.000 --> 234.000] And the way this comes about, and this really has, I talked last week or the week before about how angular velocity can be thought of as, like, a rotation direction and an amount.
+[234.000 --> 243.000] So the idea of taking a vector pointing in the direction of the axis of rotation and having its length be the rotation amount.
+[243.000 --> 257.000] You can imagine all of orientation space being a sphere around a point, where that point is some reference location.
+[257.000 --> 261.000] Sorry, whoa, reference orientation.
+[261.000 --> 264.000] I'm not going to redo it.
+[264.000 --> 267.000] So, don't worry.
+[267.000 --> 277.000] Where the center of the sphere, let's say you have like a default orientation, like the rat on top of the sphere facing north is its default orientation; that's here at the center.
+[277.000 --> 288.000] All of the other orientations of the rat, of the agent, can be reached by rotating about some axis by some amount, up to 180 degrees.
+[288.000 --> 293.000] So you can read this off: the center of this is like a kind of default orientation.
+[293.000 --> 301.000] The direction of the vector is the rotation axis of the agent; the length of it is how much it's rotated.
+[301.000 --> 310.000] And the interesting thing about this sphere is that all opposite points on the surface of it are the same.
+[310.000 --> 323.000] Rotating 180 degrees on this axis is equivalent to rotating 180 degrees on the opposite axis; it's like rotating 180 degrees one direction versus the other.
+[323.000 --> 337.000] Yeah, which is to say, well, I'll just go ahead and say it now, which is to say, if you were to distribute cells throughout this volume,
+[337.000 --> 349.000] then you can see that cells are near each other, I mean, in the way that they're near each other within the volume, but also the cells that are over here are next to the cells over here.
+[349.000 --> 352.000] Is it possible to distribute them that way?
+[352.000 --> 359.000] Yeah, it's hard to visualize, but yeah, you can set up connections between cells.
+[359.000 --> 364.000] And then you can set up a network of different types of cells that are not physical, right? Right, you can't place them.
+[364.000 --> 369.000] Well, I'm trying to imagine, like, the bump in a continuous attractor network.
+[369.000 --> 372.000] It's not just connections; the bump moves physically.
+[372.000 --> 376.000] Yeah, I'll show animations of that happening.
+[376.000 --> 381.000] Well, do you think it's possible to do this with contiguous cells like this? It's more of a matter of...
+[381.000 --> 385.000] I think it's possible to do it with actual cells. I mean, I can show you.
+[385.000 --> 391.000] Oh, okay, let me understand the question. You're saying, can you lay out cells?
+[391.000 --> 397.000] You were talking earlier about how cells can be arranged within a volume like that.
+[397.000 --> 403.000] And the question is, you know, I can imagine you can do it like the grid cells.
+[403.000 --> 408.000] You can physically have this continuous sheet of cells with bumps moving between them.
+[408.000 --> 415.000] And it keeps repeating itself and it all works. The cells that the bump moves over continue to tile the space.
+[415.000 --> 419.000] And the question is, not obviously, can you have the bump move, can visit,
+[419.000 --> 423.000] that you could move it, physically move it, into a particular space here.
+[423.000 --> 431.000] Okay, you can't arrange cells in physical space so that,
+[431.000 --> 434.000] so that the bump never hops.
+[434.000 --> 438.000] So you can arrange cells like that, but the bump will sometimes hop.
+[438.000 --> 442.000] Now, it's interesting, because if you think about grid cells, you could argue the bump is hopping too.
+[442.000 --> 444.000] Yeah, that's what I was wondering a little bit.
+[444.000 --> 447.000] Yeah, but so I want to push on this point a little bit.
+[447.000 --> 451.000] Yeah, because the grid cells, like, okay, so you have this 2D sheet of cells.
+[451.000 --> 455.000] The bump moves off to the right, then reappears on the left. And the way that looks like it's being done is the bump,
+[455.000 --> 458.000] you've got multiple tiles of these, and the bump continues to move.
+[458.000 --> 463.000] And so I'm just wondering, I wonder this question about,
+[463.000 --> 468.000] with orientation, whether it's possible to have a similar type of setup or not,
+[468.000 --> 477.000] where, if I just have a continuous sheet, could the bump continue on by multiple copies of the same thing?
+[477.000 --> 479.000] Could that same property that works with grids
+[479.000 --> 482.000] work here? It's not obvious it could.
+[482.000 --> 484.000] My intuition would say not, but
+[484.000 --> 486.000] I don't really know.
+[486.000 --> 488.000] Do you have a sense of that? I don't know.
+[488.000 --> 490.000] No, I don't want to waste time.
+[490.000 --> 493.000] I think it'll be...
+[493.000 --> 495.000] For now, I don't have a good answer.
+[495.000 --> 500.000] I mean, because the question, you did pose the question, what would a,
+[500.000 --> 505.000] in terms of cells, you started off by proposing, what would the cells look like, so
+[505.000 --> 507.000] at this point, you could answer that.
+[507.000 --> 509.000] You just never got to words about that.
+[509.000 --> 512.000] Well, I'm still going to be showing the cells.
+[512.000 --> 514.000] You'll see it in a second.
+[514.000 --> 517.000] I'm going to show bumps moving through cells, bumps hopping, and...
+[517.000 --> 521.000] Well, again, one thing is to say, I can make the connections work this way.
+[521.000 --> 522.000] Yeah, yeah, there's this...
+[522.000 --> 525.000] There's a difference between having grid cells representing one,
+[525.000 --> 530.000] you know, one little module, versus a series of modules put together to get the position.
+[530.000 --> 531.000] Can you do something?
+[531.000 --> 532.000] It's not worth going...
+[532.000 --> 535.000] I felt something like you haven't really been focusing on that.
+[535.000 --> 536.000] So...
+[536.000 --> 537.000] Yeah, I'll show you.
+[537.000 --> 539.000] I'll show you the rest of it.
+[539.000 --> 541.000] I won't have answers to every...
+[541.000 --> 542.000] I just want to...
+[542.000 --> 543.000] Yeah.
+[543.000 --> 546.000] I wonder about that a lot.
+[546.000 --> 550.000] So now, this visualization is correct.
+[550.000 --> 553.000] Cells can be distributed in this way.
+[553.000 --> 556.000] One thing about it is that,
+[556.000 --> 557.000] let's see,
+[557.000 --> 561.000] if you were to,
+[561.000 --> 563.000] if you were to just say,
+[563.000 --> 565.000] you, cell, you represent this point,
+[565.000 --> 567.000] you, cell, you represent another one,
+[567.000 --> 569.000] and have them distance each other, from each other, in their tuning,
+[569.000 --> 570.000] I was...
+[570.000 --> 572.000] We're writing out all the possibilities?
+[572.000 --> 573.000] Or...
+[573.000 --> 574.000] Yeah.
+[574.000 --> 575.000] Fully enumerated, except...
+[575.000 --> 578.000] Yes, fully enumerated, except...
+[578.000 --> 580.000] I mean, there's a population code.
+[580.000 --> 582.000] It's that individual cells represent...
+[582.000 --> 596.000] I mean, you know... [the transcription loops on "you know" here]
+[596.000 --> 599.000] it's that individual cells represent...
+[599.000 --> 600.000] Yeah.
+[600.000 --> 601.000] Yeah.
+[601.000 --> 602.000] Yeah.
+[602.000 --> 604.000] And if you have those cells distance themselves from each other
+[604.000 --> 606.000] maximally in their tuning,
+[606.000 --> 608.000] they will not be uniformly spread through this sphere.
+[608.000 --> 609.000] This sphere does not correctly,
+[609.000 --> 610.000] does not correctly represent distance.
+[610.000 --> 613.000] Like, isn't that not equivalent to randomly placing them in this...
+[613.000 --> 617.000] So, we shouldn't go too deep into this,
+[617.000 --> 618.080] but I just had to state,
+[618.080 --> 622.800] for the record, the actual way to visualize
+[622.800 --> 624.800] 3D orientation space in a way
+[624.800 --> 628.840] that keeps all metric relations through it clear:
+[629.720 --> 633.640] it is the surface of the top half of a 4D hypersphere,
+[633.640 --> 637.840] which, I mean, here I'm just showing as a series of spheres
+[637.840 --> 639.560] as the fourth dimension changes.
+[639.560 --> 641.520] And all of this is just to say,
+[641.520 --> 644.640] you can take this set of shells, the set of spheres,
+[644.640 --> 647.920] and pack them into a 3D sphere.
+[649.120 --> 651.240] There's a second way you can reach this conclusion,
+[651.240 --> 654.160] that orientation space can be visualized
+[654.160 --> 657.080] as the volume of the sphere like this.
+[657.080 --> 659.920] This way gets the metric more correct.
+[659.920 --> 662.000] Technically, the cells are gonna be distributed
+[662.000 --> 664.680] evenly on the area of the hypersphere.
+[664.680 --> 665.520] Anyway.
+[665.520 --> 668.240] I'm always confused, because when you say the cells
+[668.240 --> 670.280] are distributed, I imagine physical.
+[670.280 --> 671.400] No, I'm not.
+[671.400 --> 672.560] Talking about that.
+[672.560 --> 673.920] That's never what I mean.
+[673.920 --> 674.920] I know.
+[674.920 --> 677.040] You mean the receptive field.
+[677.040 --> 678.040] Yeah, yeah.
+[678.040 --> 679.560] Yeah, yeah, distributed.
+[679.560 --> 681.360] But I think, yeah, I think this,
+[681.360 --> 684.160] I keep, I'm a one-track person on this thing.
+[684.160 --> 687.000] I think the physical instantiation of this
+[687.000 --> 688.160] is gonna be essential.
+[688.160 --> 689.720] And so when you start talking about,
+[689.720 --> 691.680] I'm like, oh, it's a 4D hypersphere,
+[691.680 --> 692.880] okay, fine.
+[692.880 --> 693.720] That's good.
+[693.720 --> 695.400] But I'm trying to imagine, what would that be equivalent to now?
+[695.400 --> 699.120] How could I implement that in actual neurons,
+[699.120 --> 700.280] and what would they look like?
+[700.280 --> 701.600] I'm going back to, like, the whole idea
+[701.600 --> 703.720] of minicolumns and the slabs and things like that.
+[703.720 --> 705.880] So I don't think it's not a neural question.
+[705.880 --> 707.440] It's an important question.
+[707.440 --> 709.080] And I just want to,
+[709.080 --> 710.480] and every time you use the word cell,
+[710.480 --> 711.480] that's what I'm imagining.
+[711.480 --> 712.320] You're talking about something else.
+[712.320 --> 714.440] Yeah, I'm talking about the distribution
+[714.440 --> 716.680] in this sort of conceptual space,
+[716.680 --> 718.360] not in a physical space.
+[718.360 --> 721.680] Yeah, and my mental model is that these cells
+[721.680 --> 723.240] are probably kind of scrambled,
+[723.240 --> 724.600] and I'm taking them and sorting them
+[724.600 --> 726.600] in different orders for the visuals.
+[726.600 --> 728.120] I don't think it's scrambled, but...
+[728.120 --> 729.920] We don't know what the physical layout is.
+[729.920 --> 732.320] I'm not placing a bet on whether they're scrambled or not.
+[732.320 --> 733.160] All right.
+[733.160 --> 735.320] You're not stopping it, but you're trying to just focus
+[735.320 --> 736.520] on this topic.
+[736.520 --> 737.680] Sure.
+[737.680 --> 740.680] So I guess, before I just show the pictures,
+[741.920 --> 744.240] so the kind of answer to this,
+[744.240 --> 745.960] what does the CAN look like?
+[745.960 --> 747.720] Well, the answer is, if you take,
+[747.720 --> 751.920] if you choose one cell of the continuous attractor
+[751.920 --> 753.880] that represents 3D orientation
+[753.880 --> 756.120] and just, like, center it,
+[756.720 --> 758.320] if you're sorting cells in a particular way,
+[758.320 --> 760.800] just choose one cell, center it,
+[760.800 --> 762.640] the rest of orientation space appears
+[762.640 --> 764.720] as a sphere around that,
+[764.720 --> 768.000] but there's not inherently a correct center.
+[768.000 --> 769.080] So it's kind of strange.
+[769.080 --> 771.440] And I'll show the visualizations of it here.
+[771.440 --> 772.480] I'll just go ahead and do that.
+[772.480 --> 773.480] Before you do that,
+[773.480 --> 774.840] it's interesting, one point,
+[774.840 --> 777.520] because somebody said that right at the beginning,
+[777.520 --> 779.880] and I just want to keep these ideas on the table,
+[779.880 --> 781.480] is that you said,
+[781.480 --> 785.640] oh, it's easy to imagine a location,
+[785.640 --> 789.280] like a 2D location, in a continuous attractor model.
+[789.280 --> 790.720] But actually, it's not.
+[790.720 --> 793.640] I mean, it's like, because it's,
+[793.640 --> 796.040] because the continuous attractor now repeats,
+[797.000 --> 798.280] you don't really get location.
+[798.280 --> 800.600] You have to, and the way we've got location,
+[800.600 --> 802.040] and other people do too,
+[802.040 --> 805.400] is you have different modules with different phases.
+[806.800 --> 809.520] And so it's funny that the actual physical cells
+[809.520 --> 811.120] that we know, grid cells, are not good
+[811.120 --> 812.240] at representing a location,
+[812.240 --> 813.080] because they...
+[813.080 --> 814.080] Not uniquely.
+[814.080 --> 815.560] Not uniquely.
+[815.560 --> 816.760] Yeah, but that's the whole point, right?
+[816.760 --> 818.880] They don't really represent the space.
+[818.880 --> 821.720] We have to come up with a trick on top of them
+[821.720 --> 824.200] to get them to represent a 2D space.
+[824.200 --> 826.160] And the same would be true for a 3D space.
+[826.160 --> 828.480] So it's just an interesting thing
+[828.480 --> 829.560] that the actual neurons,
+[829.560 --> 830.400] the way we think
+[830.400 --> 832.040] of them, are not good at doing this.
+[832.040 --> 833.600] They do wrap around,
+[833.600 --> 835.440] like an orientation does,
+[836.640 --> 838.520] unlike what we want them for, a location,
+[838.520 --> 840.920] for which you don't want them to be a closed space.
+[840.920 --> 844.560] We don't want that torus.
+[845.600 --> 847.240] Exactly, but that's what we get.
+[847.240 --> 850.520] I'm just pointing out that they actually don't do what we want.
+[850.520 --> 851.680] We have to, people come up
+[851.680 --> 852.840] with tricks to get it to do what we want.
+[852.840 --> 854.680] And then, but the property they do have
+[854.680 --> 856.280] is closer to the property of orientation,
+[856.280 --> 857.680] which is, they do wrap around.
+[859.000 --> 860.520] I find that an interesting clue,
+[860.520 --> 863.680] and we shouldn't forget it.
+[863.680 --> 864.520] Okay.
+[865.760 --> 868.320] Okay, I'll go ahead and switch to...
+[868.320 --> 870.360] Yeah, I'm sharing my screen right now.
+[871.800 --> 873.720] So I'll go ahead and put the...
+[875.560 --> 877.560] Let's see.
+[880.080 --> 881.520] I have a few things I'm going to show.
+[881.520 --> 886.720] And yeah, I'll vary this little animation
+[886.720 --> 888.320] a few different ways as I go.
+[889.280 --> 892.160] So here I'm showing, I'm using some stuff
+[892.160 --> 894.920] from the previous demos, where I'm showing an agent
+[894.920 --> 896.640] moving around on this sphere.
+[896.640 --> 898.520] Here I'm just rearranging the camera.
+[898.520 --> 899.360] But...
+[899.360 --> 900.560] Oh, it's nice full-screened.
+[900.560 --> 902.720] You didn't do that before.
+[902.720 --> 905.440] Oh, I guess I just never bothered to do it
+[905.440 --> 906.560] in one of these demos.
+[906.560 --> 907.400] I'll add it.
+[907.400 --> 909.080] Yeah, so...
+[909.080 --> 911.280] So, but now, what I have here on the bottom,
+[911.280 --> 913.560] I'll add more little pictures in a second,
+[913.560 --> 916.560] but this is that population of cells
+[916.560 --> 921.040] distributed uniformly through orientation space.
+[921.040 --> 923.920] I took a bunch of cells, assigned them orientations.
+[923.920 --> 925.120] This is the uniform distribution,
+[925.120 --> 927.000] which might look like it argues with me,
+[927.000 --> 929.000] and that's because I've kind of normalized it
+[929.000 --> 930.960] to correct for that.
+[930.960 --> 934.040] Technically, if I showed this as...
+[934.200 --> 936.000] Normalized density.
+[936.000 --> 939.720] If I showed it as the sphere behind me,
+[939.720 --> 941.680] with the volume,
+[941.680 --> 944.880] it's kind of more dense in the center than it is on the outside.
+[944.880 --> 946.360] It's not as nice.
+[946.360 --> 947.800] That's totally obvious.
+[947.800 --> 948.640] Yeah, it's not.
+[948.640 --> 949.640] But I didn't...
+[949.640 --> 950.960] It's not striking in that way.
+[950.960 --> 952.920] I thought it would be really bunched up in the middle.
+[952.920 --> 954.760] But I did measure the densities of it
+[954.760 --> 957.800] and confirmed that when I do it this way,
+[957.800 --> 960.400] the density throughout the sphere is constant.
+[960.400 --> 962.080] I could show the math if you really want;
+[962.080 --> 963.080] I'm not going to remember it.
+[963.080 --> 964.680] Are the cells only on the surface?
+[964.680 --> 965.880] Or are there some inside?
+[965.880 --> 967.040] No, they're inside.
+[967.040 --> 968.040] Like, if I...
+[968.040 --> 968.880] Yeah, it's empty.
+[968.880 --> 969.880] Yeah, you're right.
+[969.880 --> 972.200] I think when you rotate it, you can see that.
+[972.200 --> 974.240] Yeah, it's just kind of the perspective effects.
+[975.240 --> 977.200] It's not obvious there's anything in the middle.
+[977.200 --> 978.040] Oh, this is not...
+[978.040 --> 978.880] It's not anymore.
+[978.880 --> 979.760] Yeah.
+[979.760 --> 981.280] So...
+[981.280 --> 983.920] So each cell represents some direction,
+[983.920 --> 986.920] plus how much you've rotated about it?
+[986.920 --> 988.400] Yes.
+[988.400 --> 991.280] Yeah, so I'll go ahead and give you some more context now.
+[991.280 --> 994.360] I'll show these little axes.
+[994.360 --> 995.960] Or, like...
+[995.960 --> 999.600] So here at the center of the sphere
+[999.600 --> 1002.720] is this default orientation.
+[1003.720 --> 1004.560] This default...
+[1004.560 --> 1005.600] The starting orientation,
+[1005.600 --> 1006.840] kind of standing at the North Pole
+[1006.840 --> 1008.320] with the rat facing to the right.
+[1009.320 --> 1012.240] And moving a certain direction in the sphere,
+[1014.080 --> 1018.360] I'm showing these axes as rotations of the agent.
+[1018.360 --> 1021.520] Moving along this axis, this top axis,
+[1021.520 --> 1024.440] is like rotating on the blue...
+[1024.440 --> 1026.240] Rotating on that axis.
+[1026.240 --> 1028.240] Or, so, it's rotating on the blue plane.
+[1028.240 --> 1029.800] Similar with this axis,
+[1029.800 --> 1031.560] it's like rotating on the pink plane.
+[1031.560 --> 1033.320] This is like rotating on the blue plane.
+[1034.440 --> 1035.280] And...
+[1035.280 --> 1037.840] That's what it would be like on a physical sphere.
+[1037.840 --> 1038.840] Uh...
+[1038.840 --> 1040.520] That's...
+[1040.520 --> 1043.720] Like, you'd be rotating in the actual space.
+[1043.720 --> 1044.800] Yeah, yeah.
+[1044.800 --> 1045.640] This is a little...
+[1045.640 --> 1046.480] Yeah.
+[1046.640 --> 1048.560] So, like, a cell that's centered right here
+[1048.560 --> 1050.960] represents this orientation.
+[1050.960 --> 1053.720] A cell that's right here represents this orientation.
+[1055.040 --> 1055.840] And...
+[1055.840 --> 1058.720] Okay, I'll temporarily hide those little pictures.
+[1058.720 --> 1059.800] I'll bring them back soon.
+[1059.800 --> 1061.120] But now I'm going to show...
+[1061.120 --> 1062.280] Oops.
+[1062.280 --> 1063.440] Gonna put checkboxes here,
+[1063.440 --> 1065.400] but instead of type true impulse.
+[1066.400 --> 1069.880] I'll go ahead and show a bump of activity
+[1069.880 --> 1071.160] moving through it.
+[1071.960 --> 1072.880] So here's a bump.
+[1072.880 --> 1075.560] I'll show it moving as the agent moves.
+[1075.560 --> 1078.120] Is that one of those yellow squares?
+[1078.120 --> 1079.800] Those are the active cells.
+[1079.800 --> 1081.160] Oh, I was just saying, multiple cells.
+[1081.160 --> 1082.160] Exactly.
+[1082.160 --> 1083.160] It's not an exact kind of...
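The density math alluded to above can be written out. Sampling orientations uniformly (normalized 4D Gaussians give uniform unit quaternions) and plotting raw axis-angle points piles them up near the center of the ball, because the uniform measure on rotations has radial weight proportional to sin^2(angle/2) rather than the angle^2 of a uniform solid ball. Warping the radius through the cumulative distribution makes the plotted density constant, which is presumably the kind of correction being described; this sketch is an editor's reconstruction, not the speaker's actual code.

    import numpy as np

    rng = np.random.default_rng(3)

    # Uniform random orientations as unit quaternions (w, x, y, z).
    q = rng.normal(size=(100_000, 4))
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    q[q[:, 0] < 0] *= -1                      # fold q and -q together

    angle = 2 * np.arccos(np.clip(q[:, 0], -1.0, 1.0))        # in [0, pi]
    xyz = q[:, 1:]
    axis = xyz / np.maximum(np.linalg.norm(xyz, axis=1, keepdims=True), 1e-12)

    # Cumulative fraction of rotations with angle <= a is (a - sin a) / pi;
    # matching it to a uniform ball's (r / pi)^3 gives the corrected radius.
    frac = (angle - np.sin(angle)) / np.pi
    r = np.pi * np.cbrt(frac)
    points = axis * r[:, None]                # now uniformly dense in the ball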
+[1083.160 --> 1084.160] Yeah.
+[1084.160 --> 1085.400] And so...
+[1085.400 --> 1087.600] And I'll start a bunch of movements.
+[1089.200 --> 1090.720] I must have pressed...
+[1090.720 --> 1092.360] There we go.
+[1092.360 --> 1095.720] So now the agent is moving just randomly
+[1095.720 --> 1097.480] through orientation space.
+[1097.480 --> 1100.200] And this bump of activity is moving,
+[1100.200 --> 1101.840] tracking its orientation.
+[1102.840 --> 1104.160] And it's...
+[1104.160 --> 1106.640] I'll show pictures in a little bit to make this clearer.
+[1106.640 --> 1109.840] But right now you'll see that the bump is...
+[1109.840 --> 1112.840] It's usually arcing from position to position.
+[1112.840 --> 1115.040] It's not going in a straight line from position to position.
+[1115.040 --> 1117.040] That's just how the movements are?
+[1117.040 --> 1118.040] Well, no...
+[1118.040 --> 1120.240] If you've rotated just around an axis,
+[1120.240 --> 1121.240] then it would go one direction.
+[1121.240 --> 1123.840] Each of these movements is a rotation around an axis.
+[1123.840 --> 1124.640] Why would you...
+[1124.640 --> 1126.640] If I just rotate around one of the planes of the...
+[1126.640 --> 1127.640] of the sphere, why would
+[1127.640 --> 1129.640] the distance between, to the center of it...
+[1129.640 --> 1131.240] So it does...
+[1131.240 --> 1132.240] If...
+[1135.240 --> 1137.240] I can show you that...
+[1137.240 --> 1138.240] Yeah.
+[1138.240 --> 1141.240] So, I mean, I know the thing randomly moved, there's something with it.
+[1141.240 --> 1142.240] But if I was on the surface,
+[1142.240 --> 1144.240] and I just rotated around one axis,
+[1144.240 --> 1146.640] wouldn't it move in a linear direction towards the center?
+[1146.640 --> 1149.240] It does, if the bump is at the center.
+[1150.440 --> 1152.040] If the bump is at the center,
+[1152.040 --> 1156.840] then all rotations are going to be some straight direction.
+[1156.840 --> 1157.840] Yeah, but if...
+[1157.840 --> 1158.840] When I...
+[1158.840 --> 1159.840] When I reverse it,
+[1159.840 --> 1161.440] so that's the center, and I'm okay,
+[1161.440 --> 1162.440] I go straight out.
+[1162.440 --> 1163.440] But if I go straight out,
+[1163.440 --> 1164.440] wouldn't I go straight back?
+[1164.440 --> 1165.440] Yes.
+[1165.440 --> 1166.440] If your direction of rotation
+[1166.440 --> 1168.440] lines the bump up
+[1168.440 --> 1170.440] with the center,
+[1170.440 --> 1172.440] then yes, it's going to be a straight line.
+[1172.440 --> 1174.440] But if it's a different direction,
+[1174.440 --> 1176.440] which it usually is in any visualization...
+[1176.440 --> 1177.440] Oh, but you could...
+[1177.440 --> 1179.440] I mean, under some...
+[1179.440 --> 1181.440] Yeah, it's always straight under some...
+[1181.440 --> 1182.440] Yeah.
+[1182.440 --> 1183.440] ...under some view.
+[1183.440 --> 1184.440] Yes.
+[1184.440 --> 1185.440] And I'll show you that.
+[1185.840 --> 1187.640] It's not a property that it's an arc.
+[1187.640 --> 1189.240] It's just, normally, with the...
+[1189.240 --> 1191.240] Unless you set it up otherwise,
+[1191.240 --> 1192.240] it would look like an arc.
+[1192.240 --> 1194.240] But you can set it up so it does not arc.
+[1194.240 --> 1195.240] Right.
+[1195.240 --> 1199.240] So now I'm going to start changing the visualization,
+[1199.240 --> 1201.240] changing which one's at the center.
+[1201.240 --> 1203.240] So I'm going to remove the bump temporarily.
+[1203.240 --> 1208.240] And I'll go ahead and let it start changing.
+[1209.840 --> 1221.840] So what you see here right now is... well, if you were a four-dimensional being, this would appear as a 4D sphere rotating, and it would be very intuitive.
+[1221.840 --> 1229.640] But to us it's kind of messy. The messy part is that the individual cells move relative to each other.
+[1229.640 --> 1235.640] There are some cells moving toward the center, some are moving up and down. Yeah, and some moving away from the center.
+[1235.640 --> 1238.640] Right. They seem to have individual movements.
+[1238.640 --> 1243.640] Now, here, this one's kind of fun. Now I'm bringing those axes back...
+[1243.640 --> 1247.440] It feels like Star Wars or something, flying through space. Yeah.
+[1247.440 --> 1263.440] So these are the same x, y, and z axes, whatever we were calling them before, but I'm showing where they are in this rotation space as I choose different cells to center it on.
+[1263.440 --> 1283.040] And the fun thing is these paths, these lines from this orientation to this orientation, are becoming curved. Which is to say, under this view, if you move from this orientation to this orientation, it's going to make this kind of arc shape; but under the view where it's centered, it's going to be straight again.
+[1283.040 --> 1295.040] Meanwhile, you can see that what I'm doing right now is rotating the view around this axis, so this line is staying straight.
+[1295.040 --> 1300.640] Here I'm just really trying to get a little bit of intuition for what this attractor looks like, how everything's connected.
+[1300.740 --> 1312.640] And now I'll go ahead and show one more picture. I'm going to stop that re-aligning animation because it's making my CPU sad.
+[1312.640 --> 1318.440] I'll go back and do one more thing now.
+[1321.640 --> 1332.340] I've been showing this agent moving around this sphere and this bump moving with the agent. Now I'm going to add one more thing: after every movement, I'm going to recenter the bump.
+[1332.340 --> 1341.840] I'm going to basically change the view so that the bump sits at the center. So I'll go ahead and do this.
+[1341.840 --> 1350.640] I'm not sure what intuition we'll get from doing this; it's just to get another sense for what this all looks like.
+[1350.640 --> 1356.640] So yeah... I can try to back it up.
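The recentering step just described has a compact algebraic reading: choosing a different cell (or the bump) as the center of the ball view amounts to expressing every preferred orientation relative to the chosen one. A minimal sketch under that reading, repeating the quaternion product from the previous snippet so it runs on its own (names are again illustrative):

```python
import numpy as np

def quat_multiply(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def quat_inverse(q):
    """For unit quaternions the inverse is just the conjugate."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def recenter(prefs, center):
    """Express every preferred orientation relative to `center`, so the
    chosen orientation moves to the middle of the ball view and paths
    through it become straight."""
    inv = quat_inverse(center)
    return np.array([quat_multiply(inv, p) for p in prefs])

def to_ball(q):
    """Project a unit quaternion into the solid unit ball for display.
    q and -q are the same rotation, so flip into the w >= 0 half; the
    identity rotation (the current center) lands at the origin."""
    q = q if q[0] >= 0 else -q
    return q[1:]
```

Under this view the recentered orientation always sits at the origin, which is why, in the demo that follows, every post-recentering movement looks like a straight reach from the center.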
+[1356.640 --> 1363.240] I'm not sure what intuition I was going to get from it. Okay.
+[1363.240 --> 1379.840] Yeah, here what I'm showing, for one thing, is that now the bump's movement is always a straight line. As you can see, it's kind of like the bump is reaching out and pulling a point to the center, reaching out, pulling a point to the center, and every time the reach-out is a straight line.
+[1379.840 --> 1386.840] But this wouldn't happen in the brain, right? When the animal moves, the cells themselves don't get rearranged, right?
+[1386.840 --> 1398.040] No, those recentering updates are just for our visualization, just to show that once you get back to the center, things are straight from there.
+[1398.040 --> 1422.040] Yeah, yeah. So the whole point of this was... okay, there were multiple purposes. The point of that last demo was just to get a little more of a sense of the connectivity of these cells, what the topology of it all is. And yeah, that was just the last part, but it's worth it for now.
+[1422.040 --> 1433.040] But this would not happen... Oh, yeah, just to make it totally clear: we're not saying that the bump is pulling cells into a different shape.
+[1433.040 --> 1460.740] So I guess the conclusion... well, one reason I wanted to think about this was: when we've been looking at these other ways of representing orientation, the ones that involve the surface of a sphere plus a ring,
+[1460.740 --> 1476.540] there's always a little bit of a weakness, in that there's always going to be some point on the surface of the sphere that is messy. You could call it a singularity. Anyway, the point is that if you draw arrows over the surface of a sphere, there's going to be some point where the arrows suddenly flip.
+[1476.540 --> 1490.840] And it makes it feel like orientation is inherently a hard thing, that orientation is inherently messy like that. But what I wanted to prove to myself...
+[1490.840 --> 1496.640] I mean, just the idea that orientation has this point where it flips all over... Yeah.
+[1496.640 --> 1507.540] That's surprising maybe, but is that terrible? It's not like it messes up everything completely. You could just say, well, that's what I have to live with.
+[1507.540 --> 1526.240] But what I wanted to point out here is that if you represent orientation in its fullness, if you don't break it into multiple pieces like this, then that weirdness is probably no longer there. That weirdness is part of a particular representation.
+[1526.240 --> 1545.340] Or any type of representation that breaks it up. I think you can say anything that breaks 3D orientation into multiple populations, into multiple variables, is going to have weirdness like this. But that weirdness is an artifact of the fact that you're breaking it up like that.
+[1548.340 --> 1559.240] And that was one thing I wanted to understand: is it fundamental to orientation that there's a South Pole that is always broken? Or is it? No, this other way is not. Right.
+[1560.240 --> 1584.840] And under this view, maybe this changes by animal, who knows? Maybe it changes by part of the brain. But if you do find this ring, if you do find head direction cells, for example, in an animal, it's still possible that the animal actually has one of these 3D attractors, and the head direction cells are really just reading it out.
+[1584.840 --> 1605.340] And because this 3D version just works... it does require a lot of cells. I don't have the exact numbers on this; here I've had around 1,500 cells being visualized. But it's the kind of thing where the more cells you add, the more accurate the path integration will be.
+[1605.340 --> 1624.640] Anyway, there's this possibility that the updating of orientation occurs in this 3D space, and then these other things, like the direction of gravity and head direction, are kind of read out from that.
+[1624.640 --> 1635.540] And it might change by animal. It could be, but it's also quite possible that it's not. Right, it's not verified yet. In fact, it seems to me more likely than the alternatives. But it could be.
+[1635.540 --> 1641.340] Yeah, I mean, the nice thing about the other way is that it's much fewer cells, and they're always being used.
+[1641.340 --> 1666.940] The paper we're basically talking about, from Kate Jeffery's lab, brings up the point that the weird thing about having a full 3D attractor for orientation is that many of those orientations are going to be very rarely visited. And so it seems strange to expend all that neural material and keep it all correctly connected and everything.
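For reference, the two standard mathematical facts behind this point, stated informally (this is a gloss on the discussion, not something derived in the meeting):

```latex
% Why the "messy point" is an artifact of the sphere-plus-ring factorization.
% 1. Hairy ball / Poincare-Hopf: a continuous tangent vector field v on S^2
%    must vanish somewhere, since the indices of its zeros sum to
%    \chi(S^2) = 2 \neq 0. So storing "which way is up" on a sphere plus a
%    reference arrow (a ring) forces a singular point somewhere.
% 2. The full rotation group SO(3) \cong RP^3 is a Lie group, hence
%    parallelizable, and its double cover is the unit quaternions S^3.
%    A population covering all of SO(3) (cells tuned to quaternions q, with
%    q identified with -q) has no forced singularity.
\[
  \sum_{p \,:\, v(p)=0} \operatorname{ind}_p(v) \;=\; \chi(S^2) \;=\; 2
  \qquad\text{vs.}\qquad
  SO(3) \cong \mathbb{RP}^3 \ \text{parallelizable, double cover } S^3 .
\]
```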
+[1666.940 --> 1682.540] So the thing I'm glad that I know now is that the weirdness is an artifact of the style of representation. It's totally possible it's there, but it's not fundamental to the whole picture. No.
+[1685.540 --> 1690.340] So yeah, that's pretty much that topic.
+[1690.440 --> 1705.940] I figured it would be useful to talk a little bit about where all this fits into our models, and it would probably produce a discussion.
+[1705.940 --> 1711.340] Hey, Marcus. Yeah. Would you share your screen again? Or wait, are you still...?
+[1711.340 --> 1717.140] Yeah, I've actually got it again. Everything's cool. It was all my fault.
+[1717.140 --> 1722.240] Yeah, I'm using the whiteboard here. I'll turn it toward the camera.
+[1722.240 --> 1739.240] So here I want to talk about where orientation is represented in our model. "Our model" can mean a lot of things, but I mean one version of it that we've drawn before, the version that we presented at Cosyne last year.
+[1739.240 --> 1769.940] And part of this is me showing my point of view, using terminology from a while ago. In the past, for example in a meeting a couple weeks ago, I was saying that the location of the sensed feature is represented in a different way than we usually have talked about.
+[1774.640 --> 1781.340] And I was saying that kind of provocatively; I guess this is probably the right way to present it.
+[1781.340 --> 1805.740] Here's one thing. When you're viewing a coffee cup, as your eyes move over the coffee cup, what location are you representing? If you had a laser shooting out of your eye and hitting the cup, are you representing that location? Or are you representing where your eye is relative to the cup?
+[1805.740 --> 1817.340] And I just wanted to lay out one coherent view of all of this that does represent the location of the sensed feature, in a different way, which is kind of what we talked about in the Cosyne poster last year.
+[1817.340 --> 1832.340] This might confuse the conversation, or it might make it less confusing, but I'm going to change terminology: instead of saying child object and parent object, I'm going to say feature and object.
+[1832.340 --> 1835.740] But they're equivalent. Yeah, yeah.
+[1835.740 --> 1851.940] And so we have these three layers down at the bottom of the cortex that we often talk about as representing locations and orientations, or some mix of the two. Let me just lay out this model.
+[1851.940 --> 1870.140] 6a and 4 are doing the normal thing that we talk about: the location of a sensor, for example a touch sensor, relative to some basic feature, like a cylinder or a cube or a handle, which might be a bit of a stretch, but let's go with it.
+[1870.140 --> 1876.940] So yeah, this is just the basic model from the "Locations in the Neocortex" paper.
+[1876.940 --> 1882.140] And we're adding orientations. Yeah, yes. Because that was one of the things to talk about.
+[1882.140 --> 1893.040] Is that sufficient to predict what's in layer 4? I think what you're saying, in this particular example, is that there's nothing else to it. Yeah, location plus orientation; that's the only context required.
+[1893.040 --> 1906.340] Yeah. Now, one thing I'll throw in: I didn't know how to denote orientations, so I just drew the volume of a sphere, which is supposed to be like this one.
+[1906.340 --> 1916.140] But this circle might actually be broken into multiple subpopulations. The orientation could be represented in different ways, and so could the locations.
+[1916.140 --> 1922.540] Yeah, you're just saying we also have the trapezoids for location.
+[1922.540 --> 1943.140] Yeah, what I've drawn here is just an example of how it could occur. You could swap any of these out for something else and it would still work. Conceptually, on a high level, these labels will apply no matter how you swap in those details.
+[1943.140 --> 1956.340] So in this model, the child object and the parent object, or the feature and the object, both have their own location spaces, their own reference frames.
+[1956.340 --> 1969.740] And we've talked about 6a representing the location of the sensor in the feature's reference frame, also known as the child object's reference frame, and 6b representing the location of that sensor in the object's reference frame.
+[1969.740 --> 1988.940] By the way, the motivation behind this was that as I was moving from thinking about touch to thinking about vision, I was running into a set of problems. Usually when I talk about sensors in this picture, I'm primarily thinking about a camera, or your eyes.
+[1988.940 --> 1990.140] So what does it say?
+[1990.140 --> 1995.340] It says "location of a sensor": the location of a sensor relative to a feature.
+[1995.340 --> 1999.140] Are you saying that's the camera's position? Yeah.
+[1999.140 --> 2007.940] But eventually you want a model that tells you the feature is actually on the object. Right. But pinning that down, then what is the eye position?
+[2007.940 --> 2017.540] Right, and this is where we have a little bit of a different point of view. This is always going to be the eye position here. I'm going to use, okay...
+[2017.540 --> 2025.140] So: the location of the eye in one reference frame, and the location of the eye in the parent reference frame, the object.
+[2025.140 --> 2040.340] So features here are things like cylinders or, we'll call that one a rectangle shape, and handles; objects are things like coffee cups or briefcases, taking a handle and turning it to the side.
+[2040.340 --> 2047.740] And so: location in the space of the feature, and location in the space of the object.
+[2047.740 --> 2059.040] In my view, well, one coherent way to set this up is that you now have another population of cells detecting the transform between those two.
+[2059.040 --> 2075.940] That transform, are you using that in the same sense as a displacement? Yeah, the same one we use for displacement.
+[2075.940 --> 2088.340] Yeah, we've talked about this: if you were leaving out orientation completely, this would certainly be a displacement in the strict sense of the word. Anyway, we could talk more about that.
+[2088.340 --> 2095.540] You don't have scaling in here. No, there's no scaling in here.
+[2095.540 --> 2107.540] However, if you just change the implementation details inside the boxes, suddenly there would be scaling, and these labels would still apply.
+[2107.540 --> 2121.340] The thing I wanted to point out is that when I first laid this out, I described this as a transform, and later I realized there's another language for describing it.
+[2121.340 --> 2138.540] If you're representing the transform between these, the displacement between these, another way of saying that is that you're representing the location and orientation of the feature relative to the object. Yeah, which is another way of saying the location of the sensed feature.
+[2139.140 --> 2140.140] That's all the same.
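One way to make that equivalence concrete: if poses are written as homogeneous matrices, the transform between the "sensor in feature frame" population and the "sensor in object frame" population is exactly the pose of the feature in the object's frame. A toy sketch under that reading (the specific numbers, and the use of 4x4 matrices, are illustrative, not a claim about the neural code):

```python
import numpy as np

def pose(rotation_z_deg, translation):
    """Homogeneous 4x4 pose: a rotation about z plus a translation.
    (Any rotation works; z keeps the example short.)"""
    t = np.radians(rotation_z_deg)
    c, s = np.cos(t), np.sin(t)
    m = np.eye(4)
    m[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    m[:3, 3] = translation
    return m

# "6a": pose of the sensor in the feature's (child object's) reference frame.
sensor_in_feature = pose(30, [0.0, 0.1, 0.0])
# "6b": pose of the same sensor in the object's (parent's) reference frame.
sensor_in_object = pose(120, [0.5, 0.2, 0.0])

# The transform between the two populations is the pose of the feature
# relative to the object, i.e. "the location (and orientation) of the
# sensed feature":  T_object<-feature = T_object<-sensor @ inv(T_feature<-sensor)
feature_in_object = sensor_in_object @ np.linalg.inv(sensor_in_feature)

# Sanity check: composing it with "6a" recovers "6b".
assert np.allclose(feature_in_object @ sensor_in_feature, sensor_in_object)
print(np.round(feature_in_object, 3))
```

Note that `feature_in_object` stays fixed as the sensor moves, while both sensor poses change, which is exactly the tension raised in the next exchange.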
+[2140.140 --> 2148.140] Is anything here different from what we've laid down in the past? Well, just to say... I don't think you should have this.
+[2148.140 --> 2161.140] We've debated whether the location of the sensed feature needs to be represented anywhere. I've argued that it didn't, but that was because we were talking in this language of child objects and parent objects.
+[2161.140 --> 2171.540] But if you change the language, so that this is features and this is objects, then I am representing the location of the sensed feature, in this transform.
+[2171.540 --> 2174.540] You lost me.
+[2174.540 --> 2184.140] So this, to me, is what you've been arguing a lot: that you know the location of the sensor, the camera, at a distance from the object.
+[2184.140 --> 2194.140] Even though there's a much larger space of potential things that sensor location could be in. And this is the language you've used before.
+[2194.340 --> 2200.340] But I don't see what's new about it. No, okay, everything I've said here is review.
+[2200.340 --> 2211.540] You just started by saying you were going to resolve that problem of the sensor distance, that you could make it look like it was not sensor distance. But I'm missing that.
+[2211.540 --> 2220.140] Yeah, so one question I have, maybe it's the same thing. You have this position of the sensor relative to the feature. Yeah.
+[2220.140 --> 2232.140] So, I'm looking at a feature, and as it gets closer or further away from me, that position keeps changing. Yeah. Same thing with the sensor relative to the object: as the object moves closer or further away, it keeps changing. Yeah.
+[2232.140 --> 2237.340] But the feature itself relative to the object, that position never changes.
+[2237.340 --> 2256.340] But when you use that moving location, the sensor relative to the feature, as context for L4, doesn't L4 have to learn a whole bunch more stuff than it would if it were the position of the feature relative to the object?
+[2256.340 --> 2263.940] But the position of the feature relative to the object could cause many different sensory inputs. Yes, that's right.
+[2264.340 --> 2273.340] So what is the resolution? Yeah, the resolution is that what is predictive of the sensory input is the location and orientation of the sensor.
+[2273.340 --> 2283.140] Yeah, but then that just leads to this learning problem. So okay, that's what I thought you were going to resolve. You thought I was going to talk about the learning problem. That's not quite what I was talking about.
+[2283.140 --> 2293.340] Both of us have this trick of using the thalamus, or some trick, to scale the input, which then reduces the learning problem. That is not what I was trying to solve here.
+[2293.340 --> 2305.340] What I was pointing out here was that both of us are representing the location of the sensed feature, just in two different forms, relative to different frames.
+[2305.340 --> 2314.940] Yeah, and it's not just a scaling problem, because if I move this way, the location keeps changing as well. I think this is fundamental.
+[2314.940 --> 2328.340] The minicolumns are here... this is one of the things, I don't know if it falls out of learning. I don't think minicolumns are free; there may be some sort of genetically determined feature of how these are set up.
+[2328.340 --> 2339.340] But I think that is going to be one of the things that we expect to learn eventually.
+[2339.340 --> 2348.340] So, where does the thalamus connect into this whole picture here? Let's go with the idea that layer 6a projects to the thalamus, and let's say it's doing this scaling.
+[2348.340 --> 2363.340] Alright, so I don't understand how to interpret that in this picture. Now we say, okay, the thing that's being used to predict the input to layer 4 is also being used to scale the input via the thalamus.
+[2363.340 --> 2374.340] And yet layer 6b does not seem to be doing that; as far as we know, there's no such projection. Then how does that all work?
+[2374.340 --> 2388.340] So somehow we have one of these that's needed to scale the input, and the other one just isn't. That's clearly going to be part of the solution to this problem.
+[2388.340 --> 2404.340] And I know you suggested that part of it could be that you store each feature at some prototypical distance or something like that. That just doesn't sound good to me; it doesn't feel right. Maybe it's right, but it doesn't feel right.
+[2404.340 --> 2412.340] I mean, I think to resolve that we would be proposing something new, and we have a lot of that already. Yeah.
+[2412.340 --> 2427.340] The thing is, it's been most of a year since we really talked about this setup. The new thing is now more clarity about how these orientation spaces might be represented.
+[2427.340 --> 2446.340] But I haven't, in a long time, tried to drive home the point that this transform can be described this way: that this transform can also be described as the location of the feature relative to the object. And I realized that...
+[2446.340 --> 2454.340] This is the definition of the displacement. Yeah. So that's not a new idea; that's the idea we've been talking about.
+[2454.340 --> 2467.340] And so I don't think any of us have been arguing against it. Okay, so I'd say this is purely review. No, it's mostly review.
+[2467.340 --> 2481.340] And I just wanted to stress the point that both of us think the brain represents the location of the sensed feature. But I think of it as not this path-integrated thing. I think that's it, really.
+[2481.340 --> 2491.340] Oh, okay. I guess... okay. I was just looking for something new that I wanted to know about. There's not something new.
+[2491.340 --> 2505.340] A couple of weeks ago we talked about some stuff and we disagreed. I realized it would be better articulated this way; this was the model in my head. And maybe I'm now realizing you actually understood this all along. I think that's the case.
+[2505.340 --> 2520.340] I just feel like it's got a problem. Yeah. And I don't have a solution to this, but I feel like this doesn't really solve it in a way that's practical. I feel there's another trick, something else that I'm missing, not complicated, but... we have this basic problem.
+[2520.340 --> 2554.340] [crosstalk, unintelligible]
+[2554.340 --> 2563.820] [crosstalk, unintelligible]
+[2563.820 --> 2568.820] So, I just want to keep putting pieces on the table.
+[2568.820 --> 2580.820] The idea that 6a is changing the scale: it's not just changing the scale of the input, it's changing the scale of my motor input.
+[2580.820 --> 2592.820] It's like saying the cells think I'm moving by X amount of angular position, but in reality my eye moved a larger or smaller amount.
+[2592.820 --> 2616.820] And so there's a kind of illusion of some sort going on here, where you're acting at one scale, but actually everything is scaled, both the movements and the features are scaled, and so the cortex doesn't know the difference.
+[2616.820 --> 2631.820] This is essentially your idea, maybe, of a global distance signal, but I just don't know what that would be, and it doesn't make sense to me. I just want to throw that out again. I don't have a solution to this problem.
+[2631.820 --> 2636.820] Okay, so we're just restating the problem.
+[2636.820 --> 2646.820] Why do you prefer feature/object versus child/parent? Is there a reason for that, or is it just for today's particular purpose?
+[2646.820 --> 2658.820] My main motivation here was that I wanted to be able to use the words "location of the sensed feature". It's awkward to say "the location of the sensed child object".
+[2658.820 --> 2675.820] And if my goal is to try to connect to your existing language, I wanted to say: no, I agree with you, the location of the sensed feature is represented, but it's represented in this way.
+[2676.820 --> 2680.820] Whereas sometimes we describe the sensor's location as being the location of the sensed feature.
+[2680.820 --> 2687.820] Welcome, new viewers. By the way, you're watching a Numenta research meeting, live at Numenta HQ. Obviously you're following all of this, right?
+[2687.820 --> 2706.820] The fundamental representation is of objects relative to objects. We started off, when we first started thinking about this, with the idea of a feature on an object; then we realized later that that's not the right way to think about it, and we went to thinking about objects represented relative to other objects. So we moved away from feature-and-object; when we moved on, we decided that wasn't really the right language.
+[2706.820 --> 2713.820] Yeah, this one. That was more or less it; I was just trying to say why we went back and forth.
+[2713.820 --> 2725.820] There's also the current sensory input that's coming in right now, this edge at this particular orientation, which doesn't match either of those things. That's what people typically think of as the feature.
+[2725.820 --> 2731.820] Yeah. So there's a reason to be careful about that. You're right, especially for people outside looking at this work.
+[2731.820 --> 2741.820] Yeah, that's how 90% of the world outside thinks of it. When they say feature, they mean the orientation coming in at this point in time.
+[2741.820 --> 2747.820] And I see why you're doing this, pointing out that there is a terminology issue when we communicate outside.
+[2747.820 --> 2757.820] I mean, I'm writing about this in the book, about this idea, and the language there is still maps.
+[2757.820 --> 2768.820] The way I put it is: you have a map of something, locations in it, and then another map. Like, okay, there's a building, and there's a map of the building, and there's never any "feature", right?
+[2768.820 --> 2778.820] I'm trying to go with language like that. I've pretty much adopted this; it's maps within maps.
+[2778.820 --> 2790.820] So I'm not attached to it one way or the other. I just mention it because of the trouble with language.
+[2790.820 --> 2796.820] Yeah, when we write it up, we just have to be very precise. Yeah.
+[2796.820 --> 2801.820] OK, so there's nothing new; you're just articulating the problems we had before.
+[2801.820 --> 2810.820] And I think what's new is that now we might have, even though it's not a full neural circuit, some sense of what those circles might actually be.
+[2810.820 --> 2823.820] Well, I do. I mean, we know they have to exist, and here's a mathematical sort of model of it, and we know what it has to do. It's a deeper understanding of what it has to do. Though I wouldn't say I understand how they're wired up.
+[2823.820 --> 2846.820] So yeah, I guess my second motivation for drawing all this was to show it in context, and to say: OK, we kind of have two fundamental questions for orientation and location. One of them is what's going on inside one of these boxes.
+[2846.820 --> 2850.820] How are location and orientation represented? Yeah.
+[2850.820 --> 2858.820] And the second is how multiple of these relate: how do you detect the transform between them and determine one from the other, etc.
+[2858.820 --> 2875.820] Anyway, I'm just putting that out there, showing all this in context. The set of problems in front of us is what is actually going on inside this rectangle: how these layers actually talk to each other, how populations of these actually update each other.
+[2875.820 --> 2880.820] I don't know... this is a fact that I just want to keep reiterating.
+[2880.820 --> 2898.820] If you think about the classic orientation columns that people think about in V1, that somehow goes through these layers, right? So those are somehow linked together, where that's not true of these other things. That's just something to keep in mind, if it's true.
+[2898.820 --> 2914.820] Yeah, I think... when you were talking about the 4D representation, and I assume you're sort of at the end of this, it struck me, it reminded me of something. I don't know if it even has anything to do with this, but.
+[2914.820 --> 2926.820] It reminded me that we think about how there are these orientation columns, right? In V1. And people call them columns, but that's not really true; there are really these slabs, right?
+[2926.820 --> 2944.820] So part of this is these slabs of orientation. And that's like an extra dimension: this is one dimension, and there are other dimensions along that same thing. It's reminiscent of a higher dimension.
+[2944.820 --> 2949.820] And no one ever explains... there's never really a good explanation of why they're laid out like that.
+[2949.820 --> 2959.820] Yes, I think I've seen a paper on that, stuff like: knowing orientation is not enough, they're direction selective, so there are different speeds.
+[2959.820 --> 2969.820] Yeah, and spatial frequencies, phases. Do those vary along that dimension? I think they differ; I don't know how nicely they're laid out.
+[2969.820 --> 2976.820] Even within a minicolumn they're all different: different directional sensitivities. Yeah.
+[2976.820 --> 2994.820] But there are other dimensions to represent, and so we have this one other direction. That's an interesting question. The directional sensitivity of these cells is a huge part of the literature.
+[2994.820 --> 2998.820] Most of them are direction sensitive. Yeah.
+[2998.820 --> 3013.820] And we talk about orientation as if it's not motion, as if there's some static thing and then you're off. But the cells themselves are encoding direction.
+[3013.820 --> 3021.820] It's like you can't just say what the orientation is; you have to know which way of moving the orientation is represented with.
+[3021.820 --> 3032.820] Yeah, they code orientation, direction of movement, and also the speed, I think. So it's almost a 2D version of it. Yeah.
+[3032.820 --> 3045.820] So, you know, maybe our whole way of thinking about this stuff is wrong, in some sense. There are these other dimensions that have to do with movement that we're not thinking about.
+[3045.820 --> 3060.820] I mean, the easy thing that comes to mind in the context of continuous attractors is that in one way to set up a continuous attractor, the way most people do it, the cells are direction selective.
+[3060.820 --> 3074.820] Yeah, I know. Yeah, essentially, if this is a 2D attractor sheet, and I just zoom in on one part of it,
+[3074.820 --> 3096.820] the cells are actually like... there'll be one for this location that's north-tuned, one that's south-tuned. So there's one that fires primarily when the animal is moving, quote, north, one that fires when it's moving east, etc. And that's what causes the bump to move in that direction.
+[3096.820 --> 3117.820] But then I wouldn't want to represent my orientation by looking at those individual cells, because in the end, when I'm learning the object, or learning the displacement of the object to another object, I don't really care about what movement I was making. That's an internal mechanism to make this thing work, but I don't really want it in the representation.
+[3117.820 --> 3139.820] So it's almost like there's a second tier here, where I have all these cells for an orientation, and these could be different movement directions, but in the end I have the minicolumn-equivalent representation for it: the minicolumn as a whole is the orientation, and the individual cells I'm learning with are really the movement context, something like that.
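A minimal sketch of the standard setup being described, with the attractor dynamics idealized away (the bump shift is done with np.roll rather than with recurrent weights, and the sheet size and offsets are made up):

```python
import numpy as np

# 2D attractor sheet: each location has direction-tuned copies of the cell.
# Movement gates the copy whose outgoing weights are shifted in that
# direction, so the bump slides across the sheet as the animal moves.
size = 32
y, x = np.mgrid[0:size, 0:size]

def bump(cx, cy, sigma=2.0):
    """Activity bump centered at (cx, cy) on a toroidal sheet."""
    dx = np.minimum(np.abs(x - cx), size - np.abs(x - cx))
    dy = np.minimum(np.abs(y - cy), size - np.abs(y - cy))
    return np.exp(-(dx**2 + dy**2) / (2 * sigma**2))

# Weight offsets of the four direction-tuned subpopulations.
offsets = {"north": (0, -1), "south": (0, 1), "east": (1, 0), "west": (-1, 0)}

activity = bump(16, 16)
for move in ["east", "east", "north"]:   # the animal's movements
    dx, dy = offsets[move]
    # Only the copy tuned to `move` fires; its shifted weights inject input
    # one step over, and the attractor re-forms the bump there.
    activity = np.roll(activity, shift=(dy, dx), axis=(0, 1))

cy, cx = np.unravel_index(np.argmax(activity), activity.shape)
print("bump center:", (cx, cy))   # started at (16, 16), ends at (18, 15)
```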
+[3142.820 --> 3146.820] Yeah, because this is what you need for the movement, but it's not what you want for object representation.
+[3146.820 --> 3153.820] Right. That's a big clue. We should remember all these clues. It's a little weird.
+[3153.820 --> 3158.820] Also, if you think about it, this is just our classic temporal memory. Think about predictions.
+[3158.820 --> 3172.820] Yeah. You can have a bunch of cells that all have the same orientation tuning, but depending on the direction of movement, different cells would naturally become active. That is temporal memory.
+[3172.820 --> 3182.820] Our temporal memory would naturally separate these cells out. Does that make sense?
+[3182.820 --> 3193.820] If you think of an edge moving in a certain direction and velocity, that's the same thing, yeah, as a sequence. Then the individual points of the sequence...
+[3193.820 --> 3202.820] Yeah, but that's only true... No, I know, I'm just thinking: even our pure temporal memory would separate these cells out.
+[3202.820 --> 3205.820] Yeah. Yeah.
+[3205.820 --> 3221.820] All right. Well, hopefully we'll get more insights as you keep going. What's the next step here?
+[3221.820 --> 3237.820] Part of the motivation for drawing all this was to set up that discussion. I would really like to work more on this problem and have more discussions like this.
+[3237.820 --> 3242.820] So I'm going to do that.
+[3242.820 --> 3261.820] Like, right now I'd want to figure out how to connect up these layers so that they perform a full coordinate transform. With what we've documented so far, we could do this coffee cup example, but we can't do the briefcase example, because the briefcase's handle rotates.
+[3261.820 --> 3277.820] If it were the right thing to do right now, I could implement this so that it does both the coffee cup and the briefcase. That said, I already know I can get it to work; I don't know what information I'll get out of getting it to work.
+[3277.820 --> 3287.820] So anyway, I'm putting this all out there: we have these two spaces of orientation questions, what's going on inside here, and what's going on between these.
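Going back to the temporal-memory point just above: a toy sketch of how cells that share one feedforward tuning (one "minicolumn" per orientation) get split apart by movement context, in the spirit of HTM temporal memory. All names and sizes here are illustrative, not the actual algorithm.

```python
n_context_cells = 4      # cells per orientation "minicolumn"
learned = {}             # (orientation, previous movement) -> cell index

def active_cell(orientation, prev_move):
    """Return which cell in the minicolumn fires for this movement context.
    A new context claims an unused cell; a repeated context reuses its cell."""
    key = (orientation, prev_move)
    if key not in learned:
        used = sum(1 for (o, _) in learned if o == orientation)
        learned[key] = used % n_context_cells
    return learned[key]

# The same edge orientation arrives under two different movement directions:
print(active_cell("edge@45deg", prev_move="moving_east"))   # -> 0
print(active_cell("edge@45deg", prev_move="moving_north"))  # -> 1
# Same minicolumn both times (same orientation), but different cells fire:
# reading out the minicolumn gives movement-invariant orientation, while
# the individual cells carry the movement context.
```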
+[3287.820 --> 3304.820] Often when I'm presenting at these meetings, I'm not mentally quick enough to figure out what I'm doing next; I've just been focused on this. So this is the state of things, and I'll go think about it.
+[3304.820 --> 3316.820] There's another general approach to the problem. Here I'm starting with the representation of orientations and how they're updated.
+[3316.820 --> 3336.820] The other way to do it would perhaps be to focus on what representation would represent the object the way we expect it to be, and then work out: how would I get to that?
+[3336.820 --> 3353.820] I don't know how to do this, I'm just saying. Assuming an object is represented by a bunch of child objects, which in turn represent the object... it starts at that endpoint and works its way backwards.
+[3353.820 --> 3366.820] I was trying to understand: where is there a stable representation of the object here? That's what we would want, and then we'd work backwards from it.
+[3366.820 --> 3370.820] We'll see here it says feature ID, which is not the object ID.
+[3370.820 --> 3377.820] I mean, the way I would naturally do it is, yeah, the union of all the layer 5s.
+[3377.820 --> 3395.820] Well, for one thing, each layer 5, each feature-relative-to-object, is specific to the object, so you could classify the object based on this. Yeah, or you could have another population of cells that represents the object. There are ways to do it, but I don't have one here.
+[3395.820 --> 3405.820] What is the voting layer? Is it both L5 and L4? Yeah, we have two voting layers.
+[3405.820 --> 3430.820] So, I mean, another question, and it is kind of specific to this diagram: this puts a large demand on layer 4 and layer 6, on this connection, to learn a lot of stuff.
+[3430.820 --> 3450.820] One way I think about this whole model is that I've just taken a kind of classic view of computer vision and encoded it into neurons: this idea of having a set of basic primitive things that you learn every view of, and then you rearrange them into different shapes.
+[3450.820 --> 3458.820] It's kind of like, if you were solving vision from first principles, you'd probably land on that at some point in your reasoning, at least.
+[3458.820 --> 3464.820] It's classic, but it's also leading edge, in the sense that capsules are also kind of in the same space. Sure.
+[3464.820 --> 3473.820] But because it's kind of an intuitive thing, people have reached that conclusion before.
+[3473.820 --> 3477.820] People like Marr and Biederman.
+[3477.820 --> 3491.820] One thing they had in their primitives, like cylinders or whatever: they assumed the system could kind of skew them or reshape them in different ways.
+[3491.820 --> 3498.820] Like, you see a new cylinder that's bent in a slightly different way. How do you represent that?
+[3498.820 --> 3514.820] So another area of focus in our diagram is what is going on in this layer, layer 4. Could you give it some more flexibility of some kind, morphing the shape? I don't know, take one of these and make it a skinnier cylinder, or something like that.
+[3514.820 --> 3528.820] Neural mechanisms for geons, neural mechanisms for these parameterized features. I don't even know where to start on that, but that's another way to really make this work in the real world.
+[3528.820 --> 3536.820] Well, that starts running into the whole scaling problem again. Yeah.
+[3536.820 --> 3554.820] I always felt that when we see a novel thing, and I've talked about this a bit, we always have to attend to the subparts. Always. The most important perception is of the subparts.
+[3554.820 --> 3570.820] As I move my eyes over the board, I'm aware that I'm looking at these different parts. If I'm trying to read the board, I'm looking at the parts; if I'm just looking at the picture, I don't need to do that once I know what it is.
+[3570.820 --> 3581.820] And that's what I'm doing: every time I move my eyes to a part, I'm going to figure out the displacement between some part and another part. And so as I'm building up this structure, I'm going to be building a bunch of displacements.
+[3581.820 --> 3587.820] And somehow there's going to be a similarity between some of the displacements of this thing and another thing.
+[3587.820 --> 3598.820] I guess I'm saying that the answer to that question of, you know, the geons, which I never liked... I'll put it on the other side of the board.
+[3598.820 --> 3607.820] The answer to that is that it's really about attention: you attend to the parts, and as you attend to the parts, you are recomposing the object every time.
+[3607.820 --> 3613.820] If it's the exact same thing you've seen, you don't need to do that; the columns have it, they're done.
+[3613.820 --> 3620.820] But if it's different, I have to attend to the parts. And only by attending to the parts do I end up seeing the similarities of things.
+[3620.820 --> 3631.820] And sometimes I need to attend to several parts: oh, this part of this object is like a motorcycle, and these parts over here are like a flower. And it depends on which order you attend to them in,
+[3631.820 --> 3634.820] because some substructure will be similar to something else.
+[3634.820 --> 3651.820] I guess I'm saying that part of the answer, I think, is going to involve attention, the fact that we view things by sequential attention to features. I think that's going to be part of the answer to that question.
+[3651.820 --> 3676.820] Yeah. I'm going to get to that next.
+[3676.820 --> 3685.820] I need to start having my presentations ready a day early so that I can then focus on that question.
+[3685.820 --> 3694.820] No, but that's also... I mean, sure, that would be the decision. Yeah, it's good if you have opinions on it; that helps.
+[3694.820 --> 3703.820] I know what I would do next, but I can't say that's what Marcus wants to do next. I know you like the three-location thing; that's what you're thinking about.
+[3703.820 --> 3715.820] I always like to think about the neural mechanisms. I like those as extra constraints on the solutions to the problem.
+[3715.820 --> 3721.820] And so I start on the right here; you start on the left.
+[3721.820 --> 3728.820] This is also all in service of an eventual paper. Yeah. We still have that.
+[3728.820 --> 3745.820] It's just a matter of writing down some of the issues that Marcus raised and kind of leaving them up there. Yeah.
+[3745.820 --> 3760.820] We talked about how our intuition would really work. The next thing there is scale. If we've done one, two, and three, we've really done everything. But we want to skim the surface before diving.
+[3760.820 --> 3765.820] Well, that leaves this whole question... yeah, we're supposed to figure out the...
+[3765.820 --> 3775.820] So one option is we could figure out the neural circuit for orientation. But I don't think we could dive deep into scale; I don't think we even know the neural circuit for location.
+[3775.820 --> 3785.820] I'm beginning to doubt the entire interpretation we have, and I've said this multiple times too. So, okay, we know these grid cells aren't the whole solution. We get multiple grid cells...
+[3785.820 --> 3793.820] The idea was that the cells in here would be sampling from across the module, and the Tank paper and so on is giving evidence of that.
+[3793.820 --> 3798.820] I mean, I might. And we said there's something else going on.
+[3798.820 --> 3804.820] And I made the point that orientation, a single signal like that on its own, is insufficient; there had to be something else. And now we know there's something else, like this:
+[3804.820 --> 3811.820] the sphere, or the gravity vector, or whatever we want to call it.
+[3811.820 --> 3822.820] And I also think... I don't think we even understand location. I wouldn't say certainly; I'm just saying it's certainly an intriguing idea:
+[3822.820 --> 3834.820] the insufficiency of location, and the insufficiency of orientation. Well, the insufficiency of orientation you can already solve with the gravity vector, a directional vector of some sort.
+[3834.820 --> 3839.820] And that may be part of the solution for location as well.
+[3839.820 --> 3857.820] And this is consistent with the idea that these mini-columns, and the orientations that run through them, whatever they represent... maybe they're representing the gravity vector in some sense, and that's being applied to both the grid cell modules and the orientation.
+[3857.820 --> 3874.820] And so I don't think we understand this one anymore. I think the model we proposed, which is consistent with other people's, is a bit suspect in my mind at this point in time.
+[3874.820 --> 3888.820] So one thing... oh, going back. And as I always say: sometimes it gets... it always gets worse before the answer.
+[3888.820 --> 3899.820] So, one thing that I put into the paper (the columns-plus paper, locations in the neocortex) was something I thought was interesting and just worth including.
+[3899.820 --> 3917.820] So I made sure it was included: the observation that the model, when it has only one module, still works. Yeah.
+[3917.820 --> 3926.820] One module for location, so, like, the layer-four model with just one of these circuits: it still successfully narrows down to a single bump; the mechanism still works.
+[3926.820 --> 3932.820] However, it doesn't represent location uniquely. Yeah, so it doesn't represent the location.
+[3932.820 --> 3939.820] Well, the fact that it narrows is still interesting. It's now representing an ambiguous location on an object, but it has narrowed.
+[3939.820 --> 3947.820] Okay, but it's still not sufficient to... So what if every cortical column is representing something ambiguous, but each is still narrowing down? It's removing ambiguity.
+[3947.820 --> 3955.820] It's removing some ambiguity, but not all.
+[3955.820 --> 3967.820] The observation is that with only one module you can still do something useful: you're still narrowing. That's not to say you don't have multiple modules across cortical columns.
+[3967.820 --> 3980.820] Yeah. You've basically got a lot of columns, in which case no individual column actually knows where it is.
+[3980.820 --> 3988.820] So I think the individual column can never learn, can never be certain, where it is.
+[3988.820 --> 3997.820] Correct. So if I just touch something with my finger and move around it, it seems like I can map out that object.
+[3997.820 --> 4008.820] It could, if that configuration of features is unique with respect to this module.
+[4008.820 --> 4017.820] Brands, thanks for the follow.
+[4017.820 --> 4031.820] You know, if you see a particular feature above another feature, and you never, ever see feature one above feature two anywhere else, then you know it's this object.
+[4031.820 --> 4035.820] But I still don't represent my location uniquely.
+[4035.820 --> 4040.820] No, but you wouldn't need to, because nothing similar is there.
+[4040.820 --> 4052.820] Okay. But now I move my finger, and I know I have another ambiguous location, based on that.
+[4052.820 --> 4062.820] With an ambiguous location, I cannot predict the input. I might be able to, if I know the object and the ambiguous location; maybe, maybe not. But it's still ambiguous.
+[4062.820 --> 4071.820] And we don't really see that in those feedback connections, from layer 2/3 to layer 4. You'd have to invoke, like, layer 5. There are problems with that.
+[4071.820 --> 4078.820] It still seems like the mind, at the next location, will predict multiple sensory inputs.
+[4078.820 --> 4088.820] Yeah. But will it predict every sensory input? No, but it feels like... Well, it depends how much. It depends. Yeah.
+[4088.820 --> 4099.820] I mean, it does feel like... I can even consciously imagine the feeling of moving one of my fingers to a location, so that I can generate (I've talked about this) a non-silent prediction.
+[4099.820 --> 4105.820] There's an ability to say: this is the feature I'm really expecting there.
+[4105.820 --> 4122.820] We never resolved this issue, that there are silent predictions, like in the temporal memory, which I'm sure are going on; but then also conscious, aware predictions, where I think what you're predicting is the entire next state of the object.
+[4122.820 --> 4135.820] Just to answer one question: you said the capacity might fall a lot when you have only one module.
+[4135.820 --> 4151.820] It falls, but not by a lot. The capacity scales sub-linearly with the number of modules: the representational capacity scales exponentially, but the union capacity doesn't. (A toy calculation of this point appears below.)
+[4151.820 --> 4155.820] Thanks for the follow, Doug.
+[4155.820 --> 4176.820] The whole weird thing with the Tank paper (I forget the term they use) was the different bumps: there was encoding in the bumps themselves. So you'd have one module with maybe nine bumps in it, and that was a very pointed feature that they pointed out.
+[4176.820 --> 4186.820] Yeah, and I can point to one other paper. I'm always talking about Kate Jeffery papers for some reason.
+[4186.820 --> 4198.820] The funny thing is, this is yet another paper from a rat lab: a rat moving in 2D around a box, and some of its firing fields are much stronger than others.
+[4198.820 --> 4206.820] Firing fields of place cells, firing fields of grid cells: a single cell will fire much more reliably in some fields.
+[4206.820 --> 4216.820] So that's interesting: it's reliable. It's very beautiful. All right, so that again supports the same idea.
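[Editor's note] To put rough numbers on the capacity exchange above (a toy calculation of my own, not anything from the paper): suppose each module resolves location only up to one of a few phases. One module narrows but stays ambiguous, while the number of distinct joint codes grows exponentially with the number of modules:

    PHASES_PER_MODULE = 6  # e.g., six phase clusters per module (illustrative)

    def distinct_codes(n_modules):
        """Locations map to tuples of per-module phases; count the possible tuples."""
        return PHASES_PER_MODULE ** n_modules

    print(distinct_codes(1))   # 6 codes: one module stays ambiguous
    print(distinct_codes(10))  # 60466176 codes: the joint code is effectively unique

    # The single-module narrowing story: each sensed feature is consistent with
    # only some phases, so intersecting them still narrows to one shared phase.
    candidates = set(range(PHASES_PER_MODULE))
    for consistent in [{0, 2, 3}, {2, 3, 5}, {3, 4}]:
        candidates &= consistent
        print(candidates)      # {0, 2, 3} -> {2, 3} -> {3}: narrowed, still not unique in space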
+[4216.820 --> 4228.820] I appreciate that. That again goes with the idea that, okay, one grid cell module, which has four phase clusters, six phase clusters, whatever it is,
+[4228.820 --> 4246.820] has an additional coding scheme on top of that for the actual 3D; or the non-repeating representation could, in some sense, be a code that way.
+[4246.820 --> 4255.820] And the same thing could be seen with orientation cells.
+[4255.820 --> 4270.820] For example, here's an orientation cell that always represents this orientation in the box, but in different locations in the box it could have different strengths, or a different, you know, 3D-sphere thing, whatever you want to call that.
+[4270.820 --> 4283.820] In the model we're working on, it's the same cell that's active regardless of this third dimension, but the strength of it could vary depending on it.
+[4283.820 --> 4291.820] So imagine now I have six phase clusters, as the Tank lab showed. We'd have six ring attractors.
+[4291.820 --> 4305.820] And yet they all behave correctly, but some of them are more active than others at any point in time, which would be a similar type of coding scheme. (A toy sketch of this amplitude-on-top-of-phase idea appears below, after the hypercube aside.)
+[4305.820 --> 4321.820] I'd like to see a paper about that. I think that's a huge clue. That's what got me excited about the Tank work: when I saw that presentation, it was one of the things that jumped out at me. It's like, oh my God, a whole different coding scheme I hadn't even thought about.
+[4321.820 --> 4333.820] That's a big clue. Maybe I'll pick that paper and review it. How about that? Okay, that's the next thing to do. A good thing to do. All right, maybe I can do that on Wednesday.
+[4333.820 --> 4343.820] Okay, that's one thing to do. Thank you. That's good. And I think that's a great animation; I'm hoping we can do it.
+[4343.820 --> 4362.820] Great. Yeah, there was a video game, or an animation, where you're viewing a four-dimensional hypercube projected onto 3D, projected onto 2D.
+[4362.820 --> 4373.820] So basically on the screen you've got a scene, and you're able to navigate around this four-dimensional hypercube, and all you see is the projection. Yeah, the 3D wireframe.
+[4373.820 --> 4381.820] And apparently, as you move around it, it's completely confusing, because it's moving around in 4D and we're not.
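[Editor's note] A toy rendering of the coding scheme being mooted above: phase says where you are within the module's period, while per-bump firing strength carries an extra variable such as a third dimension. Entirely a sketch; the numbers and the gain rule are invented:

    import numpy as np

    N_PHASES = 6  # phase clusters in one module (illustrative)

    def module_code(phase, extra):
        """Phase picks which cluster is active; 'extra' (say, height along the
        gravity vector) modulates how strongly that same cluster fires."""
        rates = np.zeros(N_PHASES)
        rates[phase] = 1.0 + 0.5 * extra  # amplitude carries the extra variable
        return rates

    low  = module_code(phase=2, extra=0.0)
    high = module_code(phase=2, extra=1.0)
    print(np.argmax(low) == np.argmax(high))  # True: the same cell is the active one
    print(low.max(), high.max())              # 1.0 vs 1.5: the strengths differ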
+[4381.820 --> 4390.820] It's similar to what you're going to show here. But if you keep doing that for a while, apparently it just clicks, and you get it.
+[4390.820 --> 4397.820] And then suddenly you have a completely predictable map of this 4D hypercube.
+[4397.820 --> 4406.820] It just reminded me of what you're trying to show here. It's not a video game; it's like a demonstration or something.
+[4406.820 --> 4416.820] So you're navigating it freely. I always thought that would be fun to try, but I don't know how long it takes.
+[4416.820 --> 4421.820] In the beginning it's completely unpredictable, so I don't know how long it takes before it snaps.
+[4421.820 --> 4428.820] But it just reminded me of this. Thanks, you guys.
+[4428.820 --> 4435.820] Yeah, thank you. Would somebody take the [unclear] back? I'm going to log off. Yes, sure. Thanks.
diff --git a/transcript/allocentric_r0tWomRZMuA.txt b/transcript/allocentric_r0tWomRZMuA.txt new file mode 100644 index 0000000000000000000000000000000000000000..7d9f8c9cc23d7106df4b246ead7f915abf1e101e --- /dev/null +++ b/transcript/allocentric_r0tWomRZMuA.txt @@ -0,0 +1,177 @@
+[0.000 --> 6.960] Hey everyone and welcome to TopThink.
+[6.960 --> 12.920] Today we're going to learn about 8 ways to read someone's body language.
+[12.920 --> 14.920] Now let's begin.
+[14.920 --> 20.840] Number 1: Manipulating Clothing. Clothing sends a powerful message.
+[20.840 --> 25.360] Not because of the clothes you wear, but because of the way you use them.
+[25.360 --> 30.320] Most people express their body language by interacting with their clothing.
+[30.320 --> 35.400] You might notice someone fiddling with their scarf or messing with the buttons on their jacket.
+[35.400 --> 38.560] Both of these cues are types of grooming.
+[38.560 --> 46.280] Grooming is when you make small adjustments to your physical appearance, usually when you're feeling nervous, restless, or embarrassed.
+[46.280 --> 51.840] So if you catch someone fidgeting with their clothes, well, you know exactly how they're feeling.
+[51.840 --> 55.960] But grooming isn't the only way people manipulate their clothing.
+[55.960 --> 58.240] Many people use them as barriers.
+[58.240 --> 64.720] Yeah, they'll put their hands in their pockets or add on more items of clothing, like a jacket or a hat.
+[64.720 --> 68.040] These body language cues mean that they're putting up walls.
+[68.040 --> 73.360] They're using their clothing to shield their body and feel a sense of safety.
+[73.360 --> 77.480] If you notice these walls going up, then that means they're feeling uncomfortable.
+[77.480 --> 81.120] So give them a little more room to breathe.
+[81.120 --> 84.760] Number 2: Supporting Their Body.
+[84.760 --> 86.440] Take a look around any room.
+[86.440 --> 90.720] Pay attention to people's posture and the way they support their bodies.
+[90.720 --> 95.080] You'll often find people slumped against a wall or a piece of furniture.
+[95.080 --> 101.000] But few people even realize how much leaning tells you about someone's emotional state.
+[101.000 --> 109.760] When you let your body slouch, your muscles relax, your spine slumps forward, your blood even circulates a little bit slower.
+[109.760 --> 113.400] In other words, you're letting yourself relax for a reason.
+[113.400 --> 120.000] And that reason is usually one of two things: either you're really bored or you're really interested.
+[120.000 --> 124.000] So how can you tell the difference? It's all about direction.
+[124.000 --> 130.960] If they're leaning forward on their elbow, supporting their head as you talk, well, it's safe to say that you've got their attention.
+[130.960 --> 138.560] But if they're falling back into the wall, arms crossed over their chest, they're probably just bored.
+[138.560 --> 141.960] Number 3: Proximity Matters.
+[141.960 --> 146.800] Have you ever noticed how uncomfortable you feel when someone gets too close?
+[146.800 --> 154.920] Even if they're not actually touching you, it's all you can think about, because space is much more powerful than you realize.
+[154.920 --> 162.800] Edward T. Hall, a cultural anthropologist, was the first to recognize how important personal space can be.
+[162.800 --> 170.240] In his book, The Hidden Dimension, Hall explains that space carries many social and cultural meanings.
+[170.240 --> 177.240] It demonstrates closeness. It demonstrates trust and different levels of physical intimacy.
+[177.240 --> 180.720] Space even helps us organize our relationships.
+[180.720 --> 185.080] Depending on how close someone is standing, they fall into different categories.
+[185.080 --> 188.600] They might be a partner, a friend, or a complete stranger.
+[188.600 --> 197.040] Either way, those categories help you make sense of your relationships, set boundaries, and be vulnerable with the right people.
+[197.040 --> 204.000] That's why space, or proxemics, as Edward Hall calls it, is such a powerful form of body language.
+[204.000 --> 210.120] It gets left off most people's lists because there aren't any gestures or expressions involved.
+[210.120 --> 215.600] But if you think about it, proximity actually involves the entire body.
+[215.600 --> 222.880] You have to station yourself somewhere in space, so you drift toward areas of comfort, like a familiar face.
+[222.880 --> 229.760] By paying attention to proximity, you can uncover all kinds of emotions without saying a word.
+[229.760 --> 232.720] So how does proximity actually work?
+[232.720 --> 235.880] Well, Edward Hall breaks it down like this.
+[235.880 --> 245.480] He separates space into four zones: public space, social space, personal space, and intimate space.
+[245.480 --> 250.440] So let's imagine you're standing in a busy room, like in an airport or a department store.
+[250.440 --> 255.240] Now, draw a circle around yourself, leaving you at the very center.
+[255.240 --> 259.400] For now, let's give that circle a 25-foot radius.
+[259.400 --> 263.640] That's a pretty big circle, right? Well, this is your public zone.
+[263.640 --> 269.000] It's a free space where anyone can travel without making you feel threatened or uncomfortable.
+[269.000 --> 275.040] In general, when you don't know someone, you keep around 12 to 25 feet of distance between you.
+[275.040 --> 279.800] Now, below 12 feet is the social zone, a place for familiar faces.
+[279.800 --> 283.880] This is where you'll find acquaintances, classmates, and co-workers.
+[283.880 --> 287.480] People you know to some degree without being actual friends.
+[287.480 --> 291.680] The next step down, at four feet, is your personal space.
+[291.680 --> 294.280] This is where most people draw the line.
+[294.280 --> 297.560] Social and public spaces tend to get a bit mixed up.
+[297.560 --> 303.040] At the grocery store, for example, strangers will enter your social circle all the time.
+[303.040 --> 305.160] And there's nothing you can really do about it.
+[305.160 --> 310.120] But if they invade your personal space, things start to feel weird.
+[310.120 --> 316.040] Your personal space is reserved for your real friends: people you already know and trust.
+[316.040 --> 319.600] But there's still one more: intimate space.
+[319.600 --> 325.720] The only people allowed in this one-foot circle are partners, family, and close friends.
+[325.720 --> 330.040] Because in a one-foot circle, you're usually making physical contact.
+[330.040 --> 335.520] You've closed the space completely, which carries a whole lot of subconscious weight.
+[335.520 --> 340.160] So if you want to read someone's body language, pay attention to the space they keep. (These zones boil down to simple thresholds; see the sketch below.)
+[340.160 --> 343.240] Where do they stand? How do they introduce themselves?
+[343.240 --> 349.320] When you talk, do they keep their distance? Or do they get in close and make physical contact?
+[349.320 --> 356.880] All these signals tell you what someone is feeling, what kind of person they are, and what they think about you.
+[356.880 --> 360.200] Number 4: Gesture Clusters.
+[360.200 --> 364.200] When reading body language, you might search for one signal at a time.
+[364.200 --> 368.080] You watch their feet, and then their mouth, and then their eyes.
+[368.080 --> 371.640] And most of the time, you really don't discover much.
+[371.640 --> 375.040] That's because body language comes in clusters.
+[375.040 --> 380.880] People send out rapid-fire cues over a short period of time, and then they stop for a while.
+[380.880 --> 385.640] They'll get distant, they'll hold the same pose, or they'll keep their hands in their pockets.
+[385.640 --> 389.880] Then suddenly, they're sending out another jam-packed cluster of cues.
+[389.880 --> 401.440] So if you want to get an accurate read on someone, you need to look out for these clusters, because each one gives you an important window into their mood and their personality.
+[401.440 --> 405.000] Number 5: Open Palms.
+[405.000 --> 407.960] Everyone knows how expressive your hands can be, right?
+[407.960 --> 414.200] When it comes to nonverbal cues, your hands are far and away the loudest part of your body.
+[414.200 --> 420.040] They can show any kind of emotion: positive or negative, exaggerated or subtle.
+[420.040 --> 424.640] You throw them in the air after a big win, or you wave them around when you're excited.
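[Editor's note] Hall's four zones, as the video presents them, reduce to plain distance thresholds. A tiny classifier using the video's own numbers (1 ft intimate, 4 ft personal, 12 ft social, 12 to 25 ft public):

    def proxemic_zone(distance_ft):
        """Classify interpersonal distance into Edward T. Hall's four zones,
        using the thresholds quoted in this video."""
        if distance_ft <= 1:
            return "intimate"  # partners, family, close friends
        if distance_ft <= 4:
            return "personal"  # real friends
        if distance_ft <= 12:
            return "social"    # acquaintances, classmates, co-workers
        return "public"        # strangers, roughly 12 to 25 feet and beyond

    for d in (0.5, 3, 10, 20):
        print(d, proxemic_zone(d))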
+[424.640 --> 427.920] But your palms have a special meaning.
+[427.920 --> 434.120] Humans and many other animals use this part of the hand as a sign of non-threatening behavior.
+[434.120 --> 441.040] In other words, if someone wanted to fight, you might back up, open your arms, and show your palms.
+[441.040 --> 447.000] That kind of body language instantly tells the other person that you don't want to play ball.
+[447.000 --> 454.800] Since open palms display vulnerability, we use them to judge people's characters, or to find out whether someone is telling the truth.
+[454.800 --> 460.760] If someone widens their body and opens their hands, it shows you that they've got nothing to hide.
+[460.760 --> 465.160] Because they're willing to be open, you're much more likely to take their word.
+[465.160 --> 473.640] So if someone opens their palms while they're talking, that usually means they're being honest, or at least they want you to think they are.
+[473.640 --> 479.160] Number 6: The Closed Point.
+[479.160 --> 482.960] Every parent has told their kids it's not nice to point. But what's wrong with pointing?
+[482.960 --> 488.440] It's actually a primitive form of body language, and humans aren't the only ones who do it.
+[488.440 --> 494.840] If you go to the zoo, you'll see apes pointing at people, food, and other animals all the time.
+[494.840 --> 497.640] But what does pointing actually mean?
+[497.640 --> 504.920] By closing your fist and extending your index finger, you're establishing dominance; you're singling someone out.
+[504.920 --> 510.840] In social settings, that point removes them from the group, and it makes them feel left out.
+[510.840 --> 513.880] Your finger is commanding other people to look.
+[513.880 --> 518.240] It's throwing someone under the spotlight, whether they like it or not.
+[518.240 --> 524.800] So the next time you catch someone pointing, you'll know exactly what they're trying to do.
+[524.800 --> 526.480] Number 7: Extended Eye Contact.
+[526.480 --> 532.640] Eye contact is one of the first cues we look for in a person.
+[532.640 --> 536.840] If someone doesn't meet your eyes, well, there's a good chance something's wrong.
+[536.840 --> 540.360] They might be feeling embarrassed, anxious, or insecure.
+[540.360 --> 545.440] They might feel intimidated by you, so they're having trouble making eye contact.
+[545.440 --> 550.840] But not all eye contact is good, especially when it goes on for too long.
+[550.840 --> 555.720] When you first meet someone, you want to make about 5 seconds of eye contact.
+[555.720 --> 561.360] You get a good look at them. You smile, you introduce yourself, and then you glance at something else.
+[561.360 --> 569.840] You should keep this process going throughout the conversation, because too much eye contact is going to make people uncomfortable.
+[569.840 --> 577.120] That's because extended eye contact usually means someone is lying, or trying to get inside your head.
+[577.120 --> 579.880] So don't let that physical connection fool you.
+[579.880 --> 584.520] The right amount of eye contact is a sign of trust and confidence.
+[584.520 --> 589.600] But too much means that person may have a hidden agenda.
+[589.600 --> 592.640] Number 8: Touching Their Face.
+[592.640 --> 596.840] When you're feeling stressed or anxious, your face is a dead giveaway.
+[596.840 --> 602.240] It turns red. It gets itchy, and sometimes it even starts to hurt.
+[602.240 --> 604.920] Obviously that's not something you want.
+[604.920 --> 608.880] So you try to make the pain go away by soothing your nerves.
+[608.880 --> 612.840] Now, for most people, that means touching their face. A lot.
+[612.840 --> 618.480] They'll reach up to scratch their nose, brush their forehead, or just rub their cheek.
+[618.480 --> 625.520] Every one of these gestures means one thing: they're feeling nervous, and they definitely don't want you to know.
+[625.520 --> 631.560] If you spot these body language cues, the best thing you can do is to pretend not to notice.
+[631.560 --> 636.040] The chances are, that person is already feeling embarrassed or self-conscious.
+[636.040 --> 639.720] So try to lighten the mood; make them feel more comfortable.
+[639.720 --> 643.920] If they suddenly stop touching their face, well, it means you did your job.
+[643.920 --> 646.160] Hey, thank you for watching TopThink.
+[646.160 --> 650.160] And be sure to subscribe, because more incredible content is on the way.
diff --git a/transcript/allocentric_rdxNCeZLOGc.txt b/transcript/allocentric_rdxNCeZLOGc.txt new file mode 100644 index 0000000000000000000000000000000000000000..16b35cee37429ac972d6265b7d83f07b5792c98d --- /dev/null +++ b/transcript/allocentric_rdxNCeZLOGc.txt @@ -0,0 +1,157 @@
+[0.000 --> 15.000] So hi everyone, welcome to this new session. Today it's our great pleasure to have two speakers. The first speaker is Chang Liu.
+[16.000 --> 39.000] He was previously a postdoctoral fellow with Gabi Maimon and Larry Abbott (he will present that work) and is now at Stanford University.
+[40.000 --> 60.000] He will talk about how the fly's brain does math to compute the flight direction, but he will of course give much more detail about it. The second talk is from Logan Chariker, who did his PhD and postdoc at Courant with Lai-Sang Young, and who worked with Bob Shapley on networks of E and I neurons to model visual cortex.
+[60.000 --> 88.000] So today this is really very applied math, because both talks deal with real applications. It's my pleasure to have them today.
+[90.000 --> 113.000] Thank you for the invitation. I'll talk about how brains add vectors today; more explicitly, I'll try to talk about how brains perform the basic operations of vector computation: vector scaling, rotation, and addition.
+[113.000 --> 126.000] Although I study the fly, I keep thinking about how other brains, especially our own, could do it. So let's start by listening to the spike train of a head direction cell, a head direction cell in a rat.
+[126.000 --> 142.000] Do you hear the sound? Okay. No, we didn't hear the sound.
+[142.000 --> 158.000] Okay, given the limited time: you would hear a lot of spikes when the rat's head pointed to this upper-left direction.
+[158.000 --> 170.000] Trying to fix this might take a few minutes, but I don't have many videos in my talk.
+[170.000 --> 190.000] So here is the tuning curve of the cell whose recording was just shown. You can see this cell has a sharp tuning curve at this orientation, which corresponds to the upper left of the animal's environment.
+[190.000 --> 203.000] Head direction cells, since their discovery, have shown many additional features that remain unclear or that we only have models for.
+[203.000 --> 211.000] This is just to show that there is a population of head direction cells that together tile the entire angular space.
+[211.000 --> 225.000] But occasionally you can find cells, in the mouse brain or rat brain, that are tuned to a direction but with a much broader tuning curve.
+[225.000 --> 233.000] And sometimes this tuning curve fits well with a sinusoid, which is shown as this dashed line here.
+[233.000 --> 241.000] So it has been a curiosity what the function of the shape of this sinusoid-like tuning curve is. (A small sketch contrasting sharp and sinusoid-like tuning appears below.)
+[241.000 --> 255.000] More recently, actually early this year, a paper was published by Ruben Portugues's lab, where they recorded in the hindbrain of larval zebrafish.
+[255.000 --> 273.000] It also shows head-direction-cell-like activity, but the cells are localized in such a nice way that you can actually see an activity bump across these cells.
+[273.000 --> 286.000] And when the animal is moving, you can see that the activity bump moves, indicating the animal's heading is changing direction.
+[286.000 --> 296.000] If you look at the individual cells in this population (they project to another brain structure, the interpeduncular nucleus),
+[296.000 --> 308.000] interestingly, the dendrites and the axons are largely separated, by almost half the extent of this structure.
+[308.000 --> 316.000] So what is the function of this large separation between the dendrites and axons of head direction cells?
+[316.000 --> 335.000] And last but not least, it's been known that head direction cells, if you record them, more often than not show conjunctive tuning to other spatial information, such as the animal's location, the animal's speed, and so on.
+[335.000 --> 341.000] So what is the function of this pervasive conjunctive tuning in head direction cells?
+[341.000 --> 347.000] So that's from the mouse and the rat.
+[347.000 --> 358.000] The purpose of studying flies is also to add insight onto these question marks, although there are many models already trying to characterize the function of these features.
+[358.000 --> 368.000] Hopefully after today I can provide some experimental data from flies that speaks to these observations.
+[368.000 --> 379.000] Okay, so flies also have head direction cells. This was first shown by Johannes Seelig and Vivek Jayaraman in 2015, almost eight years ago now.
+[379.000 --> 385.000] I think the audio is now working. Here's a virtual trajectory of the fly.
+[385.000 --> 400.000] The cell responds strongly when the fly's heading is toward the upper-left direction, and the tuning curve is very similar to the one you saw from the rat.
+[401.000 --> 408.000] So this is one cell. Moving on to the anatomy of these neurons:
+[408.000 --> 418.000] the reason this is a virtual, cartoon-like trajectory is that, so far, we can't put an electrode on a freely moving fly.
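[Editor's note] The contrast being drawn (a sharp head-direction tuning curve versus a broad one well fit by a sinusoid) is easy to reproduce numerically. A sketch with invented parameters: a von Mises bump stands in for the sharp curve, a rectified cosine for the broad one:

    import numpy as np

    angles = np.linspace(-np.pi, np.pi, 3601)
    preferred = np.deg2rad(45)  # some preferred direction (kept away from the wrap)

    sharp = np.exp(8.0 * (np.cos(angles - preferred) - 1.0))  # narrow von Mises bump
    broad = np.maximum(0.0, np.cos(angles - preferred))       # rectified-cosine tuning

    def fwhm_deg(curve):
        """Full width at half maximum, in degrees."""
        above = angles[curve >= 0.5 * curve.max()]
        return np.rad2deg(above.max() - above.min())

    print(fwhm_deg(sharp))  # roughly 48 degrees: the classic sharp HD cell
    print(fwhm_deg(broad))  # 120 degrees: the broad, sinusoid-like cell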
+[418.000 --> 437.000] The way we do it, as I mentioned earlier, is that we tether the fly to a plate and put the fly on an air-cushioned ball, like here, so she can walk around; and we hook up this bright bar in closed loop with the rotation of the ball. (A sketch of this closed-loop logic appears below.)
+[437.000 --> 447.000] The bar is meant to mimic a landmark at infinity for the fly, to simulate the heading direction the fly would have in reality.
+[447.000 --> 457.000] So we can record neural activity in walking flies, and we also do similar recordings in flying flies, also tethered.
+[458.000 --> 472.000] This is one of the videos we collected when recording from flying flies; we can also present visual information or other stimuli to the fly.
+[472.000 --> 477.000] This is the stimulus I'll use quite a lot in this talk.
+[477.000 --> 505.000] Back to the neurons. The recordings shown on the left are actually from one of these blue neurons, called EPG. Together they tile the ellipsoid body (this donut-like structure) and the protocerebral bridge (this handlebar-like structure); they are located in the center of the fly brain.
+[505.000 --> 534.000] That's the population of these neurons, but each individual one innervates only one wedge of the ellipsoid body and sends its axonal output to one glomerulus of the protocerebral bridge. If you record from this neuron, you get the tuning curve shown previously.
+[534.000 --> 547.000] Neighboring neurons have a distinctive anatomical pattern: they go to the right bridge, then left, right, left, through this alternating pattern.
+[547.000 --> 568.000] So if these neurons are active and you image all the neural activity of the EPG cells, you will see an activity bump, similar to what was shown before in the zebrafish brain; you see three bumps here, one in the ellipsoid body and two others, in the bridge, from the same neural population.
+[568.000 --> 589.000] And if all the neurons take turns being active, as when the fly is turning around in space, what you see under the microscope is this bump rotating in a way that indicates the fly's heading direction.
+[589.000 --> 606.000] So, back to the title of this talk, how do brains add vectors: heading is one key component of the vector computation. Another key component is traveling direction.
+[607.000 --> 631.000] It's not trivial to distinguish heading and traveling direction. For example, we humans can walk in one direction while looking in a different direction. And this is an ant going home backwards; studies of ant homing have shown that sometimes, when the food is super heavy, they walk home backwards.
+[631.000 --> 652.000] There is plenty of behavioral evidence indicating that throughout this process the ants do have a sense of their traveling direction, which is different from their heading; the two are about 180 degrees apart in this case. And in flying animals, here the males chase the females and actually keep their gaze on the female while their bodies travel to the side.
+[652.000 --> 664.000] So being able to distinguish heading and traveling direction is very important in spatial navigation.
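[Editor's note] The closed-loop tethering just described (the ball's yaw rotation drives the bar so it behaves like a landmark at infinity) comes down to a one-line update. A sketch; the variable names and the sign convention are my own paraphrase of the setup:

    heading_deg = 0.0  # the fly's accumulated virtual heading

    def on_ball_yaw(d_yaw_deg):
        """Closed loop: when the fly turns by d_yaw, a landmark at infinity must
        sweep by the same angle in the opposite direction on the arena."""
        global heading_deg
        heading_deg = (heading_deg + d_yaw_deg) % 360.0
        bar_azimuth = (-heading_deg) % 360.0  # where the bright bar is drawn
        return bar_azimuth

    for turn in (10.0, 10.0, -30.0):
        print(on_ball_yaw(turn))  # 350, 340, 10: the bar counter-rotates with the fly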
+[665.000 --> 681.000] But so far... well, until maybe one or two years ago, only head direction cells had been shown, and they have been shown in almost all animals; many different animal species have head direction cells.
+[682.000 --> 708.000] Traveling direction cells had not been shown until about a year ago. Since the focus of this talk is how the brain adds vectors, I'll show you the evidence, what we think is evidence, for this traveling direction signal, and focus on how the fly brain builds it.
+[708.000 --> 733.000] It's located in the fan-shaped body, in this middle layer of the fan-shaped body, in at least two groups of neurons: one is called hDeltaB, one is called PFR. I'll only talk about hDeltaB in this talk. They also have a bump of activity moving left and right along the fan-shaped body. So these three structures are closely located, and they all belong to the same
+[733.000 --> 738.000] structure in the fly brain called the central complex.
+[738.000 --> 760.000] Intuitively, it should be very straightforward: if we record this neural activity and simultaneously record the traveling direction in, for example, a walking fly, we should be able to correlate the two signals and show whether it's a heading or a traveling direction signal.
+[760.000 --> 785.000] But unfortunately, the traveling direction of a walking fly changes very fast; it jitters a little to the left and, within maybe under a hundred milliseconds, the traveling direction changes dramatically, and the GCaMP indicators that we use, or that exist, do not respond that fast.
+[785.000 --> 811.000] To get things working, we switched to recording from flying flies. But the problem with flying flies is that, being tethered, they're not actually traveling anywhere, in any direction. So the trick we use is not to record the actual movement of the fly, but to use VR to show the fly which direction it's traveling, kind of like the virtual reality we use in humans.
+[811.000 --> 830.000] The way we do it is to give the fly optic flow: these dots on the lower part of the arena surrounding the fly.
+[830.000 --> 847.000] When we move these dots collectively in this manner, it simulates what you experience when you're moving forward; and when you're moving forward, your heading direction and traveling direction are aligned. This is what we observed in these two groups of neurons.
+[847.000 --> 860.000] We had some opportunities to record activity from these two groups of neurons. The x-axis is time, and between the two dashed lines are the moments when this optic flow is on.
+[860.000 --> 888.000] The y-axis is the unwrapped, linearized brain structure. The top row is EPG, which indicates heading direction; here the fly's heading direction sits at the edge of the y-axis. The traveling direction, from hDeltaB: this bump also goes to the edge. We can estimate the phase from these two
+[888.000 --> 917.000] patterns, and they show that when the stimulus is on, both phases align, indicating heading and traveling direction are aligned; across a population of 13 flies this also holds. So that's just one direction; let's test other directions. Another is to simulate the fly going backward, and when we displayed this movie, what we saw is that now...
+[918.000 --> 940.000] ...the heading direction keeps changing, but that's why we record heading simultaneously: the heading bump goes to the edge, while the traveling direction bump sits at the center, and they are on average 180 degrees apart from each other. We also did this recording in other
+[940.000 --> 957.000] directions, four more, so six directions in total, and you can see the separation of these two bumps changes in a graded manner; on average they form a very nearly linear correlation with the simulated egocentric traveling direction.
+[957.000 --> 980.000] "Egocentric" here means the traveling direction is with reference to the fly's body, like going front or back. There's another word, "allocentric": traveling direction with reference to the world, like going south, north, west or east. We'll need to distinguish these two for the rest of the talk.
+[981.000 --> 1008.000] To summarize this first part: there are these two bumps in the fly brain. The blue bump tracks the fly's heading direction; the pink bump tracks the fly's traveling direction. So when the fly is flying around, if it moves forward, both bumps are aligned; and when the fly is blown backwards, the traveling bump flips 180 degrees, whereas the blue bump remains still. Here is another example.
+[1008.000 --> 1037.000] We showed this is the case in flying flies, and we also have a lot of data showing it is qualitatively, or semi-quantitatively, true in walking flies. So we think this pink bump represents, hopefully,
+[1038.000 --> 1045.000] the abstract traveling direction of the animal, no matter whether it's flying or walking.
+[1046.000 --> 1065.000] So how does the fly brain build this allocentric traveling direction signal, given that nothing tells it directly which way is south or north or west? How does the fly brain build this allocentric direction?
+[1065.000 --> 1091.000] I think this process can be summarized as a vector computation. First let me go through the algorithm we think is going on, and why this is a vector computation. Here are two angles: the same abstract angle, but referenced to different
+[1091.000 --> 1120.000] frames. The one on the left is egocentric; the one on the right is allocentric. The difference between these two angles is the fly's heading direction: you add egocentric traveling to heading and you get allocentric traveling. So the one we want is the allocentric traveling direction, but the egocentric traveling direction is seemingly easier to compute. There are actually many
+[1121.000 --> 1125.000] recordings of neural activity in insects,
+[1125.000 --> 1148.000] previously from bumblebees, and also shown in this work, of neurons whose activity correlates nicely with the length of this egocentric traveling vector projected onto one of these 45-degree axes. That sounds very arbitrary, but,
+[1149.000 --> 1172.000] in short, we have neural activity that correlates with this length, which is a projection of the traveling direction; and we have different neural activities, from different populations, that correlate with different lengths: the projections of this egocentric traveling direction onto four different axes.
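[Editor's note] The arithmetic in this step is compact. A sketch of the identity (allocentric travel equals egocentric travel plus heading) and of projecting the egocentric travel vector onto four 45-degree-offset axes; the axis angles follow the description, the code itself is illustrative:

    import numpy as np

    def allocentric_travel_deg(ego_travel_deg, heading_deg):
        """Allocentric traveling direction is the egocentric one rotated by heading."""
        return (ego_travel_deg + heading_deg) % 360.0

    AXES_DEG = np.array([45.0, -45.0, 135.0, -135.0])  # four body-referenced axes

    def axis_projections(ego_travel_deg, speed=1.0):
        """Signed projection of the egocentric travel vector onto each axis."""
        return speed * np.cos(np.deg2rad(ego_travel_deg - AXES_DEG))

    print(allocentric_travel_deg(180.0, 90.0))  # traveling backward while heading 90: 270
    print(axis_projections(0.0))                # forward: front axes +0.707, rear axes -0.707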
+[1172.000 --> 1196.000] So one way the fly can reconstruct this green arrow is to use those neural activities, these gray arrows, and sum them up; then you get the green arrow. This is the step we think of as vector addition. Once you have this green arrow, you just rotate it by the heading amount, and then you've got
+[1196.000 --> 1204.000] the allocentric traveling direction. This is what we'd call a coordinate transformation: you can switch from egocentric to allocentric.
+[1204.000 --> 1224.000] That's done in two steps; what's actually done in the fly brain is that these two steps are combined into one. The reason this green arrow is in the egocentric reference frame is that the four axes used to compute the component vectors are referenced to the body.
+[1224.000 --> 1250.000] If these axes are instead referenced directly to the allocentric world, then the summed vector automatically becomes referenced to the allocentric world. This is what we think is going on in the fly brain, and throughout the rest of the talk I'll present data that we think reflects this process.
+[1250.000 --> 1258.000] So, first of all, how would neurons represent vectors?
+[1258.000 --> 1279.000] I think the trick being used is a Fourier-like representation of the vector. Every two-dimensional vector has a length and an angle, but you can also represent it in a sinusoidal way, where the angle of the vector equals
+[1279.000 --> 1291.000] the phase of the sinusoid, and the length of the vector equals the amplitude; the amplitude is very important. And if you have another vector,
+[1291.000 --> 1300.000] you just have another sinusoid; and if you want to add the two vectors together, you just add the two sinusoids together.
+[1300.000 --> 1313.000] Since these two sinusoids are of the same frequency, the sum is also a sinusoid of the same frequency.
+[1313.000 --> 1342.000] This is a mathematical tool, and it's an idea that has been put forward in many theoretical models, used to understand or explain many spatial navigation tasks. One thing I forgot to mention is that for neurons it's actually much easier to represent
+[1342.000 --> 1368.000] the thing on the right than the thing on the left, because to represent one sinusoid you can use one group of neurons, say the one drawn at the bottom, where each subgroup of these neurons represents one bin of the sinusoid, and the height of each bin can be represented by the activity of that subgroup of neurons.
+[1368.000 --> 1383.000] Together, this group of neurons, with their activity, gives a one-to-one mapping to this particular sinusoid; and you can have different groups of neurons representing different
+[1383.000 --> 1395.000] sinusoids, and adding them together you can achieve vector computation in the fly brain. That, I think, is the take-home message if you don't remember anything else I say from now on. (A numerical check of this trick appears below.)
+[1395.000 --> 1412.000] Okay, now the biological evidence for it. First, we need more copies of the heading direction, more copies of the sinusoid. There are theoretical and computational papers on this, including one
+[1412.000 --> 1424.000] from [names unclear], and there is also experimental data in press.
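[Editor's note] Backing up to the take-home trick itself: representing a vector as a sinusoid across a ring of bins (phase = angle, amplitude = length) really does turn vector addition into bin-by-bin addition of activity. A numerical check; the 16-bin ring and the test vectors are arbitrary choices of mine:

    import numpy as np

    N_BINS = 16
    bin_angles = np.linspace(0.0, 2.0 * np.pi, N_BINS, endpoint=False)

    def encode(length, angle_rad):
        """Vector -> sinusoidal population activity: phase = angle, amplitude = length."""
        return length * np.cos(bin_angles - angle_rad)

    def decode(activity):
        """Recover (length, angle) from the population's first Fourier component."""
        c = activity @ np.exp(1j * bin_angles) / (N_BINS / 2)
        return abs(c), np.angle(c)

    a = encode(1.0, np.deg2rad(30.0))
    b = encode(2.0, np.deg2rad(120.0))
    length, angle = decode(a + b)  # add the two sinusoids bin by bin

    # Compare against ordinary 2D vector addition.
    vec = (np.array([np.cos(np.deg2rad(30.0)), np.sin(np.deg2rad(30.0))])
           + 2.0 * np.array([np.cos(np.deg2rad(120.0)), np.sin(np.deg2rad(120.0))]))
    print(length, np.hypot(*vec))                                     # same length
    print(np.rad2deg(angle), np.rad2deg(np.arctan2(vec[1], vec[0])))  # same angle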
+[1424.000 --> 1441.000] Okay, so: more copies of these heading bumps. One place to look for them is the output region of the EPG cells, the heading direction cells in the fly brain that we talked about earlier.
+[1441.000 --> 1466.000] In the ellipsoid body are their dendrites, and the bridge carries their output; they have two outputs, one on the left and one on the right, otherwise the same. If we look at the output of the EPG cells, we find two other groups of neurons that both innervate the left and right bridge and send their output to the
+[1466.000 --> 1480.000] fan-shaped body. These two are called PFNd and PFNv cells. It's a lot of names, but I'll always color-code them to make things easier.
+[1480.000 --> 1496.000] When we simultaneously record neural activity from EPG cells and PFN cells, for example in the bridge, we see two bumps of activity in the EPG cells and also two bumps of activity in the PFN cells,
+[1496.000 --> 1512.000] moving along the bridge synchronously, with overlapping bump positions, suggesting these cells are more or less receiving the EPG cells' input.
+[1512.000 --> 1541.000] So now we have four additional bumps that all track the heading direction of the fly, just as the data from EPG cells does. But compare the shapes of these activities: EPG cells have a very narrow tuning curve. This is consistent with the data we've
+[1542.000 --> 1567.000] got from single EPG cells in electrophysiology recordings, where when the heading direction of the fly changes a little, the EPG cell loses its activity very sharply away from its preferred direction; and here, imaging them all together, you can see the shape is also very narrow.
+[1568.000 --> 1587.000] Compared to this narrowly distributed shape, the PFN cells have a much broader, wider shape; and if we try to fit this shape to a sinusoid, it actually fits quite nicely.
+[1587.000 --> 1616.000] That's at least one piece of evidence for these orange cells, and the same is true for the brown bumps as well. So now we have four wider bumps with a sinusoid-like shape and two narrow bumps in the EPG cells, six bumps in all, that track the fly's heading direction as the fly moves around.
+[1617.000 --> 1636.000] Now we have actually met two criteria for vector computation: one, we have more components, more heading bumps; and second, these bumps have sinusoidal shape.
+[1637.000 --> 1655.000] Okay, a third requirement for vector computation is the following. These four bumps all have their phases pointing to the heading direction of the fly, so if you put them together, they all point to the same direction.
+[1655.000 --> 1671.000] This is what they look like in a phasor diagram: in that space, the vectors they represent all point in the same direction. What we need is for these vectors to point in different directions.
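[Editor's note] This requirement can be checked directly: four sinusoids locked to the same phase can only ever sum to a bump at the heading itself, whereas once their phases are separated, reweighting them moves the summed bump anywhere. A sketch; the bin count and amplitudes are illustrative, and the shift values anticipate the plus/minus 45- and 135-degree anatomy described next:

    import numpy as np

    theta = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
    heading = np.deg2rad(90.0)

    def bump(amp, phase):
        return amp * np.cos(theta - phase)

    def bump_phase_deg(activity):
        return np.rad2deg(np.angle(activity @ np.exp(1j * theta)))

    # Four bumps all in phase (as in the bridge): the sum is stuck at the heading.
    in_phase = sum(bump(a, heading) for a in (1.5, 0.2, 0.1, 0.9))
    print(bump_phase_deg(in_phase))   # 90.0, no matter what the amplitudes are

    # Same amplitudes, but phases separated by +/-45 and +/-135 degrees:
    shifts = np.deg2rad([45.0, -45.0, 135.0, -135.0])
    shifted = sum(bump(a, heading + s) for a, s in zip((1.5, 0.2, 0.1, 0.9), shifts))
    print(bump_phase_deg(shifted))    # now the phase depends on the amplitude mix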
+[1671.000 --> 1700.000] This in turn requires these bumps to have their peaks, their phases, separated. That's the third requirement: we need phase shifts between these different sinusoidal bumps. And this can be achieved through a particular anatomy of these neurons, as they project from their dendritic area towards their axonal output area,
+[1701.000 --> 1703.000] from the bridge to the fan-shaped body.
+[1704.000 --> 1730.000] Let's focus on one pair of cells: these two PFN cells that innervate the center of each side of the bridge. When they propagate to the fan-shaped body, the one from the right bridge shifts a little to the left, and the one from the left bridge shifts a little to the right.
+[1731.000 --> 1759.000] If you say the whole layer is like 360 degrees, because the bump sweeps through the whole layer when the fly changes direction by 360 degrees, then we can quantify the amount of this anatomical shift, and it's roughly one-eighth of the width of the whole layer, which corresponds to 45 degrees, plus or minus.
+[1759.000 --> 1786.000] I just want to highlight the work of Tanya Wolff and Gerry Rubin from 2014, almost 10 years ago now, who used light microscopy to sparsely label these neurons, and they reached this conclusion: that the PFN neurons, when they project from the bridge to the fan-shaped body, have some anatomical shift.
+[1786.000 --> 1808.000] More recently, from the hemibrain electron microscopy dataset, we can quantify this more accurately; in fact I'll show the quantification results later. But this is an EM picture reconstructing the PFN neurons:
+[1808.000 --> 1830.000] from the bridge to the fan-shaped body, you can see the shift over to the right. So let's say this is 45 degrees, plus or minus. But that's not the end of the story. This means the sinusoids, the heading bumps, when they propagate from the bridge to the fan-shaped body, to the PFNv neurons, inherit this anatomical shift of 45 degrees.
+[1830.000 --> 1839.000] But they need to actually travel to the axonal output of the hDeltaB neurons, which is where we see
+[1839.000 --> 1868.000] the traveling direction. To travel from the dendritic field to the axonal output of the hDeltaB neurons, there is another half-layer offset, which corresponds to a 180-degree shift. So from the bridge to the tips of the hDeltaB neurons, the PFNv sinusoids actually have a combined minus 135-degree shift.
+[1869.000 --> 1889.000] That's from the left bridge of the PFNv cells; from the right bridge, the resulting shift is positive 135 degrees. For PFNd cells things are very similar: they project to the fan-shaped body with this 45-degree shift and connect to the hDeltaB neurons.
+[1889.000 --> 1909.000] But while they do connect to the dendrites of the hDeltaB neurons, they connect much more intensively directly to the output of the hDeltaB neurons.
+[1909.000 --> 1928.000] Connecting to the output versus the dendrites of these neurons has the opposite effect, because the two are 180 degrees apart. So it's kind of winner-takes-all, and for simplicity we can just think of it that way.
+[1928.000 --> 1951.000] So in this case there is no additional 180-degree shift: the PFNd cells from the right bridge have a 45-degree shift in total, and those from the left bridge have a minus 45-degree shift in total.
+[1951.000 --> 1968.000] This parallels work published by the Rachel Wilson lab, where they show similar ideas about how these anatomical shifts play a role in vector computation.
+[1968.000 --> 1994.000] Okay, so in sum: these four sinusoids, whose phases are aligned in the bridge, all tracking the EPG heading direction of the fly as shown on the right, each inherit a different shift when they propagate to the fan-shaped body, to the tips of the hDeltaB neurons. The left-bridge PFNd cells have a minus
+[1994.000 --> 2008.000] 45-degree shift, the right bridge plus 45 degrees; and for the PFNv cells, plus 135 and minus 135.
+[2008.000 --> 2011.000] So this is how we think
+[2011.000 --> 2037.000] these neurons achieve their phase shifts. And this is just to show, for people working on the fly, where the EM data is quite accurate, that using the synapses detected in this EM data between the different groups of neurons, we calculated the anatomical shifts, and they turn out to be
+[2037.000 --> 2043.000] very close to four perpendicular, orthogonal axes.
+[2043.000 --> 2060.000] So now we have four sinusoids pointing in the right directions in the fan-shaped body. The remaining step, the last step, is for these sinusoids,
+[2060.000 --> 2077.000] for the vectors they represent, to have different lengths; and not just changing lengths, but lengths changing in a very specific manner: when the fly changes traveling direction,
+[2077.000 --> 2090.000] each vector should change its length in a sinusoidal way, because it's a projection relationship.
+[2090.000 --> 2098.000] This in turn requires these sinusoids to change their amplitude.
+[2098.000 --> 2111.000] So that's what we tried to observe in experiments: whether these four sinusoids change their amplitude in a sinusoidal way. There are two sinusoids at play here.
+[2111.000 --> 2119.000] We imaged the activity of these neurons in the bridge; that's where we can separate them.
+[2119.000 --> 2147.000] When we simulate the fly going backwards, the PFNd cells have a very small bump amplitude, whereas the PFNv cells have a very large bump; and in the same flies, when we change the simulated traveling direction to the front, you can see the PFNd bump increase its amplitude and the PFNv bump decrease its amplitude.
+[2147.000 --> 2162.000] This holds for the different simulated traveling directions, and the bumps can become asymmetric: the left and right bumps actually have different amplitudes.
+[2162.000 --> 2187.000] So we can quantify the amplitude of each sinusoid in this picture, and what we get, for example, for the left-bridge PFNd cells, is a mean tuning curve that's close to a fitted sinusoid, with the peak of this fitted sinusoid pointing to the upper left.
+[2187.000 --> 2215.000] So this up-left is actually their maximum response direction, and there's another up-left for these left-bridge PFNd cells, which is their anatomical projection shift. The fact that these two angles match each other means that if the PFNd cells from the left bridge are advocating for this up-left
+[2215.000 --> 2235.000] direction, they also respond most strongly to this upper-left direction. This is also a prediction of the model, and the same is true for the right-bridge PFNd cells and the left- and right-bridge PFNv cells, where you can see their maximum responses to the front, to the back-left, and to the back-right.
+[2235.000 --> 2250.000] Just as a control, the EPG cells that signal the heading direction of the fly do not change their amplitude when we only change the traveling direction of the fly.
+[2250.000 --> 2279.000] Okay, so here we have all the components, and let me summarize this in a video; the video has no sound. So here's the fly navigating in space. For simplicity, in this cartoon we show the fly's head always pointing up, so the heading direction of the fly never changes,
+[2280.000 --> 2293.000] but it is constantly changing its traveling direction, represented by this red bump sweeping through the fan-shaped body. So how is this red bump calculated?
+[2293.000 --> 2303.000] First of all, the traveling direction is not projected onto the front-back, left-right axes which we humans are used to;
+[2303.000 --> 2310.000] it's actually projected onto these four other axes, the four diagonal axes.
+[2310.000 --> 2320.000] The reason I say this is because of some data I didn't show in this talk, for the sake of time: we have these four different groups of neurons
+[2320.000 --> 2329.000] whose activity correlates with the length of the projection onto each axis.
+[2330.000 --> 2339.000] And they are actually anti-correlated, so there are some quite interesting questions about why this regulation needs
+[2339.000 --> 2347.000] to be done in a disinhibitory manner, but nevertheless we have
+[2347.000 --> 2366.000] the traveling direction of the fly projected onto these four different axes. Now, these neurons only signal a one-dimensional variable, not a vector. The step where they become vectors is where they combine
+[2366.000 --> 2380.000] with the EPG heading bump, via another set of neurons which I didn't talk about, and they become sinusoidally shaped bumps in these PFN
+[2380.000 --> 2388.000] cells. There are four bumps across these PFNd and PFNv cells, and collectively
+[2388.000 --> 2402.000] they signal the fly's heading direction by the phase of the bump, and signal the fly's speed along each of these four axes by the amplitude of the sinusoid.
+[2402.000 --> 2410.000] And this is the step where vector scaling is achieved in the fly brain.
+[2410.000 --> 2425.000] Now, these bumps all point in the same direction. The step where they achieve different shifts is through this anatomical projection from the bridge to the fan-shaped body, and this is the step where
+[2425.000 --> 2437.000] vector rotation is implemented. So here in the fan-shaped body, these four sinusoids are all
+[2437.000 --> 2453.000] summed; they are all summed by the same group of neurons, which are these neurons called hΔB neurons, and the sum is also a sinusoid, with its peak at this red dot, which is the fly's traveling direction. And this is the step where
+[2453.000 --> 2481.000] vector summation is achieved. And so, on to the last slide: we go back to this data from the vertebrates. Now we have seen similar activity of neurons compared to this data from vertebrates, so what are their functions? I'm not saying these are the functions of those features, but just to give some insight.
+[2481.000 --> 2491.000] The broad tuning curves of the heading direction cells could be used to explicitly represent 2D vectors.
+[2491.000 --> 2507.000] And this anatomical shift between the dendrites and the axons, shown in the vertebrate data with one degree of shift as an example; in flies there are actually many different levels of shift, and I think the function of this could be
+[2507.000 --> 2534.000] a pathway to rotate the 2D vector. And lastly, this conjunctive tuning of heading direction with other signals could be used as the coordinate transformation to transform the other signal into the reference frame in which the heading direction lives.
+[2534.000 --> 2561.000] With that, I want to end my talk and thank my PhD mentor Gaby Maimon and my collaborator and also my co-mentor Larry Abbott from Columbia, and I want to thank everyone from the Maimon lab, where we worked together to crack this problem. Here are the flies we used for this project, and here is the funding.
+[2561.000 --> 2563.000] Thank you. That's it. diff --git a/transcript/allocentric_uZ4qC2SltXA.txt b/transcript/allocentric_uZ4qC2SltXA.txt new file mode 100644 index 0000000000000000000000000000000000000000..6d269caf30b74237466efef22bf60c96659d74af --- /dev/null +++ b/transcript/allocentric_uZ4qC2SltXA.txt @@ -0,0 +1,592 @@ +[0.000 --> 16.000] This is kind of informal: me copying and pasting a couple of figures from the previous presentation, the one on Monday, and giving them a little more context, doing a little bit of review, focusing on the things that seemed to be confusing.
+[16.000 --> 32.000] So, just to review: the one thing I did that may have seemed a little weird, a leap I made, was that I chose to simulate these biological rings using these pairs of units.
+[32.000 --> 61.000] And my philosophy there is that artificial units in neural networks are already not neurons; they're already non-biological. If you draw an analogy between artificial neural networks and biological neural networks, it's unlikely that the right connection is that a unit is a neuron; if there is a similarity between them, that's not the right abstraction level.
+[61.000 --> 72.000] And so I just decided to embrace that and do something even more extreme, and model an entire ring of neurons using a pair of activations that can be different.
+[72.000 --> 80.000] So let me ask a couple of quick things right away. First of all, the pair of neurons:
+[80.000 --> 88.000] someone has to read out from that pair of neurons, right? I mean, each of A and B alone is not the ring;
+[88.000 --> 98.000] only when you look at them together do you get a behavior that's somewhat similar to the ring, right? Is that correct?
+[98.000 --> 103.000] But to really simulate the ring...
+[103.000 --> 126.000] I mean, it's hard to imagine how a neuron could read out something, because I'm trying to understand it. When I think about a ring, I think: oh, there's a set of neurons that are actively firing in succession; the ring is a bunch of neurons, and they're taking turns.
+[126.000 --> 141.000] I guess I'm struggling. Okay, yeah, I can see how A and B, combined mathematically in some sense, would, if you could read them out together, lead to a result, but no neuron could read out that result from A and B, right?
+[141.000 --> 151.000] There isn't an easy answer to this, and in the artificial world you can do it with a pair of weights, if the weights are allowed to be positive or negative.
+[151.000 --> 168.000] A single neuron reading with those weights would only represent one point on the ring, right? So if I have a set of neurons, each one reading out A and B in a different way... I don't have a ring until I have a set of neurons reading out A and B.
+[168.000 --> 187.000] I'm saying that for a single readout neuron, I could put another C unit up here, and I could have that C activate only when the activation is here on the ring, by giving C the right weights. Yeah, okay. I'm not seeing your cursor. Oh, the
+[187.000 --> 200.000] Zoom is acting up here. I'm going to switch to... oh, I'm sorry, I didn't mean to stop sharing, but I'm going to switch to showing... I think we're in agreement; I just want to be clear I understand.
+[200.000 --> 216.000] Now you can probably see my cursor, and I'm just going to show it. Yeah, okay. So I was saying a C unit could read these out, and here I'm in artificial-neural-network land. You could read out any point on this...
+[216.000 --> 241.000] any point on this ring, by choosing different pairs of weights on A and B. Yeah. All right, so to achieve the same thing as a ring attractor, which in this case has, you know, 10 neurons, I have to have 10 neurons reading out from A and B, right? That would be one way to do that. Yes. Okay, so I'm just pointing out that A and B are sort of the...
+[241.000 --> 256.000] So, if you take A and B, you can create a ring, or a ring attractor, by having a bunch of neurons attached to A and B, but on their own A and B are not a ring attractor; on top of A and B you could create a ring attractor. Is that right?
+[256.000 --> 267.000] Okay, so just saying I have A and B doesn't mean I have a ring attractor yet; I still have to have a set of neurons that are going to read from A and B.
+[267.000 --> 277.000] And then the next question: when we talk about the ring there, the simple assumption is that it's a 1D grid cell module. Is that how you're thinking of it?
+[277.000 --> 296.000] Yes, yes. I mean, to be explicit, that's roughly what it is. To be explicit, to be called a grid cell it has to have certain firing properties, but yes, it is basically the underlying substrate of a 1D grid cell module. Yeah. I mean, the way you have it here, there wouldn't be any...
+[296.000 --> 308.000] I could have shown the velocity-controlled-oscillator version of this; you could substitute a velocity-controlled oscillator for this figure.
+[308.000 --> 319.000] So now, this is good, I like this, but I'm not sure why you're going to the subtraction of the A and B cells.
+[319.000 --> 326.000] Did you gain anything, or was that just a way of maybe bringing it back into the world of machine learning?
+[326.000 --> 334.000] I did that purely because this lets me use this as a machine learning technique.
+[334.000 --> 363.000] So this lets me... this is something I can do as an unsupervised learning algorithm. Here we have two sets of motivations: I want to find a good unsupervised learning algorithm that does useful things and teaches us something about grid cells. So I've drawn this big correspondence where, when we want to live in biology land, we might think like this, but when we want to think about unsupervised learning of data, we might use this, because we can run backpropagation and such on it.
+[363.000 --> 378.000] Okay, so the main thing you're doing is saying: okay, these ring attractors are kind of wonky biological things, maybe, but here's a machine learning equivalent that lets me work in the machine learning world with them.
+[378.000 --> 381.000] Yes, yeah, that's what it is.
+[381.000 --> 391.000] But this will be relevant: the right-hand side couldn't really do unions, but the left-hand side could do a little bit of unions.
+[391.000 --> 409.000] That's true, but I guess... well, this is kind of by design: here I'm treating the core building block of these algorithms as these rings that can't do unions, these rings that have a phase and a firing rate and nothing else, or a phase and a magnitude and nothing else.
+[409.000 --> 420.000] Okay, so now you've basically said: here's an equivalent thing to a ring attractor. It's equivalent if I had ten neurons reading out A and B.
+[420.000 --> 425.000] But the readout could learn to do that function.
+[425.000 --> 438.000] Yeah, okay. And I can do everything to this: I can do path integration on it, I can perform updates on it by multiplying it in a careful way. It does require some carefulness, but anything that can do the one can do the other.
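A compact sketch of this equivalence as described (sizes and names are placeholders): a 10-cell ring read out from one (A, B) pair, and path integration done by "multiplying it in a careful way", i.e. a 2D rotation:

    # Sketch: one abstract ring as an (A, B) pair with phase and magnitude.
    import numpy as np

    def ring_readout(a, b, n=10):
        # Cell i has weights (cos t_i, sin t_i); rectify to get a bump.
        t = 2 * np.pi * np.arange(n) / n
        return np.maximum(0.0, a * np.cos(t) + b * np.sin(t))

    def advance(a, b, dphi):
        # "Multiplying in a careful way": rotate (A, B) by dphi,
        # preserving the magnitude.
        c, s = np.cos(dphi), np.sin(dphi)
        return c * a - s * b, s * a + c * b

    a, b = np.cos(np.deg2rad(72)), np.sin(np.deg2rad(72))  # phase 72 deg
    print(np.argmax(ring_readout(a, b)))   # bump peaks at cell 2 of 10
    a, b = advance(a, b, np.deg2rad(36))   # one step of path integration
    print(np.argmax(ring_readout(a, b)))   # bump has moved to cell 3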
+[438.000 --> 464.000] Okay. And so the second point I made here is: I built a network on this idea, and as we talked about, it led to these linear filters; it led to these units responding to basic oriented sine waves.
+[464.000 --> 470.000] An important qualification here, and really the second and third bullet points go together:
+[470.000 --> 482.000] I got these filters because I used a linear mapping, because I used a very basic network. In a sense, the only solution this network could possibly have found was sine waves.
+[482.000 --> 489.000] The only solution that meets the objective where the rings translate linearly.
+[489.000 --> 500.000] Can you go through this network again? You're showing we have a bunch of rings, and then you have a bunch of neurons that are getting pixels and feeding into the rings.
+[500.000 --> 518.000] But we're learning the picture... these neurons are actually learning to be filters, right? So at this point the rings aren't even contributing; you're only showing information go from those neurons to the rings.
+[518.000 --> 528.000] Right. So the thing being learned is this linear mapping, and the thing being optimized is that I want these rings to update smoothly.
+[528.000 --> 551.000] And you also have a reconstruction objective. Yes, that's true as well. So is there a decoder on top of this? Yeah, there's a decoder, and I went with a trivial solution of taking this linear mapping and using its transpose as the decoder, and that worked well enough for this purpose.
+[551.000 --> 558.000] Okay. I've experimented a little bit with a separate decoder, but I didn't go very far with that.
+[558.000 --> 574.000] I guess there are a lot of steps here in the second image which I'm just missing. Yeah, obviously you're trying to learn... you're learning these filters on these...
+[574.000 --> 594.000] the rectangular box and the line. But your constraint is that you have to have smooth progression in the rings, as we said. Yeah, but that's not... you're not showing that feeding back into the filters, right? Well, that's basically it: that's the loss function, and then backpropagation.
+[594.000 --> 623.000] So we're not showing the backpropagation; we're just showing that after it's learned, it would work like that. Yeah, but that would be true of almost any feedforward network. I know, I know, but it just doesn't look like a typical network, and when I think about neurons I always think... you're showing the connections, and that's the only way information moves. So yes, there is a backprop operation. And so... some of this I don't understand how it works yet, but you're saying the ring attractors really are A/B cells.
+[623.000 --> 629.000] I think that's what it's saying: this is an A and a B, this is another A and a B.
+[629.000 --> 640.000] Oh, oh, oh. I didn't see that, I didn't get that. Okay, let me think about this now.
+[640.000 --> 647.000] That was not obvious from the drawing. I'm sorry.
+[647.000 --> 661.000] I didn't make that connection. I somehow assumed that the A/B cells were just part of the ring attractor, that you were showing a ring attractor and the A/B cells were up there someplace. But no, these really are the A/B cells.
+[661.000 --> 672.000] Yeah. And so those ring attractors... are those actually one cell in the same ring, is that right?
+[672.000 --> 682.000] So, remember I said you had to have 10 or so cells reading the A/B cells to make a ring attractor. Are you showing three of those, is that right?
+[682.000 --> 689.000] Like, the first two cells lead to one neuron in one position in the ring, and the next cells to another position in the ring, and so on?
+[689.000 --> 693.000] Yeah, these are... these are three different rings.
+[693.000 --> 697.000] But completely different rings. Oh, wait, okay, now I'm going to do this again.
+[697.000 --> 699.000] Sorry, I must have misunderstood.
+[699.000 --> 712.000] Yeah, the A/B cells themselves are not a ring; to have a ring you have to have 10 cells reading out of them. In your drawing you'd have to have 10 cells that are looking at the A and B.
+[712.000 --> 719.000] And so, okay, these three rings are three different rings, and there are 10 cells in each ring.
+[719.000 --> 735.000] So each cell is learning a set of connections to a set of A/B cells. So if I look on the left there, the first one of the three, there are going to be like 10 cells in the ring, and each one's going to be looking at A and B with the correct weights, so that it looks like a ring.
+[735.000 --> 742.000] But you don't actually have those cells, right? You're just drawing it as if you had them.
+[742.000 --> 750.000] These 10 cells don't actually exist in this encoding. They could exist, yeah, they could, right?
+[750.000 --> 765.000] The fundamental thing, the fundamental building block I'm using is... sorry, I'm doing this on the fly; I'm going to resize that, move it... that did not work at all.
+[765.000 --> 791.000] Bear with me, it's going to take a second. So the fundamental building block I'm using here, the fundamental thing, is these abstract rings. I'm going to remove these little confusing gray dots. These abstract rings that have a phase and a magnitude: this is the fundamental thing. And this 10-unit
+[791.000 --> 809.000] ring right here is just one way of implementing this; this pair of units here is another way of implementing this. But abstractly, this is the fundamental thing. These 10 units aren't fundamental; the abstract thing I'm representing is this.
+[809.000 --> 812.000] If that helps. No?
+[812.000 --> 826.000] I think I'm making progress, but ultimately... let me try this: you're trying to come up with a way of training A and B. Is that correct?
+[826.000 --> 840.000] Yes, correct. Right, and training A and B's weights. Yes, well, there have to be weights to something. Is it weights to the pixels, or is it weights to the ring?
+[840.000 --> 858.000] So each A and B unit is looking at a bunch of pixels; they're trying to learn how to respond to those pixels. In the end you're going to get these sort of Gabor-like filters.
+[858.000 --> 874.000] And you're saying: well, what is the constraint that I'm applying on top of the A/B cells that would cause them to learn what is right? And this thing you've shown as a ring, that is your constraint.
+[874.000 --> 883.000] Now, it's not really a ring, you're telling me; it's not a ring of cells, not a set of cells. I think you said it earlier, something about smoothness.
+[889.000 --> 908.000] So can you explain a little bit more what the constraint is that you're enforcing on the A and B cells? The constraint is that, as a time series of inputs goes into the network, the rings should update their phases linearly.
+[909.000 --> 921.000] And so, if I think about the A and B cells, that means the A and the B cells might have to be like a sine or a cosine or something. Yeah, they're going up and down.
+[922.000 --> 938.000] And so somehow that is your constraint: as the inputs are changing, you want that intersection, the dot in the upper right corner, to be smoothly going around the ring.
+[939.000 --> 956.000] And so you're saying: here's another input, and I'm assuming that the dot has moved, and therefore what weights should A and B have; and here's another input, and the dot's moved again. Something like that? Yes.
+[962.000 --> 965.000] How do you end up with different scales for the different filters?
+[966.000 --> 981.000] Well, it just naturally happened. The reason that happens is because, keep in mind, this is also optimizing reconstruction error, and so backprop just discovered that using multiple scales leads to better reconstructions.
+[982.000 --> 992.000] But there must be some sort of competition, then, between the different sets of A and B cells. Otherwise, what would force them to create different filters?
+[993.000 --> 1007.000] Right. Backprop just kind of naturally does that with populations; it'll lead to some specialization.
+[1008.000 --> 1025.000] Because there's no benefit in two units representing the same thing, so whichever of them is doing a slightly better job will kind of take over, I see that part, and whatever's doing a slightly worse job will go and explore other filters it can use.
+[1025.000 --> 1026.000] Okay.
+[1027.000 --> 1028.000] Okay.
+[1032.000 --> 1033.000] Okay.
+[1033.000 --> 1037.000] Yeah, so that'll happen even without the ring constraint.
+[1037.000 --> 1052.000] Yeah. I'd say what the ring constraint does is it gives you these clusters of cells, these clusters of orientations and scales. Because, keep in mind, as you already know, but just to say it again:
+[1053.000 --> 1063.000] these two filters, this A and B, reflect, in this interpretation, a cluster of 10 cells with different translations.
+[1063.000 --> 1069.000] And so the clustering is the thing that the ring gets you.
+[1069.000 --> 1077.000] And without the rings, they're not these pretty Gabors; they're more like messy sort of Gabors.
+[1077.000 --> 1093.000] So why... I don't mean to restart the whole thing, but what is it that gets you excited about this? What is it that you feel like you accomplished here that's insightful?
+[1093.000 --> 1102.000] Yeah, I guess the idea is, in a few words: representing novel environments using grid cells.
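A rough sketch of the objective as described so far, under assumptions of mine (a linear encoder whose transpose serves as the decoder, and a penalty asking each ring's phase step to be constant across consecutive frames; all sizes are placeholders):

    # Sketch of the described objective: reconstruction plus "rings update
    # their phases linearly" on a time series of input frames.
    import numpy as np

    rng = np.random.default_rng(0)
    n_pix, n_rings = 64, 8
    W = rng.normal(scale=0.1, size=(2 * n_rings, n_pix))  # encoder; decoder = W.T

    def loss(frames):                     # frames: (T, n_pix) time series
        Z = frames @ W.T                  # (T, 2*n_rings): the A/B pairs
        recon = Z @ W                     # transpose used as the decoder
        rec_err = np.mean((recon - frames) ** 2)
        A, B = Z[:, 0::2], Z[:, 1::2]
        phase = np.unwrap(np.arctan2(B, A), axis=0)
        step = np.diff(phase, axis=0)     # per-ring phase change per frame
        linear_err = np.mean((step[1:] - step[:-1]) ** 2)
        return rec_err + linear_err

    print(loss(rng.normal(size=(16, n_pix))))

In training, W would be optimized by backpropagation through this loss; the sketch only computes the objective for one sequence.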
+[1102.000 --> 1125.000] Now, how does that... all right, so we haven't gotten there yet. That was the Monday presentation, not this one. So far you've said: okay, I've learned how to use this sort of smooth movement of a ring to train these input units.
+[1125.000 --> 1130.000] But now, how does that lead to novel environments?
+[1130.000 --> 1147.000] Well, here, the images that this is representing, in the context of environments, are object vector cells and boundary vector cells, the various vector cells. Essentially, that is the image that I am representing.
+[1147.000 --> 1150.000] The image you represent.
+[1150.000 --> 1155.000] Yeah, I was confused a little bit about the actual input that's coming in.
+[1155.000 --> 1166.000] You say pixels, but the eye is always in the center of the image, and as you're moving around, things are coming closer to you and further away.
+[1166.000 --> 1169.000] Yeah, what's happening?
+[1169.000 --> 1183.000] And because these are allocentric vector cells, and that includes boundary vector cells and object vector cells, as you turn around those aren't actually changing their...
+[1183.000 --> 1197.000] So I see a boundary at a certain location relative to me, but in the world's reference frame. So, sorry, the input to the system was assumed to be these object vector cells?
+[1197.000 --> 1200.000] In this simulation, yes.
+[1200.000 --> 1208.000] And so how would I interpret these filters? What does one of those filters mean?
+[1208.000 --> 1213.000] So, I'll just zoom in on one of these.
+[1213.000 --> 1227.000] And this means that this unit is going to respond to something that is allocentrically to your left, or allocentrically up, and up here,
+[1227.000 --> 1233.000] to your northwest, and so on.
+[1233.000 --> 1236.000] And it would respond maximally if there's an actual diagonal?
+[1236.000 --> 1249.000] Yeah, if there's literally a boundary going diagonally across the room right here, it's going to make the cell respond vigorously.
+[1249.000 --> 1256.000] Okay.
+[1256.000 --> 1265.000] So then, let's go back to the novel environments. Where does that come in?
+[1265.000 --> 1275.000] You said, okay, here's a bit of an object vector cell that says: oh, I have some boundary off diagonally to my left, or anywhere on that line.
+[1275.000 --> 1283.000] Okay, fine, so that could fire anywhere, I suppose, right? Where is the novelty thing coming into this?
+[1283.000 --> 1293.000] So, any environment can be described as a set of boundary vector cells and object vector cells.
+[1293.000 --> 1294.000] Okay, fine.
+[1294.000 --> 1297.000] So it's like you're creating...
+[1297.000 --> 1301.000] Do you see the analogy? It's like you're painting a picture of the room.
+[1301.000 --> 1308.000] Yeah, well, I can imagine that the picture is composed of object vector responses.
+[1308.000 --> 1317.000] Yeah. Okay, but that's a picture. So now I have a new room that's novel.
+[1317.000 --> 1321.000] How is it represented differently? Right, you have a different representation.
+[1321.000 --> 1326.000] Yeah, and that's all in the activations; you're not having to learn anything new in that new room.
+[1326.000 --> 1332.000] But wouldn't that be true...
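For concreteness about the input "image" being discussed, a toy encoding of my own devising (grid size and layout are placeholders): each pixel marks whether a boundary or object lies at a given allocentric distance and bearing from the agent:

    # Toy sketch, not the presenter's data: an allocentric "image" of
    # vector-cell responses around the agent, indexed (distance, bearing).
    import numpy as np

    n_dist, n_ang = 8, 16
    image = np.zeros((n_dist, n_ang))

    def add_point(image, dist, bearing_deg):
        r = min(int(dist), n_dist - 1)
        c = int((bearing_deg % 360) / 360 * n_ang)
        image[r, c] = 1.0

    add_point(image, 3.0, 315.0)  # something to the allocentric northwest
    add_point(image, 5.0, 90.0)   # a boundary point to the allocentric east
    print(image.sum(), image.shape)

Because the encoding is allocentric, turning in place would leave such an image unchanged; only translation moves the points.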
+[1332.000 --> 1353.000] I mean, if a cell just says, "I'm going to respond whenever there's an object off on this diagonal here," well, of course you'll respond in the novel environment too, so what's surprising about that? It's just: I've seen something over there.
+[1353.000 --> 1363.000] It'd be like saying I'm looking at a picture, and there are lines in the picture, and I say, well, here's a novel image and there's a diagonal line off in this part of the image. Well, okay, fine.
+[1363.000 --> 1370.000] But I haven't generalized or anything. I'm just saying: several pictures have that diagonal line in them.
+[1370.000 --> 1390.000] So what it's going to do is, now you're going to be able to move around this room and it's going to stay accurate. It's going to continue activating the proper grid cells, so that you can continue to anticipate: hey, there's a boundary over there, or there's an object over there.
+[1390.000 --> 1393.000] Even after you move.
+[1393.000 --> 1398.000] I see, so it's not purely observational.
+[1398.000 --> 1404.000] It's like saying:
+[1404.000 --> 1408.000] all right, I'm in some spot in the room, I've got my picture of the room.
+[1408.000 --> 1411.000] Oh yeah, these are the boundaries around me, these are the objects around me.
+[1411.000 --> 1420.000] Obviously, if I go to a new spot and I just observe, I'll form another representation: these are the objects at different positions around me; I have a different image.
+[1420.000 --> 1430.000] But you're saying I could move to that new spot and not observe, and yet predict what my image would be.
+[1430.000 --> 1431.000] Yes.
+[1431.000 --> 1437.000] All right, so where does movement come into this figure here?
+[1437.000 --> 1446.000] Well, the movement is going to cause each ring to update at a certain rate.
+[1446.000 --> 1452.000] Is that built into this network? There's no movement input at all on this side, right?
+[1452.000 --> 1458.000] Is that a separate system that says: we're just going to assume that we have some way of taking movement to move ourselves around these rings?
+[1458.000 --> 1464.000] Yeah, yeah. That is essentially it. It's a separate system that I haven't shown.
+[1464.000 --> 1472.000] But the smoothness thing will sort of assume that whatever changes happen are expected to be smooth.
+[1472.000 --> 1483.000] Right, the loss you have for the rings, the fact that it has to be temporally smooth.
+[1483.000 --> 1492.000] Well, "smooth"... I might have messed up by using the word smooth, but it's
+[1492.000 --> 1501.000] more like updating linearly: each step needs to cause the bump to move the same amount.
+[1501.000 --> 1509.000] But if you were randomly jumping around the environment, this would still work.
+[1509.000 --> 1513.000] There's no assumption that one input is temporally...
+[1513.000 --> 1520.000] We have no idea how you moved to get to the new spot in the environment; you'd have to have some knowledge of how you got there.
+[1520.000 --> 1523.000] So that's not in this network. That's what my question is.
+[1523.000 --> 1530.000] It couldn't possibly work without a movement in place; then you wouldn't know where you're going to be.
+[1530.000 --> 1535.000] So, okay...
+[1535.000 --> 1538.000] The unsupervised part doesn't involve prediction yet.
+[1538.000 --> 1539.000] Oh, sorry.
+[1539.000 --> 1542.000] So it does.
+[1542.000 --> 1545.000] So, yeah, I mean,
+[1545.000 --> 1551.000] this is an unsupervised technique that, as I've shown it, didn't use movement yet.
+[1551.000 --> 1562.000] It just assumed that its input series is going to tend to be straight lines through...
+[1562.000 --> 1571.000] To put it one way, it tends to be straight lines through some manifold, and it's discovering that manifold.
+[1571.000 --> 1574.000] Or, another way...
+[1574.000 --> 1578.000] But does the previous input impact the current input in any way?
+[1578.000 --> 1579.000] No.
+[1579.000 --> 1583.000] So you could randomly shuffle the inputs.
+[1583.000 --> 1585.000] And it would still work fine.
+[1585.000 --> 1591.000] Correct, it would work fine. Although, if you randomly shuffle the inputs, it would do very poorly on the objective.
+[1591.000 --> 1593.000] So it would change the results of the...
+[1593.000 --> 1599.000] No, sorry, I don't mean randomly shuffle the pixels; I mean randomly shuffle the temporal course of the images.
+[1599.000 --> 1606.000] So if you had, you know, images one, two, three, four, five, you could just as well have given it image three, then five, then one, then two.
+[1606.000 --> 1611.000] Yeah, so if you're just doing inference, that's just going to cause these rings to hop around randomly.
+[1611.000 --> 1615.000] That's going to be my question.
+[1615.000 --> 1624.000] But in training, is there any value in the temporal course of the inputs?
+[1624.000 --> 1636.000] Yes, because its objective is literally that it wants the distance from here to here to be the same as the distance from here to here.
+[1636.000 --> 1656.000] And so if you scramble the inputs, it's going to be just a bunch of... I'm using the word distance, but really the phase: the change in phase should be constant within each module for a sequence of inputs.
+[1656.000 --> 1660.000] Okay, so there is a temporal...
+[1660.000 --> 1663.000] So there is a temporal factor in the loss function.
+[1663.000 --> 1666.000] Yes, during learning time.
+[1666.000 --> 1669.000] Okay, that was my question.
+[1669.000 --> 1673.000] Okay, but it has to be constant velocity, essentially.
+[1673.000 --> 1675.000] That is the point.
+[1675.000 --> 1678.000] Yes, I made this the easy version of the experiment, basically.
+[1678.000 --> 1679.000] Okay.
+[1679.000 --> 1684.000] Once it's true random walks, it becomes more difficult, but I think still doable.
+[1684.000 --> 1688.000] But yes.
+[1688.000 --> 1696.000] On the topic of movement: I'm intentionally
+[1696.000 --> 1705.000] designing a system where we have a natural place to put in movement commands. The natural place to put in movement commands is that you update these rings using them.
+[1705.000 --> 1707.000] But I have not done that yet.
+[1707.000 --> 1708.000] Okay.
+[1708.000 --> 1720.000] Let me try rephrasing this entire exercise in different language and see if this encapsulates what you're talking about.
+[1720.000 --> 1725.000] So, I walk into a room.
+[1725.000 --> 1728.000] And I've never been in this room before.
+[1728.000 --> 1737.000] And I look around the room, and somehow, through my sensory inputs, I build up a set of object vector cells for the room.
+[1737.000 --> 1743.000] Now I'm going to move to a new spot in the room.
+[1743.000 --> 1745.000] And...
+[1745.000 --> 1755.000] And I want to be able to calculate what the new object vector cells will be in that new location in the room, as opposed to just observing it.
+[1755.000 --> 1763.000] So that's a prediction. I'm saying, okay, I'm going to move over here, from one corner to the other corner: what will be my representation of the room in object vector cells?
+[1763.000 --> 1769.000] Now, we can do this, right? I can mentally do this. I can walk into the new room, look around,
+[1769.000 --> 1780.000] understand the room, and then I could just imagine what my perception of the room would be from a different corner. I can mentally do that exercise.
+[1780.000 --> 1786.000] So this is a way of saying: here's a potential mechanism for how you might do that.
+[1786.000 --> 1792.000] How you might say, okay, form a representation in one part of the room, and now
+[1792.000 --> 1800.000] move, or pretend to move, or think you're going to move, to another part, and predict what the representation will be from that part of the room.
+[1800.000 --> 1803.000] And this is a mechanism that will do that.
+[1803.000 --> 1805.000] Now, is that correct?
+[1805.000 --> 1806.000] Yes.
+[1806.000 --> 1810.000] Okay, now, this is clearly something we have to deal with.
+[1810.000 --> 1823.000] And my assumption in the past has really been something like: well, we must form a representation of the room using a reference frame, where we assign different objects at different displacements in this reference frame.
+[1823.000 --> 1829.000] And now I know I move to a new location in the reference frame.
+[1829.000 --> 1841.000] And then from that location I can build up a set of object vector cells, building on my model of the room.
+[1841.000 --> 1846.000] And that's a bit hand-wavy, but that's how I would have thought about it.
+[1846.000 --> 1855.000] But you're saying: this is perhaps a very different mechanism, but with the same results, one that's somehow much simpler.
+[1855.000 --> 1857.000] Would that be correct?
+[1857.000 --> 1858.000] Yeah, that's correct.
+[1858.000 --> 1861.000] There are two ways of approaching the problem.
+[1861.000 --> 1877.000] And, even though I feel like this would be too big of a topic, I can actually draw a direct connection between how this solves that problem and how the displacement-cells version solves the problem, and how they're actually kind of...
+[1877.000 --> 1884.000] I have a unified theory in my head about how those two are the same, how
+[1884.000 --> 1888.000] they're approaches to a similar problem.
+[1888.000 --> 1891.000] Yeah, I think that would be very useful.
+[1891.000 --> 1904.000] As I've been thinking about these issues, it's really complex, of course, and there are all these steps you have to go through to solve these problems, you know, like, oh well, I'm dealing with movements that are going to be a
+[1904.000 --> 1920.000] [inaudible]
+[1920.000 --> 1931.000] So, getting back to it: yes, that was my first question, what it is you're really solving here. Potentially a nice talk on how you think these are very similar things would be great; I really want to know about that.
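A hypothetical sketch of that predict-after-moving loop; the weights, preferred directions, and frequencies below are placeholders standing in for what the described system would learn:

    # Sketch: "object vector cells + allocentric movement -> predicted object
    # vector cells", under my assumptions. W, dirs, and freqs are untrained
    # stand-ins; in the described system they would come out of training.
    import numpy as np

    rng = np.random.default_rng(0)
    n_cells, n_rings = 64, 8
    W = rng.normal(scale=0.1, size=(2 * n_rings, n_cells))  # encoder; W.T decodes
    t = 2 * np.pi * np.arange(n_rings) / n_rings
    dirs = np.stack([np.cos(t), np.sin(t)], axis=1)         # preferred directions
    freqs = np.linspace(1.0, 3.0, n_rings)                  # per-ring frequencies

    def predict_after_move(ovc, move):
        z = W @ ovc                              # encode to (A, B) pairs
        a, b = z[0::2], z[1::2]
        dphi = freqs * (dirs @ move)             # phase change per ring
        a2 = np.cos(dphi) * a - np.sin(dphi) * b
        b2 = np.sin(dphi) * a + np.cos(dphi) * b
        z2 = np.empty_like(z)
        z2[0::2], z2[1::2] = a2, b2
        return W.T @ z2                          # decode the predicted cells

    pred = predict_after_move(rng.normal(size=n_cells), np.array([0.3, -0.2]))
    print(pred.shape)                            # (64,)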
+[1931.000 --> 1936.000] There's a much simpler way of achieving these results.
+[1936.000 --> 1939.000] And that's really appealing.
+[1939.000 --> 1946.000] But I don't yet have a deep understanding of it, in the way that I have a deep understanding
+[1946.000 --> 1948.000] of grid cells and grid cell modules.
+[1948.000 --> 1950.000] It's sort of like, okay, you've settled these things.
+[1950.000 --> 1951.000] It kind of makes sense.
+[1951.000 --> 1955.000] There are a lot of abstractions which I just haven't internalized yet.
+[1955.000 --> 1957.000] And somehow it just magically works.
+[1957.000 --> 1959.000] It's just like, okay, that's really great.
+[1959.000 --> 1960.000] I don't understand it.
+[1960.000 --> 1963.000] I mean, you walked through it, but I don't deeply understand it.
+[1963.000 --> 1967.000] I couldn't explain it to somebody.
+[1967.000 --> 1970.000] But I want to make sure that I'm understanding what you're proposing.
+[1970.000 --> 1975.000] So, the alternate way of at least going from one
+[1975.000 --> 1982.000] allocentric representation to another allocentric representation, based on some sort of, probably, allocentric movement.
+[1982.000 --> 1987.000] It basically converts object vector cells to object vector cells.
+[1988.000 --> 1995.000] And so you could think of it this way: given a set of object vector cells plus an allocentric movement,
+[1995.000 --> 1998.000] what's my next set of allocentric object vector cells?
+[1998.000 --> 1999.000] Yeah.
+[1999.000 --> 2000.000] Okay.
+[2000.000 --> 2004.000] I think if you phrase the problem up front like that...
+[2004.000 --> 2006.000] I mean, that's a subset of everything that has to happen, right?
+[2006.000 --> 2007.000] It's clearly a subset.
+[2007.000 --> 2008.000] There are other parts to the problem.
+[2008.000 --> 2010.000] How do we get these object vector cells?
+[2010.000 --> 2012.000] Obviously, there are things like that, okay.
+[2012.000 --> 2020.000] But at least, if you'd phrased it that way, it would have helped, for me at least.
+[2020.000 --> 2026.000] You know, it reminds me of... whose paper was it?
+[2026.000 --> 2027.000] We did a review of a paper.
+[2027.000 --> 2028.000] Remember, I did it.
+[2028.000 --> 2033.000] And at the end, there was a figure that occupied an entire page.
+[2033.000 --> 2038.000] And they were showing how you could do a mapping between
+[2038.000 --> 2044.000] allocentric vector cells with orientation.
+[2044.000 --> 2045.000] You might remember.
+[2055.000 --> 2055.000] It was like a big grid.
+[2055.000 --> 2064.000] And... I don't remember exactly which figure it was.
+[2064.000 --> 2068.000] I could find the paper. But anyway, they were trying to convert between egocentric and
+[2068.000 --> 2073.840] allocentric vector cells. And this was a paper arguing that there were
+[2074.960 --> 2079.840] complementary egocentric vector cells as well as allocentric vector cells.
+[2080.800 --> 2085.520] And so that was like a transform. And it's like, oh, something like
+[2085.520 --> 2090.160] that has to happen here. This is another type of transform. This is saying, okay, I've got a bunch of
+[2090.160 --> 2095.200] allocentric vector cells.
And I have an allocentric movement, and I'm going to get another set of
+[2095.840 --> 2099.760] allocentric vector cells to represent in a moment. I suppose this would work if I was doing it
+[2099.760 --> 2104.480] with egocentric too, right? I could have egocentric vector cells and an egocentric movement, and
+[2104.480 --> 2108.240] this mechanism would also maybe give me the new egocentric vector cells.
+[2110.640 --> 2115.760] Okay, and now I'm going to ask the question: what is the fundamental core idea here
+[2115.840 --> 2121.840] that allows that to happen? What is the thing you propose which is really the trick, or the
+[2121.840 --> 2127.520] modeling trick or something like that, that allows this to happen? I haven't internalized that yet.
+[2130.240 --> 2137.200] I'd love it if there's an easy explanation for that. Well, I guess I can try. It starts with
+[2137.200 --> 2143.680] the fact that I allow these modules to have not just a phase, but also a magnitude.
+[2144.320 --> 2150.720] Why does the magnitude make a difference in that case? Why does that trick work? Where does that come
+[2150.720 --> 2157.280] into the algorithm, that you have that magnitude? Okay, do you need that? You need that to do your
+[2157.280 --> 2161.920] upper-right-hand picture here, right? You need a magnitude to have these ring attractors in?
+[2162.560 --> 2167.920] Well, no. The ring attractor, if it didn't have a magnitude, then it
+[2168.880 --> 2175.360] could just be a phase. The issue is that if you didn't have magnitudes,
+[2177.680 --> 2188.480] then there would be no way to reconstruct an image. In this case, the image, the
+[2188.480 --> 2194.640] quote-unquote pixels we're reconstructing, are object vector cells. And there'd be no way to reconstruct those
+[2194.640 --> 2207.280] if you don't have magnitudes. Okay, I hear you say that, but why? How does the magnitude come
+[2207.280 --> 2212.320] into play? Can you just say, well, because if the cell is firing twice as much, then...
+[2212.320 --> 2221.920] So if all of the magnitudes were the same, then these types of filters...
+[2222.880 --> 2231.040] You would have to add them all up. Like, in describing what this represents, you add up all
+[2231.040 --> 2236.320] these filters by the same amount: you weight this filter by one, you weight this filter by one,
+[2236.320 --> 2248.080] add them all up, and what you're going to get is a single... Oh, oh, so the magnitude represents,
+[2248.080 --> 2251.520] in some sense, the magnitude of the object vector cell. Is that what you're saying?
+[2252.480 --> 2258.400] It's like saying a magnitude of zero would say: oh, there is no object at that location. Yeah,
+[2258.400 --> 2263.200] well, like, for example, if there's a boundary going diagonally like this next to you,
+[2263.200 --> 2268.080] that's going to be reflected in the magnitude. Got it. And if there's not a boundary
+[2268.080 --> 2276.800] there at all, then of course the magnitude is going to be zero. Okay, so in that case,
+[2276.800 --> 2279.840] all right, well, that makes a hell of a lot of sense. I mean, clearly, if a cell,
+[2279.840 --> 2283.600] if I have an object vector cell and there's nothing there, then it's not going to fire.
+[2292.480 --> 2296.960] And so why would it be a scalar? I mean, you might say either the object is there or the object
+[2296.960 --> 2301.040] is not there. I mean, why do we need a magnitude?
What does it mean if it's at half?
+[2301.520 --> 2305.520] You know, it's at 0.5. What does that mean?
+[2307.760 --> 2316.640] Because the goal of these cells is to work together to reconstruct the image, not to
+[2316.640 --> 2324.800] single-handedly do it. And so reconstructing the, like, "hey, there's an object here,
+[2324.800 --> 2329.840] there and there, but not there" requires a population working together, where
+[2330.720 --> 2334.160] some of them can be kind of on and some are responding vigorously.
+[2334.160 --> 2338.000] Well, couldn't it also be, like, somewhere on, somewhere off? I mean,
+[2338.000 --> 2342.800] you could say the image is like: yes, I have objects at these locations,
+[2343.600 --> 2349.680] those are all one, and the other things are zero. Where am I adding things together at, like, 0.5
+[2349.680 --> 2357.120] and 0.75? Why would it do that? Well, maybe I have too simplistic an idea of an object vector cell.
+[2359.840 --> 2371.040] So what's going on here is very analogous to... I don't know, maybe this will help you,
+[2371.040 --> 2377.680] maybe it won't. What's going on here is very analogous to representing a 2D picture using
+[2379.200 --> 2386.960] the Fourier transform. Basically, a fully capable Fourier transform, with
+[2387.520 --> 2392.880] magnitudes, and with enough of these diagonal bars, can reconstruct any image.
+[2392.880 --> 2398.160] Maybe that doesn't make sense. I guess if I look at one of these filters, the filters have multiple
+[2398.160 --> 2409.040] bands, right? At least some of them. So in some sense, each filter is not, on its own,
+[2409.040 --> 2413.040] going to tell you exactly what's out there, right? There are multiple cases. So these are not
+[2413.520 --> 2418.720] true object vector cells, because if they were object vector cells, it would be:
+[2418.720 --> 2423.120] no, object vector cells are just "it's here"; it's not "here or here or here".
+[2426.160 --> 2430.880] So these are, then... I can't really... these are like object vector cells that are
+[2430.880 --> 2437.280] toruses, or ring attractors, something like that. And so,
+[2438.880 --> 2442.320] so then how would I actually get... because when we look at object vector cells, they don't appear to
+[2442.320 --> 2447.680] be like that, right? Object vector cells are not like grid cells; they're more like
+[2447.680 --> 2452.480] place cells. They activate, but they don't activate at multiple locations.
+[2452.960 --> 2455.920] Right. So they would have to be a readout of multiple rings.
+[2457.360 --> 2463.120] So these filters aren't really a set of object vector cells.
+[2463.120 --> 2467.760] They are a basis set from which I could somehow create a set of object vector cells.
+[2468.720 --> 2476.240] Yes. In some sense, you're saying these are like grid cells, in the sense that they
+[2476.240 --> 2485.840] repeat and wrap around, but they're like object vector grid cells. It's like: I could have a bar
+[2485.840 --> 2490.160] here or here or here, an object here or here or here relative to me; I don't know which one.
+[2491.360 --> 2495.120] But if I have a bunch of them, I'll know exactly what it is.
+[2496.080 --> 2499.600] It's the same with grid cells, right?
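The Fourier analogy above can be checked directly. In this small sketch (my illustration), translating an image only rotates the per-frequency phases while the magnitudes stay fixed, which is the role the phases and magnitudes of the rings play here:

    # Sketch of the Fourier analogy: a 2D image is a sum of oriented
    # sinusoids, each carrying a magnitude and a phase; translation rotates
    # the phases and leaves the magnitudes unchanged.
    import numpy as np

    rng = np.random.default_rng(2)
    img = rng.normal(size=(16, 16))
    F = np.fft.fft2(img)
    shifted = np.roll(img, (3, 5), axis=(0, 1))       # translate the image
    Fs = np.fft.fft2(shifted)
    print(np.allclose(np.abs(F), np.abs(Fs)))         # True: magnitudes preserved
    print(np.allclose(np.fft.ifft2(F).real, img))     # True: the sinusoids reconstruct it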
Grid cells say: I'm at this location, or this location,
+[2499.600 --> 2502.640] or this location; I don't know where I am. But if I have a bunch of grid cell modules, then I can say: oh,
+[2502.640 --> 2508.080] yeah, I know where I am. Is that a good analogy? Yes.
+[2508.080 --> 2513.520] Okay. Do we have any evidence that the brain actually has repeating object
+[2514.320 --> 2517.520] vector cells? Is there evidence for that?
+[2517.520 --> 2521.040] Well, I'm not arguing for that. I'm saying
+[2521.600 --> 2530.720] these are grid cells. I'm showing how object vector cells could be reconstructed from grid cells.
+[2530.720 --> 2533.440] Are you saying these are grid cells? I thought you said these are like object...
+[2534.560 --> 2537.280] and we're talking about a bar, saying there's an object over there.
+[2539.040 --> 2547.280] Each pixel would be denoting an object vector cell. Each pixel would be saying
+[2547.360 --> 2559.280] there is an object here. But that's only a grid cell... this is a 1D grid cell. This filter,
+[2559.280 --> 2565.120] this pair of filters, leads to a ring that is a 1D grid cell.
+[2567.760 --> 2571.840] So the filter itself is not a grid cell, obviously, because it's not,
+[2571.920 --> 2582.080] you know, a location; it's denoting an object relative to me. That's not a grid cell.
+[2582.080 --> 2587.920] So this language is getting me confused, or something.
+[2590.960 --> 2595.600] I wonder if it would be interesting to have a visualization where you literally created those 1D
+[2595.600 --> 2602.320] modules up top and then showed the animal kind of moving around at the bottom, you know,
+[2602.320 --> 2607.040] and see how that's changing, and you would see those things going around and around
+[2608.560 --> 2613.920] the circle. I mean, the grid cells would be the actual dots in the ring module up top.
+[2613.920 --> 2614.320] Right.
+[2616.000 --> 2617.680] Each of those would be a 1D grid cell.
+[2619.520 --> 2624.000] Yeah. So the filters are not grid cells, right? A second ago I thought you said they were.
+[2624.480 --> 2627.680] The two of these filters together lead...
+[2627.680 --> 2629.760] Oh, together. They could lead to a grid cell.
+[2631.360 --> 2634.320] Yeah. Well, they'd represent a whole module, I think.
+[2634.800 --> 2635.680] Yeah, they represent...
+[2635.680 --> 2641.360] Yeah, you could read out different cells with different weights.
+[2641.360 --> 2653.680] I think, you know, Marcus, I feel like you're really onto something here, but I...
+[2656.240 --> 2660.880] sometimes, as has happened in the past, I feel like there's this sort of mismatch between the way
+[2660.880 --> 2666.720] you're thinking about it and the way I'm thinking about it. And so I'm not saying it's
+[2666.720 --> 2672.960] this... I need to bridge that. It's my issue, your issue, perhaps both, maybe
+[2672.960 --> 2676.800] all my issue. I just keep asking these questions and trying to
+[2678.720 --> 2680.560] get a better interpretation of what you're saying here.
+[2681.440 --> 2686.160] Marcus, is there some reason why you're using quads of filter images rather than just
+[2686.960 --> 2693.040] pairs? Yeah, I mean, maybe that was just a faulty visualization. I could have put
+[2693.040 --> 2700.160] these further down; they're not actually quads.
It's just how I fit them into this
+[2700.160 --> 2707.760] slide. Okay, thank you. What are they? They're just six pairs of cells. They're just
+[2707.760 --> 2714.880] pairs. I see. Yeah. Like, for this picture, maybe you remove the bottom three pairs. Yeah.
+[2715.520 --> 2730.160] Is this suggesting that you can create these grid cell modules and update them by
+[2731.840 --> 2736.720] starting with these object vector filters?
+[2737.680 --> 2746.240] Like, these are observed things. Are you asking if there's empirical evidence?
+[2746.240 --> 2750.320] No, I guess I'm asking what this is suggesting. Again, I'm trying to refine this idea of
+[2750.320 --> 2757.920] how grid cells work. Right. You know, in some sense, the way we think about grid cells,
+[2757.920 --> 2762.160] you can think about them almost completely independent of sensory input. I mean,
+[2762.240 --> 2767.120] once they get anchored, they basically path integrate. Of course, the path integration is very
+[2767.120 --> 2772.160] noisy, so they have to get re-anchored all the time. But in an ideal world, you can imagine that
+[2772.160 --> 2777.760] they don't need to get re-anchored, if it was perfectly accurate. So that's like one view
+[2777.760 --> 2783.120] of the world: okay, I have a bunch of grid cell modules, they're updated by movement; I put
+[2783.120 --> 2787.760] in a new movement, I know my new location. We have several reasons to know this doesn't work.
+[2787.760 --> 2792.320] One is that they have to get re-anchored all the time. Two is that grid cells are actually distorted.
+[2793.520 --> 2797.520] There are a bunch of reasons this doesn't work. But we still, in the back of our minds,
+[2797.520 --> 2803.760] say that's how it basically works, and the system is compensating somehow for noise
+[2803.760 --> 2810.800] and distortions. So think of it as a purely motor-driven update of your location. But that could
+[2810.800 --> 2816.800] be wrong. It doesn't have to be like that. It could be that motor-driven updates
+[2816.800 --> 2823.120] to grid cells only work locally. It's a local phenomenon. They can be used temporarily
+[2823.120 --> 2829.520] in a certain part of an environment, but that's not how the whole system really works, and it's
+[2829.520 --> 2835.920] much more driven by sensory input. You're starting with a series of sensory inputs
+[2837.120 --> 2842.560] that you can then do prediction and calculations with, using grid cells, but that's not really
+[2842.640 --> 2848.560] the backbone system. The way I do it now is: the grid cells are like the backbone
+[2848.560 --> 2855.360] of everything. They're the reference frame, and you're getting sensory input to fill in the
+[2855.360 --> 2861.120] reference frame data and to update it, you know, to make sure the grid cells don't get lost. But the
+[2861.120 --> 2866.880] alternate view is that it's more sensory-driven, and the grid cells are sort of a local path-integration
+[2866.880 --> 2872.640] phenomenon that works in some places but not everywhere. And therefore your reference
+[2872.640 --> 2877.840] frame is sort of discontinuous, or at least it could be distorted, something like that.
+[2877.840 --> 2882.400] So I'm wondering if this is sort of coming out of that way of thinking. Like you're saying here:
+[2883.200 --> 2887.280] I have some sort of observations about the world, and I can use those to
+[2888.960 --> 2894.480] learn grid cell behaviors, and I can do some path integration within a certain realm,
+[2895.200 --> 2898.240] but the system's really driven by sensory input.
+[2899.120 --> 2901.440] Does that description...
+[2902.400 --> 2908.240] Yeah, I'm drawing a link to sensory input. The way I would say it is: at the
+[2908.240 --> 2912.800] beginning of everything you just said, you said that the grid cells are anchored at
+[2912.800 --> 2918.800] the beginning, and then from that point forward you can move and update them, and in an ideal
+[2918.800 --> 2922.480] world you could just update them endlessly and everything would stay in sync.
+[2923.920 --> 2929.760] Here I'm saying that same thing: you anchor the grid cells at the beginning,
+[2931.040 --> 2938.880] but I'm no longer anchoring them randomly. I am doing it in a way that describes the
+[2938.880 --> 2945.680] surrounding environment, in a way that operates on the object vector cells and the boundary
+[2945.680 --> 2952.960] vector cells. Uh-huh. And if I dropped you randomly into two points in a new room,
+[2956.080 --> 2962.080] would this system anchor the grid cells the same way? Yes, in a compatible way.
+[2963.360 --> 2968.320] And in a way such that I would say: oh, I know this is the same room, and
+[2968.320 --> 2973.200] therefore I know how to anchor the grid cells such that if I did path integration between the two
+[2973.200 --> 2978.800] points, it would work. Yes. All right, that's a really big thing. It gets back to
+[2978.800 --> 2983.520] my conversation earlier about: in a novel room, can I predict what my object
+[2983.520 --> 2988.640] vector cells would be at a different point in the room? The crux of this is...
+[2988.640 --> 2993.120] maybe... I'm going to state this and see if you agree with it. The crux of this is something
+[2993.120 --> 3001.040] you just said: that grid cells are not randomly anchored, which is an assumption we've had.
+[3001.840 --> 3006.240] It's not true. You're saying grid cells wouldn't be randomly anchored in this scheme.
+[3007.360 --> 3010.640] They're going to be anchored in a way that's very specific to your sensory input.
+[3011.440 --> 3017.280] And therefore I could be dropped into the same room at two different places, and I would
+[3017.280 --> 3021.840] anchor my grid cells in a compatible way, so that path integration would work between them.
+[3022.480 --> 3028.960] Yes. All right, that may be the simplest explanation of what you're claiming,
+[3028.960 --> 3035.600] the big one. And you're saying I can do that without having learned...
+[3036.320 --> 3042.960] well, I still have to... I guess I don't have to learn the
+[3042.960 --> 3050.880] room. I just have to see that the positions of objects are similar, that they're
+[3050.880 --> 3055.040] somehow the same in the room's reference frame, even from the two different vantage points.
+[3056.000 --> 3061.680] So that would be a rephrasing of the claim.
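A toy sketch of the "compatible anchoring" claim, under a strong simplifying assumption of mine: suppose the encoder effectively assigns each ring a phase equal to a room anchor plus frequency times the position projected onto that ring's direction. Then the phase differences between two drop-in points match path integration between them:

    # Toy consistency check (assumptions mine, not the described system):
    # two views of the same room, encoded from different positions, yield
    # phase differences that agree with path integration between them.
    import numpy as np

    n_rings = 8
    t = 2 * np.pi * np.arange(n_rings) / n_rings
    dirs = np.stack([np.cos(t), np.sin(t)], axis=1)
    freqs = np.linspace(1.0, 3.0, n_rings)

    def phases_from_view(p, room_anchor):      # stand-in for the encoder
        return room_anchor + freqs * (dirs @ p)

    p1, p2, anchor = np.array([0.0, 0.0]), np.array([1.2, -0.7]), 0.5
    dphi_observed = phases_from_view(p2, anchor) - phases_from_view(p1, anchor)
    dphi_path_int = freqs * (dirs @ (p2 - p1))
    print(np.allclose(dphi_observed, dphi_path_int))   # True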
+[3061.680 --> 3065.840] Up to now, we've essentially assumed grid cells are randomly anchored, +[3066.320 --> 3070.960] and then you build a model on top of that. And this model says no, they're not +[3070.960 --> 3078.160] randomly anchored. Somehow they're going to be anchored based on observation. And if I +[3078.160 --> 3083.280] were dropped into the same room in a different location, even though I'd have different +[3083.280 --> 3091.200] observations, I would anchor them correctly, compatibly. +[3091.200 --> 3096.080] Yeah, within limits of course. Like, if you're down the hallway, too far away, it's not going to work. +[3096.080 --> 3101.120] But if you have sight of the same boundaries, just from a +[3101.120 --> 3106.160] different viewing location, then yes. So that's a really powerful idea. Although you're +[3106.160 --> 3110.880] assuming here that we have a good allocentric representation of object vectors, +[3110.880 --> 3114.560] of the different objects in the area that you're looking at. +[3114.560 --> 3119.360] So that's the input. So the two anchorings really need to agree on head direction cells. They really need to agree on, +[3119.360 --> 3125.680] you know, your sense of direction. That's where they have to agree. I mean, somehow, +[3125.680 --> 3131.840] yeah, somehow I have to form a representation. Yeah. Right. Obviously, if I'm dropped in some place +[3131.840 --> 3138.720] and I have a bunch of object vector cells, and in another place a bunch of object vector cells, +[3138.720 --> 3142.800] and I was in different orientations and didn't know that, the whole thing would be messed up. +[3142.800 --> 3151.200] Right. Yes. So how does that get resolved? I mean, my overall mental +[3151.200 --> 3156.400] model for that is that your head direction cells follow certain heuristics, like using the long +[3156.400 --> 3163.760] axis of the room. It almost has a set of rules, where your +[3163.760 --> 3169.120] orientation system tries to use a consistent set of rules so that no matter where you +[3169.120 --> 3174.400] enter a room, it will be consistent. Yeah. You know, I've been thinking about this, because +[3174.400 --> 3181.600] orientation is a lot like the grid cells. I've mentioned this before. You can do path +[3181.600 --> 3186.640] integration with orientation. Like, I can close my eyes and rotate my body a certain amount, +[3186.640 --> 3193.360] and I have a sense of how far I've rotated. I can predict what I'm going to see. And so +[3195.120 --> 3202.240] we have this path integration, and there's an anchoring component to it. It's very analogous +[3202.240 --> 3211.360] to grid cells, except that instead of unfolding our torus or our ring linearly over +[3211.360 --> 3216.400] distance, it wraps around on itself. So it brings 360 degrees back to the starting point. +[3217.200 --> 3223.280] And so the question then is, how does it get anchored? And, you know, +[3223.280 --> 3227.680] there's this idea that you pick these long-distance features to look for, and there's some evidence for that. +[3228.160 --> 3232.320] But it still seems very weird to me. It just doesn't seem sufficient somehow.
It feels like +[3232.320 --> 3237.920] somehow you have to infer what your orientation is, and not just base it on some +[3237.920 --> 3242.000] simple observation. Like, there could be many places where it's not obvious what the correct orientation +[3242.080 --> 3246.400] is. So somehow you have to infer it; you have to figure out what your anchoring should be for the +[3246.400 --> 3250.960] grid cells. So simultaneously, you're figuring out what your orientation is as well as your location. +[3253.440 --> 3261.200] Both can be confused. You know what I'm saying? Okay. All right. Well, I think I've made +[3261.200 --> 3267.040] some progress in understanding what you said today, at least as a very high-level +[3267.040 --> 3273.920] image. Like, okay, we're not going to randomly pick +[3275.600 --> 3280.640] anchoring points in modules; we're going to base them on observations. We're going to do that +[3280.640 --> 3283.520] whether it's an orientation module, we're going to do that whether it's a grid cell module: +[3284.320 --> 3291.040] somehow observations are going to choose what our anchoring point is. And somehow you've +[3291.040 --> 3296.160] made it work so that in a room with objects, +[3296.160 --> 3300.240] I would pick compatible anchoring points from different locations in the room, based on observation. +[3300.880 --> 3302.640] And I'll have to think about how you did that. +[3305.520 --> 3310.080] That would be a big win in my mind. That's what you've done. +[3310.800 --> 3318.400] Yeah, that's the motivation. So, the last bullet point here, the +[3318.400 --> 3322.480] point I wanted to make, first I'll just read it and then say what I mean. This +[3322.480 --> 3328.240] was a simple linear mapping. It's a single set of weights. A multi-layer network or +[3328.240 --> 3337.200] recurrent network will be more advanced. It will use population codes, and it won't +[3337.200 --> 3343.920] necessarily be these simple sine waves. What I'm saying here is that +[3344.640 --> 3350.240] I'm not literally predicting that grid cells are going to respond to a boundary +[3350.240 --> 3355.840] going diagonally like this. I mean, maybe they will, but in some ways the reason +[3355.840 --> 3360.320] you're seeing these sine waves like this is because I was giving you the simple version of it. +[3362.320 --> 3370.320] If these rings are set by a nonlinear feed-forward network, you're no +[3370.320 --> 3377.520] longer going to have just these simple filters. It's going to be more complicated. +[3379.120 --> 3384.400] Although you might still get a bunch of these simple filters. Possibly, yeah. But it won't be +[3384.400 --> 3390.480] constrained to that. And right now, each of these rings is sort of independent +[3390.480 --> 3394.640] of all the others, in the sense that you can +[3394.640 --> 3399.600] describe what each ring responds to independently of the others. But once you +[3399.600 --> 3405.680] add multiple layers of processing, they're kind of co-linked. You can't +[3405.680 --> 3411.920] really describe what just one responds to in isolation.
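A toy illustration of the contrast being drawn here, under invented sizes and scales: each ring is a pair of cells carrying a (cos, sin) phase at one spatial scale, and the readout is either a single linear mapping, whose outputs stay independently describable, or a multi-layer mapping, whose outputs become coupled.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of the rings: each ring is a pair of cells carrying the
# (cos, sin) phase of a distance at one spatial scale. Scales and sizes
# are illustrative assumptions.
scales = np.array([0.3, 0.5, 0.7, 1.1, 1.3, 1.7])   # six rings = six pairs of cells

def ring_code(d):
    phases = 2 * np.pi * d / scales
    return np.stack([np.cos(phases), np.sin(phases)], axis=-1).ravel()  # 12 cells

x = rng.normal(size=4)                  # some input features

# Single linear mapping: one weight matrix; each of the 12 outputs is an
# independently describable filter of the input.
W = rng.normal(size=(12, 4))
linear_out = W @ x

# Multi-layer mapping: after a hidden nonlinearity, the 12 outputs are coupled,
# and no single one can be described in isolation from the others.
W1, W2 = rng.normal(size=(32, 4)), rng.normal(size=(12, 32))
mlp_out = W2 @ np.maximum(W1 @ x, 0.0)

print(ring_code(2.0).shape, linear_out.shape, mlp_out.shape)  # (12,) each
```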
And that might not help +[3411.920 --> 3416.400] this presentation, because I'm making things more confusing. But this is just the simple +[3416.400 --> 3423.680] version. And the system changes once you have multiple layers, or if you add recurrence to this layer; +[3424.320 --> 3430.000] then these linear filters are no longer the exact truth. +[3433.040 --> 3438.720] Maybe this didn't help my presentation because I just added more confusion. But I do +[3438.720 --> 3445.520] think that a realistic version of this is not going to use a linear mapping here, and grid +[3445.520 --> 3449.120] cells probably aren't just a linear mapping of the surrounding head direction cells. There's +[3449.120 --> 3453.840] something a little more complicated than a linear mapping. Does anything you presented +[3453.840 --> 3460.640] here suggest how the egocentric-to-allocentric transformation will occur? Because +[3460.640 --> 3465.760] everything you've presented here so far is sort of agnostic between allocentric and egocentric. Right. No, +[3465.760 --> 3471.200] I haven't taken a stand on that. Like you said earlier in the presentation, +[3471.200 --> 3477.920] all these mechanisms can work in either an egocentric or an allocentric frame. But +[3477.920 --> 3482.480] I haven't stated anything about the conversion. So, the reason I ask is, and I've +[3482.480 --> 3488.320] said this many times before, this conversion from one reference frame to another +[3488.320 --> 3493.520] has to occur. And I believe it has to occur in every cortical column, right? Because it's not +[3493.520 --> 3496.400] going to occur in just one place; it happens in multiple places in the cortex. So it's going to occur +[3496.400 --> 3502.960] everywhere in the cortex. So it makes sense that V1 has to somehow make some movement, maybe +[3502.960 --> 3510.240] not complete, but some movement from a purely egocentric reference frame to a less +[3510.240 --> 3515.200] egocentric reference frame. Maybe it makes it all in one fell swoop, which is what I've been +[3515.200 --> 3518.800] thinking, or maybe there are intermediate phases across areas, I don't know. +[3518.800 --> 3522.960] But at least some amount of ego-to-allo has to be occurring in every column. +[3523.680 --> 3527.280] And maybe a complete ego-to-allo, you know? +[3527.840 --> 3534.800] Well, are you 100% attached to that? Occasionally we've brought up the idea that +[3535.440 --> 3543.200] some columns are ego and some are allo. Maybe that's the what/where distinction. We don't know, but that's +[3543.760 --> 3549.920] totally compatible with the idea that each column does a reference frame transform. So for example, +[3550.160 --> 3559.280] in a "where" column, you might be doing a transform from a finger's reference frame to a hand's +[3559.280 --> 3565.760] reference frame. Or you might be doing a transform from your eyes' orientation in your head +[3565.760 --> 3572.480] to your eyes' orientation relative to your body. So it's less about ego and allo; it's more that there's +[3572.480 --> 3576.880] a reference frame transformation that occurs in every column. I'm pretty wedded to that. That +[3576.880 --> 3583.200] one I feel strongly about. The allo/ego label can be applied to lots of different things.
+[3583.200 --> 3588.080] But a reference frame transformation, I think, has to occur everywhere. +[3590.000 --> 3595.520] And again, I hope those examples make it clear that you can be transforming between multiple different +[3595.520 --> 3602.160] egocentric reference frames. Other people have written about that. They +[3602.240 --> 3607.200] point out that when you move the muscles of your arm and your fingers, there's a gazillion +[3607.200 --> 3619.200] transformations that have to be done just to know where every part is relative to +[3619.200 --> 3624.800] the other parts, just to know where your finger is relative to your head. To know where my +[3624.800 --> 3629.920] finger is relative to my nose requires my finger relative to my hand, my hand relative to my arm, all these +[3630.880 --> 3636.960] things. So something like that. So yeah, I'm wedded to that idea. Anyway, the point +[3636.960 --> 3646.880] I'm getting at is: okay, I think that's a really key problem here, how do you do +[3646.880 --> 3659.600] that? And I just wanted to know if you had addressed it here; you hadn't. But I think we have +[3659.600 --> 3667.040] to somehow do it. So I'll think about whether what you suggested +[3667.040 --> 3675.760] here can help with that. Kevin, you were saying something? +[3675.760 --> 3683.120] It's kind of a link, because one of the things that's suggestive of this for me is that there's a +[3683.120 --> 3688.800] trick, you may have seen it already, Marcus: if you take the Fourier transforms of two images +[3690.000 --> 3698.320] and basically swap the magnitudes of the two images, fundamentally it doesn't +[3698.320 --> 3703.440] change the information much; most of the information, the salient information, is in the phase part of the +[3703.440 --> 3711.360] image. And so when I see these pairs contributing, you know, these axes in quadrature, +[3712.240 --> 3718.400] that phase information contains a lot of interesting information. And you're showing +[3718.400 --> 3723.680] examples of these pairs, and you're saying it might not be a linear mapping. But the fact that +[3723.680 --> 3729.280] you're combining these things in a way that is indicative of a phase relationship +[3730.000 --> 3736.800] means that you can embed a lot of information that's important. It's a question of, you know, +[3738.400 --> 3744.560] does the brain find this way of encoding information? +[3745.040 --> 3753.760] Importantly, I think that what you're trending toward is something that's important: how do you +[3753.760 --> 3760.320] take these one-D things and get these phase relationships to things? +[3760.320 --> 3767.280] So I find it evocative because of what I see in Fourier transforms. +[3768.240 --> 3774.800] So I'm at least aesthetically pleased with what you're doing. +[3776.000 --> 3780.400] I'll look at what you linked to, because that sounds interesting. Like maybe +[3781.120 --> 3787.760] we can have very coarse magnitudes, or we can actually see how necessary the magnitudes +[3787.760 --> 3790.480] actually are. Exactly. Exactly. +[3790.560 --> 3802.560] All these ideas floating through my head. Does anybody else want to talk further about this? +[3804.880 --> 3808.880] What do you want to do next, Marcus? This is always what happens.
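The magnitude/phase swap Kevin mentions is easy to try; here is a small numpy sketch on synthetic images (the images and the correlation check are illustrative, not from the talk). Each hybrid keeps one image's phase and the other's magnitude, and it correlates far more with its phase source.

```python
import numpy as np

rng = np.random.default_rng(1)

def hybrid(phase_src, mag_src):
    # Keep phase_src's Fourier phase, take mag_src's Fourier magnitude.
    F_p, F_m = np.fft.fft2(phase_src), np.fft.fft2(mag_src)
    mixed = np.abs(F_m) * np.exp(1j * np.angle(F_p))
    return np.real(np.fft.ifft2(mixed))

def corr(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

img1 = rng.random((64, 64)); img1[16:48, 16:48] += 2.0  # a bright square
img2 = rng.random((64, 64)); img2[:, ::8] += 2.0        # vertical stripes

h = hybrid(phase_src=img1, mag_src=img2)  # img1's phase, img2's magnitude
print("corr with phase source:", corr(h, img1))      # typically much higher
print("corr with magnitude source:", corr(h, img2))
```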
+[3811.360 --> 3817.680] The hard part is choosing exactly what to do next, because I have a whole list of ideas. +[3817.680 --> 3828.560] One thing that seems lacking is that the grid cells are set from the +[3828.560 --> 3838.400] instantaneous input. This system is not going to handle ambiguity, really. If you give +[3838.400 --> 3845.520] the system identical input at two times, it's going to activate identical output. +[3846.960 --> 3855.920] So something that does incorporate time into inference is a big to-do item. +[3855.920 --> 3860.320] That's a hard task in front of me, and I'm deciding whether it's the right next step. +[3861.280 --> 3869.200] Another task: this third bullet point is the most confusing part of this, because I haven't +[3870.720 --> 3878.000] tried it out. Figuring out how to flesh out this third bullet point and make it into something +[3878.000 --> 3886.080] that I can explain to everyone and demo, that's a big unknown, and I think exploring +[3886.080 --> 3893.600] this space is also important. I guess one thing on the topic of what to do next is that with a lot of this, +[3894.800 --> 3901.200] I'm sort of living in machine learning land still: the fact that I represented all of this using +[3901.200 --> 3905.120] these units, used backprop, used this loss function, and interpreted the result as a bunch of rings. +[3909.120 --> 3915.200] One question on what to do next: one point of view is that I keep doing it in the machine learning +[3915.200 --> 3924.880] world while maintaining a grip on the neuroscience. I'm bringing that up because you could come in with +[3924.880 --> 3930.320] the perspective of, no, it's time to dive deeper into the neuroscience and +[3930.320 --> 3935.680] start implementing these as actual rings instead. Right now I'm more on the side of thinking I can get more +[3935.680 --> 3940.880] done by living in the machine learning world and analyzing this technique from that angle. That's +[3940.960 --> 3949.760] just a discussion topic on what to do next. Well, I think the +[3950.560 --> 3958.720] core powerful idea here is that novel environments, together with locations, can be +[3958.720 --> 3968.880] represented using some encoding scheme like this. It's not clear to me that you've fully shown +[3968.960 --> 3975.040] that can happen well, and it seems like whatever you can do to really show that working well would +[3975.040 --> 3980.400] be great. I'm not sure whether you really need multi-layer networks or not to do it. +[3982.160 --> 3987.600] And then from there, being able to show the predictive nature of this. It seems like those are +[3987.600 --> 3994.400] two really powerful ideas for the same coding scheme. That seems like the key thing here. +[3994.400 --> 3998.720] When you said that phrase earlier, like, oh, you're +[3998.720 --> 4004.720] representing these novel environments, it didn't work for me. What worked for me is this idea +[4005.600 --> 4013.920] that I can anchor my grid cells, that they're not randomly anchored, they're anchored based on +[4014.800 --> 4018.720] observations. Yeah, we could use that; that's the same thing.
I think to me those are the same, but we can use that. +[4018.720 --> 4025.360] Okay, well, to me there was a lot of ambiguity in the phrase that you and Marcus +[4025.360 --> 4030.320] used, at least in my mind. I didn't understand what it meant, but this I can completely understand. +[4030.320 --> 4034.240] Now I can say, oh, that's more precise. Yeah, the way you're saying it is more precise, I think. So I +[4034.240 --> 4041.760] know what that means now. Let me just throw out an observation, Marcus, to think about. +[4041.760 --> 4046.960] When we were in the office, I used to talk about this. I'd say, oh, I walk into a +[4047.520 --> 4055.440] conference room I've never been in before, and I quickly look around, and now, +[4055.440 --> 4060.640] just as I said earlier, I can close my eyes, walk to a different part of the room, and picture +[4060.640 --> 4070.800] what I would see. And I think that is what you meant by learning a novel room, or something like +[4070.800 --> 4078.400] that. It's like, with my eyes closed, I not only know where I am in the room +[4078.400 --> 4083.120] as I walk, but I can also visualize what's +[4083.120 --> 4087.600] in the room, what my viewpoint would be from this point. So we clearly built some sort of model of +[4087.600 --> 4094.560] the room to allow me to do that. But this idea that the observations initially anchor the grid cells, +[4094.800 --> 4101.680] that they do not need to be randomly anchored, that's the key thing I think we said here, +[4103.680 --> 4109.120] and that's part of solving this problem. But the other point I wanted to make here, +[4109.120 --> 4114.320] before I get off track: when I walk into the room, I don't just open my +[4114.320 --> 4120.160] eyes, take a glance, and close them. I literally have to attend to the different objects in the room. +[4120.560 --> 4125.920] I think I have to do that. I can't just take a picture in one open shot. I have to say, oh, +[4125.920 --> 4132.160] there's a chair, there's this, and as I do this, I somehow see where these things are relative to me. +[4132.160 --> 4136.720] I clearly form a representation of where the chair is relative to me, where the table is relative to +[4136.720 --> 4146.480] me, where the coffee pot is relative to me. So it's not a snapshot; I'm building up +[4146.480 --> 4151.600] something by looking at multiple things and their positions. And my point is, +[4151.600 --> 4157.280] it's not like I just have an image in a flash. It's like I have to go through a series of +[4157.840 --> 4165.200] observations of the different components in the room to get to that representation, whatever +[4165.200 --> 4170.240] it is. So somebody might think about that issue here too. I'm just pointing +[4170.240 --> 4175.520] out, like you said a moment ago, this is like a single image, and from +[4175.520 --> 4180.000] that I can do all this. In the biology, I don't think that's the case. I think we have +[4180.000 --> 4189.600] to somehow construct something through multiple observations, something that +[4189.600 --> 4195.600] somehow allows me to anchor myself. I don't know. Anyway, I'm going to shut up; I'm just +[4195.680 --> 4205.040] rambling here. The takeaway for me is: no random grid cell anchoring.
Yeah, that's the big takeaway. +[4205.680 --> 4211.440] Yes. And I would suggest using that as a sorting function for the different things you're thinking +[4211.440 --> 4219.360] of trying: what would really show that property well? That makes sense. Yeah, it's a very powerful idea. +[4219.360 --> 4224.480] Yeah, and in hindsight it almost seems like it has to be true, you know? It's like, oh, that's pretty +[4224.480 --> 4234.800] likely true, and now that you mention it, it almost has to be true. So I like that. +[4235.600 --> 4240.640] The striking thing is like, oh yeah, okay, good idea, now you've got a mechanism. I don't quite +[4240.640 --> 4247.600] understand it yet when I think about it, but maybe the next time you present this I may be able to +[4247.600 --> 4257.360] understand the mechanism in the context of that phrasing of the problem. Yeah, that's helpful. +[4259.680 --> 4270.160] Okay, I think we're done. diff --git a/transcript/allocentric_vc0HwO_AJ40.txt b/transcript/allocentric_vc0HwO_AJ40.txt new file mode 100644 index 0000000000000000000000000000000000000000..17e1e5dfb8881ed64fd0585671b1435896514998 --- /dev/null +++ b/transcript/allocentric_vc0HwO_AJ40.txt @@ -0,0 +1,733 @@ +[0.000 --> 3.200] So that's the work that I'm presenting here. +[3.200 --> 6.840] It's on allocentric spatial memories. +[6.840 --> 9.280] I'll get to what that means in a second. +[9.280 --> 12.200] So let's go. +[12.200 --> 16.880] OK, so I guess my life's mostly focused on computer vision, +[16.880 --> 21.040] and it's taken a turn towards machine learning, +[21.040 --> 23.560] like many other areas recently. +[23.560 --> 25.880] And usually it's concerned with what you find in images +[25.880 --> 28.280] and in videos, and how you parse that into things +[28.280 --> 30.520] that machines can understand. +[30.520 --> 35.080] And so normally the algorithms that people are concerned with, +[35.080 --> 38.400] they learn how to do things like recognize objects and so on. +[38.400 --> 43.520] So you can see some examples here, where you can detect objects, +[43.520 --> 47.520] like this little bottle there, or segment them, +[47.520 --> 52.200] so tell which pixels belong to the monitor and so on, +[52.200 --> 57.240] and even retrieve information like depth and 3D structure, +[57.240 --> 58.680] but always relative to the camera. +[58.680 --> 61.480] So you can see that there's a trend. +[61.480 --> 63.680] If you step back a little, you'll +[63.680 --> 65.440] see that they're all very image-centric tasks, +[65.440 --> 69.160] which is not very surprising, because it's computer vision. +[69.160 --> 71.640] But in order +[71.640 --> 78.000] to have machines that can actually reason beyond images +[78.000 --> 80.440] and see the world a bit like we do, +[80.440 --> 82.040] we probably need to go beyond that. +[82.040 --> 86.160] So we want this sort of machine +[86.160 --> 88.600] to be able to parse the world into something +[88.600 --> 89.320] that it understands. +[89.320 --> 92.560] It's like when you stop seeing things, +[92.560 --> 97.360] you don't necessarily forget about them. +[97.360 --> 99.120] So this is object permanence. +[99.120 --> 102.400] And this is really what you need +[102.400 --> 107.360] to construct a more long-term view of what you've seen +[107.360 --> 112.320] and use that to do some long-term planning +[112.320 --> 114.880] and have some goals and so on.
+[114.880 --> 115.800] So this is important. +[115.800 --> 118.840] So not to have just a sort of situational awareness +[118.840 --> 121.840] that's second to second, but actually +[121.840 --> 124.800] be able to aggregate things in a consistent view. +[124.800 --> 126.080] And obviously, that's always going +[126.080 --> 129.720] to be more or less centered on the world, +[129.720 --> 132.520] rather than centered on yourself. +[132.520 --> 135.000] And that's really where the word allocentric comes from. +[135.000 --> 137.560] It's the opposite of egocentric. +[137.560 --> 142.120] So it just denotes a change of point of view. +[142.120 --> 144.360] So as you can see, this is mostly for robots, +[144.360 --> 148.120] but it also applies to self-driving cars +[148.120 --> 151.960] and a bunch of other things. +[151.960 --> 154.240] OK, so obviously I'm not the first one +[154.240 --> 155.040] to think about this. +[155.040 --> 158.680] People have been doing robotics for a very long time, +[158.680 --> 164.720] and they usually refer to this problem as SLAM, +[164.720 --> 168.320] or simultaneous localization and mapping, if you've heard about that. +[168.320 --> 173.120] And it's usually a recursive kind of algorithm +[173.120 --> 177.200] where you observe frames. +[177.200 --> 181.640] So as time moves on, you observe new things in the world, +[181.640 --> 182.840] like frames in a video. +[182.840 --> 184.160] And then from that, you're supposed +[184.160 --> 188.800] to get an idea of your location, but also +[188.800 --> 191.040] continuously build a map. +[191.040 --> 192.640] And as you do that, you base yourself +[192.640 --> 195.600] on the previous estimate of the location and the map. +[195.600 --> 198.840] You receive a new frame, and now you do that all over again. +[198.840 --> 200.840] And so you sort of keep +[200.840 --> 204.800] updating things over time. +[204.800 --> 209.840] And these pipelines are very complicated. +[209.840 --> 210.960] They have a lot of engineering. +[210.960 --> 214.120] Very talented people spent a lot of time with them. +[214.120 --> 215.920] But they were sort of made by hand, +[215.920 --> 221.680] from this engineering perspective. +[221.680 --> 224.080] And so it's very hard +[224.080 --> 226.640] for them to adapt to new environments, for example, +[226.640 --> 228.200] because they were hand-tuned. +[228.200 --> 231.600] And obviously, we'd like something that's a bit more automatic. +[231.600 --> 234.520] And the other thing that's more fundamental +[234.520 --> 236.680] is that there's no semantic information in this. +[236.680 --> 241.360] These maps are usually composed of point clouds. +[241.360 --> 243.800] So they tell you about surfaces, but they don't necessarily +[243.800 --> 247.760] tell you anything about what those surfaces are a part of. +[247.760 --> 251.400] So whether it's a door or a wall, it's all the same +[251.400 --> 254.640] to an algorithm like that. +[254.640 --> 256.240] And why do you care about that? +[256.240 --> 262.120] Because you can use this information to compensate for missing data. +[262.120 --> 266.480] So if your sensor for some reason +[266.480 --> 270.240] didn't detect a part of a wall, if you see both ends of the wall, +[270.240 --> 273.040] you can extrapolate and think, well, this is probably +[273.040 --> 275.040] going to be the same wall.
+[275.040 --> 277.200] And without any prior information like this +[277.200 --> 280.320] that depends on semantics, you lose a lot of robustness +[280.320 --> 282.640] and a lot of adaptability. +[282.640 --> 288.880] Another example might be, if your goal is to just walk down a corridor, +[288.880 --> 295.760] you don't really need an absolute, centimeter-level understanding of where you are. +[295.760 --> 298.320] You can just keep going in this direction +[298.320 --> 301.640] and compensate when things get a bit less good. +[301.640 --> 304.280] So those are important points about robustness, +[304.280 --> 306.720] and it's something that biological systems have +[306.720 --> 310.480] that these systems don't, +[310.480 --> 311.480] these artificial ones. +[313.640 --> 317.160] Now, of course, if you know about the trend of deep learning, +[317.160 --> 321.480] which has taken machine learning and all of these engineering disciplines by storm +[321.480 --> 324.800] lately, you're thinking about how to apply deep learning to this problem. +[324.800 --> 327.560] And it has resisted that application for a while, +[327.560 --> 329.480] compared to other areas. +[329.480 --> 331.160] And there are a few reasons for that. +[331.160 --> 337.920] So probably the first approach was to just use, +[337.920 --> 340.160] with deep learning, these, +[340.160 --> 343.840] you've probably heard about them, these deep neural networks. +[343.840 --> 346.200] They essentially consume a lot of data, +[346.200 --> 349.200] and they learn very non-linear functions. +[349.200 --> 351.000] They are a little bit like black boxes, +[351.000 --> 354.400] so there's some black magic to them. +[354.400 --> 358.880] So that means the first approaches were a bit more simplistic. +[358.880 --> 365.240] The first thing was to just predict the ego-motion, +[365.240 --> 370.440] which is the frame-to-frame change in your agent's pose. +[370.440 --> 374.000] So if you rotated a little bit, or moved forward, +[374.000 --> 376.400] you just predict that, frame to frame. +[376.400 --> 380.840] Now, obviously you can do that without having any information about the environment, +[380.840 --> 384.880] just looking at things like optical flow and so on. +[384.880 --> 386.920] But this does not build the map, +[386.920 --> 389.080] which you might be interested in for other reasons, +[389.080 --> 392.040] for downstream tasks like planning and navigation. +[392.040 --> 394.320] And also it doesn't correct for drift. +[394.320 --> 401.400] So as you keep accumulating these frame-to-frame pose changes, +[401.400 --> 403.280] you will accumulate drift. +[403.280 --> 408.280] There's no way around that using this sort of technique. +[408.280 --> 413.560] So then a more advanced version of that was +[413.560 --> 417.520] to learn localization, but learned offline. +[417.520 --> 422.720] So essentially, training a network on one specific environment, +[422.720 --> 428.240] for example this room, and for every image predicting the location where it is. +[428.240 --> 430.600] Now, you can do that if you have ground-truth +[430.600 --> 439.240] positions of the camera, so you know some associations between images and camera +[439.240 --> 441.880] or agent positions. +[441.880 --> 445.400] But then the problem is that the map is really only very implicit.
+[445.400 --> 452.600] So the map really only exists encoded somehow into the trained network. +[452.600 --> 457.280] So there is no easy way, for example, it will not transfer to new environments. +[457.280 --> 459.560] Any new environment will require retraining. +[459.560 --> 465.280] And training is a sort of labor-intensive process +[465.280 --> 467.200] because of all of the tuning you have to do. +[467.200 --> 473.920] So that's also not great if you want to just deploy your robot and do that kind of thing. +[473.920 --> 480.400] So obviously, then people tried to do even better, +[480.680 --> 488.880] performing online mapping without encoding the map, +[488.880 --> 492.960] the environment, into the network's parameters, the weights. +[492.960 --> 497.880] So essentially, what they do is create a map on the fly. +[497.880 --> 500.240] This map exists only as the activations, +[500.240 --> 504.520] the intermediate predictions, of the deep neural network. +[504.520 --> 508.560] And so this is a way to build it online +[508.560 --> 511.880] and make sure that it generalizes to new environments. +[511.880 --> 516.320] But the problem with these techniques is that +[516.320 --> 517.800] they lose the localization bit. +[517.800 --> 522.600] They take that for granted, as just an extra input. +[522.600 --> 524.160] And that's also not good. +[524.160 --> 529.320] So they solved only the other part of the problem. +[529.320 --> 535.240] So what I proposed in this work last year was to improve on both of these, +[535.240 --> 540.840] and actually be able to do both mapping and localization with the deep network, +[540.840 --> 543.800] without assuming any other prior knowledge. +[543.800 --> 548.800] So this is a network that builds a map on the fly +[548.800 --> 554.240] and actually uses the map to perform localization, so no other sources. +[554.240 --> 556.000] And it's able to do this fully online. +[556.000 --> 562.040] So as it sees new images, it will compile them into its spatial memory +[562.040 --> 564.720] and use that to know where it is. +[564.720 --> 567.280] And that's important. +[567.280 --> 572.520] OK. +[572.520 --> 573.800] So what's the special sauce? +[573.800 --> 576.640] What's different here? +[576.640 --> 578.800] It might not be much. +[578.800 --> 580.840] It's kind of obvious in hindsight, I guess, +[580.840 --> 587.040] but essentially everything comes out of just this one decision that you make very early on, +[587.040 --> 590.040] which constrains everything that comes downstream from it. +[590.040 --> 592.320] So if you commit to this representation, +[592.320 --> 595.720] you'll probably derive the exact same equations I did. +[595.720 --> 602.960] All you do is assume that your map model represents the ground plane as a two-dimensional +[602.960 --> 604.880] grid. +[604.880 --> 610.200] So we're not concerned about the full 3D problem here yet. +[610.200 --> 617.120] And for each cell of this two-dimensional grid, you can associate one embedding with +[617.120 --> 618.120] that location. +[618.120 --> 622.680] The embedding is the intermediate prediction of a deep neural network. +[622.680 --> 625.000] So that means it's some semantic encoding. +[625.000 --> 630.520] It's a vector that represents a point in this semantic space of all the things that you +[630.520 --> 633.440] can see in an image.
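A minimal sketch of that one decision, with invented sizes: the map is a 2D grid of embedding vectors, reading compares a query embedding against every cell (localization), and writing stores an embedding at an estimated cell (mapping).

```python
import numpy as np

# A minimal sketch of the spatial memory: a 2D grid of embedding vectors,
# one per ground-plane cell. Sizes and the update rule are illustrative,
# not the paper's actual implementation.

H, W, D = 32, 32, 16                 # grid height/width, embedding dimension
memory = np.zeros((H, W, D))         # empty map: nothing observed yet

def read(memory, query):
    """Localization: score every cell by similarity to a query embedding."""
    scores = np.einsum('hwd,d->hw', memory, query)
    p = np.exp(scores - scores.max())
    return p / p.sum()               # softmax -> position probability map

def write(memory, cell, embedding):
    """Mapping: store what was just seen at the (estimated) position."""
    y, x = cell
    memory[y, x] = np.maximum(memory[y, x], embedding)  # e.g., max-pool update

obs = np.random.rand(D)
write(memory, (10, 20), obs)
print(read(memory, obs).argmax() == 10 * W + 20)  # True: read finds the write
```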
+[633.440 --> 638.520] And so essentially you get to associate one sort of visual code per cell of this two-dimensional +[638.520 --> 640.680] grid. +[640.680 --> 641.680] And that's just from doing that. +[641.680 --> 646.840] So essentially, this is a spatial memory that allows associating semantics with world +[646.840 --> 648.080] coordinates. +[648.080 --> 654.520] And the fact that it's world coordinates, not egocentric camera-centered coordinates, +[654.520 --> 656.360] that's very important. +[656.360 --> 664.480] So once you have this spatial memory, this two-dimensional grid, which is visualized here as this cube, +[664.480 --> 666.320] then you can read and write from it. +[666.320 --> 673.120] And so given an image, you can perform localization by essentially reading from the map, so finding +[673.120 --> 679.600] which locations on the grid have embeddings that look like the one that you're seeing in +[679.600 --> 680.600] the image. +[680.600 --> 687.080] And that gives you a position and orientation, so knowing +[687.080 --> 688.880] where you are. +[688.880 --> 693.000] And then you can perform the inverse operation, which is writing, and that's mapping. +[693.000 --> 698.520] So given that your image sees some new things that you could not see before, +[698.520 --> 702.000] and given that you know your position, which you just estimated, you can write +[702.000 --> 707.560] back into this grid of embeddings: the embeddings +[707.560 --> 714.000] that were not set yet, now you can write there what you've seen that's new. +[714.000 --> 714.840] And that's the whole thing. +[714.840 --> 717.240] So that's the starting point. +[717.240 --> 723.880] Now once you do that, sort of everything else will come out of it. +[723.880 --> 732.680] So the main finding, I guess, is that if you take your image and you use the +[732.680 --> 737.720] right embedding, which I'll get to in a minute, you can perform localization very quickly +[737.720 --> 744.040] and very efficiently by just using a convolution operator, and I'll get to the details. +[744.040 --> 753.180] But the other nice part is that writing to the map, or updating the map, you +[753.180 --> 758.080] can actually prove mathematically is equivalent to deconvolution, which is the dual operator +[758.080 --> 759.720] to convolution. +[759.720 --> 762.060] And that has a nice symmetry to it, I guess. +[762.060 --> 768.100] So the main message of the paper, and I'll get to what these boxes mean in a second, +[768.100 --> 773.160] is just that you have this dual pair in localization and mapping: localization +[773.160 --> 779.860] as convolution on this two-dimensional grid, and mapping as deconvolution. +[779.860 --> 787.800] Okay, so I mentioned that you have to use the right embedding to be able to support +[787.800 --> 794.060] this, and that embedding is what I get to here. +[794.060 --> 800.420] So if you know these networks, the first processing step you do with them is to extract some +[800.420 --> 805.460] features from the image using a convolutional neural network, and that gives you access +[805.460 --> 812.740] to a bunch of embeddings, which are these vectors that encode some semantics. +[812.740 --> 816.100] And then, so what's the goal here? +[816.100 --> 824.180] We have this two-dimensional grid that corresponds to world-space coordinates on the ground plane.
+[824.180 --> 828.480] And we want to perform comparisons against that based on what we +[828.480 --> 829.860] see in the image. +[829.860 --> 836.860] So essentially what we have to do is somehow get the embeddings that you see in the image +[836.860 --> 841.500] into the same format as the ground plane embeddings. +[841.500 --> 849.700] To get them into that format, you just do a projection onto the ground plane. +[849.700 --> 856.620] Projection, projecting as in actually squashing it down from 3D to 2D. +[856.620 --> 861.340] So we assume that we're given the depths and a calibrated camera, which is, I guess, +[861.340 --> 867.660] reasonable for these robot and self-driving car scenarios. +[867.660 --> 872.540] But remember, these are centered on the camera, so that does not make it trivial, because +[872.540 --> 874.020] it's still only local information. +[874.020 --> 879.420] As your camera moves around, you have no idea what those 3D points correspond to, because +[879.420 --> 881.540] of the camera motion. +[881.540 --> 886.540] So, given the depths, you know the 3D coordinates in the local camera +[886.540 --> 889.460] space of each pixel. +[889.460 --> 894.900] So you can associate each pixel to a ground plane cell if you just project it down, and then +[894.900 --> 895.900] discretize into a grid. +[895.900 --> 898.540] And that's the operation that you see here in the center. +[898.540 --> 902.700] So you see the camera in 3D with the 3D points. +[902.700 --> 908.620] You see this green cell over there on the right. +[908.620 --> 915.220] And essentially, all of the pixels that are tinted green are projected onto that +[915.220 --> 917.620] one green cell. +[917.620 --> 918.860] So that's all you do. +[918.860 --> 922.460] You just take the CNN embeddings from those pixels. +[922.460 --> 923.740] You aggregate all of them. +[923.740 --> 924.940] I use max pooling. +[924.940 --> 926.620] You can use other things. +[926.620 --> 929.900] And you put them all into that one embedding for that one cell. +[929.900 --> 935.020] So what that gives you is what you see on the right, which is a local view. +[935.020 --> 939.740] So, the CNN embeddings in the ground plane. +[939.740 --> 943.020] But this is still centered on the camera, so it does not give you a world-space +[943.020 --> 950.580] global view of things. +[950.580 --> 957.660] So now that you have that, oh, and by the way, I represented that as this ground plane +[957.660 --> 959.420] square thing with these cones. +[959.420 --> 965.140] So if you look at the camera view unprojected, +[965.140 --> 967.860] it looks like the blue shaded part. +[967.860 --> 973.260] If you play top-down games on your computer, then you know what this is. +[973.260 --> 974.260] Yeah. +[974.260 --> 975.260] So just picture that. +[975.260 --> 979.540] So you have this local view that's been unprojected. +[979.540 --> 986.540] Now all you have to do really is to find it on your map memory. +[986.540 --> 990.220] We do this in the most straightforward way, which is just dense matching. +[990.220 --> 997.660] You essentially try all possible positions and attribute a score to each one of them. +[997.660 --> 999.660] And that will give you the overall position. +[999.660 --> 1004.300] We do that with what is equivalent to a cross-correlation operation, or convolution. +[1004.300 --> 1006.700] People sometimes use the terms interchangeably.
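A rough sketch of this ground-projection step, with toy intrinsics, depths, and grid size standing in for the real ones: unproject each pixel with its depth, drop the height, discretize onto a local grid, and max-pool the CNN embeddings that land in the same cell.

```python
import numpy as np

# Sketch of the ground-plane projection: pool per-pixel CNN embeddings into a
# 2D grid of cells using depth and camera intrinsics. Shapes, the intrinsics,
# and the cell size are illustrative assumptions.

Hp, Wp, D = 48, 64, 16
feat = np.random.rand(Hp, Wp, D)          # per-pixel CNN embeddings
depth = np.random.uniform(1.0, 5.0, (Hp, Wp))
fx = fy = 50.0; cx, cy = Wp / 2, Hp / 2   # toy pinhole intrinsics

G, cell = 16, 0.5                          # local grid: G x G cells of 0.5 m
local_view = np.zeros((G, G, D))

for v in range(Hp):
    for u in range(Wp):
        z = depth[v, u]
        x = (u - cx) / fx * z              # unproject pixel to camera space
        # Project onto the ground plane: keep (x, z), drop the height.
        gx = int(x / cell + G / 2)
        gz = int(z / cell)
        if 0 <= gx < G and 0 <= gz < G:
            # Max-pool every pixel that lands in the same cell.
            local_view[gz, gx] = np.maximum(local_view[gz, gx], feat[v, u])

print(local_view.shape)  # (16, 16, 16): egocentric ground-plane embeddings
```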
+[1006.700 --> 1009.060] There are some subtleties there. +[1009.060 --> 1014.340] And then you just use a common softmax operator, which is used in deep networks to turn scores +[1014.340 --> 1018.500] into a normalized probability. +[1018.500 --> 1024.820] So this might seem a bit heavy, but these operations are completely parallelizable, and +[1024.820 --> 1029.260] they're optimized like crazy in these frameworks. +[1029.260 --> 1033.100] So this is really the fastest thing you can do. +[1033.100 --> 1034.540] One of the fastest. +[1034.540 --> 1038.660] So this can also be interpreted as addressing a spatial associative memory. +[1038.660 --> 1042.380] You're given a query, which is your local view, and you're trying to find where in +[1042.380 --> 1047.460] the memory it corresponds, where searching the memory is equivalent to looking at different +[1047.460 --> 1052.740] 2D coordinates on the spatial grid. +[1052.740 --> 1054.820] And that gives you a position heat map. +[1054.820 --> 1057.820] Now, of course, this is really only the position. +[1057.820 --> 1065.140] And when we move around the world, even moving attached to the ground, +[1065.140 --> 1070.100] position's not enough; you also rotate your view and look around. +[1070.100 --> 1072.020] So we need to also address that. +[1072.140 --> 1075.300] Turns out that's pretty easy. +[1075.300 --> 1080.620] To consider the orientation of the camera as you rotate around, all you have to do +[1080.620 --> 1087.900] really is consider rotated versions of the view that you just got. +[1087.900 --> 1094.420] So you see here on the left the local view; you essentially rotate it artificially, +[1094.420 --> 1099.140] and you get this stack of rotated views, at different angles. +[1099.140 --> 1101.460] So these are the possible views depending on the angle. +[1101.460 --> 1108.900] And if you now search for those with the same idea of using cross-correlation, then +[1108.900 --> 1114.300] instead of just one map of positions, you get a stack of position maps, one for +[1114.300 --> 1117.380] each orientation. +[1117.380 --> 1123.340] And so that gives you a joint probability of being at each position and orientation at the +[1123.340 --> 1125.340] same time. +[1125.340 --> 1126.340] And that's all. +[1126.340 --> 1128.740] So it's just used as a filter bank. +[1128.740 --> 1133.220] It turns out to be pretty simple. +[1133.220 --> 1140.740] And just to emphasize the big picture of what we're doing here: we're actually going from +[1140.740 --> 1144.580] the normal computer vision point of view, which is camera-centric. +[1144.580 --> 1148.620] We're moving from this camera reference frame to the world reference frame. +[1148.620 --> 1155.580] So the map exists in global coordinates, world-space coordinates, not in the camera space. +[1155.580 --> 1162.580] And that's the direction we want to move in. +[1162.580 --> 1163.580] Okay. +[1163.580 --> 1167.540] So I guess now the only thing that's missing is +[1167.540 --> 1176.020] how you take the local view and update the spatial memory based on what you've +[1176.020 --> 1178.620] just seen. +[1178.620 --> 1183.740] So the first thing you have to do is take your local view and register it with +[1184.460 --> 1185.900] the world-space coordinates. +[1185.900 --> 1188.140] So you know your position, +[1188.140 --> 1190.020] you know what you're seeing.
+[1190.020 --> 1195.260] You have to make sure you now have what you're seeing at the correct position of the map. +[1195.260 --> 1201.740] And once you have that, the registered local view, then it's very easy to update, because each +[1201.740 --> 1208.460] position of the world-space registered view corresponds to a position of the map. +[1208.460 --> 1214.940] So you can just integrate it with any simple interpolation or anything like that. +[1214.940 --> 1221.660] And it turns out that if you crunch the equations, this comes out as deconvolution, +[1221.660 --> 1223.580] which is maybe a bit surprising. +[1223.580 --> 1226.540] So there's some intuition into why this works. +[1226.540 --> 1232.020] If you look at it, you have this stack of position heat maps, one per +[1232.020 --> 1233.020] orientation. +[1233.020 --> 1237.660] That's your position-orientation probabilities. +[1237.660 --> 1241.740] So imagine that you are absolutely certain that you are at only one +[1241.740 --> 1243.460] position, +[1243.460 --> 1245.860] at one particular orientation. +[1245.860 --> 1250.660] That means everything in that stack on the right is zero, except for +[1250.660 --> 1253.340] one entry at the correct position and orientation. +[1253.340 --> 1259.260] So when you deconvolve, you will pick out the corresponding rotation +[1259.260 --> 1265.340] out of the stack of rotated local views at the top. +[1265.340 --> 1273.160] And then the deconvolution with a function that looks like a peak like that essentially +[1273.160 --> 1276.740] moves the rotated local view to that position. +[1276.740 --> 1280.340] So that's the more pictorial version of it. +[1280.340 --> 1283.100] But you can also prove it a bit more formally. +[1283.100 --> 1286.780] It's not that complicated. +[1286.780 --> 1290.460] Just no room for that on the slides. +[1290.460 --> 1295.220] And once you've registered it, it's easy to integrate into the map, +[1295.220 --> 1299.460] with bilinear interpolation or a convolutional LSTM. +[1299.460 --> 1300.460] Yeah. +[1300.460 --> 1303.780] So essentially, here's the full pipeline. +[1303.780 --> 1308.220] I'd say it's pretty small for a SLAM pipeline, because usually those are gigantic, +[1308.220 --> 1315.180] minimum, maybe, I don't know, 10 or 12 modules with all sorts of different craziness +[1315.180 --> 1316.180] inside. +[1316.180 --> 1324.980] So all you do is take your image, pass it to a standard CNN to get the embeddings, +[1324.980 --> 1328.980] ground-project using the depth so that you get a local view. +[1328.980 --> 1334.580] And then if you rotate those, you get this nice filter bank that you can use to localize +[1334.580 --> 1337.420] yourself in position and orientation. +[1337.420 --> 1342.980] And then deconvolve to re-register the local view. +[1342.980 --> 1346.140] And then you update the map using that. +[1346.140 --> 1348.260] So that's the whole thing. +[1348.260 --> 1355.580] And I guess these are the two important, or surprising, results: localization +[1355.580 --> 1359.660] as convolution, and mapping as deconvolution. +[1359.660 --> 1361.620] So I'll start with some toy problems. +[1361.620 --> 1365.140] I promise there are more interesting ones. +[1365.140 --> 1366.740] But this is sort of a controlled setting. +[1366.740 --> 1368.700] So we'll start with that.
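A toy, single-channel version of the localization half of that pipeline (shapes and the test pattern are made up): cross-correlate rotated copies of the local view against the map and softmax the resulting stack into a joint probability over orientation and position.

```python
import numpy as np
from scipy.signal import correlate2d

# Map with an asymmetric patch of "structure" so rotations are distinguishable.
map_grid = np.zeros((20, 20))
patch = np.arange(12, dtype=float).reshape(4, 3)
map_grid[5:9, 12:15] = patch

local = np.rot90(patch, k=3)     # the agent's local view, seen rotated by 270 deg

# Stack of position score maps, one per candidate orientation (the filter bank).
rotations = [np.rot90(local, k) for k in range(4)]
scores = np.stack([correlate2d(map_grid, r, mode='same') for r in rotations])

p = np.exp(scores - scores.max())
p /= p.sum()                     # joint P(orientation, y, x) via softmax

k, y, x = np.unravel_index(p.argmax(), p.shape)
print(f"rotate view by {90 * k} deg; best match near cell ({y}, {x})")  # k == 1
```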
+[1368.700 --> 1372.860] So I just generated 100,000 mazes that look like that, +[1372.860 --> 1376.220] and I simulated an agent that moves at random. +[1376.220 --> 1381.940] And the important thing relating it to the real world +[1381.940 --> 1385.940] is just that it has limited, very local visibility. +[1385.940 --> 1395.700] So if the agent is at the orange dot there, looking up, then the only cells that it's +[1395.700 --> 1400.260] able to see are the blue ones; everything else is occluded. +[1400.260 --> 1407.500] So you take this local view, which is what I've shown on the right, and you really +[1407.500 --> 1411.020] can't see beyond your field of view. +[1411.020 --> 1416.820] And regardless of the camera rotation, the local view is always going to be facing one direction +[1416.820 --> 1421.460] that you select, which in this case is always facing right. +[1421.460 --> 1428.420] So even as the agent moves around, the local view always starts by facing right, +[1428.420 --> 1432.660] and it has this very limited field of view. +[1432.660 --> 1436.300] So essentially you take this as the input. +[1436.300 --> 1443.900] I trained it on input sequences of five frames with supervision of position and orientation. +[1443.900 --> 1451.140] So the loss function is just making sure that the predicted position corresponds to the ground truth, +[1451.140 --> 1453.140] with a logistic loss. +[1453.140 --> 1457.500] And after training, this is what you get. +[1457.500 --> 1462.140] So here's an example, sort of step by step. +[1462.140 --> 1465.060] This is a global view where the agent starts. +[1465.060 --> 1469.940] And this is the first frame, where you get this local view, as I said, always facing +[1469.940 --> 1473.140] right, because it's local. +[1473.140 --> 1479.060] And everything that's gray is just the things that you can see. +[1479.060 --> 1481.980] And then there's walls and ground. +[1481.980 --> 1485.740] And then at the bottom you see the position prediction heat map. +[1485.740 --> 1492.020] So that's black for probability zero and white for probability one, and everything in between. +[1492.020 --> 1497.820] And then the blue thing is the ground truth, just for comparison. +[1497.820 --> 1499.820] And that's what you see on the first frame. +[1499.820 --> 1504.380] Then on the second frame, the red dot has moved over there. +[1504.380 --> 1507.820] You get a new local view. +[1507.820 --> 1511.700] And now the predicted position heat map is something like that. +[1511.700 --> 1519.220] So it actually seems to have honed in on the ground truth position pretty well. +[1519.220 --> 1525.540] And then here's another one, and another, and another. +[1525.540 --> 1527.340] And the result is always the same. +[1527.340 --> 1531.340] So it's very certain of the position. +[1531.340 --> 1536.220] You wouldn't think that immediately, just from the description of the algorithm. +[1536.220 --> 1537.900] But it turns out to be true. +[1537.900 --> 1543.380] And I think it's a bit impressive because of the way that this is working. +[1543.380 --> 1547.540] Essentially the local views are like little pieces of a jigsaw puzzle. +[1547.540 --> 1553.620] And in order to create a big map to localize yourself against new views, you have to +[1553.620 --> 1558.420] stitch together those puzzle pieces one by one into one larger view.
+[1558.420 --> 1563.860] And you do that by rotating and translating them around. +[1563.860 --> 1564.860] So that's funny. +[1564.860 --> 1568.980] And then there are some more interesting cases +[1568.980 --> 1571.340] that sort of test the limits of what this can do. +[1571.340 --> 1575.700] So here's one where its starting position is actually completely symmetrical, +[1575.700 --> 1578.700] vertically. +[1578.700 --> 1589.580] So when you get a new local view, you can actually fit it in both ways very well. +[1589.580 --> 1594.100] And that means the predicted position is just a 50% probability of being +[1594.100 --> 1596.660] at the top and 50% probability of being at the bottom. +[1596.660 --> 1601.900] And that's the two gray cells there. +[1601.900 --> 1603.780] And the ground truth is actually the one at the bottom. +[1603.780 --> 1608.620] But given this information, it's very hard, actually impossible, to tell. +[1608.620 --> 1613.100] And so what it does, as new frames come along, +[1613.100 --> 1618.580] is symmetrically propagate the probabilities on this heat map of probabilities +[1618.580 --> 1620.100] over time. +[1620.100 --> 1627.140] But then there is one frame where it does not fit one of the possibilities +[1627.140 --> 1628.140] too well. +[1628.140 --> 1631.420] So the probability at the top starts vanishing. +[1631.420 --> 1634.300] And one more step, and it vanishes completely. +[1634.300 --> 1643.140] So because it's a heat map of probabilities, you can propagate these very multimodal, complex +[1643.140 --> 1649.100] possibilities of what happened, simultaneously. +[1649.100 --> 1655.100] OK, so I mentioned it's a spatial memory. +[1655.100 --> 1659.580] So what does that map look like? +[1659.580 --> 1664.420] It's very hard to visualize, because these are deep network embeddings, and those always +[1664.420 --> 1666.500] look like gibberish to everyone. +[1666.500 --> 1669.780] That's why they say it's not very interpretable. +[1669.780 --> 1670.780] And it isn't. +[1670.780 --> 1677.060] I guess if you just plot the channels, so I'm plotting one channel per column here for +[1677.060 --> 1678.300] different samples, +[1678.300 --> 1685.840] you can see that some of them correspond to things like free space or walls or +[1685.840 --> 1688.020] corridors and things like that. +[1688.020 --> 1690.300] And that's what the embeddings seem to encode. +[1690.300 --> 1695.980] But more importantly, these local views are aggregated into a larger map. +[1695.980 --> 1698.820] And I guess you can't really draw any more conclusions from this. +[1698.820 --> 1703.300] So I tried to be a bit more systematic about showing whether these actually encode any +[1703.300 --> 1705.540] semantics. +[1705.540 --> 1714.160] What I did was just label each cell of the maze as either being a corridor or a dead +[1714.160 --> 1717.940] end or a crossing, an intersection, that kind of thing. +[1717.940 --> 1724.300] So those are ground-truth labels for what each cell corresponds to. +[1724.300 --> 1729.900] And then train classifiers based on the embeddings, the embeddings that I got from this system, +[1729.900 --> 1739.340] and see if it's possible to classify space into these labels, like dead ends and turns +[1739.340 --> 1740.500] and so on, +[1740.500 --> 1743.860] solely based on the embeddings. +[1743.860 --> 1748.460] And so chance would be 50%, which would mean it doesn't really encode that.
+[1748.460 --> 1754.580] But it turns out you can: a simple linear classifier gets way more than 50%. +[1754.580 --> 1759.940] So that shows the embeddings encode some semantic information about these things. +[1759.940 --> 1766.380] And that would not necessarily be obvious, because you're training the network to perform localization. +[1766.380 --> 1770.020] You're not training it to extract semantic information. +[1770.020 --> 1779.540] But somehow, along the way, the best way to localize yourself is to look for these semantic +[1779.540 --> 1785.620] features of the world and use those as the basis for your decision. +[1785.620 --> 1789.700] Okay, so I also did some experiments with 3D data. +[1789.700 --> 1793.220] The first one was from a game, which some of you might recognize. +[1793.220 --> 1795.940] It's this old game called DOOM. +[1795.940 --> 1803.660] And you go from the maze situation to this one by only adding the ground projection +[1803.660 --> 1804.660] step, +[1804.660 --> 1812.180] which goes from 3D, from images and depth, to the two-dimensional view. +[1812.180 --> 1817.220] So what I'm showing here are the images at the top, where I had to hack the source code +[1817.220 --> 1823.500] a little bit and remove the monsters, because they were messing up the localization. +[1823.500 --> 1825.140] And so that's what you see at the top. +[1825.140 --> 1833.020] And then at the bottom you see, on the left, a top-down view of the map in RGB, +[1833.020 --> 1838.900] and then these probabilities as a heat map that sort of flares up a little bit sometimes, +[1838.900 --> 1844.940] when it's a bit more uncertain about where it is, and the trajectory as one line. +[1844.940 --> 1848.220] And then on the right you see the orientation probabilities. +[1848.220 --> 1855.340] So as it rotates, you can see that it more or less gets the correct orientation. +[1855.340 --> 1862.660] And this was just one way to visualize the position-orientation heat maps of the network +[1862.660 --> 1864.380] over time. +[1864.380 --> 1870.220] So this was done on four pre-recorded speed runs through the game, which consist of six hours +[1870.220 --> 1872.140] of gameplay. +[1872.140 --> 1876.220] I think you've probably seen some works where they use virtual worlds like these to test +[1876.220 --> 1877.220] things. +[1877.220 --> 1883.540] And sometimes it's a little bit disappointing, because it's very easy to just start from +[1883.540 --> 1885.700] a small maze or something like that. +[1885.700 --> 1887.660] And that's fine to start with. +[1887.660 --> 1895.500] But the difference between that and this is that these are hours of gameplay over 32 +[1895.500 --> 1900.340] levels that are pretty big, and they were created to be interesting to humans and visually +[1900.340 --> 1901.340] diverse. +[1901.340 --> 1904.340] So don't be fooled by the quality of the graphics. +[1904.340 --> 1909.500] I mean, this is enough to keep things entertaining for humans for many hours. +[1909.500 --> 1913.060] So it should be entertaining for a network for many hours as well. +[1913.060 --> 1915.180] Or at least difficult. +[1915.180 --> 1917.260] Yeah. +[1917.260 --> 1920.460] So that's why I think it's interesting. +[1920.460 --> 1926.060] So then, the nice thing about the fact that the architecture I described doesn't really +[1926.060 --> 1934.500] have a lot of environment-specifics in it is that I took essentially the same network, +[1934.500 --> 1935.900] exactly the same hyperparameters.
+[1935.900 --> 1938.300] I just retrained it on some new data.
+[1938.300 --> 1943.140] And the new data was from a robot platform.
+[1943.140 --> 1948.500] So by training on that, you can see that the images are a bit different now.
+[1948.500 --> 1952.260] So this is a robot moving around a kitchen.
+[1952.260 --> 1957.900] And here you see the same representation: the top-down view, and then the probability
+[1957.900 --> 1963.460] of being at each location, and the history of the trajectory and orientations.
+[1963.460 --> 1970.860] So this was trained on a subset of 19 indoor scenes.
+[1970.860 --> 1976.020] And this dataset, essentially, if you want to work on robotics and you don't
+[1976.020 --> 1981.780] have a robot, is probably the best dataset to get, because it allows you to simulate what
+[1981.780 --> 1987.820] it would be like for a robot to traverse these environments, and with real images.
+[1987.820 --> 1997.500] And the reason you can do that is because they collected photos at every possible
+[1997.500 --> 2000.660] position and orientation in these houses,
+[2000.660 --> 2001.660] with actual robots.
+[2001.660 --> 2008.420] So because they collected densely every possible image, you can now simulate new trajectories
+[2008.420 --> 2011.780] by just stitching together these images in the right way; they provide the toolkit for
+[2011.780 --> 2012.780] that.
+[2012.780 --> 2014.780] So I think that's cool.
+[2014.780 --> 2017.420] And then you can do a bunch of tests on that.
+[2017.420 --> 2020.540] And I guess that's it for the results.
+[2020.540 --> 2026.020] I could go into some quantitative results maybe later, if you care about that; I compared
+[2026.020 --> 2035.380] to a set of state-of-the-art baselines and some classical SLAM methods, and, yeah, the results seem
+[2035.380 --> 2038.380] to favor this system.
+[2038.380 --> 2040.780] And yeah, so conclusions.
+[2040.780 --> 2048.020] It's possible to perform SLAM entirely online with these deep neural networks.
+[2048.020 --> 2053.060] The other funny finding was that localization and mapping can be expressed as a dual pair of
+[2053.060 --> 2057.660] convolution and deconvolution, and you actually get some semantics out of that.
+[2057.660 --> 2063.820] And then this is supposed to be a framework that supports long-term goals and navigation,
+[2063.820 --> 2067.540] and that will essentially be the next step.
+[2067.540 --> 2072.820] Yeah, and that's it for the first part of the talk.
+[2072.820 --> 2076.940] So now I don't know if I still have enough time.
+[2076.940 --> 2077.940] Half an hour.
+[2077.940 --> 2078.940] Okay, great.
+[2078.940 --> 2080.260] That's exactly how long I need.
+[2080.260 --> 2083.620] So I can go 25 minutes, yeah.
+[2083.620 --> 2088.180] That's all right.
+[2088.180 --> 2094.260] So I'm going to give you only 30 seconds to breathe and then move on to a completely
+[2094.260 --> 2098.260] different topic.
+[2098.260 --> 2101.980] Okay.
+[2101.980 --> 2107.780] So just flush out everything that you just heard.
+[2107.780 --> 2111.260] Yeah.
+[2111.260 --> 2117.460] So the first part was a bit more application-focused and visual.
+[2117.460 --> 2119.940] This one is unfortunately not as visual.
+[2119.940 --> 2125.860] It's a little bit more theoretical, so I'll skip over some of the technical
+[2125.860 --> 2128.140] details in the interest of brevity.
+[2128.140 --> 2134.340] But it's also just interesting for the different point of view you get when you consider
+[2134.340 --> 2136.500] something like meta-learning.
+[2136.500 --> 2137.500] So this is about meta-learning.
+[2137.500 --> 2140.980] I'll get into that.
+[2140.980 --> 2143.940] How many of you have heard about it before?
+[2143.940 --> 2146.460] Wow, okay.
+[2146.460 --> 2154.340] I'm impressed that five people have heard about it, because it's maybe a bit obscure.
+[2154.340 --> 2155.860] Okay.
+[2155.860 --> 2159.580] So this is some work that was presented this year at ICLR.
+[2159.580 --> 2162.660] Okay, so what's the goal here?
+[2162.660 --> 2167.780] The main goal of meta-learning, or the interest in it, is to perform this task, this application,
+[2167.780 --> 2169.860] called one-shot learning.
+[2169.860 --> 2177.980] So one-shot learning is one small part of the holy grail of machine learning, which
+[2177.980 --> 2183.660] is to essentially learn a new concept from just one or very few examples.
+[2183.660 --> 2187.940] This is something that humans can do very effortlessly.
+[2187.940 --> 2191.820] That's why you're able to watch these talks.
+[2191.820 --> 2201.140] And we obviously would like to have systems that have some form of this ability.
+[2201.140 --> 2207.140] So some examples, maybe not the best ones, but in systems that exist today, would
+[2207.140 --> 2212.260] be specializing OCR, optical character recognition, to new writers or alphabets.
+[2212.260 --> 2220.340] So on your phone you probably have some form of adaptation to the way that you write.
+[2220.340 --> 2223.460] That can also work for handwriting.
+[2223.460 --> 2230.740] And then, for example, single-object tracking, where you just meet a new object
+[2230.740 --> 2234.820] that you're interested in and then you have to follow it around.
+[2234.820 --> 2236.700] That's another example.
+[2236.700 --> 2239.740] But obviously, generalizing to new concepts,
+[2239.740 --> 2242.260] that's really the end goal.
+[2242.260 --> 2249.980] So I'm going to go a bit over this approach to one-shot learning,
+[2249.980 --> 2257.860] because on the face of it, it seems very unattainable to solve something like this
+[2257.860 --> 2260.420] in very difficult settings.
+[2260.420 --> 2263.740] Okay, so this is what meta-learning is.
+[2263.740 --> 2269.140] Essentially, you take your learning algorithm and you put a new learning algorithm inside
+[2269.140 --> 2270.140] of it.
+[2270.140 --> 2271.980] So you just iterate the concept.
+[2271.980 --> 2273.660] And that's where the "meta" comes from.
+[2273.660 --> 2279.820] It's "meta-something" if you iterate a concept on itself.
+[2279.820 --> 2282.980] So you take a learning algorithm and you put it inside a learning algorithm.
+[2282.980 --> 2285.780] What does that mean?
+[2285.780 --> 2292.380] So, to be just a little bit more concrete, you have some algorithm that can
+[2292.380 --> 2299.620] learn, some fancy neural network that can learn concepts, just simple ones.
+[2299.620 --> 2303.820] So, for example, this one that learns different kinds of fruit.
+[2303.820 --> 2309.740] Essentially, taking examples of these fruits, it learns, and the output of learning is a model:
+[2309.740 --> 2317.220] something that you can use to predict the type of fruit given new data.
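+The idea that "the output of learning is a model" can be written down directly: a learner is a function from a dataset to a predictor. Here is a minimal sketch with illustrative names and a deliberately trivial nearest-neighbor base learner (not the one from the talk):
+
+from typing import Callable, List, Tuple
+
+Example = Tuple[list, str]             # (features, label), e.g. fruit measurements -> "apple"
+Model = Callable[[list], str]          # a trained predictor
+Learner = Callable[[List[Example]], Model]
+
+def nearest_neighbor_learner(train: List[Example]) -> Model:
+    """A trivial base learner: learning outputs a model (here, a 1-NN predictor)."""
+    def model(x: list) -> str:
+        dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
+        return min(train, key=lambda ex: dist(ex[0], x))[1]
+    return model
+
+model = nearest_neighbor_learner([([1.0, 0.2], "banana"), ([0.3, 0.9], "apple")])
+print(model([0.4, 0.8]))  # -> "apple"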
+[2317.220 --> 2321.900] So we don't want to just train something that learns some fruits and is done with that.
+[2321.900 --> 2326.420] We want something that will be able to learn not just other kinds of fruit, but
+[2326.420 --> 2330.420] also other kinds of objects.
+[2330.420 --> 2339.220] So the way you do this is you essentially wrap it: you get a new learning algorithm that's
+[2339.220 --> 2343.900] supposed to generalize over different tasks.
+[2343.900 --> 2348.260] So if this base learner is concerned with learning different fruits, you might imagine
+[2348.260 --> 2355.660] other kinds of learners that try to learn different types of dogs, breeds of dogs, or different
+[2355.660 --> 2359.100] types of objects that you might find lying around a desk.
+[2359.100 --> 2361.380] So these are all different tasks.
+[2361.380 --> 2366.500] And now you have a different learner on the outside that sees a sort of more global view
+[2366.580 --> 2373.500] and is concerned with tuning things, learning things, so as to enable the base learner
+[2373.500 --> 2378.660] to learn each of these small tasks more effectively.
+[2378.660 --> 2383.980] So essentially, you're given these episodes, which I call episodes but are really
+[2383.980 --> 2392.420] tasks, different learning tasks that are small and manageable, and you start with a
+[2392.420 --> 2400.580] base learner that learns things for each episode specifically and adapts really fast.
+[2400.580 --> 2406.460] And then you have a meta-learner that learns generic parameters that are going to be shared
+[2406.460 --> 2411.340] across all of these base learners, but adapts more slowly.
+[2411.340 --> 2415.020] And the important bit, so you might think, okay, well, that's just like learning one big
+[2415.020 --> 2417.260] model for everything.
+[2418.260 --> 2423.020] And the part that distinguishes it from that kind of point of view is that the meta-
+[2423.020 --> 2431.020] parameters can either belong to the model, which would be equivalent to
+[2431.020 --> 2436.220] learning one big model for everything, or they can also belong to the learning algorithm.
+[2436.220 --> 2441.660] So things like the parameters that you tune when you run the learning algorithm.
+[2441.660 --> 2449.940] So, for example, if you have ever dealt with training some small deep learning model using these
+[2449.940 --> 2454.060] nice tutorials that are online, you'll find that you have to tune the learning rate.
+[2454.060 --> 2458.460] So the learning rate would be one thing that belongs to the algorithm, something
+[2458.460 --> 2460.580] the method itself usually cannot tune.
+[2460.580 --> 2465.660] And so you need something with an outside view that will be able to tune that.
+[2465.660 --> 2470.460] And the way you tune it is by having the meta-learner take a macro view over all of
+[2470.460 --> 2482.300] the specific tasks and tune the learning rate based on how it works for each one of them.
+[2482.300 --> 2490.900] So, in the context of a deep network, you can consider that this way of training
+[2490.900 --> 2496.380] networks is kind of like this black box, or actually, here, yellow box.
+[2496.380 --> 2500.780] That's the base learner, which would be a standard algorithm like SGD.
+[2500.780 --> 2508.460] And its goal is to output some parameters, in this case some weights of a convolutional
+[2508.460 --> 2509.980] network.
+[2509.980 --> 2516.780] So that it can then use those weights to classify a new test image and give a prediction
+[2516.780 --> 2519.020] about that image.
+[2519.020 --> 2524.620] So that's essentially a big abstraction of the standard pipeline.
+[2524.620 --> 2529.860] Now you can take some of the... so this is what you have.
+[2529.860 --> 2533.100] There are some hidden parameters here, the ones that you tune by hand.
+[2533.100 --> 2538.780] So we're going to make those explicit there on the right, lambda, which are the meta-parameters,
+[2538.780 --> 2543.180] like the learning rate, like regularization parameters, that kind of thing.
+[2543.180 --> 2549.780] But you can also take some of the filters, some of the weights of
+[2549.780 --> 2551.140] the network,
+[2551.140 --> 2555.500] and consider those meta-parameters that are not specific to a task.
+[2555.500 --> 2559.980] And so your base learner will now only be concerned with learning a sub-part of the network.
+[2559.980 --> 2565.860] But you get all of this freedom from making all of the meta-parameters explicit.
+[2565.860 --> 2569.580] So now you can try to find a systematic way to set all of them simultaneously so that
+[2569.580 --> 2575.580] they work across all the settings, all of the tasks.
+[2575.580 --> 2579.820] And the way you do that is essentially just by gradient descent,
+[2579.820 --> 2583.780] which is the workhorse of how you learn these deep networks.
+[2583.780 --> 2591.500] So if you essentially write down this computational graph, you can just ask your
+[2591.500 --> 2597.900] fancy deep learning framework to backpropagate errors through it.
+[2597.900 --> 2603.620] So essentially you cheat a little bit and you backpropagate errors from the test loss.
+[2603.620 --> 2613.860] And if you unroll that whole step, this will give you gradients for the meta-
+[2613.860 --> 2622.980] parameters, so errors: how does the error, or loss function, at the end vary
+[2622.980 --> 2624.580] when you vary those meta-parameters?
+[2624.580 --> 2630.220] And that allows you to adapt them slowly.
+[2630.220 --> 2631.700] So yeah, essentially, that's the rest.
+[2631.700 --> 2636.540] That's the whole meta-learning business in this sort of view, which is: you're trying
+[2636.540 --> 2640.780] to generalize across a bunch of episodes or tasks.
+[2640.780 --> 2643.140] So you just select one.
+[2643.140 --> 2648.900] You forward propagate, so you evaluate the base learner, which trains a small model
+[2648.900 --> 2651.940] specific to that task, with a few parameters.
+[2651.940 --> 2656.060] Then you make one prediction on a test sample.
+[2656.060 --> 2659.500] And you essentially incur an error based on that.
+[2659.500 --> 2664.660] And you backpropagate the error, and that will give you an adaptation for the meta-parameters.
+[2664.660 --> 2668.700] So by doing that, you do meta-learning.
+[2668.700 --> 2673.700] So I guess it's probably going to be best to, oh yeah, okay.
+[2673.700 --> 2677.460] So I've abstracted the base learner a little bit.
+[2677.460 --> 2685.820] If you look at a lot of meta-learning algorithms, they always seem, on the surface, to be
+[2685.820 --> 2687.300] very different.
+[2687.300 --> 2690.860] But they actually all fit this sort of framework.
+[2690.860 --> 2696.380] And by changing the base learner, that's how you get almost all of the latest papers
+[2696.380 --> 2700.180] on meta-learning.
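+As a concrete version of this episodic loop, here is a minimal MAML-style sketch in PyTorch, where the meta-parameters are a shared initialization and a learnable inner learning rate; the task distribution, shapes, and names are made up for illustration, not taken from the talk:
+
+import torch
+
+w0 = torch.zeros(2, requires_grad=True)            # shared init (meta-parameter)
+log_lr = torch.tensor(-2.0, requires_grad=True)    # inner learning rate (meta-parameter)
+meta_opt = torch.optim.SGD([w0, log_lr], lr=0.01)
+
+for step in range(100):
+    # Sample one episode/task: here, a random linear regression problem.
+    true_w = torch.randn(2)
+    xs, xq = torch.randn(5, 2), torch.randn(5, 2)  # support / query inputs
+    ys, yq = xs @ true_w, xq @ true_w              # support / query targets
+
+    # Base learner: adapt fast, starting from the shared init.
+    support_loss = ((xs @ w0 - ys) ** 2).mean()
+    grad, = torch.autograd.grad(support_loss, w0, create_graph=True)
+    w_task = w0 - log_lr.exp() * grad              # task-specific weights
+
+    # Meta-learner: the query (test) loss is backpropagated through the
+    # adaptation step, so both meta-parameters adapt slowly.
+    query_loss = ((xq @ w_task - yq) ** 2).mean()
+    meta_opt.zero_grad()
+    query_loss.backward()
+    meta_opt.step()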
+[2700.180 --> 2705.140] So those would be based on nearest neighbors, soft nearest neighbors,
+[2705.140 --> 2707.300] SGD, and logistic regression.
+[2707.300 --> 2712.780] And the one that I used here was ridge regression, and a few other variants.
+[2712.780 --> 2715.740] So it was just by changing the base learner.
+[2715.740 --> 2722.500] And so now I'm going to change gears a little bit and just explain what that base learner
+[2722.500 --> 2731.140] inside the yellow box is, and why it's good for this meta-learning setting.
+[2731.140 --> 2733.700] So the key is just to use ridge regression.
+[2733.700 --> 2737.300] So why?
+[2737.300 --> 2739.660] Because essentially it's a very powerful linear classifier.
+[2739.660 --> 2743.900] So if you don't care about adapting the deep features, if you only care about adapting the
+[2743.900 --> 2751.300] final layer, for example, which is viable in this meta-learning setting, then you can
+[2751.300 --> 2754.100] use a linear classifier.
+[2754.100 --> 2756.020] It's trained in essentially one step.
+[2756.020 --> 2759.420] So just one closed-form formula gives you the solution.
+[2759.420 --> 2764.300] And also it's easy to differentiate, to get gradients, or errors, to backpropagate through
+[2764.300 --> 2766.580] the training procedure.
+[2766.580 --> 2770.580] So those are all the elements that make it a good fit.
+[2770.580 --> 2771.580] Yeah.
+[2771.580 --> 2777.300] So, as a refresher, ridge regression is just least squares,
+[2777.300 --> 2782.860] the least squares problem, with a little bit of regularization, which is L2 regularization.
+[2782.860 --> 2786.580] I guess you can see the formula here.
+[2786.580 --> 2789.220] The regularization is the lambda parameter there.
+[2789.220 --> 2797.020] So if you've heard your professors badmouthing least squares, then you should know that it's
+[2797.020 --> 2798.380] not that bad.
+[2798.380 --> 2802.300] The thing that gives it the bad reputation is the fact that it's unregularized.
+[2802.300 --> 2806.740] If you add that tiny term of regularization over there, you get ridge regression, which
+[2806.740 --> 2816.660] is actually very, very competitive with fancier linear algorithms like SVMs and so on.
+[2816.660 --> 2822.260] And this is something you can apply in just one line of code on anything that you're
+[2822.260 --> 2823.860] using.
+[2823.860 --> 2825.940] Yeah.
+[2825.940 --> 2829.300] So that was our base learner.
+[2829.300 --> 2833.180] One thing that's a bit awkward is the matrix inversion: if your features
+[2833.180 --> 2838.540] are too many, if you have too many features, this inversion will be large.
+[2838.540 --> 2846.300] And that doesn't really help in a deep learning setting, where you want to go through millions
+[2846.300 --> 2849.980] of samples quickly.
+[2849.980 --> 2855.020] So essentially the matrix size scales quadratically with the number
+[2855.020 --> 2857.140] of features.
+[2857.140 --> 2862.780] So there's a nice way to solve this,
+[2862.780 --> 2866.900] based on the Woodbury identity.
+[2866.900 --> 2870.700] So it looks like what you see there.
+[2870.700 --> 2877.340] And essentially, if you have something that looks like that, the matrix
+[2877.340 --> 2882.820] that you're going to invert, where you add to the identity, is features
+[2882.820 --> 2884.940] by features in size.
+[2884.940 --> 2889.820] You can essentially exchange it: exchange the order of the transpose there,
+[2889.820 --> 2892.940] put it on the other side.
+[2892.940 --> 2895.580] And that gives you an inversion that is now samples by samples.
+[2895.580 --> 2900.380] So these things are equivalent, which for me was not obvious the first time I saw
+[2900.380 --> 2901.380] it.
+[2901.380 --> 2905.020] But it's a neat trick, because it means that essentially every time you have an inversion
+[2905.020 --> 2913.060] with something symmetric inside, you can exchange it for a smaller inversion.
+[2913.060 --> 2916.260] Yeah.
+[2916.260 --> 2924.100] So it turns out this is pretty optimal for one-shot learning, because our base learner is supposed
+[2924.100 --> 2932.260] to learn from very few samples and very large deep learning
+[2932.260 --> 2934.020] features.
+[2934.020 --> 2938.980] So the optimal case is to use exactly the one on the right.
+[2938.980 --> 2942.260] So the inversion is going to be samples by samples, and in one-shot learning
+[2942.260 --> 2947.260] that's usually one sample, or two or three or five samples tops.
+[2947.260 --> 2949.260] So it's very small, very manageable.
+[2949.260 --> 2953.540] While the feature size is usually in the hundreds or thousands.
+[2953.540 --> 2955.180] So you've gained a lot.
+[2955.180 --> 2956.180] Yeah.
+[2956.180 --> 2960.420] So essentially you apply that; you can generalize this to other algorithms, but I won't go
+[2960.420 --> 2963.900] into that.
+[2963.900 --> 2968.780] And at the time when we published this, it was state of the art.
+[2968.780 --> 2971.620] Obviously the field moves very fast.
+[2971.620 --> 2975.100] So it has already been beaten in some way or another.
+[2975.100 --> 2982.820] But the important thing is that, even if you're seeing meta-learning for
+[2982.820 --> 2989.460] the first time, it might be a bit of a mouthful;
+[2989.460 --> 2995.500] but if you've seen other meta-learning and machine learning papers, then compared
+[2995.500 --> 2997.540] to some of the constructions you see there,
+[2997.780 --> 2999.340] this turns out to be very simple.
+[2999.340 --> 3007.180] So, as you've seen, this is the formula you implement.
+[3007.180 --> 3008.860] It's like one line of code.
+[3008.860 --> 3013.740] And you backpropagate through that, which, with automatic differentiation, which is
+[3013.740 --> 3018.540] what you get in PyTorch, TensorFlow, those frameworks, is very easy.
+[3018.540 --> 3023.260] So in terms of implementation, it turns out to be very simple.
+[3023.260 --> 3026.540] While all of these other comparisons, and the things that came out afterwards, are
+[3026.540 --> 3028.420] much more complicated.
+[3028.420 --> 3037.700] So: striving for simplicity, even when you're working within a relatively complicated framework.
+[3037.700 --> 3039.620] So yeah, I guess that's it.
+[3039.620 --> 3043.340] I'm not sure if that was a very good introduction to meta-learning.
+[3043.340 --> 3044.900] I've seen a few talks on meta-learning.
+[3044.900 --> 3048.860] It's just very hard to describe it easily.
+[3048.860 --> 3055.140] But I hope that was elucidating in at least some way.
+[3055.140 --> 3058.900] It also shows that you can vary the base learner quite a lot.
+[3058.900 --> 3060.820] And there are some that are more optimal than others.
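+To make the "one line of code" point concrete, here is a minimal NumPy sketch of the ridge-regression base learner in both forms: the standard features-by-features inversion and the Woodbury samples-by-samples inversion. The few-shot shapes are illustrative, not taken from the talk:
+
+import numpy as np
+
+n, d = 5, 512                                  # 5 support samples, 512-d features (n << d)
+rng = np.random.default_rng(0)
+X, Y = rng.normal(size=(n, d)), rng.normal(size=(n, 3))
+lam = 0.1                                      # L2 regularization strength
+
+# Standard form: invert a d x d matrix.
+W1 = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
+
+# Woodbury form: invert an n x n matrix instead, much cheaper when n << d.
+W2 = X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n), Y)
+
+print(np.allclose(W1, W2))                     # True: the two solutions coincide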
+[3060.820 --> 3067.740] And this sort of says that you can use ridge regression, which is just a very neat, small algorithm
+[3067.740 --> 3070.060] that you can keep in your pocket all the time,
+[3070.060 --> 3074.460] and just use the Woodbury trick to make it very fast.
+[3074.460 --> 3080.540] And yeah, that gets you one-shot learning on, essentially, image tasks, which I didn't
+[3080.540 --> 3082.620] get too much into.
+[3082.620 --> 3088.500] One of them is learning alphabets, generalizing to learn new alphabets very quickly.
+[3088.500 --> 3090.020] The others were just based on images.
+[3090.020 --> 3092.300] We also tested it on things like tracking.
+[3092.300 --> 3099.020] So learning to recognize new objects that are not in any category.
+[3099.020 --> 3101.420] And that also works.
+[3101.420 --> 3102.780] And I guess that's it.
+[3102.780 --> 3106.020] So yeah, thanks for being here.
diff --git a/transcript/allocentric_vm9vMjOPr2k.txt b/transcript/allocentric_vm9vMjOPr2k.txt
new file mode 100644
index 0000000000000000000000000000000000000000..d47eeb14dd0c430d73f4c99d9e90f42266c47deb
--- /dev/null
+++ b/transcript/allocentric_vm9vMjOPr2k.txt
@@ -0,0 +1,192 @@
+[0.000 --> 4.000] Hello, wonderful person, do sit down, and today we're going to be discussing some of the new discoveries
+[4.000 --> 10.160] from the last few months in regards to the dangers of space exploration. Or, more specifically,
+[10.160 --> 14.800] the idea of space health. Something we discussed in some of the previous videos you can find
+[14.800 --> 19.360] in the description, and something that's actually exceptionally important for the upcoming
+[19.360 --> 25.040] Artemis mission, the Artemis 1 mission, and also the potential crewed mission to Mars. And this is,
+[25.040 --> 28.560] of course, why a lot of scientists in the last few months and the last few years have been
+[29.040 --> 34.560] trying to figure out exactly what happens to the human body as it travels in outer space,
+[34.560 --> 39.680] and, more importantly, as it lives in these zero-gravity conditions. And so in this video,
+[39.680 --> 43.760] we're going to be discussing several different studies, actually quite a lot of different studies,
+[43.760 --> 48.640] but focus on some of these new discoveries and potential resolutions to some of the problems
+[48.640 --> 52.880] discovered. And although all of these studies are going to be in no particular order,
+[52.880 --> 56.800] I actually wanted to start with the one that was just released a couple of days before I
+[56.800 --> 61.120] made this video. A study that, naturally, you can find in the description below. And in this case,
+[61.120 --> 66.400] the scientists behind the study examined blood samples from 14 different NASA astronauts that flew
+[66.400 --> 72.880] between 1998 and 2001, discovering that pretty much all of them developed various types of mutations
+[72.880 --> 79.600] in their DNA. With the most prominent change being what's known as clonal hematopoiesis, where
+[79.600 --> 84.880] the majority of the blood cells in the body are produced by just a few clones, as opposed to a much
+[85.040 --> 90.800] wider variety of the initial cells that produce blood. With a lot of other small mutations detected
+[90.800 --> 95.680] as well. And although none of them resulted in anything serious, all of this of course suggests that
+[95.680 --> 101.120] staying in space long enough can actually result in some serious mutations. But technically,
+[101.120 --> 106.160] none of this is really new to us. A lot of medical tests have been conducted in the last two decades,
+[106.160 --> 111.440] but a lot of the exciting ones were only conducted in the last couple of years. And even a few weeks ago,
+[111.440 --> 116.640] the astronauts on the International Space Station were required to conduct several medical tests,
+[116.640 --> 122.080] including tests wearing various electrodes, and tests that monitored the blood flow in the head
+[122.080 --> 127.280] and the chest. NASA actually developed this device known as Ultrasound 2, which to some extent
+[127.280 --> 132.880] was inspired by some of the medical devices in the Star Trek series, and it's able to scan various
+[132.880 --> 137.840] body parts of astronauts, including their eyes, determining if there are any serious medical issues
+[137.840 --> 142.640] that need to be resolved. None of this is automated like in Star Trek, naturally, and it still requires
+[142.640 --> 147.760] a medical doctor that knows how to use all of this, but it's still kind of cool. And so by using all of
+[147.760 --> 152.640] these really complex medical devices, NASA started to discover some really important things about the
+[152.640 --> 157.440] human body. And one of the first major discoveries back in the day was actually in regards to the
+[157.440 --> 162.880] way that the fluids in our body tend to shift because of gravity. In microgravity conditions,
+[162.880 --> 168.160] the fluid shifts to the top half of our body, which ends up triggering quite a lot of unneeded
+[168.160 --> 173.040] responses. For example, it ends up producing a lot of interesting signals in our body that
+[173.040 --> 177.840] tell our body that we need to pee. And so, because of this, in the first few weeks astronauts actually
+[177.840 --> 182.880] have to go to the toilet quite a lot. And because of this, they also end up always feeling thirsty
+[182.880 --> 187.760] and always end up being dehydrated as well. But later on, when they come back to Earth,
+[187.760 --> 192.640] because of the imbalance in the fluid distribution, it causes some of the astronauts to faint,
+[193.120 --> 197.600] and sometimes even produces serious medical emergencies. So today, the scientists believe that
+[197.600 --> 202.320] after spending approximately six months in outer space, in these microgravity conditions,
+[202.320 --> 207.680] any astronaut landing on Mars might actually end up experiencing serious medical emergencies as
+[207.680 --> 213.280] well. But unlike here on Earth, there's going to be nobody to help them if anything goes wrong.
+[213.280 --> 218.160] And so, relatively recently, scientists from Australia started to work on a kind of model
+[218.160 --> 222.480] that can actually help us predict if something similar can actually happen to a specific
+[222.480 --> 228.160] astronaut traveling to Mars for six months. A model entirely based on our current understanding
+[228.160 --> 233.440] of how our cardiovascular system works and what happens to it in microgravity conditions.
+[233.440 --> 239.600] But that's of course just one of the issues, blood and circulation; we have five more to discuss.
+[239.600 --> 245.280] Let's start with the slightly more unusual one, the one we don't really get to see or talk about much:
+[245.280 --> 250.480] the bacteria inside our bodies, specifically in our guts. And their importance was discussed in
+[250.480 --> 256.400] one of the previous videos about our gut and how the bacteria here influence everything in our bodies.
+[256.400 --> 261.200] But one particular thing that has been ignored so far in many studies is how exactly physical
+[261.200 --> 266.720] forces, and specifically microgravity conditions, affect the bacteria inside of us as well.
+[266.720 --> 271.760] Because, as I mentioned in that previous video, by keeping our gut healthy, we keep ourselves
+[271.760 --> 276.960] happy and healthy as well. And unfortunately, this is just one of the very, very few studies that
+[277.040 --> 282.800] have so far tackled the idea of bacterial effects on the human body. With this particular study
+[282.800 --> 288.160] focusing on a bacterium that's generally not particularly wanted in our bodies, Salmonella,
+[288.160 --> 293.600] the pathogen that generally causes a lot of problems inside of us. But because it's so easy to study,
+[293.600 --> 298.560] the scientists wanted to start here, and they wanted to see what effects microgravity is going
+[298.560 --> 304.160] to have on the bacteria itself and on its interaction with our body. And the idea here was pretty simple:
+[304.160 --> 309.280] put this Salmonella bacteria together with cells very similar to the ones we find in our body
+[309.280 --> 313.840] and try to see how it actually acts on those cells and how it interacts with them if placed
+[313.840 --> 319.200] into conditions similar to microgravity in outer space, and similar to what we find on the
+[319.200 --> 323.920] International Space Station. And their discovery is pretty important. They discovered that the
+[323.920 --> 330.320] actual physical forces that we usually refer to as fluid shear forces change quite dramatically
+[330.320 --> 335.840] in this case. Because microgravity conditions tend to change the way that fluids flow on the
+[335.840 --> 341.520] inside, and specifically they essentially reduce the amount of these downstream currents that generally
+[341.520 --> 347.440] affect the way that the cells develop, the actual cells tend to develop slightly differently and
+[347.440 --> 352.320] express different genes as a result. And so when they took the Salmonella bacteria that were raised
+[352.320 --> 357.680] in these conditions and then introduced them to the cells very similar to the ones in our body,
+[357.680 --> 363.360] the new Salmonella cells infected the human tissue much more effectively and produced much
+[363.360 --> 369.440] worse effects. Which of course implies two things. First, any potential pathogen can actually become
+[369.440 --> 374.800] much more deadly or much more dangerous if it spends a lot of time in microgravity conditions
+[374.800 --> 379.680] and evolves some of these characteristics. But second of all, it also implies that some of the
+[379.680 --> 384.080] bacteria that already live in our body, and there are quite a lot of them out there, also
+[384.080 --> 388.800] have the potential to mutate and thus change their characteristics quite dramatically,
+[388.800 --> 393.680] potentially becoming dangerous or potentially becoming useless. Which means that it's going to
+[393.680 --> 399.280] affect our body one way or another. All of this because of this liquid abrasion of cell surfaces
+[399.280 --> 405.600] that's generally referred to as fluid shear, with this shear stress then producing various effects
+[405.600 --> 410.080] in these bacteria. But considering the amount of bacteria in our bodies, and also considering the
+[410.080 --> 414.880] fact that everybody on the planet has generally different bacteria inside of them, it really means
+[414.880 --> 419.840] that it's more or less unpredictable what's actually going to happen to an individual, and here
+[419.840 --> 424.640] we're talking about an actual astronaut living in space long enough, as the bacteria inside of them
+[424.640 --> 430.080] evolve and potentially become either useless or dangerous. Which of course just means that more
+[430.080 --> 434.960] studies are needed. Okay, next on the list is something that we've known about for a long time:
+[434.960 --> 439.280] the astronauts' vision. We know that it changes quite dramatically from living in zero-g
+[439.280 --> 444.960] conditions because of the fluid distribution, as the fluid migrates to the upper region of the body.
+[444.960 --> 449.040] And approximately one third of all astronauts that lived on the International Space Station
+[449.040 --> 454.720] long enough experienced major changes in their vision. With those living there for up to six months
+[454.720 --> 460.080] usually experiencing it much more severely. And because of these fluid changes, the swelling
+[460.080 --> 464.800] at the back of the eye surrounding the optic nerve tends to lead to some long-term changes in
+[464.880 --> 469.440] vision that unfortunately cannot be fixed even after returning to planet Earth. And though in
+[469.440 --> 474.160] some cases they do actually resolve after approximately one year, a few astronauts have never
+[474.160 --> 479.120] regained their vision after living in space long enough. Although the effects on the optic nerve
+[479.120 --> 483.360] are just some of the ones discovered so far, it's quite possible that there is also a genetic
+[483.360 --> 487.360] component, and it's also quite possible that there are other components here as well that have not
+[487.360 --> 492.800] been discovered yet. The effects, though, are real and measurable, and so any astronaut going to
+[492.800 --> 498.000] Mars for six months might, by the time they land on Mars, actually experience really bad vision
+[498.000 --> 503.600] on top of all of the other issues as well. And so not having fully functioning vision is one of many
+[503.600 --> 508.960] problems future Martian astronauts might have to face. But that's just the beginning. We have an even
+[508.960 --> 514.880] bigger problem here: the brain. The brain, too, undergoes some major changes in microgravity conditions,
+[514.880 --> 520.000] and some of these changes seem to be more or less permanent as well. This relatively recent study,
+[520.000 --> 526.080] which focused on 15 astronaut brains and their MRI scans before and after the International Space
+[526.080 --> 532.800] Station, looked at the size of what's known as perivascular spaces. These are spaces in our brain
+[532.800 --> 538.320] that generally allow various fluids to pass in order to essentially clean the brain.
+[538.320 --> 543.280] And, interestingly, veteran astronauts, the ones that had spent months or years in space, did not
+[543.280 --> 548.400] generally show much difference here. But first-time astronauts had a major increase in the size of
+[548.400 --> 553.680] these perivascular spaces, which would also generally put more pressure on the brain when the
+[553.680 --> 558.960] astronauts would return to Earth. In a sense, think of this as suddenly having a lot more fluid in
+[558.960 --> 564.400] the brain, and so when you return to gravity conditions, things might not really be as comfortable
+[564.400 --> 569.280] anymore. Although right now it's not particularly clear exactly what effects it's going to have long
+[569.280 --> 574.720] term, or if there are any potential health risks that we have to be aware of. Generally, these
+[574.720 --> 580.240] particular regions are used when we sleep. They seem to play a role in removing waste from our brains,
+[580.240 --> 586.000] the waste that accumulates while we're active. But, intriguingly enough, the same study also discovered
+[586.000 --> 592.000] that eight out of 24 astronauts whose MRIs were studied also had a much higher swelling of white
+[592.000 --> 597.760] matter in their brain and their spinal cord, and also experienced major vision problems. So there
+[597.760 --> 602.000] actually could be a connection here with some of the vision loss as well. Although, intriguingly enough,
+[602.080 --> 607.440] as I mentioned, some of the more experienced astronauts did not really have major changes afterwards.
+[607.440 --> 612.080] This is believed to be because they reached some kind of homeostasis, some kind of balance.
+[612.080 --> 616.720] But I guess what's unusual is that, because of these changes in these ventricular spaces, it also
+[616.720 --> 621.200] seems to affect some of the other brain regions as well, mostly because of the additional pressure
+[621.200 --> 626.080] that it causes on the brain, which also ends up shifting some of the brain tissue from one region
+[626.080 --> 630.480] to another. And so there's definitely a significant change in the brain in terms of the structure and
+[630.480 --> 635.760] the shape. And this also might affect the way that we start acting and the way we start thinking
+[635.760 --> 640.960] as well. And on top of this, another study from just a few months ago also discovered that
+[640.960 --> 645.680] the brains of various astronauts seem to undergo major structural changes,
+[645.680 --> 651.120] rewiring a lot of these signals in the process. And so in some sense you can actually see this as
+[651.120 --> 655.520] a slightly different person coming back to Earth from the one who left. The way that they think
+[655.520 --> 660.400] about things, the way that they sort of understand things, would be slightly different because of
+[660.400 --> 665.040] these connectivity and structural changes. But until future studies, we're not really going to
+[665.040 --> 670.000] know much more. Next on the list: muscles. In this case we actually do discover something really
+[670.000 --> 674.560] interesting that potentially has a solution. The scientists have known for many years that muscles
+[674.560 --> 679.440] in space generally become weaker and weaker and at some point become almost useless. For example,
+[679.440 --> 685.040] muscles that are responsible for maintaining our posture or allowing us to move against gravity
+[685.040 --> 690.480] tend to become completely useless when there is no gravity to speak of. And in this case, the human
+[690.480 --> 696.320] calf muscles dramatically reduce their volume in space. And so in this case, trying to walk on Mars
+[696.320 --> 701.760] after six months in microgravity is going to be almost impossible. That's actually, of course, why
+[701.760 --> 706.400] all of the astronauts you usually see coming back from space are always pictured sitting down. They're
+[706.400 --> 711.600] just incapable of standing. But in this recent study the scientists had a major breakthrough using
+[711.840 --> 716.800] a nematode worm, a worm that shows very similar molecular and physiological effects,
+[716.800 --> 722.640] especially on its muscle performance, when exposed to zero g or microgravity. In other words, this
+[722.640 --> 728.400] worm also tends to show neuromuscular decline, as observed in various experiments in microgravity.
+[728.400 --> 732.400] But in this case these worms were raised in several conditions, and one of these conditions
+[732.400 --> 736.880] included various types of microbeads that the worms were allowed to interact with,
+[736.880 --> 742.000] simulating a kind of physical contact. And so when the worms raised with physical contact were
+[742.000 --> 747.360] compared to the ones that basically just had nothing, the microbeads somehow reestablished the
+[747.360 --> 753.440] pathways responsible for dopamine and restored a lot of the muscle function in most of these worms,
+[753.440 --> 758.160] in a sense allowing these worms to function almost as well as the ones back here on Earth.
+[758.160 --> 761.840] And that's actually a pretty big discovery, because it means that at least one of the problems,
+[761.840 --> 766.560] muscle degradation, could maybe be solved. And solved by something as simple as some kind of a
+[766.560 --> 771.760] massage wand or some kind of a massage apparatus that can produce physical stimulation of all
+[771.760 --> 776.720] of the muscles that we need for walking, for example, and thus prevent the atrophy over time.
+[776.720 --> 781.520] Although, actually, this is still a brand new discovery and no physical solution exists yet.
+[781.520 --> 786.240] But the theory is definitely sound. And then there are also problems with the bones,
+[786.240 --> 791.760] the skeletal structure. By being in space we also lose a lot of bone density. And unfortunately,
+[791.840 --> 796.240] even after a year, a lot of this density is not recovered in most of the astronauts.
+[796.240 --> 801.120] With the astronauts who spent the longest time in space found to be the slowest to recover
+[801.120 --> 805.840] their bone density. And this would of course be a huge issue for anyone going to Mars.
+[806.400 --> 811.120] After a three-year Mars mission, which is how long we think a Martian mission would take,
+[811.120 --> 817.360] a typical astronaut might lose approximately 33% of bone density, potentially leading to osteoporosis.
+[817.360 --> 822.000] And although the majority of astronauts today are usually in good health and don't actually
+[822.000 --> 827.200] notice these differences, for an astronaut going to Mars this could be a very different story.
+[827.200 --> 833.200] They would definitely lose at least 1% of bone mass per month spent in outer space. And that's even
+[833.200 --> 838.720] if they do a lot of exercise on a daily basis. But once again, there's maybe at least one potential
+[838.720 --> 843.920] solution, and this time it's a pretty intriguing solution that can even work here on Earth:
+[843.920 --> 848.720] space salad, or technically, space lettuce. Okay, it's actually something that the scientists think
+[848.720 --> 853.600] they can put into lettuce that the astronauts can then eat. The medicine known as PTH,
+[853.600 --> 858.720] also known as parathyroid hormone, can actually stimulate bone formation and can generally help
+[858.720 --> 864.480] us recover bone mass even in microgravity.
But it does generally also require daily injections.
+[865.280 --> 870.160] Transporting this to Mars might not be very practical. Instead, the scientists think they can
+[870.160 --> 874.960] genetically modify some of the plants that the astronauts can easily grow in space. With the
+[874.960 --> 880.080] genetically modified lettuce then delivering all of the needed medicine. And since in this case
+[880.080 --> 884.560] the scientists have already been able to produce this particular lettuce, it's now just a matter
+[884.560 --> 888.960] of growing it in space. Although in this case you would have to eat a lot of this lettuce, at least
+[888.960 --> 894.400] for now: approximately 8 cups per day, which is about 400 grams, which I'm sure would not really
+[894.400 --> 899.280] make a lot of astronauts particularly happy. But this can also be done with potentially other foods,
+[899.280 --> 904.000] and so by trying to genetically modify other food, this can potentially deliver a lot of
+[904.000 --> 908.800] needed medicine to a lot of future astronauts. And since all of this can just be grown in space
+[908.800 --> 914.000] directly, it would naturally save a lot of space on any potential mission to Mars. But, I guess
+[914.000 --> 918.880] more importantly, some of this genetically modified lettuce can then also be used here on Earth
+[918.880 --> 924.000] for anyone struggling with the same problems. And so for anyone struggling with osteoporosis,
+[924.000 --> 928.880] this lettuce might be a solution. And so once again, a lot of space research seems to bring
+[928.880 --> 934.240] a lot of important ideas back here to planet Earth, ideas that we can then use in improving
+[934.240 --> 939.760] our society and improving our health. But when it comes to space health, there are still a lot of
+[939.760 --> 944.960] challenges. And any potential mission to Mars still looks pretty dangerous and pretty difficult to
+[944.960 --> 949.760] achieve. But there's a really big chance that if we're successful, a lot of really good things
+[949.760 --> 954.320] will come out of all of this research. Which means that we should definitely keep going and keep
+[954.320 --> 958.880] trying to achieve more and more. But I guess until future discoveries, or until the next video
+[958.880 --> 963.600] in regards to space health, that's pretty much all I wanted to mention. Check out previous videos,
+[963.600 --> 967.280] some right there or in the description below. Subscribe, share this with someone who loves
+[967.280 --> 971.120] space and sciences, come back tomorrow to learn something else, and maybe support this
+[971.120 --> 974.800] channel by getting a channel membership or by buying the wonderful person t-shirt that you
+[974.800 --> 984.800] can find in the description. Stay wonderful, I'll see you tomorrow, and as always, bye-bye.
diff --git a/transcript/allocentric_w691PF_lpnM.txt b/transcript/allocentric_w691PF_lpnM.txt
new file mode 100644
index 0000000000000000000000000000000000000000..584d03eaeabe5ae97a6cc36ec7f8173839468243
--- /dev/null
+++ b/transcript/allocentric_w691PF_lpnM.txt
@@ -0,0 +1,271 @@
+[0.000 --> 3.360] Thank you again, Claire and Colette, for putting together this series.
+[4.360 --> 13.880] Their work on human navigation, and the cognitive group in particular, has been great to interact with, and it's my pleasure to launch this series.
+[15.280 --> 23.880] As it says, I'm going to talk about human navigation without vision: seeing with our ears and tongues, new advances in sensory substitution and augmented reality.
+[24.640 --> 51.880] So, a little bit about me. At the University of Bath, that's how we say it in Bath too, I direct the Crossmodal Cognition Lab, and we are an interdisciplinary group housed in the Department of Psychology, but with a lot of collaborators, PhD students, and students in computer science at the university who are part of the lab.
+[52.840 --> 58.040] Here we are in front of our new building, just a few years old, on the campus.
+[59.640 --> 64.320] Back when we had a big sign up about some of our research on seeing with sound.
+[65.600 --> 74.440] One of the big collaborators who inspired me and got me interested in seeing with sound is in the audience today, Peter Meijer, so it's great to see you here.
+[75.960 --> 78.800] And yeah, I'm glad you could make it.
+[79.760 --> 84.400] And you'll hear a little bit about what Peter invented later in the talk.
+[86.240 --> 90.880] And at the University of Bath, we have a number of things that help support our work.
+[94.240 --> 96.400] Should we have some slides up? Yes, or are we?
+[100.240 --> 101.440] I can see it just fine.
+[102.240 --> 103.240] I'm sure it was great.
+[104.800 --> 107.880] But it is lovely to have it ahead as well. Thank you.
+[108.800 --> 109.280] There you go.
+[110.480 --> 115.040] Yeah, so just to show some of the members of the group, who are there in this photograph that we picked.
+[116.120 --> 127.480] Along with a number of colleagues at the University of Bath, but also at Bath Spa University and now at the University of Bolton, where Dr. Dave Brown is now a lecturer in psychology.
+[128.440 --> 135.080] I'm also part of, and one of the founders of, REVEAL, a research centre we have for real and virtual environments augmentation labs.
+[135.960 --> 145.720] And I'm an investigator for CAMERA, which is a UKRI research centre looking across all different aspects of motion tracking and virtual environments.
+[146.840 --> 157.000] And one part that's been really helpful in translating my work into spatial cognition applications has been funding and collaboration with an industry partner,
+[157.560 --> 163.960] working in particular with their human-centred design architecture team, which is a great group.
+[165.160 --> 175.880] And as Colette mentioned in the beginning, I'm currently over at what's now called Meta, since the big name change at the end of last year.
+[176.200 --> 182.680] So we have the new logo for Reality Labs Research, which is where I'm based for a two-year sabbatical.
+[183.640 --> 187.400] Working primarily in vision.
+[187.400 --> 199.640] So with the eye tracking team in particular, and how eye tracking can be integrated across a number of applications, and a number of the other sorts of things we like to track, like hands,
+[199.880 --> 212.120] heads, bodies, as people interact in various cases. And so far, working with the team, it has been really fascinating to see how things work on this side, on the industrial side of things.
+[212.840 --> 216.840] So a core,
+[216.840 --> 238.520] fundamental question that underlies a lot of my research, whether it's looking at these sorts of devices that can assist people with visual impairments, or whether it's looking at the eye tracking side of research that I'm doing over here at Reality Labs Research,
+[239.480 --> 242.920] is this sort of simple question: what does it mean to see?
+[245.160 --> 257.720] Now, in some ways it's probably sort of deceptively simplistic, right? Because on the one hand, you could probably ask, you know, any one person what it means to see, and they'd have some easy answer for you.
+[258.360 --> 278.120] If you ask a vision scientist or an engineer, you might get a very different answer. But I think it's fundamental for being able to understand a lot of things, particularly when doing research on humans, as most of my research is, because we are such a visual species; you know, we rely on this to a huge degree.
+[279.320 --> 283.080] We have one of the highest visual acuities of mammals.
+[284.040 --> 307.720] Which is pretty exceptional and sort of shows the importance of it, and the amount of processing in the brain that we dedicate to it, something incredibly important to us. Which is why something like visual impairment is so fascinating, because you'll have people with a visual impairment trying to interact with a world designed by a bunch of other people who might be able to see quite well.
+[307.720 --> 337.640] But when you're thinking about this question and different approaches to it, you know, it's something that's been important to define for a long time, and I love this: there's a memo from the artificial intelligence group at MIT back in 1966, where they defined something as the Summer Vision Project, with the idea that they would put a bunch of summer workers towards solving computer vision, and their definition of what it meant to see,
+[337.720 --> 350.120] at least for this project, was pattern recognition. And, you know, if you think about the computers back then in the 60s, you know, which were taking up whole rooms to be able to run computations,
+[350.120 --> 367.320] it's quite an ambitious project just for a single summer. And unfortunately, they didn't solve the problem of vision in that single summer; it's something that, in terms of animal vision, much less computer vision, we're still trying to come to grips with today.
+[367.720 --> 376.920] But coming out of the same tradition at MIT, the likes of David Marr, in his wonderful book Vision,
+[376.920 --> 388.120] approached this question in its introduction, and the plain man's answer to it, as he put it, would be to know what is where by looking.
+[388.120 --> 400.520] So it's simple in some ways: it has two component parts, what and where, and bases that in the act of looking to receive that information.
+[400.520 --> 411.120] That memo, of course, is focusing much more on the what aspect in some ways, with pattern recognition being some way of identifying what the pattern is.
+[412.120 --> 427.120] But of course the where aspect is incredibly important even for that, you know, because in order to know what something is, to some extent you have to know where it is, so you can process the information at that location in order to identify it.
+[428.120 --> 448.120] So it doesn't really cover everything. So if you actually do ask people what it means to them to see, a lot of times people say a bunch of other things, things that seem much more high level, much more emotional. But I think they all probably take root in sort of this simpler answer, before we're able to get to the level of the more complex aspects of vision.
+[449.120 --> 473.120] And vision is something funny, because it happens so quickly and it seems so effortless to us as human adults that we take for granted how much processing is required to take all this information that's coming in and activating the cells on our retina, in order to get to the level of knowing what something is and where something is.
+[473.120 --> 481.120] And we seem to do it without effort, seem to do it quickly and automatically, but there's really a lot going on that we need to understand.
+[485.120 --> 493.120] So we see, hear, touch, taste, smell, and experience all our senses in the brain. And this sounds obvious, but
+[493.120 --> 522.120] it's important to realize that a lot of the really interesting processing that allows us to see, or, more broadly, to perceive, is happening through all these computations in the brain. So obviously we use the eyes, ears, fingers, mouth, nose to bring in the information from the environment, but all the really interesting processing that takes place to allow us to actually have these perceptual experiences is taking place in the brain.
+[523.120 --> 542.120] And so there's an important idea that comes out of that, and I think it was best stated by Paul Bach-y-Rita, whose work on sensory substitution I'll mention later: that because all this happens in the brain, one way you can think about it is that if we're deprived of one sense,
+[542.120 --> 552.120] so if you have some peripheral damage, say to the eyes or ears, well, the other senses can compensate or even substitute for the missing sensory modality,
+[553.120 --> 558.120] as long as we're gathering the information we need and getting it to the brain.
+[558.120 --> 570.120] So it's sort of an important idea for thinking about how we can possibly substitute one sense for another, particularly when thinking about something as complex and fascinating as navigation.
+[573.120 --> 596.120] So in this talk, I'm going to talk about how visual impairments inspired my research into understanding atypical augmented reality, which is essentially these sensory substitution devices, or sensory augmentation devices, which serve as an excellent means for understanding the basics of spatial cognition.
+[597.120 --> 606.120] And this will be, as I described in the subtitle of my talk, through translating images into displays that can be heard or touched.
+[611.120 --> 622.120] So at the core of this is a bunch of research over the years that has demonstrated the importance of visual experience for cognition more broadly.
+[623.120 --> 633.120] And that's because a lot of work has shown that visual deprivation, such as through being visually impaired either from birth or becoming visually impaired later in life,
+[634.120 --> 640.120] and other lines of work in terms of hearing impairments show other changes that occur there,
+[641.120 --> 646.120] but having that deprivation of vision changes spatial and multisensory processing.
+[647.120 --> 651.120] And it's particularly interesting for spatial cognition and navigation.
+[652.120 --> 665.120] And so, more broadly, looking at that navigation literature, for example, a lot of work looking at people who have visual impairments has noted there is this route knowledge preference, for example, over survey knowledge.
+[666.120 --> 680.120] So more of an interest in knowing the route one takes, the left and right turns to get from A to B, rather than having sort of an allocentric, map-like knowledge, or survey-like knowledge, of the locations of things.
+[681.120 --> 688.120] And part of this comes down to multisensory integration, because as one interacts with space,
+[689.120 --> 694.120] it usually involves several senses and several ways of gathering information.
+[695.120 --> 714.120] And so vision, being so important for humans, seems to be important for sort of gluing together the experiences of space: the sound of a space, the feeling of a space, the way we have proprioception, the movement of our body through that space.
+[714.120 --> 725.120] Vision seems to be very important for sort of gluing all that information together. And that's why visual experience seems to have played such an important role in the basis of spatial cognition.
+[728.120 --> 739.120] And one aspect I'll use as a case study for some of that is thinking about the different reference frames we use to understand space.
+[740.120 --> 754.120] And some of this is related to this idea of, say, route knowledge versus survey knowledge being useful, and having different preferences for those, depending on whether a person is sighted or has a visual impairment.
+[755.120 --> 764.120] And one idea that maps onto this idea of route versus survey knowledge is allocentric versus egocentric spatial reference frames.
+[764.120 --> 770.120] So egocentric, of course, is a term, you know, usually used more as a personality description of people.
+[771.120 --> 784.120] And the idea there is similar to route knowledge. It's having a representation of where things are in space based on your personal location and perspective.
+[785.120 --> 792.120] And so you'll have some way of knowing where things are relative to where you are and your perception of them.
+[793.120 --> 813.120] In contrast, more in line with this survey knowledge or map-like knowledge idea, you have these allocentric spatial reference frames, where instead your knowledge of where things are is their locations relative to one another, sort of independent of the perspective that you may have.
+[814.120 --> 827.120] And a lot of work has looked at this primarily in vision, but there's an increasing number of papers that are looking at this in terms of the other senses as well. I'll describe some of our work in this area too.
+[829.120 --> 837.120] So before I get to that: why would visual experience have this impact on how we think about space?
+[838.120 --> 847.120] And one way to think about it is this clear link between visual experience and different forms of mapping space.
+[848.120 --> 863.120] One way to think about it is that one of the earliest levels of representing space in the brain is this relationship between the presentation of visual information on the retina
+[864.120 --> 875.120] and the direct mapping of that into the first area of cortex, primary visual cortex, that that visual information reaches.
+[876.120 --> 886.120] And within primary visual cortex, so if you look on the right here, you'll see it at the back of the brain; this is the front on the left, this is the back on the right.
+[887.120 --> 895.120] And you have these multicolored areas along here that are representing different parts of the visual field that are presented to someone.
+[896.120 --> 915.120] And what's amazing to see is this nice relationship where the retina is mapped onto the visual cortex, and in a number of other visual areas as well, you have a sort of direct relation to where things are in space.
+[917.120 --> 922.120] And then there's processing at the retina, and then a mapping of that in the brain.
+[923.120 --> 933.120] And so you have these spatial maps directly present there that will let you know where something is being presented to the one who's seeing.
+[933.120 --> 957.120] And there's also this work here, for example, which has directly looked at this in the brain of the macaque monkey, and in cats, for example, among other species, where using tracers that lead directly from the retina to the brain, you can see how the presentation of a sort of bullseye stimulus creates this clear mapping that you can see there.
+[958.120 --> 969.120] Now, what's interesting is that visual experience has been known for a long time to be necessary, to some extent, for the full development or maintenance of these maps that are present in the brain.
+[970.120 --> 976.120] They still exist; a lot of them are still sort of laid down developmentally in the absence of visual experience.
+[977.120 --> 988.120] But there's definitely some sort of plasticity in the brain where, because they're not used, they aren't necessarily fully maintained through adulthood in the absence of any visual experience.
+[989.120 --> 995.120] So that's one way visual experience is important for sort of spatial mapping, in terms of how the brain might represent it.
+[996.120 --> 1017.120] Another is just thinking about what vision is good for, and one reason vision is so useful for us as humans is that it is a great way to be able to access and represent distal objects, things that are far away from you, so not something you can necessarily touch.
+[1017.120 --> 1024.120] We can see a star and have an idea of its location. But also, vision is good at parallel processing.
+[1024.120 --> 1034.120] So we can process information, to a certain extent, from across the visual field in parallel. For example, if you want to...
+[1035.120 --> 1040.120] Depending on what sort of football team you might be a fan of, say your team wears red.
+[1040.120 --> 1053.120] You can pay attention to the color red and sort of in parallel pick up on those locations, so then you can make an eye movement to the different people wearing red to find, say, the player that has the ball.
+[1053.120 --> 1068.120] And so you can do some of these things in parallel, and so vision is also good for that. And this way of representing things that are far away, and representing things in parallel, is incredibly useful for something like this allocentric reference frame that I described earlier.
+[1069.120 --> 1082.120] Because one way of being able to do that is having access to this information of where things are relative to one another, in some parallel acquisition sense.
+[1083.120 --> 1098.120] But let's put this in the context of a study. So a lot of work by Tim McNamara, including this work with Mou and colleagues, looked at the role of vision for representing space in terms of egocentric reference frames versus allocentric reference frames.
+[1099.120 --> 1105.120] One way to do this, you can do this of course outdoors, representing say buildings in a city.
+[1106.120 --> 1111.120] But an easier way to manipulate it is in a room with objects in front of you.
+[1111.120 --> 1131.120] And the way they'll go ahead and present this is with a person standing in one location, and so acquiring the information visually about where the objects are and trying to remember the locations of the objects, so remembering what and where.
+[1131.120 --> 1154.120] And then testing their ability to remember that, and they test that ability in two ways. One is this egocentric condition, where they'll ask them to imagine later, so the memory test is for the locations of the objects relative to other objects within the display here.
+[1154.120 --> 1164.120] And so people will be asked to, for example: imagine you're at the banana, facing the frying pan; now point to the book.
+[1164.120 --> 1171.120] And so it'll be consistent with the way in which they acquired the information, from this egocentric perspective.
+[1171.120 --> 1184.120] But in the memory test they'll do a manipulation as well, where they're asked to take a different view that doesn't align with the one that they actually saw.
+[1184.120 --> 1189.120] So they actually saw the objects physically in the room from this angle that you're seeing in the photograph.
+[1189.120 --> 1207.120] But then the test will also ask them to sort of mentally rotate themselves in the space, and instead imagine they're, perhaps, still at the location of the banana, but this time facing the scissors, and then they need to point to the location of the book.
+[1207.120 --> 1226.120] So the test turns from just remembering what things looked like from your egocentric perspective, to having to understand the locations of the objects relative to one another, to shift your perspective, and then be able to represent the locations accurately.
+[1226.120 --> 1242.120] Now in those vision tests you essentially see the sort of sawtooth pattern that you see here, where people perform more accurately in the conditions that are allocentrically defined.
+[1242.120 --> 1249.120] And, surprisingly, they perform worse in the conditions that are egocentrically defined.
+[1249.120 --> 1265.120] Now one reason for that, if we look back here at the objects, is that in order to encourage this allocentric reference frame, and to allow it to be successful, you'll notice you have these nice rows and columns when seen from this oblique angle.
+[1265.120 --> 1272.120] Instead of this more haphazard diamond shape that you get from this egocentric angle.
+[1272.120 --> 1280.120] And so it looks like, in vision, people sort of automatically just represent the objects in relation to one another.
+[1280.120 --> 1296.120] To such an extent that they'll actually perform better from viewpoints that give them this metric, allocentric reference frame, even better than they would from the actual direction they saw the items from.
+[1296.120 --> 1301.120] Now what you're actually seeing here is a slightly different data set.
+[1301.120 --> 1309.120] And in this one, because we're interested in visual experience, what we did was we blindfolded sighted participants.
+[1309.120 --> 1330.120] And instead of allowing them to see the objects, we took them from this location at the bottom of the screen and went to each object and back to the origin, to another object, back to the origin, again and again, until they had walked to each location and felt the object to recognize what it was.
+[1330.120 --> 1343.120] And then they were tested on their memory for the scene, through this method of acquiring the information through proprioception, walking, and through touch as they felt each object to find out its identity.
+[1343.120 --> 1351.120] And so what's interesting here is we found the exact same pattern of results that you would find as if they had seen them.
+[1351.120 --> 1367.120] So they still seem to pick up on this allocentrically defined, metric arrangement of the locations of the objects, and they perform better in this memory task in terms of being able to point to where the objects were in relation to one another.
+[1367.120 --> 1376.120] So even though they're blindfolded, they still seem to form this idea of the allocentric relationship of the objects.
+[1376.120 --> 1385.120] So then we also tested people who had visual experience earlier in life, but became visually impaired, or fully visually impaired, later in life.
+[1385.120 --> 1391.120] And in this late blind group, we essentially found the same pattern as the sighted group.
+[1391.120 --> 1403.120] So they also, on the same paths, with walking to the objects and feeling them, had better performance for the allocentric condition than for the egocentric condition.
+[1403.120 --> 1415.120] So rather than using the starting point of where they walked from to remember where the objects were, they seem to pick up on this survey-like representation of where the objects were.
+[1415.120 --> 1419.120] The last group we studied were those who had no visual experience.
+[1419.120 --> 1430.120] And so this is a group of people who were congenitally blind, and therefore never had any visual experience to form some of these mental maps of the world.
+[1430.120 --> 1436.120] And this group had the opposite pattern of effects from those who had visual experience.
+[1436.120 --> 1445.120] So here what we see is they perform better in the egocentric conditions than in the allocentric conditions.
+[1445.120 --> 1458.120] And so this aligns well with this prior work that had shown that those who are congenitally visually impaired have this preference for route knowledge, for using ideas about their egocentric perspective.
+[1458.120 --> 1462.120] So where they were moving between locations, to remember those locations.
+[1462.120 --> 1473.120] Instead of picking up on the allocentric structure, which might be more similar to a survey representation that you could have.
+[1473.120 --> 1478.120] And so overall this is consistent with this idea that visual experience...
+[1478.120 --> 1481.120] So none of these people are seeing during this task.
+[1481.120 --> 1487.120] But two of the groups, the late blind and the sighted, have had visual experience during development.
+[1487.120 --> 1489.120] The congenitally blind group has not.
+[1489.120 --> 1493.120] And that visual experience seems to influence these...
+[1493.120 --> 1505.120] These preferences that people have in terms of being able to spatially represent the locations of things.
+[1505.120 --> 1517.120] And so we should shift a little bit to discuss technological approaches, for thinking about how we could provide visual experience to those without it.
+[1517.120 --> 1526.120] So one thing I'll go back to just to mention is, you'll notice that all the points here are kind of overlapping; it's just the pattern that has changed.
+[1526.120 --> 1532.120] One group did not perform better than another. They all performed at the same level overall.
+[1532.120 --> 1537.120] It's just the pattern of when they performed better that seems to change.
+[1537.120 --> 1544.120] But you could imagine there might be some situations where having an egocentric reference frame might be incredibly useful.
+[1544.120 --> 1547.120] And others where an allocentric reference frame might be incredibly useful.
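
To make the two test conditions concrete, here is a sketch of the geometry behind an "imagine you're at the banana facing the frying pan; now point to the book" trial like the ones described above. The object coordinates are invented, and this is standard bearing arithmetic offered as an illustration, not the authors' actual stimuli or analysis code.

import math

# Invented allocentric coordinates for three of the objects (meters).
objects = {"banana": (0.0, 0.0), "frying_pan": (0.0, 1.0), "book": (1.0, 1.0)}

def pointing_angle(stand_at, facing, target):
    # Bearing of `target` for an imagined observer standing at `stand_at`
    # and facing `facing`, in degrees (0 = straight ahead, positive =
    # to the right). This is the response a judgment-of-relative-
    # direction trial asks participants to produce.
    sx, sy = objects[stand_at]
    fx, fy = objects[facing]
    tx, ty = objects[target]
    heading = math.atan2(fx - sx, fy - sy)   # imagined facing direction
    bearing = math.atan2(tx - sx, ty - sy)   # direction to the target
    angle = math.degrees(bearing - heading)
    return (angle + 180) % 360 - 180         # wrap into [-180, 180)

# Aligned with the studied view: at the banana, facing the frying pan.
print(pointing_angle("banana", "frying_pan", "book"))   # 45.0 (to the right)
# Misaligned probe: same layout, but the imagined heading is rotated.
print(pointing_angle("frying_pan", "banana", "book"))   # -90.0 (to the left)

The manipulation matters because only a representation of the objects relative to one another supports accurate answers when the imagined heading is rotated away from the view, or walking path, the participant actually experienced.
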
+[1547.120 --> 1555.120] And so what we wondered here is whether it's possible to give some visual experience to those who have congenital visual impairments.
+[1555.120 --> 1560.120] In order to help enable that sort of allocentric processing.
+[1560.120 --> 1562.120] But how can we do that?
+[1562.120 --> 1566.120] Well, one approach I'm going to describe is sensory substitution.
+[1566.120 --> 1572.120] So going back to the discussion at the beginning, it's this idea that if one sense is impaired...
+[1572.120 --> 1581.120] Perhaps we can substitute another sense to provide the information to the brain that otherwise would be provided by that sense.
+[1581.120 --> 1586.120] And so this is largely done with what are essentially augmented reality devices.
+[1586.120 --> 1591.120] And the examples I'll describe are seeing with sound, or with the tongue.
+[1591.120 --> 1599.120] And we have a news article from a while back in the Observer and the Guardian that described some of our work using the vOICe.
+[1599.120 --> 1610.120] So you see here the vOICe, which is the device that was invented by one of our audience members here, Peter Meijer, from the Netherlands.
+[1610.120 --> 1622.120] Now, when trying to think about how you could turn images into something that could be processed by another sense, substituting for vision...
+[1622.120 --> 1629.120] You come, for me as someone who studies psychology, neuroscience, and computer science, to...
+[1629.120 --> 1633.120] A huge information processing job.
+[1633.120 --> 1638.120] So there are different estimates of how much information the different senses can process.
+[1638.120 --> 1643.120] Vision, as I mentioned earlier: in humans, we have pretty good spatial acuity.
+[1643.120 --> 1654.120] We're able to process a lot of information per second, with some old estimates that actually converge with some estimates in neurophysiology as well.
+[1654.120 --> 1658.120] And we're able to do that more or less in parallel.
+[1658.120 --> 1666.120] Now, one of the first sensory substitution devices chose tactile, or touch, processing as a way to access image information.
+[1666.120 --> 1675.120] In part because it's easy to imagine, to some extent: you can think of the skin as a spatial analog to the retina.
+[1675.120 --> 1686.120] So, for example, if I was just to present these letters on the screen to your retina, you can imagine how the light enters the eye and then you essentially have that image of the letter on the back of the retina.
+[1686.120 --> 1690.120] Similarly, you can play a game where you write letters, say, on someone's hand.
+[1690.120 --> 1699.120] And you have the spatial analog of being able to feel the letters, kind of in the same way the retina is processing the visual information of the letters.
+[1699.120 --> 1705.120] But our touch system has lower acuity than vision.
+[1705.120 --> 1709.120] It processes a lot less information per second than vision.
+[1709.120 --> 1725.120] And across a lot of studies, the findings seem to suggest that a lot of this touch processing is serial. So, rather than processing a lot in parallel, we're sort of restricted in how much information we can process at any given moment in time.
+[1725.120 --> 1728.120] So, Paul Bach-y-Rita, who I mentioned at the beginning...
+[1728.120 --> 1743.120] He created one of these first sensory substitution devices. This is an image from their work in the 60s, where the image is taken by a camera and processed back here, taking the pixels in that black-and-white image.
+[1743.120 --> 1751.120] And then you could feel it on your back; you can see all these sort of little solenoids here on this dentist-like chair.
+[1751.120 --> 1765.120] And if there was a circle shown in front of the camera, like a ball, you would feel that shape on your back, with the size of it changing depending on how close the ball was to the camera lens.
+[1765.120 --> 1777.120] Nowadays, coming out of that same tradition, researchers ended up forming a company, which has created something called the BrainPort.
+[1777.120 --> 1790.120] And then we have an older version down here. So essentially you have glasses with a camera right above the nose, and a processing unit that converts the image into these pixels, which are then felt on the tongue.
+[1790.120 --> 1798.120] I'll show you a little close-up of the unit here; this one on the left is the newer unit that we have.
+[1798.120 --> 1811.120] And so the basic idea is you have this on your tongue, and it's providing just a little bit of electrical stimulation. Now, the tongue turns out to be a good place because of the saliva; it conducts electricity, so the stimulation doesn't have to be too strong.
+[1811.120 --> 1821.120] And if you were looking at an image of a bar on the screen, you would essentially feel that diagonal bar across your tongue.
+[1821.120 --> 1834.120] And it's a really interesting experience. It kind of reminds me of something like popping candy; I don't know if you've had that before, where you put the candy on your tongue and you kind of feel it popping in different ways.
+[1834.120 --> 1843.120] And we've done some work trying to understand the different important aspects of how people are able to read things on their tongue.
+[1843.120 --> 1858.120] And in particular dealing with the strange orientation of feeling information coming from a camera mapped onto your tongue, which is also influenced by the sensitivity of the tongue as well.
+[1858.120 --> 1865.120] But there are these drawbacks, right? So the acuity is not very good. The amount of information it processes isn't that good either.
+[1865.120 --> 1876.120] And so then thinking about processing sound, the auditory system, becomes a really interesting idea.
+[1876.120 --> 1886.120] And the reason why it's so good? Well, temporal acuity is very good. It's known that we have very good sensitivity to timing differences, little differences in timing.
+[1886.120 --> 1890.120] We have very good sound localization.
+[1890.120 --> 1893.120] This is something, actually...
+[1893.120 --> 1904.120] That what we call the Heffners have looked at across mammals, seeing this relationship between having sort of highly sensitive foveal vision, like humans have...
+[1904.120 --> 1911.120] So we have a very sensitive region at the sort of center of our retina, and sound localization ability.
+[1911.120 --> 1922.120] And animals like us can use localizing a sound as a way to turn our eyes to the right location, to be able to localize where the sound is coming from and identify its source.
+[1922.120 --> 1927.120] So there's a multisensory thing here that's crucial. And so we're good at sound localization as well.
+[1927.120 --> 1937.120] You can process more information in sound than you can in touch; still less than vision, but it's getting you much closer to that level of information processing.
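
Before the talk turns to sound, it may help to see what the tactile route just described boils down to computationally: downsample the camera image to the resolution of the tactor or electrode array, and map local brightness to stimulation intensity. A minimal sketch follows; the 20x20 grid and eight intensity levels are illustrative assumptions, not the BrainPort's actual specification.

import numpy as np

def image_to_tactile_grid(image, grid=(20, 20), levels=8):
    # Downsample a grayscale image (2-D array, values 0..255) to a small
    # grid of discrete stimulation intensities, one per tactor/electrode.
    h, w = image.shape
    gh, gw = grid
    # Average brightness within each grid cell (block mean).
    blocks = image[: h - h % gh, : w - w % gw].reshape(
        gh, h // gh, gw, w // gw
    ).mean(axis=(1, 3))
    # Quantize to the few intensity levels the skin can reliably tell apart.
    return np.round(blocks / 255 * (levels - 1)).astype(int)

# A bright diagonal bar on a dark background, as in the tongue-display demo.
img = np.zeros((200, 200))
for i in range(200):
    img[i, max(0, i - 5) : min(200, i + 5)] = 255
print(image_to_tactile_grid(img))  # the diagonal survives at 20x20

The block-mean plus quantization step is exactly where the acuity limit mentioned above bites: whatever detail does not survive a 20x20, few-level summary is simply never delivered to the skin.
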
+[1937.120 --> 1951.120] But the problem is: how can we provide visual spatial information through sound? Doing this in touch seemed more obvious; doing it in sound really takes a bit of creativity.
+[1951.120 --> 1960.120] So one way to think about it first is to do a little experiment here. Each of these shapes has a name.
+[1960.120 --> 1965.120] One of these is called kiki and one is called bouba.
+[1965.120 --> 1972.120] And perhaps you can just put something in the chat, or just think the answer to yourself: what do you think this first shape is called?
+[1972.120 --> 1983.120] Did you call this one kiki or bouba?
+[1983.120 --> 1991.120] So answers are coming in saying bouba, and someone else says definitely bouba.
+[1991.120 --> 2001.120] And I can't say you're right or wrong, necessarily, but I will note that that's consistent with what over 90% of people say.
+[2001.120 --> 2004.120] So most people name the spiky object kiki.
+[2004.120 --> 2010.120] And therefore this object over here, with the softer edges, bouba.
+[2010.120 --> 2017.120] And this has been found across different developmental ages, and has been found cross-culturally, with this team of Japanese researchers...
+[2017.120 --> 2023.120] That we work with, doing some work in northern Malaysia. It's a really consistent effect.
+[2023.120 --> 2035.120] So there seems to be this correspondence between the sound of the word, kiki or bouba, and the look, the sight, of these shapes.
+[2035.120 --> 2041.120] So there are links between what we see and what we hear.
+[2041.120 --> 2045.120] And you see this across a range of stimulus properties as well.
+[2045.120 --> 2050.120] And so let's think about the image in terms of these sorts of cross-modal correspondences.
+[2050.120 --> 2060.120] Now obviously an image is more complex. This is from a photo taken by Pranav Lal, who is a...
+[2060.120 --> 2072.120] A user of this vOICe device I'll be describing, in India, and he took some photos up around the Himalayas; he's doing a lot of amazing work with the device, taking photos.
+[2072.120 --> 2082.120] And in this photo, you know, if you just think about information, one easy way to break it down is to think about pixels in the image, which I've represented here by large squares.
+[2082.120 --> 2087.120] And what is present in these different parts of the image.
+[2087.120 --> 2091.120] So if you look at the square over here at the left.
+[2091.120 --> 2097.120] You can think about it: its Y-axis location is low, its X-axis location is to the left, and its luminance is dark.
+[2097.120 --> 2107.120] If we go over here and pick another pixel: on the Y axis it's high, on the X axis it's to the right, and its luminance is bright.
+[2107.120 --> 2117.120] So we have different cross-modal correspondences that exist for these sorts of spatial relations, and for the brightness of an image.
+[2117.120 --> 2123.120] We know that we have associations between height in space and pitch.
+[2123.120 --> 2128.120] So you can have a high-frequency, or high-pitch, sound representing this one.
+[2128.120 --> 2136.120] For space on the X axis, like I said, we're good at sound localization, so you can represent it to the right here.
+[2136.120 --> 2146.120] And we also have this association that brighter lights are associated with louder sounds; brightness and amplitude seem to be linked for us, approximately.
+[2146.120 --> 2149.120] And then this one down over here.
+[2149.120 --> 2158.120] And for the other one down here: since it's low, it should be low pitch; you can hear it more in your left ear, with stereo panning; and it's dark, so it can be quiet.
+[2158.120 --> 2164.120] Just looking at a quick question: I'll save that one to the end.
+[2164.120 --> 2167.120] Come back to that one.
+[2167.120 --> 2173.120] And then we can take this property of vision, being able to sort of take things in in parallel...
+[2173.120 --> 2178.120] But reduce the amount of information we have to process at any one point by adding a temporal feature.
+[2178.120 --> 2185.120] And so the other thing we can do is scan the image, in terms of its columns, from left to right.
+[2185.120 --> 2192.120] And in that way, take advantage of this sensitivity to timing that we know the auditory system is good at.
+[2192.120 --> 2202.120] You can just worry about processing the information at any given one point, and then integrate it across the image, as a way of being able to reduce the complexity of it.
+[2202.120 --> 2210.120] And so you'll scan across the image. You'll hear things that are high in the image as high pitches, things that are low as low pitches.
+[2210.120 --> 2219.120] And also, if you have stereo earphones on, you can hear things on the left in the left ear and things on the right in the right ear.
+[2219.120 --> 2230.120] And so we can see a few examples from a video we made with New Scientist, when they covered Pranav's pictures that he takes when he goes on holiday and travels and such.
+[2230.120 --> 2242.120] And just so that you hear what the vOICe, created by Peter Meijer, sounds like.
+[2242.120 --> 2255.120] Okay, you'll hear a high pitch as it scans across the flat line at the top, a low pitch as it goes across the flat line at the bottom here, and as it scans across this one you hear a change of pitch.
+[2255.120 --> 2260.120] All right.
+[2260.120 --> 2279.120] So you should be getting that direct sense of it going up in pitch as the line goes up, and going down in pitch as it goes down, from left to right.
+[2279.120 --> 2290.120] Now, of course, what you hear is the complexity of real images compared to these simple shapes. It's very representative of the complexity of the world; you know, the world has a lot of information within it.
+[2290.120 --> 2307.120] And once you think about it in terms of trying to translate it to sound, hopefully you can pick up on the complexity that, when we see, we might take for granted.
+[2307.120 --> 2321.120] Now, I like that one because you can hear the pitch sort of dropping as it follows this drop in elevation here.
+[2321.120 --> 2326.120] And that's just an example of the implementation as well.
+[2326.120 --> 2351.120] Okay, so going back to that reference frame study we did before: my postdoc Achille and former PhD student Tayfun went on to examine whether using the vOICe to provide a way of seeing the scene would mimic what you see in terms of visual experience.
+[2351.120 --> 2355.120] And indeed, what they found was exactly that.
+[2355.120 --> 2365.120] So if you look at their graph, aligned here at 315 and 0, you'll see that's the lowest point, which corresponds to what sighted and late blind people do.
+[2365.120 --> 2382.120] So this pattern of results matches what people do when they can actually see the scene, but the crucial thing is that they didn't actually see it with their eyes; in this case, they saw it with their ears, with hearing.
+[2382.120 --> 2393.120] What's the catch? Well, one thing that's sometimes a challenge is that it takes learning. You know, when we think about vision, we've had our whole infancy to learn to see.
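
To pin down the image-to-sound mapping described above in code: scan the image column by column from left to right, give each row its own sine-wave frequency (higher in the image means higher pitch), and let pixel brightness set that component's loudness. This is a from-scratch sketch of the general principle only, not Peter Meijer's actual vOICe implementation, whose frequency range, scan rate, stereo panning, and many other details differ.

import numpy as np

def image_to_sound(image, sweep_s=1.0, rate=22050, f_lo=500.0, f_hi=5000.0):
    # Left-to-right column scan of a grayscale image (2-D array, values
    # 0..1, row 0 = top). Each row is a sine partial; brightness sets its
    # amplitude. Phase continuity between columns is ignored in this sketch.
    n_rows, n_cols = image.shape
    freqs = np.geomspace(f_hi, f_lo, n_rows)   # top rows = high pitch
    samples_per_col = int(rate * sweep_s / n_cols)
    t = np.arange(samples_per_col) / rate
    out = []
    for c in range(n_cols):                    # time axis = image x axis
        col = image[:, c][:, None]             # brightness per row
        tones = np.sin(2 * np.pi * freqs[:, None] * t)
        out.append((col * tones).sum(axis=0))  # mix the active partials
    audio = np.concatenate(out)
    return audio / max(1e-9, np.abs(audio).max())  # normalize

# A rising diagonal line should be heard as a rising pitch sweep.
img = np.zeros((64, 64))
img[np.arange(63, -1, -1), np.arange(64)] = 1.0  # bottom-left to top-right
audio = image_to_sound(img)                      # play back at 22.05 kHz

Played back, the rising diagonal in the test image comes out as a one-second rising pitch sweep, which is the effect described in the line demos above.
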
+[2393.120 --> 2402.120] And all our interactions with the world and with the other senses. So when doing more complex tasks that get to the level of something like navigation...
+[2402.120 --> 2414.120] We need to take people through periods of training, to learn to use this new auditory input as something that provides them this rich information about the environment.
+[2414.120 --> 2424.120] We do things essentially to make it simpler. So we have this all-black room, which is our virtual reality and motion tracking lab, and we present these...
+[2424.120 --> 2433.120] High-contrast objects, you know, shown in white against the black background, to make sure people are able to focus on learning the locations of things.
+[2433.120 --> 2444.120] But that gives us an opportunity, after training, to then examine how people can integrate things, like seeing with sound through the vOICe along with other things like self-motion.
+[2444.120 --> 2453.120] So the normal way you might remember where objects are, as we did in our prior studies, is through movement to them as a way of representing them.
+[2453.120 --> 2465.120] And then there are different ways we can test that. Either egocentrically, on the left here: how do people remember how to get to the location of an object from their starting point, their perspective?
+[2465.120 --> 2481.120] Or allocentrically, where they're shifted to the starting point of another object, and therefore have to be able to calculate how to get from each object to each other object, rather than from their egocentric starting location.
+[2481.120 --> 2486.120] I'll just give a couple of examples of some of the data that we've found so far along these lines.
+[2486.120 --> 2506.120] And one thing I think is exciting is doing comparisons between those who were sighted versus those who were visually impaired, and finding that participants are able to efficiently integrate using the vOICe to hear the environment with self-motion information.
+[2506.120 --> 2515.120] And that's what you see by them having sort of the lowest variability in their error, so their precision gets very good.
+[2515.120 --> 2526.120] And it gets to the level of ideally integrating the two forms of input: seeing with sound, and their self-motion of walking through the environment.
+[2526.120 --> 2534.120] And the visually impaired are performing better than the sighted when using the device with their self-motion as well.
+[2534.120 --> 2539.120] Of course, this is for egocentric tasks, which we know the visually impaired are very good at.
+[2539.120 --> 2551.120] But what we were excited to see was that the visually impaired were also able to integrate, and have strong performance, as good as the sighted in terms of their precision, in the allocentric tasks as well.
+[2551.120 --> 2562.120] Where they have to really think about where the objects are located in relation to one another, rather than relative to their starting point. So we're really excited about that work.
+[2562.120 --> 2565.120] Now, can the learning process be made more efficient?
+[2565.120 --> 2580.120] Well, we're trying something new right now. So one thing we're trying: we developed an app game a few years ago, that we've now had a chance to test, where we gamified learning to see with sound, for lots of different shapes and objects.
+[2580.120 --> 2588.120] And people can get a lot of experience with it on a mobile phone, as one way of training. And it's actually a pretty fun game for learning how to do it.
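
As an aside on the "ideal integration" benchmark mentioned above: in the standard maximum-likelihood cue-combination model, two independent, unbiased estimates with variances var_a and var_b combine into an estimate with variance var_a * var_b / (var_a + var_b), which is never worse than the better single cue. A small sketch with made-up numbers:

def combined_variance(var_a, var_b):
    # Predicted variance of an ideal (maximum-likelihood) combination of
    # two independent, unbiased cues; each cue is weighted by its
    # inverse variance.
    return (var_a * var_b) / (var_a + var_b)

# Made-up example: localization variance from the device alone vs. from
# self-motion (walking) alone, in degrees squared.
device_only, walking_only = 9.0, 16.0
print(combined_variance(device_only, walking_only))  # 5.76 < min(9, 16)

Finding response variability at or near this prediction is the usual evidence that participants are genuinely integrating the two inputs, rather than relying on whichever single cue happens to be better.
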
+[2588.120 --> 2594.120] And then we tested people in two tasks. One in terms of just table-top localization.
+[2594.120 --> 2601.120] But in another they really had to navigate through an indoor space and avoid obstacles within that space.
+[2601.120 --> 2611.120] And we found that the initial results are encouraging, giving us some sense of how much training is helpful for these types of tasks. For the table-top tasks...
+[2611.120 --> 2625.120] The sort of biggest drop we found, and lower is better here, in terms of speed while remaining accurate in this task, was after eight hours of training with the phone training game.
+[2625.120 --> 2644.120] And we found even greater results for the actual obstacle avoidance and indoor navigation task, where without any training people were sort of randomly, equally successful at passing versus failing to do this using seeing with sound.
+[2644.120 --> 2655.120] And we saw the proportion of successful trials jump up, and eight hours just added on a little bit more improvement in their performance.
+[2655.120 --> 2665.120] And keep in mind, that's going from playing a game on a phone screen to a real three-dimensional environment. And so we're really excited about that work.
+[2665.120 --> 2670.120] And here's just an example video of one of the training tasks we had someone go through.
+[2670.120 --> 2687.120] And we've really been exploring, through a lot of our papers, the ways this can inform spatial processes in general, as well as the potential implications for sensory substitution as an application for the visually impaired.
+[2687.120 --> 2700.120] So just to wrap up the key ideas: visual experience helps maintain map-like structures in the brain, and has implications for how we represent the space around us and how we interact with it.
+[2700.120 --> 2705.120] But the important thing is we perceive with the brain, not just with our various sensory receptors.
+[2705.120 --> 2726.120] And so I think the promise of sensory substitution is that we can sort of hack into the brain and provide information through another sense, information that can enable people to represent things, and respond to things, and even navigate, in ways as if they could see, even if they're doing it with sound.
+[2726.120 --> 2751.120] In one recent paper with Tayfun and Vanessa here, we outlined a number of ways we can take advantage of our knowledge about how multisensory integration and cooperation works, to think about inclusive design: the many things that we could do with representing information in ways that can be processed visually, or through sound, or through touch.
+[2751.120 --> 2762.120] So with that, let's turn over to some questions. Thank you very much, all of you, for coming. I'll let Colette run things, but it looks like we have a few questions in the chat; I could probably start.
diff --git a/transcript/allocentric_w6a7pUJ4zBA.txt b/transcript/allocentric_w6a7pUJ4zBA.txt
new file mode 100644
index 0000000000000000000000000000000000000000..246df68a761d1aae095156b137ee7909c11455ac
--- /dev/null
+++ b/transcript/allocentric_w6a7pUJ4zBA.txt
@@ -0,0 +1,5 @@
+[0.000 --> 6.000] Without you it's hard to survive
+[6.000 --> 10.000] Cause every time we touch I get this feeling
+[10.000 --> 15.000] And every time we kiss I swear I could fly
+[15.000 --> 18.000] Can't you feel my heart beat fast?
+[18.000 --> 24.000] I want this to last I need you by my side
diff --git a/transcript/allocentric_wuZ1fSBp1Ts.txt b/transcript/allocentric_wuZ1fSBp1Ts.txt
new file mode 100644
index 0000000000000000000000000000000000000000..13fa93bd72e32bc9a70e1e9117efa7158c7b13fa
--- /dev/null
+++ b/transcript/allocentric_wuZ1fSBp1Ts.txt
@@ -0,0 +1,27 @@
+[0.000 --> 11.600] I appreciate the time you've been putting in to make it special.
+[11.600 --> 12.600] Thank you.
+[12.600 --> 14.600] I see a lot of friends.
+[14.600 --> 19.520] Nonverbal signals keep the classroom running well by minimizing interruptions.
+[19.520 --> 22.240] I clearly need a review number of models.
+[22.240 --> 24.240] Where are you with that?
+[24.240 --> 26.280] Okay, so we'll get a little more practice before.
+[26.280 --> 27.280] Give me a fist to five.
+[27.280 --> 31.400] Show me what you think, how you did on this learning target.
+[31.400 --> 32.400] Give it to me.
+[32.400 --> 34.200] You can flash it to me fast.
+[34.200 --> 38.120] Teachers can communicate messages to the whole class or to individuals quickly.
+[38.120 --> 41.080] They're about to go right into our centers.
+[41.080 --> 43.320] And silently.
+[43.320 --> 47.920] Students can make requests of the teacher or send messages to each other without disturbing
+[47.920 --> 55.160] the work of others.
+[55.160 --> 59.320] Schools and classrooms can invent their own signals, or American Sign Language can provide
+[59.320 --> 61.160] signals for any purpose.
+[61.160 --> 63.600] What are some of the ways you can build on the ideas of others?
+[63.600 --> 64.600] Show me, disagree.
+[64.600 --> 65.600] Good.
+[65.600 --> 67.600] What else can you do to build?
+[67.600 --> 68.600] Good.
+[68.600 --> 69.600] Add on.
+[69.600 --> 70.600] What else can you do?
+[70.600 --> 71.600] Clarify.
+[71.600 --> 72.600] Okay, what else?
diff --git a/transcript/allocentric_xF4GkHLiHJQ.txt b/transcript/allocentric_xF4GkHLiHJQ.txt
new file mode 100644
index 0000000000000000000000000000000000000000..13db85806f3df6045b9685216f9a628fd01320c5
--- /dev/null
+++ b/transcript/allocentric_xF4GkHLiHJQ.txt
@@ -0,0 +1,338 @@
+[0.000 --> 17.000] Thank you so much for that lovely introduction. I don't often hear all of my education recited
+[17.000 --> 21.000] out to me, and so it makes me realize, gosh, I've been doing this for a while, so thank you very
+[21.000 --> 27.000] much. Can you guys all hear me okay if I hold it like this? Yeah, okay, good. So one of the things
+[27.000 --> 32.000] that I want to just kind of elaborate on a little bit further with my training is just the fact
+[32.000 --> 36.000] that I've really had the unique opportunity to work with a lot of patients with a variety of
+[36.000 --> 41.000] different developmental and medical disorders, and I'm really thankful for the lessons that they've
+[41.000 --> 47.000] shared with me. And today what we're going to talk about is a specific diagnosis called nonverbal
+[47.000 --> 53.000] learning disability. How many of you guys have heard about this diagnosis? Okay, so a fair
+[53.000 --> 58.000] number of you guys have heard about it. For those of you who haven't, we are going to spend a fair
+[58.000 --> 64.000] amount of time today talking about this diagnosis. As some of you guys in the room are aware, there's a
+[64.000 --> 69.000] fair amount of controversy behind this diagnosis in the field, and so I really do want to spend some
+[69.000 --> 75.000] time giving you guys tools and information as a family to help you understand how to advocate for your
+[75.000 --> 81.000] child, and what hurdles you might come up against if you were to hear this diagnosis or
+[81.000 --> 86.000] use this diagnosis with your child, if appropriate. So we'll talk about the history of nonverbal learning
+[86.000 --> 91.000] disabilities and where they came about. We'll talk about the assessment and the classification of
+[91.000 --> 97.000] nonverbal learning disability, some of the scientific issues and why there is some controversy in the
+[97.000 --> 102.000] field, and then we'll talk about some specific interventions. So regardless of whether or not the label is a
+[102.000 --> 109.000] label that we use, we do see a constellation of weaknesses. And so briefly, before we talk about some
+[109.000 --> 116.000] of the history, what I'd like to talk about is just: what is nonverbal learning disability? What are we talking about when we use this term?
+[116.000 --> 122.000] The term I think can be a little confusing, because you hear the word nonverbal and you think, does that mean they can't
+[122.000 --> 129.000] talk? But in fact we actually see that these are individuals who have notable strengths in language skills.
+[129.000 --> 138.000] So the weaknesses that we tend to see are actually in visual spatial skills and in mathematics. There's been the most
+[138.000 --> 145.000] empirical evidence to suggest that those are the two longest standing or most well supported kind of areas of
+[145.000 --> 153.000] vulnerability. There are also some varying levels of evidence for weaknesses in motor skills, weaknesses in
+[153.000 --> 161.000] social cognition, emotional functioning, and attention and executive functioning. And again, this constellation of
+[161.000 --> 168.000] weaknesses is seen within the context of some notable strengths in language based skills. So vocabulary development,
+[168.000 --> 177.000] language based academics, those are things that these individuals tend to do quite well on. When we think about these
+[177.000 --> 184.000] NLD cognitive profiles, we hear about them not only in Turner syndrome, but there's also been a number of other
+[184.000 --> 191.000] medical disorders where this cognitive profile has been identified. So looking at individuals with autism spectrum
+[191.000 --> 199.000] disorders, or what we used to call Asperger's, fetal alcohol syndrome, hydrocephalus, individuals who have received
+[199.000 --> 207.000] radiation treatment from oncology backgrounds, traumatic brain injuries, absence of the corpus callosum, the
+[207.000 --> 214.000] fibers that connect the two hemispheres of the brain, and then we also can see it in epilepsy or seizure disorders.
+[214.000 --> 222.000] So it's not just isolated to one specific population; it is in fact a cognitive profile that we can see in a
+[222.000 --> 231.000] variety of other medical disorders. So how did this term come about? We first started talking about specific learning
+[231.000 --> 240.000] disabilities in the 1960s, and at that time we really heard more information about verbal learning disabilities, so either reading
+[240.000 --> 248.000] and writing or speech language, and those tended to be the areas that we had the most concern about. In the 1970s,
+[248.000 --> 258.000] Myklebust started talking about what was identified as the constellation of nonverbal learning disabilities. So really looking at what we consider a
+[258.000 --> 267.000] split, if you will, between verbal and performance IQ. So verbal IQ being your ability to define words, to tell me how two words are the same.
+[267.000 --> 276.000] And performance IQ being more like your ability to replicate visual information, or recognize a sequence, what was happening in a
+[276.000 --> 284.000] sequence of pictures, and be able to predict what should come next. So really recognizing individuals who had a unique strength with verbal
+[284.000 --> 294.000] intellectual abilities. Also identified were handwriting difficulties, a poor concept of time, math weaknesses, difficulties with social
+[294.000 --> 304.000] relationships, things that we tend to think of as being associated with the right hemisphere, and also some difficulty negotiating nonverbal aspects of their
+[304.000 --> 312.000] environment. So many of the themes of that initial kind of description of the nonverbal learning disabilities have remained consistent over time.
+[312.000 --> 327.000] We did see some further research by Denckla, talking about poor spatial orientation, poor math understanding, social emotional deficits and left-sided motor signs, within the context of these
+[327.000 --> 339.000] higher level language skills. We saw in 1983 kind of this idea of a behavioral kind of syndrome, so talking really about identifying some specific weaknesses within the
+[339.000 --> 359.000] right hemisphere. So again, you guys may remember from kind of your high school biology class that the right hemisphere kind of controls the left side of our body and the left hemisphere controls the right side. And we tend to think that each hemisphere of the brain is especially able to better handle certain things. So the left hemisphere tends to do a lot of
+[359.000 --> 376.000] language, and the right hemisphere tends to do more nonverbal or visual spatial kinds of reasoning skills. So these researchers started to look at specific neurological diagnoses where we were seeing kind of this right hemisphere disorder, if you will.
+[376.000 --> 403.000] Voeller again kind of provided some further evidence for the idea of strengths with verbal reasoning and language based academics within the context of weaker visual spatial and math skills, but also added some concerns about inattention, while recognizing that there's a lot of variability in these presentations, that it wasn't as clear or as straightforward as we might have thought.
+[404.000 --> 418.000] Bruce Pennington did a little bit of work with visual spatial problems, and kind of really identified this, you know, the NLD kind of profile as including visual spatial problems, handwriting, math and social cognition.
+[418.000 --> 432.000] And really this idea that verbal learning disabilities are diagnosed based on low academic achievement, but how do we diagnose NLD, which is really kind of looking at impairments at a cognitive level, if you will?
+[435.000 --> 437.000] Oh sorry, I forgot to bring it up there.
+[437.000 --> 454.000] So probably one of the most prolific researchers in this area has been Rourke, with his model of NLD, and some of you guys, if you're familiar with NLD, have probably heard this name before. Really, he identified this idea of assets and deficits.
+[454.000 --> 475.000] So things that children did quite well, including kind of early speech language development that seemed within normal age expectations, strengths in rote memory skills, attention to detail, language based academic skills being at or above age expectations, so reading and spelling.
+[475.000 --> 493.000] And then recognizing that these children tend to be quite articulate, that they're able to really present as someone who has some really notable verbal strengths. Within this set of strengths, he also identified these specific areas of concern, so visual spatial skills.
+[494.000 --> 503.000] So specifically looking at visual perception. Visual perception really looks at this idea of how do you view the world, and are you viewing it the same way?
+[503.000 --> 515.000] So we look at kind of visual discrimination, so can you identify similarities between visual pictures, can you replicate visual information, really trying to get an idea of that, and then looking at visual recall.
+[515.000 --> 521.000] So if I show you a set of visual pictures, are you able to recognize or replicate those at a later time?
+[522.000 --> 547.000] He also looked at motor skills, so thinking about fine motor, what we consider or call graphomotor skills, so pencil control, handwriting; fine motor skills like getting dressed, buttoning, those kinds of weaknesses; overall coordination, so an individual's ability to navigate through their environment without bumping into a variety of things, or kind of what we would consider clumsy behaviors.
+[548.000 --> 553.000] And then concerns about balance were another area noted to be a concern.
+[554.000 --> 571.000] And then this idea of social skills, so difficulty with nonverbal communication, so social judgment, being able to utilize nonverbal gestures, understanding and interpreting facial expressions; those kinds of weaknesses were identified.
+[572.000 --> 586.000] And then the thought behind these kinds of deficits was that those sets of weaknesses directly affected attention, memory, problem solving, and a child's kind of social competence.
+[588.000 --> 598.000] His model is really the one that has had the most professional and lay impact, and is also the one that has the most theoretical or empirical support at this stage.
+[599.000 --> 615.000] Dr. Rourke also started working on kind of what we would call the white matter hypothesis. So in our brain we have two kind of major divisions that we talk about, with gray matter and white matter.
+[615.000 --> 630.000] White matter is essentially the highways between the different areas of the brain, so it helps with communication between the different areas, so that we're able to communicate and kind of share information across our brain.
+[631.000 --> 644.000] There tends to be a higher ratio of white to gray matter in the right hemisphere. And so it was really this thought that since there's more white matter, if there is dysfunction or damage or...
+[645.000 --> 653.000] ...a lesion to that right hemisphere, then we're more likely to see those deficits associated with white matter disease.
+[654.000 --> 668.000] So this is kind of our standard kind of brain picture that we've always seen, right? And then this is some newer neuroimaging studies that are really allowing us to look at the white matter tracts within the brain.
+[668.000 --> 676.000] And so I just really like this picture, because I think when I talk about white matter tracts as highways, it doesn't quite come across as well without this picture.
+[677.000 --> 691.000] But really when you look at this, this is really the idea that we're seeing: that there are some really notable highways, or communication tracts, within the brain that can be potentially damaged with a variety of different medical diagnoses.
+[692.000 --> 713.000] So the prevalence for NLD: there is some controversy in this, but some of the early kind of research would suggest that within a population of children who have learning disabilities, so in a clinic for kids with learning disabilities, they were seeing 5 to 10% of that clinic sample having a diagnosis of NLD.
+[713.000 --> 722.000] Some different research would suggest it was only 1%. So I think that there is some variability; I think you might be hearing higher numbers with some of the other research.
+[723.000 --> 731.000] But at this point, that's kind of the standard that people were thinking: that within kids diagnosed with learning disabilities, NLD was happening in about 5 to 10%.
+[732.000 --> 749.000] So, assessment methods: how do we tell if a child has a nonverbal learning disability? Early studies with Rourke and his colleagues really just looked at whether a child had a math disability, and that was the way that they diagnosed a nonverbal learning disability.
+[750.000 --> 764.000] That progressed to people looking more holistically at a variety of different skills, so not only math, but also some of the other areas that we've talked about, so the visual spatial and some of the social cognition, and kind of thinking about those.
+[765.000 --> 774.000] A lot of the research that you'll hear about, or the early research, talked about this, again, this verbal performance split, and using a cutoff of 10 points.
+[775.000 --> 782.000] So if your verbal IQ is 10 points higher than your performance IQ, really using that as a diagnostic criterion.
+[783.000 --> 792.000] The problem is that a lot of people in the general population have that split, so it's not considered significant or rare, if you will.
+[793.000 --> 802.000] And so that makes it a little bit more challenging to really say, gosh, that should be a diagnostic criterion, when it can be so easily applied to other populations.
+[804.000 --> 821.000] We also see that that 10 point split isn't consistent across the lifespan. So 30% of kiddos who are 9 to 15 with NLD showed that 10 point split, but 70% of 7 to 8 year olds did.
+[822.000 --> 830.000] So noticing that even using it as a diagnostic criterion within individuals identified with NLD wasn't sensitive, if that makes sense.
+[831.000 --> 838.000] You guys holding in there with me okay? Yeah, all right. I know there's a lot of data and history; hang in there, we're getting to the good stuff, okay.
+[839.000 --> 845.000] So over time, we really kind of expanded it to reflect this idea of assets and deficits.
+[846.000 --> 859.000] There was a proposal for the ICD. So the ICD is kind of the diagnostic manual, if you will, that is used by the majority of the medical field to make consistent medical diagnoses.
+[860.000 --> 869.000] So this was kind of the proposed criteria. So you can see some of the things that we've talked about in the past, as well as some additional new information.
+[870.000 --> 877.000] So tactile perception was looked at as having bilateral deficits, bilateral just meaning both sides.
+[878.000 --> 887.000] And what this was saying is that an individual has trouble with really understanding either what's being held in their hand or how their hand is being touched, if that makes sense.
+[887.000 --> 896.000] So just that their sensory ability, their ability to understand the touch that they're experiencing, isn't as effective or as intact as it might be for other people.
+[897.000 --> 910.000] It also talked about the kind of complex psychomotor coordination, which we've talked a little bit about. And again, that's just this idea of clumsy behaviors, or behaviors that might be indicating some kind of gross motor weaknesses.
+[910.000 --> 923.000] We also see extremely impaired visual spatial organizational skills. So again, that idea of being able to interpret and replicate visual information is something that we see as an area of potential concern.
+[925.000 --> 939.000] One of the other things that we can talk about a little bit is dealing with novel or complex information. So that was something that the authors of this proposal had indicated: that kids seem to struggle more with novel situations.
+[940.000 --> 951.000] So where they had to really utilize information that was appropriate to novel situations, to help kind of stay on task.
+[952.000 --> 959.000] Impairments of novel problem solving. So we look at, I don't know if it's better this way or not, you guys, sorry.
+[959.000 --> 971.000] Having to use kind of planning and organizational strategies, and being able to utilize that information and efficiently solve problems that they might not be as familiar with.
+[972.000 --> 983.000] Can you guys still hear me okay? Okay, good. A distorted sense of time. So having just a misunderstanding of how much time has passed or how much time is left.
+[983.000 --> 998.000] Really not being able to do that quite as well as we might expect for other people their age. And all, again, within this idea of well developed rote verbal abilities: being highly verbose, being able to chat it up, but not always understanding the pragmatics.
+[999.000 --> 1010.000] So some of those kind of literal interpretations of statements like "hit the road, Jack," and some of those things that seem to be a little bit harder to understand for people who might have NLD.
+[1010.000 --> 1025.000] We also saw deficits in math within the context of those strengths in reading and spelling, and then difficulties in social perception and judgment, and a high risk for what are called internalizing behaviors or symptoms, like anxiety and depression.
+[1026.000 --> 1039.000] So this is what was proposed for the ICD-10, which was just revised within the last year. And what we saw was that it was not accepted. So at this point, we have some questions.
+[1040.000 --> 1061.000] So, some scientific limitations with the use of the term nonverbal learning disability. Nonverbal learning disability is not widely recognized in the United States. It is not included, as I mentioned, in the ICD-10, or in the Diagnostic and Statistical Manual that is typically used by psychologists to consistently make diagnoses.
+[1061.000 --> 1078.000] Even among practitioners, you'll hear people talk about NLD or NVLD depending on which coast you're on. So even within the field, I think we really struggle with being able to consistently title this constellation of weaknesses.
+[1078.000 --> 1095.000] It's not formally recognized outside of the US either, although there tends to be some more acceptance at a clinical level. So there are some studies in Britain and Australia and Italy that have started to indicate that there might be some more research happening in these areas.
+[1096.000 --> 1114.000] The other kind of scientific limitation, or clinical limitation, is that schools don't recognize nonverbal learning disability. So if you take a report to a school district and you say my child has a nonverbal learning disability, there's not a category for that label to be used on their IEP.
+[1114.000 --> 1129.000] So for those of you guys who may know, schools offer special educational services to children who have disabilities, but they have to fit within this kind of criteria that's been developed at a federal level.
+[1129.000 --> 1147.000] And so these are the 13 categories that are used. And so oftentimes there's kind of the struggle of what do we do, how do we help serve these patients who might have these notable cognitive vulnerabilities, and how do we support them at school, or advocate for them to be supported at school.
+[1148.000 --> 1157.000] So the one that would make the most sense, when we're talking about a nonverbal learning disability, is a specific learning disability.
+[1157.000 --> 1176.000] But when you look at the actual criteria for a specific learning disability, what you're seeing, and I'm just going to read it quickly, is that it means a disorder in one or more of the basic psychological processes involved in understanding or using language, spoken or written, that may manifest itself in the imperfect ability to...
+[1177.000 --> 1186.000] ...listen, think, speak, read, write, or spell, or to do math calculations.
+[1186.000 --> 1196.000] So really, when you look at it, there's not a lot of range to say, gosh, that's a kid who has visual spatial weaknesses, or that's a kid who has a specific nonverbal learning disability.
+[1196.000 --> 1200.000] That's not included as a part of the description.
+[1200.000 --> 1215.000] So from a clinical standpoint, these are patients who we see in clinic, who have this set of weaknesses associated with nonverbal learning disabilities in terms of visual spatial weaknesses and kind of math reasoning, and how do we advocate for them?
+[1215.000 --> 1226.000] And we can talk a little bit about that in terms of interventions, but I think it is one of the main hurdles that we can come up against when people use that terminology.
+[1226.000 --> 1236.000] So why? Why is there this discrepancy? Why do we have these camps in the U.S. of people who are really for the NLD diagnosis and people who are still hesitant about it?
+[1236.000 --> 1249.000] And the reason is because of some of the limitations in the empirical or scientific literature. So we haven't been able to, as a field, really measure this diagnosis or this cognitive profile very well.
+[1249.000 --> 1264.000] Partly because of some of the limitations of the populations that have been studied. So a lot of times what we see is that early studies really were comparing kids with nonverbal learning disabilities to kids with verbal learning disabilities, so reading disorders or dyslexia.
+[1264.000 --> 1275.000] We don't have a great consensus as a field on how to define, or the criteria that are necessary for, nonverbal learning disability, and so that makes it very challenging to replicate research.
+[1275.000 --> 1285.000] And as you guys know, one of the great foundations of good research is this idea that we can replicate it over and over again, to prove that it's an appropriate diagnosis.
+[1285.000 --> 1297.000] There have also been what we call methodological issues with previous research, so many of the studies haven't included what we call a neurotypical or control group.
+[1297.000 --> 1307.000] There haven't been exclusionary criteria, so the studies have involved a lot of other kiddos who might be at greater risk for neurocognitive weaknesses just based on their medical diagnosis.
+[1307.000 --> 1317.000] So kids who've had seizures, or kids who've had traumatic brain injuries, which might not get us a clear idea of what nonverbal learning disability might look like from a developmental standpoint.
+[1317.000 --> 1331.000] And we talked a little bit about the diagnostic criteria, and then small sample sizes: a lot of these studies have included a very, very small group of people, which again makes it very hard to generalize out to larger populations.
+[1331.000 --> 1350.000] So our research at this point has led us to a lot of different questions. Is nonverbal learning disability any different from just having a math learning disability, or are the weaknesses that we see in visual constructional skills just part of having a math learning disability?
+[1350.000 --> 1362.000] What's the influence of attention on social perception? So is it that kiddos with some of these social skill weaknesses really truly just have difficulties with attention and executive functioning skills?
+[1362.000 --> 1370.000] Is nonverbal learning disability any different from an autism spectrum diagnosis, or is that really the more appropriate diagnosis?
+[1370.000 --> 1385.000] And are there subtypes of nonverbal learning disability? So is this a true diagnosis, but with so much variability in the individuals that we've studied that maybe there really are different levels of this kind of diagnosis and different sets of weaknesses?
+[1385.000 --> 1393.000] But as you start to get into a very small population of kiddos, carving out subtypes becomes even more challenging, as you can imagine.
+[1393.000 --> 1414.000] So lots and lots of questions, and I think there's been a lot of recent, or more recent, kind of conversation about this in the neuropsychological field, in which a lot of the neuropsychology community has started to kind of challenge this nonverbal learning diagnosis as an appropriate diagnosis.
+[1414.000 --> 1428.000] And the idea behind some of the recent studies, by Pennington and by Ris and Nortz, was: is it really its own diagnosis, or are we pulling together a number of different weaknesses and calling them one thing?
+[1428.000 --> 1435.000] And is that helpful, or is it better to really recognize some of these other labels that are already out there?
+[1435.000 --> 1446.000] Is it a unique syndrome? And then kind of this idea of the diagnostic classification, that we just don't have a great understanding of how we diagnose this consistently across providers.
+[1446.000 --> 1456.000] So that being said, in the scientific field, I think there are lots of conversations about why this might not be a great diagnosis, or why there are some concerns about using this diagnosis.
+[1456.000 --> 1472.000] But in the lay culture we have seen an explosion of interest in this field. And so we've actually seen 14 books for parents and teachers published since 2000, which I realize is not within the last several years.
And so within the last several years we've actually seen 14 books for parents and teachers published since 2000, which I realize is not within the last several years.
+[1472.000 --> 1484.000] But we also have seen nine within the last three years. So the vast majority of publications that are coming out, that are being published, have actually happened within recent history.
+[1484.000 --> 1493.000] And then we have two websites specifically dedicated to families and helping families understand and advocate for their children who might have these neurocognitive weaknesses.
+[1494.000 --> 1513.000] So I think that there's a lot of interest within the general population about this diagnosis. And I think there's a number of research groups that are trying to kind of make some headway in really understanding what is happening and how do we understand this as a diagnosis.
+[1514.000 --> 1522.000] So that being said, you know, we've talked a lot about challenges. I've talked a lot about kind of the limited empirical support for this.
+[1522.000 --> 1530.000] There are many researchers and clinicians who will say there is a cognitive profile that includes the weaknesses that we've talked about.
+[1530.000 --> 1541.000] So the nonverbal or the visual spatial weaknesses, the math difficulties, the fact that these kiddos present with social deficits that look very different from an autism spectrum diagnosis.
+[1541.000 --> 1565.000] And importantly, we do see these motor weaknesses. So clinicians and researchers out there are understanding it. But I think that our understanding of nonverbal learning disability is likely going to expand in the future years as we continue to do more research in this area and recognize that parents and families have questions about this diagnosis and the applicability of it.
+[1565.000 --> 1574.000] So that being said, I think that because we recognize these weaknesses, one of the things that's really important to do is to talk about the interventions.
+[1574.000 --> 1585.000] So whether we call it a math learning disability, or we call it a visual constructional disability, what can you do to support your or your child's kind of weaknesses in these areas?
+[1585.000 --> 1598.000] So really we think about kind of the major areas of potential vulnerability. And so oftentimes we start with kind of the motor. We'll talk a little bit about the speech language, the academic and the social.
+[1598.000 --> 1605.000] And within the context of each of these, I think it's really important to think about intervention in a couple of different ways.
+[1605.000 --> 1619.000] So we talk about, Sue Thompson has talked about this idea of CAMS: compensation, accommodations, modifications and strategies. So not looking at really just what we can make the child do differently,
+[1619.000 --> 1626.000] but how we can change the environment to better suit the child's needs or unique learning profile.
+[1626.000 --> 1645.000] So we first talk about motor interventions. So oftentimes, because I'm a neuropsychologist and I'm not an expert in fine motor skills, I will refer to my colleagues, occupational therapists, who have amazing exercises and skills and specific academic accommodations that I think can really help.
+[1645.000 --> 1657.000] So I think that any child where there's the consideration of a nonverbal learning disability should really be seen by or work with an occupational therapist for further evaluation and intervention.
+[1657.000 --> 1671.000] But the general goal would be to reduce some of those fine motor demands. So within a classroom, for school age children, we would really advocate for or recommend kind of note-taking support.
+[1671.000 --> 1689.000] So this could be as simple as the teacher providing a copy of the notes, so that the student is able to keep up with the lectures and doesn't struggle at being able to really understand what's happening while having to write down everything at the same time.
+[1689.000 --> 1704.000] And we also talk about having a scribe, so as an individual kind of progresses through school and goes into a college setting or on to specific kinds of standardized testing, kind of advocating for someone who can write for them under timed conditions.
+[1704.000 --> 1725.000] And there's also a number of technology resources that are available for kiddos who might have some of those fine motor demands. So one would be something like a smartpen that kind of helps with in-class note taking, and we'll talk about Dragon NaturallySpeaking, which is a kind of dictation software, which can help reduce the motor demands of having to write out an essay.
+[1725.000 --> 1739.000] And Dragon is an amazing tool, but does take a substantial amount of time to train. And so recognizing that it can be a useful tool, but we tend to recommend it for kiddos in later grades just because it can be so tricky to use.
+[1739.000 --> 1757.000] We also talk about allowing a child to orally present knowledge, so rather than having to write huge essays, you know, giving a little speech to the class or giving a presentation to the teacher, so that they're able to convey what they know without really having to get bogged down with the writing aspect.
+[1757.000 --> 1768.000] So, allowances to type assignments, and keyboarding instruction, thinking that those are going to be ways to kind of compensate for or work around some of those fine motor weaknesses.
+[1768.000 --> 1786.000] And then of course, this idea of extended time. So knowing that kiddos who might have fine motor weaknesses are going to need extra time to complete writing assignments, they may need extra time at home to get dressed, that they may be someone who really struggles with buttons or zippers, and that rushing them in the morning might not be as exciting of an experience for them,
+[1786.000 --> 1805.000] or everyone involved. Questions or thoughts so far? Are you guys hanging in there? All right. So academic interventions. So we talk about, you know, for kiddos who really are exhibiting what we would consider a specific learning disability in mathematics,
+[1805.000 --> 1818.000] we would recommend additional instruction, and not just repeating the same kind of instruction that they've had within the general education classroom, but really looking at modifying instruction.
+[1818.000 --> 1831.000] And within Colorado, and I'm not sure for a number of states outside of Colorado as well, what we're seeing is that we are using a math instructional pattern that's really heavily reliant on visual spatial skills.
+[1831.000 --> 1853.000] For those of you families in Colorado, you may notice that your kiddo is drawing lots of little circles in some of their early math work. And that specific kind of instruction, I've noticed that many of our families, many of our patients, are struggling with; kind of using visual representations of numbers seems to bog them down even a little bit further.
+[1853.000 --> 1880.000] And so really helping teachers to recognize that these families or these patients, these students, might need more or different kind of tailored instruction. So the idea of moving from a concrete or a physical representation to more abstract reasoning. So giving those manipulatives, allowing them to have some visual representation within a solid format, that really helps them be able to generalize those math skills.
+[1880.000 --> 1895.000] Looking at relating new information to prior knowledge, so really helping them tie what they're learning in the moment to what they've learned in the past, and providing multiple opportunities to use math information to solve real life problems.
+[1895.000 --> 1907.000] So tying it to baking, tying it to shopping, tying it to budgeting, those kind of functional academic skills that are a little bit easier to encode into your memory.
+[1908.000 --> 1933.000] There's also a couple of teaching strategies that we can look at, so cover-copy-compare and what we call incremental rehearsal, which are kind of what they sound like. But this idea of kind of really teaching specific interventions and really helping individuals be able to learn that information within the context of a spiraling instructional strategy.
+[1933.000 --> 1946.000] So starting with one task, once they've mastered that moving on to another, and then coming back and revisiting that information, so that you're hopefully kind of solidifying those skills before you're going forward, and not losing them over time.
+[1946.000 --> 1962.000] And then of course this idea of academic accommodations or modifications. So breaking down math concepts or math problems into individual steps, teaching and rehearsing those steps independently, creating a notebook that the child can kind of refer back to,
+[1962.000 --> 1978.000] with a number of basic kind of mathematical procedures, and then providing kind of a math facts chart or a calculator, so that some of those basic rote math skills aren't things that hold the child back from learning some of the higher level math computations.
+[1979.000 --> 1999.000] We also within this population can see some evidence for difficulties with reading comprehension. So with those difficulties, it tends not to be the factual kind of information that's obtained through a passage, but it tends to be more the inferential questions that are asked.
+[1999.000 --> 2009.000] So it's: what if? What would happen? What do you think? You know, hypothesis testing, those kinds of things, those tend to be the areas of vulnerability.
+[2009.000 --> 2019.000] And so really teaching strategies to help support reading comprehension. And a lot of it is just kind of developing questions about the story beforehand.
+[2019.000 --> 2029.000] So: I'm going to read a story about butterflies, kind of what kind of information might you expect? Kind of building a framework for you to be able to hang that information on, if you will.
+[2029.000 --> 2042.000] Review the chapter outlines and headings, so looking through and kind of getting an idea of what might we be paying attention to, what are going to be the main facts or the main topics of the information that we're going to read.
+[2042.000 --> 2057.000] Reading a summary and the end-of-the-chapter questions first, so that you can kind of understand what information might be important to focus on, identifying themes, pre-teaching any vocabulary words that you think might be new.
+[2057.000 --> 2066.000] So if there's any specific information that the child might not have had exposure to, being able to identify that and help support the development of that skill beforehand.
+[2066.000 --> 2075.000] And then learning to summarize important ideas. So this is something that we spend a fair amount of time on for kiddos in general education.
+[2075.000 --> 2089.000] But I think, again, that focus on making inferences and that non-literal interpretation of written material is going to be something that girls with Turner syndrome might benefit from more exposure to or more focus on.
+[2089.000 --> 2102.000] And then reducing the amount of information that is expected to be processed or comprehended at one time. So if we know that a big book report is due in two weeks, we might not want to read it all the night before.
+[2102.000 --> 2115.000] But really allowing for this idea of kind of breaking it down, and having good in-depth conversations about not only the factual information, but any of the concepts or themes that are presented.
+[2115.000 --> 2121.000] And the other idea surrounds kind of these visual spatial interventions that we can see.
+[2121.000 --> 2138.000] So again, what we talk about is, when kiddos are struggling with visual spatial deficits in the academic setting, what we can see the most notable difficulties with is interpretation of graphs and charts, keeping numbers lined up in math problems, and things like that.
+[2138.000 --> 2143.000] And the difficulties that we see can really be with the orientation of some of these problems.
+[2143.000 --> 2158.000] And so we talk about using not only a visual but an oral kind of component to these instructional strategies. So helping teach a child to talk through a visual graph, so that they're able to understand the information that's being presented.
+[2158.000 --> 2169.000] And giving them kind of direct instruction in these compensatory strategies. So verbally explaining, to support these activities that require complex visual spatial thought.
+[2169.000 --> 2178.000] We also talk about various methods to solve problems. So instead of just having the one method, really teaching them a number of different strategies.
+[2178.000 --> 2185.000] So when you look at a graph, you could look at this bar, you could look at that bar, and it's going to tell you different information.
+[2185.000 --> 2191.000] And what kinds of different information can you glean from a graph that might be helpful?
+[2191.000 --> 2199.000] Because you could say that 50% of people with this disorder have learning problems, but 50% of them have math problems.
+[2199.000 --> 2208.000] And so how can you really look at that other information, if you're thinking of like a bar chart or something like that, if that makes sense.
+[2208.000 --> 2213.000] And then we talked about kind of those modifications for both the motor and the visual spatial weaknesses.
+[2213.000 --> 2226.000] So note-taking support, keyboarding assistance, allowances to type assignments, and then using dictation software like Dragon NaturallySpeaking.
+[2226.000 --> 2242.000] Some of the specific spatial vulnerabilities that we can see would be things that have been recommended by Sue Thompson, this idea of providing a verbal route to help orient kiddos who might get lost within new kind of environments or new situations.
+[2242.000 --> 2252.000] So talking them through: hey look, there's this picture on the wall right outside of your classroom, or you have to go to the third hallway, look, it has green carpet.
+[2252.000 --> 2267.000] That's another way to kind of help them verbally orient themselves within a space, so that they're able to kind of utilize those verbal cues when they might feel disoriented, if they experience that in, like, a new school setting.
+[2267.000 --> 2281.000] We also talk about having a buddy that might be able to watch out for them and kind of direct them in new situations, when they're going to the zoo or if they're on a field trip or at recess, to kind of be able to help them get back where they need to be.
+[2281.000 --> 2310.000] And then rehearsing. So this is especially a big deal for kind of our middle school students, who've been in a small kind of nice environment in elementary school and suddenly they're being asked to transition between classrooms, and really helping support that by having this repeated exposure to what that looks like and how they're able to kind of walk through these new environments, and allowances to have that time ahead of schedule before they start on the first day.
+[2310.000 --> 2328.000] All right, guys, just keeping up with my notes here. So social skills are one of the main areas that have also been identified as potential areas of weakness, and again, as a clinician, it does feel like when you see these patients that they look very different from patients who have an autism spectrum disorder.
+[2328.000 --> 2338.000] And the research has really suggested that there's varying results with that. So sometimes it looks like there's a lot of overlap, and other studies would say maybe not as much.
+[2338.000 --> 2349.000] But I think a lot of the social skill interventions that we have designed as a field can be very effective for girls who have nonverbal learning disabilities, if that's appropriate.
+[2349.000 --> 2367.000] So one would be kind of therapeutic interventions like a friendship group at school, where a school counselor or mental health provider is able to really bring in a group of kiddos, some with social skills difficulties and some children who don't have those weaknesses, and really help them learn how to build good relationships.
+[2367.000 --> 2376.000] We talk about social stories. Social stories are kind of these therapeutic narratives that we help a child develop to address specific areas of vulnerability.
+[2376.000 --> 2382.000] So it may be that they tend to get bullied and they don't always recognize or know how to react to that.
+[2382.000 --> 2389.000] Or it may be, as someone gets older, that they just don't always seem to understand that someone might not have the best intentions.
+[2389.000 --> 2399.000] And so helping them kind of walk through: if someone says this to you, you should say this, and kind of giving them a script, if you will, to be able to respond to those statements.
+[2399.000 --> 2416.000] And then structured kind of play dates. So allowing your child to have someone in your home where you can kind of monitor and facilitate those good relationships with other peers, and recognizing that you might need to be a little bit more of a part of that than you would otherwise be.
+[2417.000 --> 2423.000] So we also talk about speech language interventions. So again, a lot of these kiddos do quite well from a verbal standpoint.
+[2423.000 --> 2432.000] So they have really well developed vocabulary, pretty good kind of understanding of basic kind of language-based academics like reading and spelling.
+[2432.000 --> 2441.000] But we do see this difficulty with pragmatic language and understanding kind of non-literal statements, as well as kind of the reading comprehension and social skills.
+[2442.000 --> 2459.000] And I think those three areas are where we can really draw on the literature and the support of our colleagues in the speech language field, where they really are used to working with families and helping kiddos develop specific compensatory strategies and interventions for these areas.
+[2460.000 --> 2479.000] So I like to spend a little bit of time on just the literal interpretation. So I don't know, for those of you guys who have kiddos, or those of you who have experienced these situations in the past, of really understanding statements like "hit the road, Jack" or "I wasn't born yesterday" or some of those kind of statements that you can hear.
+[2480.000 --> 2490.000] So what we think of is this idea, oh, thank you, perfect, thanks. Explaining things that might be misinterpreted. So as a parent, the kid might be like, what?
+[2490.000 --> 2500.000] So thinking of, I have a little three year old at home, and someone told him, gosh, it's just going to explode. And he was like, what? It's going to explode?
+[2500.000 --> 2512.000] And I was like, oh, wait, wait, that's not really what I mean, and it's not going to explode. It's stepping back and kind of talking through what you meant when you used that verbiage, to help them be able to understand.
+[2512.000 --> 2525.000] Simplifying or breaking down those abstract concepts. So again, gosh, when I said "hit the road, Jack," I meant, gosh, it's just time to go, we've got to get out the door, you know, and helping them understand that that's why you said that and what you meant when you said that.
+[2525.000 --> 2532.000] So rather than not using them, really just helping them understand what you're saying when you're using that non-literal language.
+[2532.000 --> 2542.000] Understanding metaphors, nuances in our emotional language, and multiple meanings, I think, is something that may not be understood as readily.
+[2542.000 --> 2549.000] So really taking the time and recognizing that as a potential area of vulnerability, and really helping the teachers see that as well.
+[2549.000 --> 2569.000] Because especially as we get into some of the higher level academic readings, that becomes more of a common theme in some of the books: really not taking the literature at face value, but really having to make these inferences, or this understanding of these concepts or these themes that are running through the books that might not be as explicitly stated.
+[2569.000 --> 2590.000] And then teaching the child to advocate for herself, right? "I don't understand what you mean when you say that," and giving her the words to be able to say that, and the ability to advocate when she's in a situation that might feel uncomfortable, because she might know she's missing something but isn't quite sure what's been missed.
+[2590.000 --> 2599.000] And giving her the vocabulary to help her just decipher kind of what you might have meant. So kind of using those problem solving skills of, gosh,
+[2599.000 --> 2608.000] two birds in the, what is it? So, a bird in the hand is worth more than two in the bush, right? So gosh, what does that mean?
What do you think that means? Let's walk through that together.
+[2608.000 --> 2615.000] And giving her some of those problem solving exercises rather than just completely staying away from them.
+[2615.000 --> 2630.000] All right. So then, emotional interventions. And again, you know, there's been varying levels of support, but we do tend to see anxiety as something that is common, or that girls with Turner syndrome are at higher risk for.
+[2630.000 --> 2640.000] So I think monitoring stress and anxiety, especially in academic settings, is something that's very important. Knowing that she might need more support as she has big transitions.
+[2640.000 --> 2649.000] And so right before middle school, right before high school, at the beginning of the school year, recognizing that those can be areas, or times, that are particularly stressful.
+[2649.000 --> 2661.000] And considering psychotherapeutic interventions when necessary. So if your child seems to be struggling, feeling like it's completely normal to reach out to mental health providers who can give you guys some ideas.
+[2661.000 --> 2675.000] You're amazing parents, right? You guys have all been here. You're doing a good, good job. But sometimes it's helpful to have other people on your team who can really give you some ideas that might not come readily to you while you're in the middle of the forest, right?
+[2675.000 --> 2685.000] And then ensure participation in activities that the child feels successful at. So we spend a lot of time talking about what we should do to strengthen areas of vulnerability.
+[2685.000 --> 2697.000] But I think it's equally as important to strengthen those skills that a child feels very successful and very competent in. And we don't spend as much time talking about that, but that's very important as well.
+[2697.000 --> 2706.000] So, a couple of resources that are out there on the internet for you guys. So one would be nldline.com.
+[2706.000 --> 2722.000] And then a site that was established by Dr. Rourke before his passing, that can be a helpful website to look at and kind of get some further information about how people are interpreting and feeling about nonverbal learning disability.
+[2722.000 --> 2742.000] And looking at those websites, several of them have very active family involvement. And so there's a number of resources and phone calls and talks that are presented throughout the year at a variety of different sites. And so if you're looking for ways to advocate for your child within your home state, that might be an option for you.
+[2742.000 --> 2756.000] Some books that have been kind of well received by the kind of community would be The Source for Nonverbal Learning Disorders. This is probably one of the most commonly referenced books.
+[2756.000 --> 2771.000] It's quite dated at this point, but I think it's still something that families are reporting that they feel pretty well supported by. But there are a couple of more recent ones, by Tanguay and by Davis and Broitman, that might be helpful.
+[2771.000 --> 2789.000] And then some references for all the scientific literature that I made you guys suffer through. So I will leave some of the books out, because I think that's probably one of the most helpful things.
+[2789.000 --> 2802.000] And we do have a little bit of time for questions, I think, right? So about 10 minutes for questions. Okay, so I don't know if you guys have any specific questions or thoughts about the presentation at all. Yes.
+[2802.000 --> 2816.000] [Audience question, partly inaudible: about a daughter who has trouble staying seated in her chair.] Okay.
+[2816.000 --> 2827.000] [Audience continues, partly inaudible: the teacher has raised these issues; should she stay back a year, or what should they do?]
+[2827.000 --> 2838.000] So I think that's a great question. So I think, any time before we would see a child held back, we would have liked to see an evaluation of some kind to say what are her learning strengths and weaknesses.
+[2838.000 --> 2859.000] And is she having trouble? Why is she having trouble staying seated in her chair? I think that having an evaluation with a neuropsychologist or a developmental kind of psychologist would probably be a really good fit at this point, so that you can understand not only what her thinking skills are, but also what her academic and kind of those visual spatial weaknesses are, as well as her attention.
+[2859.000 --> 2868.000] So it sounds like some of that motor restlessness that you're experiencing, or that you're seeing in class for her. Are you seeing it at home as well?
+[2868.000 --> 2884.000] Okay. Yeah. Sure. Okay. Okay. And so there's lots of potential hypotheses, if you will. So kiddos who are anxious look very fidgety.
+[2884.000 --> 2894.000] Kiddos who are young have a wide range of normal kind of motor activity, but we also could potentially be concerned about kind of a diagnosis of an attentional disorder.
+[2894.000 --> 2910.000] And that, you know, I think people think, oh my gosh, you're going to tell me to medicate my child, and that's never the first place we go with young children. And so really looking at structuring kind of the environment and giving good behavioral strategies, so that we're able to support her attention within the classroom.
+[2910.000 --> 2927.000] And then, knowing that attention to task is so highly correlated to academic success, if we're not able to regulate those attentional behaviors, or that impulsivity or hyperactivity if you will, then looking at kind of the other potential psychopharmacological interventions that might be available for you.
+[2927.000 --> 2937.000] But at this stage, I would really encourage you to consider kind of reaching out to a neuropsychologist, and I'm happy to help you find that if that would be beneficial. Yeah, absolutely. Yes, ma'am.
+[2937.000 --> 2946.000] [Audience question, partly inaudible: how common is this diagnosis, and at about what age is it typically made?]
+[2946.000 --> 2955.000] Those are excellent questions. The literature that I know wouldn't suggest that we have that well developed of an understanding of this diagnosis at this point.
+[2955.000 --> 2967.000] And because, I think, of that discrepancy between the criteria that have been used, I don't think that we've seen as much research into specific diagnoses in terms of base rates and things like that at this point.
+[2967.000 --> 2982.000] My thought about the age of diagnosis was that it's probably more likely to occur as math difficulties become more obvious. And so, again, because there is such a wide range of normal in preschool and kindergarten kids in terms of math concepts, I could see,
+[2982.000 --> 2989.000] you know, later identification of math learning disabilities, and then kind of seeing some of those concepts fall into place as well.
+[2989.000 --> 2993.000] But I don't know if we have great literature about that. Yes.
+[2993.000 --> 3001.000] Is it possible for someone to have this and not have the issue with math, but have everything else?
+[3001.000 --> 3004.000] Except the math learning disability.
+[3004.000 --> 3020.000] She does okay. So I think that you're highlighting one of these ideas: are there subtypes, right? Are there people that struggle with, you know, the visual spatial but not the math?
+[3020.000 --> 3024.000] Are there people that struggle with the math but not the social?
+[3024.000 --> 3036.000] And I think that one of the things that's really important to recognize about Turner syndrome is that it may just be that the diagnosis alone is enough for us to say that we know that these cognitive weaknesses are associated.
+[3036.000 --> 3043.000] And so do we need another diagnosis? And there's been some argument in the field to say, isn't this medical diagnosis enough?
+[3043.000 --> 3053.000] And oftentimes, that's how I advocate for families within special education teams: not to say let's call this a specific learning disability, but let's call this other health impaired,
+[3053.000 --> 3061.000] which is that special education classification that I think helps us be able to capture the constellation of weaknesses.
+[3061.000 --> 3072.000] So not only the math learning disability, whether or not it's present, but also the visual spatial risks, some of the motor concerns, and then the social stuff that I oftentimes don't think falls into an autism diagnosis.
+[3072.000 --> 3077.000] And oftentimes, they're not meeting criteria for an autism diagnosis.
+[3077.000 --> 3084.000] So that is usually how we're reaching out and kind of trying to advocate for the families, within the Colorado setting at least.
+[3084.000 --> 3091.000] [Audience comment, partly inaudible: their team needed criteria for a plan, and since she didn't have a recognized diagnosis, that was the only thing they could do.]
+[3091.000 --> 3097.000] Yeah, yeah. And I think that that, in a lot of ways, is the easiest way to advocate as a family, right?
+[3097.000 --> 3103.000] But, you know, I mean, we could have some additional information as we keep going and learning more about this diagnosis. Yes, ma'am.
+[3103.000 --> 3112.000] You said that the concept of time can be a challenge. Do you know of strategies that help develop that?
+[3112.000 --> 3124.000] Yeah, I think that's a great question, and probably a question that I would refer back to some of the educational teachers and tutors, because I think teaching time is something that we spend a fair amount of time on in kind of the early grades.
+[3124.000 --> 3132.000] And so I think kind of referring back to some of those providers and asking what we can do. I would think about, like, the visual spatial demands of that, right?
+[3132.000 --> 3140.000] So teaching someone kind of that traditional clock and recognizing that it might be really hard: short hand, long hand, which one's pointing where?
+[3140.000 --> 3143.000] That maybe just going to a more digital kind of clock.
+[3143.000 --> 3149.000] Oh, of how long? Yes.
+[3149.000 --> 3163.000] That she doesn't have a sense of how much time she has. That, I think, can involve a lot of executive functioning skills perhaps, so planning and organization and kind of recognizing, you know, how long did this task take me in the past?
+[3163.000 --> 3171.000] How long is it going to take me in the future?
And that kind of a topic, I think, would be like a whole nother lecture quite frankly, in terms of those executive functioning skills.
+[3171.000 --> 3178.000] But there are a couple of books. So the Smart but Scattered series is one of the great kind of books to look at for executive functioning skills.
+[3178.000 --> 3191.000] I don't know if they have specific interventions or accommodations for time instruction, but that would be kind of where I would refer you, in terms of a parent-friendly kind of way to think about executive functioning strategies. Yes, ma'am.
+[3191.000 --> 3195.000] Oh, sorry.
+[3195.000 --> 3200.000] Yes.
+[3200.000 --> 3205.000] Uh-huh.
+[3205.000 --> 3215.000] [Audience comment, partly inaudible: every year at our annual meeting, the school tells us they don't recognize it.]
+[3215.000 --> 3219.000] But I said, well, she's got some weaknesses. You better recognize them.
+[3219.000 --> 3223.000] Right.
+[3223.000 --> 3227.000] Sure.
+[3227.000 --> 3240.000] Rather than just going in there and blabbering on, should I just leave it at Turner syndrome, or should I try to come up with one of those 13 categories that are on the list?
+[3240.000 --> 3246.000] So that's why we advocate for other health impaired.
+[3246.000 --> 3248.000] Yeah.
+[3248.000 --> 3255.000] So I actually formally trained as a school psychologist. I spent a fair amount of time sitting on the other side of the table in terms of IEP meetings.
+[3255.000 --> 3261.000] And I think that there are boxes that have to be checked, and there is not a box for NLD.
+[3261.000 --> 3266.000] And there's a box for math learning disability, but that's only a piece of the puzzle for a lot of these students.
+[3266.000 --> 3274.000] And so I think that really going under the classification of other health impaired opens up the door for more interventions.
+[3274.000 --> 3276.000] And it also, thank you.
+[3276.000 --> 3277.000] Thanks so much.
+[3277.000 --> 3282.000] And it also, I'm sorry, I lost my train of thought.
+[3282.000 --> 3291.000] It also is going to minimize the battle that you're going to have every year, because her other health impaired category is never going to change, right?
+[3291.000 --> 3292.000] Yeah.
+[3292.000 --> 3299.000] She's good.
+[3299.000 --> 3301.000] Right, right.
+[3301.000 --> 3302.000] Yeah.
+[3302.000 --> 3309.000] So I think it's harder to discontinue an IEP with an other health impaired classification, has been my experience.
+[3309.000 --> 3317.000] If a child is doing academically well and we don't think that they need the accommodations any longer, then it is harder to advocate for an IEP.
+[3317.000 --> 3327.000] But if we know that she has these consistent cognitive vulnerabilities, which are not seen in every kiddo with Turner syndrome, but if that is something that she continues to exhibit,
+[3327.000 --> 3331.000] and we think that there's a direct influence on how well she can do academically,
+[3331.000 --> 3343.000] then I think there's a lot of support to continue an IEP. But other health impaired seems to be, in my personal experience, an easier way to advocate for continuation of those services, rather than the specific learning disability.
+[3343.000 --> 3344.000] Yes, ma'am.
+[3344.000 --> 3355.000] [Audience question, partly inaudible: she's considering a dual-language program for her five-year-old and wonders whether that's a good idea, given what the girls are dealing with.]
+[3355.000 --> 3356.000] Sure.
+[3356.000 --> 3357.000] Yeah.
+[3357.000 --> 3362.000] [Audience continues, partly inaudible: would that kind of program hurt her, or hurt her math?]
+[3362.000 --> 3365.000] So I wouldn't think that it should hurt her math.
+[3365.000 --> 3372.000] I mean, I think that I would want to make sure that she has some additional focus on math, and she's getting some good evidence-based instruction in math.
+[3372.000 --> 3373.000] But...
+[3373.000 --> 3376.000] Now, is math class in Spanish, or in the other language?
+[3376.000 --> 3377.000] Oh.
+[3377.000 --> 3384.000] [Audience reply, partly inaudible: when they first start, I think it's not Spanish, I think it's in English each year.]
+[3384.000 --> 3386.000] I think it's in English.
+[3386.000 --> 3387.000] Okay.
+[3387.000 --> 3391.000] And so some of the programs that I'm aware of in Colorado have, like, half-day programs,
+[3391.000 --> 3397.000] where in the morning it's dual language, and then in some of the afternoon programs specific concepts like math are taught in English,
+[3397.000 --> 3402.000] but that doesn't sound like that's the program she's thinking about.
+[3402.000 --> 3403.000] Okay.
+[3403.000 --> 3404.000] Okay.
+[3404.000 --> 3411.000] So, I mean, one option would be to give it a try, and to see how it goes, and then to supplement with some math instruction at home,
+[3411.000 --> 3419.000] either kind of individually, or kind of hiring kind of a tutor, to make sure that she's just continuing to acquire those concepts at the rate that you might expect.
+[3419.000 --> 3426.000] But I think that dual language would play to some of her verbal strengths, and is a great asset to have as you're going forward as an adult.
+[3426.000 --> 3434.000] So, I wouldn't say don't do it, I would just say be very mindful, and keep a very close eye, and all schools are required to do kind of curriculum-based measurement,
+[3434.000 --> 3438.000] and so you'll be able to see how she's doing compared to her peers on math.
+[3438.000 --> 3448.000] And if she starts to seem like she's not making the progress we would expect, or you're seeing a decline, then that would be when I would encourage you to maybe seek out other resources, or consider a change of schools.
+[3448.000 --> 3450.000] And then, I know I missed you last time, so I'm going to jump.
+[3450.000 --> 3452.000] I said I'd come in with a little story.
+[3452.000 --> 3454.000] What would you do for the sense of time?
+[3454.000 --> 3456.000] About the... what, I'm sorry?
+[3456.000 --> 3461.000] About the not being able to get the concept of time.
+[3461.000 --> 3462.000] Oh, okay.
+[3462.000 --> 3467.000] [Audience shares, partly inaudible: her daughter didn't grasp time at the same rate, so she relates it to things her daughter knows.]
+[3467.000 --> 3473.000] I'd say, this day, one month, and one month, or two months, whatever, that's like a lot.
+[3473.000 --> 3475.000] And then, I'll miss you.
+[3475.000 --> 3478.000] I like that strategy, that's nice. Thank you for sharing.
+[3478.000 --> 3481.000] Anyone else have any other questions or thoughts?
+[3481.000 --> 3484.000] All right. Well, thank you guys so much. I appreciate it.
+[3484.000 --> 3485.000] I know it's a...
+[3485.000 --> 3486.000] Thank you very much.
diff --git a/transcript/allocentric_xG1zuIXC9dc.txt b/transcript/allocentric_xG1zuIXC9dc.txt
new file mode 100644
index 0000000000000000000000000000000000000000..0b8b13065190d75bd66f18213def991705665b8d
--- /dev/null
+++ b/transcript/allocentric_xG1zuIXC9dc.txt
@@ -0,0 +1,3 @@
+[0.000 --> 1.760] geben со своими армиямиvo
+[10.720 --> 11.480] перебли
+[12.580 --> 21.940] я неrig
diff --git a/transcript/allocentric_yNvSfYxAes8.txt b/transcript/allocentric_yNvSfYxAes8.txt
new file mode 100644
index 0000000000000000000000000000000000000000..8b15b3dd9eb56f6ee8da7bdcc3d209204a4e4142
--- /dev/null
+++ b/transcript/allocentric_yNvSfYxAes8.txt
@@ -0,0 +1,95 @@
+[0.000 --> 13.120] The sense of navigation in the animal kingdom is a mystery in neuroscience.
+[13.120 --> 15.960] How does the brain create a map of its surroundings?
+[15.960 --> 20.760] How do we maintain our sense of direction when we encounter an obstacle?
+[20.760 --> 25.720] In the 1980s, scientists discovered a group of cells called head direction cells that
+[25.720 --> 29.280] help us know our angular orientation.
+[29.280 --> 34.680] Like compass needles, these cells indicate the angle in which the head is pointed.
+[34.680 --> 38.920] But what happens when we are thrown off course by an unseen force?
+[38.920 --> 43.320] And we are trying to navigate in a direction that is different from the way that our head
+[43.320 --> 45.200] is pointed.
+[45.200 --> 50.120] In this episode we talk about the discovery of special neurons in our brains that carry
+[50.120 --> 55.680] out complex vector math to keep track of the direction in which the body is moving,
+[55.680 --> 59.200] regardless of which way the head is pointing.
+[59.200 --> 62.720] I am Mohana Basu and this is Pure Science.
+[62.720 --> 67.840] Consider a fly being shunted backward by a strong wind in defiance of its forward beating
+[67.840 --> 68.840] wings.
+[68.840 --> 75.640] A fish swimming upriver, or crabs scuttling sideways, even humans moving left while looking
+[75.640 --> 78.920] to the right, present similar challenges.
+[78.920 --> 84.400] A new study published in the journal Nature reports that the fly brain has a set of neurons
+[84.400 --> 89.200] that signal the direction in which the body is traveling, regardless of the direction
+[89.200 --> 92.240] in which the head is pointing.
+[92.240 --> 98.080] The findings by researchers from Rockefeller University in the US also describe in detail
+[98.080 --> 104.640] how the fly's brain calculates this signal from more basic sensory inputs.
+[104.640 --> 109.720] Even when we close our eyes, we can usually retain a good idea of where we are in the room
+[109.720 --> 111.760] and which way we are facing.
+[111.760 --> 117.640] That's because our brain constructs an internal understanding of where we are in space.
+[117.640 --> 123.400] The head direction cells play a key role in letting us know our angular orientation, and
+[123.400 --> 126.840] flies too have cells with similar functions.
+[126.840 --> 131.600] The cells' activity indicates the angle in which the head is pointing.
+[131.600 --> 137.400] These cells would be sufficient when we are walking, or flies are flying, in the same direction
+[137.400 --> 139.160] that the head is facing.
+[139.160 --> 144.080] The head direction cells update the internal sense of where one is going.
+[144.080 --> 149.240] But if we walk north while facing east, or if a fly attempts to fly forward while the
+[149.240 --> 154.720] wind pushes it backward, the head direction cells point in the wrong direction.
+[154.720 --> 158.120] Yet somehow the system still works.
+[158.120 --> 164.280] Flies are relatively unperturbed by the indignities of wind currents, and humans don't get lost
+[164.280 --> 168.680] when we look around to take in the scenery during a walk.
+[168.680 --> 173.960] So researchers wondered how flies know where they are going even when their head direction
+[173.960 --> 178.200] cells were seemingly relaying inaccurate information.
+[178.200 --> 183.400] For the study, the team glued fruit flies to miniature harnesses that hold only the insects'
+[183.400 --> 188.520] heads in place, enabling them to record brain activity while leaving the flies free
+[188.520 --> 193.240] to flap their wings and steer their bodies through a virtual environment.
+[193.240 --> 198.400] The setup contained several visual cues, including a bright light representing the
+[198.400 --> 203.120] sun and a field of dimmer dots that could be adjusted to make the fly feel like it was
+[203.120 --> 206.480] being blown backward or sideways.
+[206.480 --> 212.240] The team found that the head direction cells consistently indicated the flies' orientation
+[212.240 --> 218.760] to the sun, which here was simulated by the bright light, independently of the perceived
+[218.760 --> 220.440] motion.
+[220.440 --> 226.240] In addition, the researchers identified a new set of cells that indicated which way the
+[226.240 --> 230.760] flies were travelling, and not just the direction their head was pointing.
+[230.760 --> 237.680] For example, if the flies were oriented directly to the sun in the east while being blown backward,
+[237.680 --> 241.960] these cells indicated that the flies were travelling west.
+[241.960 --> 247.660] This is the first set of cells known to indicate which way an animal is moving in a world-
+[247.660 --> 252.300] centered reference frame instead of a head-centered reference frame.
+[252.300 --> 257.900] The team also wondered how fly brains compute the animal's travel direction at the cellular
+[257.900 --> 258.900] level.
+[258.900 --> 265.860] And the team was able to demonstrate that the fly brain engages in a sort of mathematical exercise.
+[265.860 --> 271.420] In physics, when we plot an object's trajectory it is broken into components of motion plotted
+[271.420 --> 274.500] along the x and y axes.
+[274.500 --> 280.500] Similarly, in the fly brain four classes of neurons that are sensitive to visual motion indicate
+[280.500 --> 285.540] the fly's travelling direction as components along four axes.
+[285.540 --> 290.300] Each neuronal class can be thought of as representing a mathematical vector.
+[290.300 --> 294.060] A vector is an object that has both magnitude and direction.
+[294.060 --> 299.620] So, for example, an object's velocity is a vector, because
+[299.700 --> 305.020] it describes how fast an object is moving and in which direction it is moving.
+[305.020 --> 309.660] Each vector's angle points along its associated axis.
+[309.660 --> 314.780] The vector's length indicates how fast the fly is moving along that direction.
+[314.780 --> 320.260] So a neural circuit in the fly brain rotates these four vectors so that they are aligned
+[320.260 --> 324.380] properly to the angle of the sun and then adds them up.
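To make the rotate-and-sum computation concrete, here is a minimal Python sketch of the vector math the narration describes. The four preferred axes, the activity levels, and the sun angle below are invented numbers for illustration, not values from the study:

import math

# Hypothetical preferred axes of the four motion-sensitive neuron classes,
# in radians, in the fly's head-centered frame (invented for illustration).
axis_angles = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]

# Hypothetical activity of each class: the length of its vector, i.e. how
# strongly motion is sensed along that axis (also invented).
activity = [0.8, 0.3, 0.1, 0.0]

# Assumed angle of the head relative to the sun (the heading signal).
sun_angle = math.radians(90)

# Rotate each head-centered vector by the sun angle, then sum the vectors
# component-wise, exactly as vectors are added in physics.
x = sum(a * math.cos(t + sun_angle) for a, t in zip(activity, axis_angles))
y = sum(a * math.sin(t + sun_angle) for a, t in zip(activity, axis_angles))

travel_direction = math.degrees(math.atan2(y, x))  # angle of the output vector
travel_speed = math.hypot(x, y)                    # length of the output vector

print(f"travel direction relative to the sun: {travel_direction:.1f} degrees")
print(f"travel speed along that direction: {travel_speed:.2f}")

The same sum can be read as adding waves: if each vector is drawn as a bump of activity whose position encodes the vector's angle and whose height encodes its length, adding the waves point by point yields a wave whose position and height give the angle and length of the summed vector, which is the wave picture described next.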
+[324.380 --> 328.940] The result is an output vector that points in the direction the fly is travelling,
+[328.940 --> 331.460] with reference to the sun.
+[331.460 --> 336.220] Vector math in this case is not just an analogy for the computation taking place.
+[336.220 --> 342.660] The team found evidence that the fly brain is literally performing vector maths.
+[342.660 --> 348.940] Neurons explicitly represent vectors as waves of activity, with the position of the wave
+[348.940 --> 354.780] representing the vector's angle and the height of the wave representing its length.
+[354.780 --> 360.100] The researchers even tested this idea by precisely manipulating the lengths of the four input
+[360.100 --> 365.820] vectors and showing that the output vector changes just as it would if the flies were
+[365.820 --> 368.860] literally adding up vectors.
+[368.860 --> 375.260] What makes the study unique is that it provides extensive evidence to show how neuronal circuits
+[375.260 --> 378.700] implement sophisticated mathematical operations.
+[378.700 --> 384.620] The research clarifies how flies figure out which way they are going in the moment.
+[384.620 --> 390.220] In future studies, researchers will examine how these insects keep track of their travel direction
+[390.220 --> 394.980] over time to know where they ultimately end up.
+[394.980 --> 400.820] The core question is: how does the brain integrate signals related to the animal's travel direction
+[400.820 --> 404.260] and speed over time to form memories?
+[404.260 --> 408.100] That can help further our understanding of how working memory looks in the brain.
+[408.100 --> 411.900] The findings may also have implications for human disease.
+[411.900 --> 417.100] Spatial confusion is often an early sign of Alzheimer's disease.
+[417.100 --> 424.940] Many neuroscientists are interested in understanding how brains construct an internal sense of space.
+[424.940 --> 429.420] The fact that insects with their tiny brains have explicit knowledge of their travelling
+[429.420 --> 434.980] direction means we should also search for similar signals in mammal brains.
+[434.980 --> 441.060] Such a discovery might inform aspects of dysfunction underlying Alzheimer's disease as well
+[441.060 --> 445.260] as other neurological disorders that afflict spatial cognition.
+[445.260 --> 448.540] This is Mohana Basu, special correspondent at ThePrint.
+[448.540 --> 452.340] If you like our work, you can consider paying for a subscription to ThePrint.
+[452.340 --> 454.820] You can do so through the link in the description box below.
diff --git a/transcript/allocentric_z1Ak18pM3u0.txt b/transcript/allocentric_z1Ak18pM3u0.txt
new file mode 100644
index 0000000000000000000000000000000000000000..4a1167eb2ce2044d408a1ffdceb9bb7138d7a86e
--- /dev/null
+++ b/transcript/allocentric_z1Ak18pM3u0.txt
@@ -0,0 +1,92 @@
+[0.000 --> 7.360] We're going to talk about proxemics, or the distance that people prefer between themselves and other people.
+[7.360 --> 10.600] It's a key part of communication and daily interaction.
+[10.600 --> 14.480] I'm working out of Beebe and Masterson's book on communicating in small groups,
+[14.480 --> 18.000] but this research is in numerous books across the field.
+[18.000 --> 19.200] So let's get into it.
+[24.200 --> 28.280] So the key point here, the overarching idea, as we look at all these things,
+[28.280 --> 33.760] is that the amount of space that we put between ourselves and others depends upon a lot,
+[33.760 --> 37.360] like the relationship, the situation, the context.
+[37.360 --> 41.000] If you were in, for example, a crowded room, like a crowded nightclub,
+[41.000 --> 46.360] you would allow and feel okay with much more personal contact with other people
+[46.360 --> 51.640] than you would if you were in an outdoor park or other kinds of big public situations.
+[51.640 --> 53.400] So it really depends on a lot of factors.
+[53.400 --> 57.640] Nevertheless, there are some norms that we are going to look at.
+[57.800 --> 64.920] Proxemics, quickly put, is the study of how close or far we choose to be from other people and objects.
+[64.920 --> 69.720] And studying proxemics helps us understand our own use of personal space
+[69.720 --> 74.120] and gives us clues about our relationships with other people.
+[75.320 --> 77.800] Edward T. Hall developed this research.
+[77.800 --> 79.960] He came up with the four zones of space.
+[79.960 --> 84.000] The closest is the intimate zone, between zero and one and a half feet.
+[84.000 --> 88.080] The personal zone, which is one and a half feet to four feet; the social zone,
+[88.080 --> 89.600] which is four to 12 feet.
+[89.600 --> 92.960] And beyond that is the public zone, beyond 12 feet.
+[92.960 --> 96.800] So we're going to look at each of these and what they all mean, in order.
+[96.800 --> 98.920] So first, let's look at the intimate zone.
+[98.920 --> 102.120] This is between zero and 1.5 feet.
+[102.120 --> 104.280] So zero means you're probably touching.
+[104.280 --> 109.560] And most personal and intimate conversations happen at this distance.
+[109.560 --> 113.920] We see this distance between friends and intimate partners.
+[114.080 --> 118.000] Best friends, in fact, and parents and young children, especially.
+[118.000 --> 122.240] And we only really see it in group situations and in a work situation,
+[122.240 --> 127.600] let's say, if somebody is leaning in to whisper something to somebody for just a moment.
+[127.600 --> 130.880] This is the distance where kisses and hugs happen.
+[130.880 --> 133.280] This is the distance where headbutts happen.
+[133.280 --> 138.080] So this is an extremely intimate zone where almost anything can happen,
+[138.080 --> 142.880] which is why we allow very few people into this intimate zone.
+[142.960 --> 145.160] The second zone is the personal zone.
+[145.160 --> 148.120] This is 1.5 feet to four feet.
+[148.120 --> 152.720] Conversations with family and close friends happen in this zone.
+[152.720 --> 155.320] In groups, let's say you're in a group setting,
+[155.320 --> 158.000] you might be this distance from other people,
+[158.000 --> 162.120] but some people in the group may feel that this is too personal.
+[162.120 --> 165.000] This kind of space is a bit too close.
+[165.000 --> 168.680] This is the distance where handshakes happen, high fives happen,
+[168.680 --> 171.280] a slap on the back would happen for a good job.
+[171.320 --> 173.440] This is also, though, fighting distance,
+[173.440 --> 175.720] which is why not everybody is comfortable with it.
+[175.720 --> 179.040] If somebody is close enough to punch you or kick you,
+[179.040 --> 184.240] then there is a bit of vulnerability we may feel when we're in this personal zone.
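Before the talk goes on to zones three and four, Hall's scheme can be restated compactly as a classification by distance. Here is a small Python sketch; the function name is made up, and the cutoffs simply transcribe the four boundaries listed above:

def hall_zone(distance_ft: float) -> str:
    """Map an interpersonal distance in feet to Edward T. Hall's zone of space."""
    if distance_ft < 0:
        raise ValueError("distance cannot be negative")
    if distance_ft <= 1.5:
        return "intimate"  # 0 to 1.5 ft: hugs, whispers, kisses
    if distance_ft <= 4:
        return "personal"  # 1.5 to 4 ft: family, close friends, handshakes
    if distance_ft <= 12:
        return "social"    # 4 to 12 ft: coworkers, most group interaction
    return "public"        # beyond 12 ft: public speaking, strangers

print(hall_zone(3))   # personal
print(hall_zone(20))  # public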
+[184.240 --> 186.040] Number three is the social zone.
+[186.040 --> 188.400] This is four to 12 feet.
+[188.400 --> 192.040] Most group interactions happen in this zone.
+[192.040 --> 196.720] This is where interactions with coworkers and other kinds of professionals
+[196.720 --> 201.120] occur. This is close enough to pass objects back and forth,
+[201.120 --> 202.760] close enough to sit around a table,
+[202.760 --> 207.600] close enough to talk to somebody across a desk in an office situation.
+[207.600 --> 210.560] Still, sometimes, because it's four to 12 feet,
+[210.560 --> 212.600] sometimes it's a little too close for people,
+[212.600 --> 214.120] if people are jammed in there.
+[214.120 --> 218.680] And so what you'll see is often people trying to maintain their territory
+[218.680 --> 220.640] in very specific ways.
+[220.640 --> 225.960] For example, they might make big gestures to claim a little bit more space around them.
+[226.000 --> 230.480] They might place objects, like they might put a coffee mug in a certain position
+[230.480 --> 233.320] so that no one comes into that space.
+[233.320 --> 238.240] They may lay their papers and notebooks out around a table that way.
+[238.240 --> 242.760] They may choose a certain seating arrangement that helps them maintain a little bit more space.
+[242.760 --> 244.320] This is the social zone.
+[244.320 --> 249.360] And because it happens in the workplace and it has such a huge expanse, from four to 12 feet,
+[249.360 --> 252.920] you see a lot of maneuvering around this distance.
+[252.960 --> 258.840] And then the last one is the public zone, 12 feet and beyond. Teachers and public speakers
+[258.840 --> 262.560] often stand at this distance when we're interacting with each other.
+[262.560 --> 264.520] This is the distance you'd likely choose, however,
+[264.520 --> 269.920] if you were in an almost empty restaurant or an almost empty library or a museum,
+[269.920 --> 272.920] certainly if you were outdoors in some public situation;
+[272.920 --> 274.680] you would be 12 feet and beyond.
+[274.680 --> 281.480] If you get closer than this in one of those public situations where you don't have a relationship with other people,
+[281.480 --> 284.280] usually there's a pretty good reason for it, like there's a big crowd there.
+[284.280 --> 287.880] Otherwise, we choose lots of space.
+[287.880 --> 289.240] So why does this matter?
+[289.240 --> 296.680] Well, our increased awareness of the kinds of proxemic preferences that we have and others have
+[296.680 --> 301.640] will help us to adjust to other people in conversations and in relationships.
+[301.640 --> 307.240] For example, you may want to respect other people's needs for more space.
+[307.240 --> 310.960] Usually people have a comfortable distance that they're willing to be
+[310.960 --> 317.680] from someone else. They have a bubble, and you don't want to enter that bubble unless the other person seems like
+[317.680 --> 319.920] they want to be that close to you.
+[319.920 --> 324.080] So the key tip, the takeaway here, especially when you're getting personal,
+[324.080 --> 333.440] is that it's riskier to get too close to somebody, to stand too close, than it is to maintain a slightly more respectful distance.
+[333.440 --> 335.200] So don't get too close too fast.
+[335.200 --> 340.320] Make sure you maintain a distance and let the other person adjust to you as well.
+[340.400 --> 345.840] So, question of the day: have you ever gotten feedback on your personal use of space?
+[345.840 --> 348.640] Do you stand too close? Do you stand too far?
+[348.640 --> 356.080] I would love to hear your comments and your comfort level with distance to other people in that comment section below.
+[356.080 --> 358.880] So thanks. I will see you soon. Take care.
diff --git a/transcript/allocentric_zS70dXUPsqE.txt b/transcript/allocentric_zS70dXUPsqE.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f32531035127a8e5c09a7234f1f15e5ea8479eac
--- /dev/null
+++ b/transcript/allocentric_zS70dXUPsqE.txt
@@ -0,0 +1,5 @@
+[0.000 --> 4.720] Knowledge will liberate you: knowing that this is what is motivating them.
+[4.720 --> 10.560] All right, now I have options. I don't have to react. I don't have to get emotional. I can
+[10.560 --> 18.320] retreat. I can withdraw. I can play certain games on them. If you want to infuriate them
+[18.320 --> 22.640] and unbalance them so they'll leave you alone, you have more than enough material to do that. You have
+[22.640 --> 28.960] deterrent strategies for deterring them from being aggressive. You have options when you have knowledge.