davidshapiro_youtube_transcripts / Microsoft LongNet One BILLION Tokens LLM OpenAI SuperAlignment SINGULARITY APPROACHES_transcript.csv
| text,start,duration | |
| morning everybody David Shapiro here,0.0,7.379 | |
| with a surprise update so yesterday July,2.46,8.46 | |
| 5th this paper dropped LongNet scaling,7.379,6.541 | |
| Transformers to 1 billion,10.92,4.44 | |
| tokens,13.92,4.619 | |
| now to put this in context,15.36,6.48 | |
| this is a three gigapixel image which,18.539,7.381 | |
| you can make sense of at a glance,21.84,6.9 | |
| and I'm not going to dig too deep into,25.92,5.88 | |
| the cognitive neuroscience and,28.74,5.64 | |
| neurological mechanisms about why you,31.8,5.34 | |
| can make sense of so much information so,34.38,4.08 | |
| quickly but if you want to learn more,37.14,3.06 | |
| about that I recommend The Forgetting,38.46,3.779 | |
| Machine,40.2,5.1 | |
| but what I want to point out is that you,42.239,4.681 | |
| can take a glance at this image and then,45.3,4.259 | |
| you can zoom in and you understand the,46.92,5.28 | |
| implications of every little bit of this,49.559,5.221 | |
| image this is clearly an arid mountain,52.2,4.499 | |
| range there's a road going across it,54.78,4.2 | |
| there's some haze there's a city in,56.699,4.561 | |
| the background you can keep track of all,58.98,4.32 | |
| of that information at once just by,61.26,4.5 | |
| glancing at this image and then when you,63.3,4.08 | |
| zoom in,65.76,3.899 | |
| you can say oh look there's a nice house,67.38,3.9 | |
| on the hillside and you can keep track,69.659,4.261 | |
| of that information in the context of,71.28,4.699 | |
| this three gigapixel image,73.92,6.239 | |
| this is fundamentally what sparse,75.979,7.78 | |
| attention is and that is how this paper,80.159,6.241 | |
| solves the problem of reading a billion,83.759,3.841 | |
| tokens,86.4,4.5 | |
| so let's unpack this a little bit first,87.6,6.12 | |
| I love this chart this is this is a,90.9,4.38 | |
| really hilarious Flex right at the,93.72,3.0 | |
| beginning of this paper right under the,95.28,4.68 | |
| abstract they're like okay you know 512,96.72,8.64 | |
| 12K 64K tokens 262K a million,99.96,7.08 | |
| tokens and then here's us a billion,105.36,4.5 | |
| tokens so good job oh also I want to,107.04,5.219 | |
| point out this is from Microsoft this is,109.86,4.56 | |
| not just from some podunk,112.259,4.621 | |
| backwater university this is Microsoft,114.42,4.8 | |
| you know who's in partnership with,116.88,5.94 | |
| openai uh and so I saw a post somewhere,119.22,4.56 | |
| I think it was a tweet or something,122.82,2.399 | |
| someone's like Microsoft seems like,123.78,3.54 | |
| they're really just falling down on AI,125.219,3.661 | |
| research and I have no idea what rock,127.32,3.48 | |
| they're living under but pay attention,128.88,4.92 | |
| to Microsoft my money is on Microsoft,130.8,5.88 | |
| and Nvidia for the AI race and,133.8,4.56 | |
| then of course there's Google,136.68,3.779 | |
| but I don't get Google's,138.36,3.42 | |
| business model because they invented,140.459,2.701 | |
| this stuff and then sat on it for seven,141.78,3.24 | |
| years so I have no idea what Google's,143.16,4.92 | |
| doing anyways sorry I digress,145.02,8.1 | |
| okay billion tokens seems kind of,148.08,8.28 | |
| out there kind of hyperbolic right,153.12,7.259 | |
| the chief innovation here is one they,156.36,5.519 | |
| have a training algorithm which I,160.379,2.64 | |
| don't care about as much I mean,161.879,2.881 | |
| distributed training okay lots of people,163.019,3.661 | |
| have been working on that but the chief,164.76,3.9 | |
| Innovation here is what they call,166.68,5.88 | |
| dilation so let me bring that up,168.66,6.299 | |
| so what they do is let's see hang on,172.56,3.72 | |
| where did it go where's,174.959,3.661 | |
| the dilation diagram all right so what,176.28,4.5 | |
| it allows it to do dilated attention,178.62,3.24 | |
| sorry,180.78,3.48 | |
| so what dilated attention allows it to,181.86,4.26 | |
| do is to zoom out,184.26,4.559 | |
| and take in the entire sequence all at,186.12,6.32 | |
| once which controls the amount of memory,188.819,6.42 | |
| and computation that it takes to take,192.44,6.1 | |
| in that large sequence just like you and,195.239,5.821 | |
| your brain zooming in and out,198.54,5.339 | |
| and keeping the entire context of this,201.06,6.179 | |
| image in mind,203.879,6.061 | |
| at the same time,207.239,5.041 | |
| and so the way that it does that is,209.94,5.7 | |
| actually relatively similar,212.28,6.06 | |
| to the way that human brains do it hang,215.64,4.86 | |
| on hang on where did it go I'm missing,218.34,5.58 | |
| the diagram okay so what they do,220.5,5.64 | |
| is they create sparse representations,223.92,3.899 | |
| and those who have been following me for,226.14,3.42 | |
| a while you might remember when I came,227.819,3.121 | |
| up with the idea of sparse priming,229.56,3.959 | |
| representations this is something to pay,230.94,5.579 | |
| attention to because what I realized is,233.519,4.921 | |
| that language models only need a few,236.519,3.961 | |
| clues just a few breadcrumbs to remind,238.44,5.219 | |
| it to cue in on what is going on in,240.48,4.74 | |
| the message to what's going on in the,243.659,3.601 | |
| memories and this is actually why it's,245.22,3.84 | |
| really good you just give it a tiny,247.26,4.199 | |
| chunk of text and it can infer the rest,249.06,5.7 | |
| why because it has read so much that it,251.459,5.46 | |
| is able to infer what came before that,254.76,4.62 | |
| text and what came after and so by,256.919,4.921 | |
| zooming out and creating these really,259.38,4.92 | |
| sparse representations of larger,261.84,4.98 | |
| sequences it can keep track of the,264.3,6.06 | |
| entire thing and what it does is it will,266.82,6.06 | |
| take the up to a billion token,270.36,3.6 | |
| sequence,272.88,4.62 | |
| break it up slice it up and then make,273.96,5.94 | |
| layered sparse representations of the,277.5,4.259 | |
| entire thing and it will therefore be,279.9,4.26 | |
| able to keep track of it now okay that,281.759,5.701 | |
| sounds really nerdy but here's,284.16,6.06 | |
| what it does for the performance so with,287.46,4.5 | |
| this sparse representation with this,290.22,5.16 | |
| dilation and doing it massively in,291.96,6.72 | |
| parallel it solves a few problems so one,295.38,6.599 | |
| you see that the runtime stayed under a,298.68,5.76 | |
| thousand milliseconds under one,301.979,5.101 | |
| second it's more like half a second all,304.44,5.1 | |
| the way up to a billion tokens,307.08,4.619 | |
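That near-constant runtime tracks the complexity claim: vanilla self-attention does work quadratic in sequence length, while dilated attention caps how many positions each token attends to, so total work grows roughly linearly. A back-of-envelope sketch (the 2048-position budget is an illustrative assumption, not a number from the paper):

```python
# Dense self-attention scores every pair of positions: N * N.
def dense_pairs(n):
    return n * n

# Dilated attention keeps a roughly fixed attention budget per token,
# so total work is about N * budget (budget is an assumed constant).
def dilated_pairs(n, budget=2048):
    return n * budget

n = 1_000_000_000  # one billion tokens
print(dense_pairs(n) // dilated_pairs(n))  # -> 488281 (ratio ~ n / budget)
```

At a billion tokens the dense approach does hundreds of thousands of times more pairwise work, which is why the quadratic version simply never finishes at this scale.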
| so because of that it's basically,309.54,4.02 | |
| zooming in and out of the,311.699,3.181 | |
| representation of the text that it,313.56,3.9 | |
| creates in a very similar,314.88,4.62 | |
| way that your brain keeps track of a,317.46,3.72 | |
| three gigapixel image as you zoom in and,319.5,3.9 | |
| out you're like okay cool okay I see a,321.18,3.84 | |
| bunch of cars parked on the side of the,323.4,2.579 | |
| road,325.02,2.28 | |
| um and you can just remember that fact,325.979,4.081 | |
| oh let's do a quick test what else do,327.3,4.5 | |
| you remember about this image maybe you,330.06,3.54 | |
| remember that the Hollywood sign is,331.8,3.0 | |
| in the background over here somewhere,333.6,2.28 | |
| there it is,334.8,2.88 | |
| oh no that's not it,335.88,4.14 | |
| but it's somewhere in here so it's like,337.68,4.56 | |
| okay based on the cars and the arid,340.02,4.32 | |
| desert I'm guessing,342.24,4.86 | |
| that this is Los Angeles right,344.34,5.16 | |
| anyway point being oh there it,347.1,4.68 | |
| is Hollywood so these are the,349.5,4.199 | |
| Hollywood Hills and you can remember oh,351.78,3.84 | |
| yeah there was a nice mansion over here,353.699,3.84 | |
| there's cars parked over here that's,355.62,4.2 | |
| probably downtown LA the Hollywood sign,357.539,4.141 | |
| is over here so by basically,359.82,4.7 | |
| creating a mental map this treats,361.68,5.7 | |
| gigantic pieces of information not,364.52,5.2 | |
| unlike a diffusion model,367.38,4.86 | |
| and I was clued in,369.72,4.319 | |
| on that when I looked at the way that it,372.24,3.959 | |
| was mapping everything and I,374.039,4.5 | |
| was like hold on it's creating a map of,376.199,4.381 | |
| the text by just breaking it down,378.539,4.201 | |
| algorithmically and saying okay let's,380.58,4.2 | |
| just make a scatter plot of all the text,382.74,3.239 | |
| here,384.78,3.0 | |
| or scatter plot's not the right word,385.979,4.261 | |
| but it's basically making a bitmap,387.78,5.639 | |
| of the text of the representations of,390.24,4.98 | |
| what is going on in the sequence and I'm,393.419,4.141 | |
| like okay this is a fundamentally,395.22,3.9 | |
| different approach to representing text,397.56,4.079 | |
| and this is also really similar to some,399.12,4.019 | |
| of the experiments that I've done if you,401.639,3.301 | |
| remember REMO the Rolling Episodic Memory,403.139,3.541 | |
| Organizer which creates layers of,404.94,4.14 | |
| abstraction this does it algorithmically,406.68,5.16 | |
| in real time so this just blows,409.08,5.04 | |
| everything that I've done with memory,411.84,5.46 | |
| research completely out of the water it,414.12,7.079 | |
| also has the ability to basically,417.3,7.019 | |
| summarize as it goes and that's,421.199,5.581 | |
| not necessarily the right word because,424.319,4.261 | |
| summarization means that you take one,426.78,3.78 | |
| piece of text and create a smaller piece,428.58,4.2 | |
| of text but this creates a neural,430.56,4.02 | |
| summarization a neural network,432.78,4.139 | |
| summarization by creating these layers,434.58,3.78 | |
| of abstraction,436.919,4.381 | |
| and this allows it to zoom in and out as,438.36,4.8 | |
| it needs to so that it can cast its,441.3,4.14 | |
| attention around internally in order to,443.16,4.2 | |
| keep track of such a long sequence now,445.44,4.8 | |
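The zoom-in/zoom-out idea described above can be sketched in a few lines. This is a toy illustration of the index pattern only, with made-up segment-length and dilation values; the actual LongNet implementation mixes several such patterns and computes real attention over them:

```python
def dilated_indices(seq_len, segment_len, dilation):
    """Split the sequence into segments, then keep every `dilation`-th
    position inside each segment -- small dilation = zoomed in (dense,
    local), large dilation = zoomed out (sparse, long-range)."""
    return [list(range(start, min(start + segment_len, seq_len), dilation))
            for start in range(0, seq_len, segment_len)]

# Zoomed in: short segments with no dilation keep every local position.
print(dilated_indices(16, 4, 1))   # -> [[0, 1, 2, 3], [4, 5, 6, 7], ...]
# Zoomed out: one long segment with heavy dilation keeps a sparse skeleton.
print(dilated_indices(16, 16, 4))  # -> [[0, 4, 8, 12]]
```

Stacking several such segment-length and dilation pairs is what gives the layered sparse representations: fine-grained attention nearby and progressively coarser attention over the whole sequence.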
| okay great what does this mean,447.36,5.52 | |
| as someone who has been using GPT since,450.24,5.7 | |
| GPT-2 where it was basically just a,452.88,5.939 | |
| sentence Transformer it couldn't do a,455.94,4.92 | |
| whole lot more you know like in in this,458.819,5.22 | |
| model up here the original GPT was 512,460.86,7.14 | |
| tokens and GPT-2 I think was what a,464.039,5.28 | |
| thousand I don't remember,468.0,4.8 | |
| maybe it was 512 as well and,469.319,6.421 | |
| then the initial version of GPT-3 was,472.8,5.459 | |
| 2000 tokens we got upgraded to 4000,475.74,6.239 | |
| tokens then we got GPT-3.5 and GPT-4 so,478.259,5.041 | |
| we're at 8000 and 16000,481.979,3.0 | |
| tokens,483.3,4.14 | |
| as these attention mechanisms get bigger,484.979,4.5 | |
| and as the context window gets bigger,487.44,3.9 | |
| one thing that I've noticed is that,489.479,4.261 | |
| these are step changes in,491.34,3.78 | |
| terms of,493.74,4.2 | |
| algorithmic efficiencies but in terms of,495.12,6.66 | |
| what they are capable of doing as I tell,497.94,6.479 | |
| a lot of my consulting clients,501.78,5.639 | |
| do not ever try to get around,504.419,4.921 | |
| the context window limitation because,507.419,4.56 | |
| one a new model is coming out within six,509.34,4.499 | |
| months that's going to completely blow,511.979,4.5 | |
| open that window and two it's just a,513.839,4.32 | |
| limitation of the model,516.479,3.901 | |
| so when you can read a billion tokens,518.159,5.641 | |
| which by the way humans read about one,520.38,5.16 | |
| to two billion words in their entire,523.8,3.479 | |
| lifetime,525.54,3.66 | |
| when you have a model that can read a,527.279,3.721 | |
| billion tokens in a second,529.2,4.319 | |
| that is half a lifetime,531.0,5.279 | |
| worth of reading and knowledge that this,533.519,5.581 | |
| model can take in in a second so when,536.279,4.56 | |
| you have a model that can ingest that,539.1,4.26 | |
| much information suddenly retraining,540.839,4.44 | |
| models doesn't matter you just give it,543.36,4.08 | |
| the log of all news all events all,545.279,4.261 | |
| papers whatever tasks that you're doing,547.44,4.68 | |
| you just give it all of it at once and,549.54,4.739 | |
| it can keep track of all of that text in,552.12,4.8 | |
| its head in its virtual head,554.279,4.381 | |
| um all at once and it can pay attention,556.92,4.02 | |
| to the bits that it needs to with those,558.66,4.619 | |
| sparse representations,560.94,5.579 | |
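The lifetime-reading comparison checks out as rough arithmetic, treating one token as roughly one word (a loose assumption; common tokenizers average closer to 0.75 words per token):

```python
# Figure cited in the video: humans read about 1-2 billion words in a
# lifetime. Compare the upper end against a 1-billion-token context.
lifetime_words = 2_000_000_000   # assumed upper end of the cited range
context_tokens = 1_000_000_000
print(context_tokens / lifetime_words)  # -> 0.5 about half a lifetime of reading
```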
| it is impossible for me to,563.279,6.841 | |
| oversell the long-term ramifications of,566.519,6.841 | |
| these kinds of algorithmic changes and,570.12,6.3 | |
| so a couple months ago when I said AGI,573.36,5.159 | |
| within 18 months this is the kind of,576.42,3.62 | |
| trend that I was paying attention to,578.519,4.32 | |
| there is no limit to the algorithmic,580.04,4.299 | |
| breakthroughs we are seeing right now,582.839,3.12 | |
| now that doesn't mean that there won't,584.339,4.44 | |
| eventually be diminishing returns but at,585.959,5.281 | |
| the same time we are exploring this blue,588.779,5.581 | |
| ocean space all right,591.24,5.159 | |
| for those of you that have played Skyrim,594.36,5.58 | |
| and other RPGs we unlocked a new map and,596.399,5.641 | |
| the grayed out area is this big and,599.94,4.079 | |
| we've explored this much of this new map,602.04,4.799 | |
| that is how much potential there is to,604.019,5.341 | |
| explore out here and the other thing is,606.839,5.281 | |
| this research is accelerating there's a,609.36,4.38 | |
| few reasons for that on one of the live,612.12,3.18 | |
| streams someone asked me how,613.74,4.08 | |
| do we know that this isn't an AI winter,615.3,4.74 | |
| and I pulled up a chart that showed an,617.82,5.16 | |
| exponential growth of investment where,620.04,5.16 | |
| the money goes the research goes and,622.98,3.78 | |
| because the money is flowing into the,625.2,4.44 | |
| research it's happening I mean,626.76,4.38 | |
| we saw the same thing with solar and,629.64,2.759 | |
| literally every other disruptive,631.14,3.18 | |
| technology once the investment comes,632.399,3.781 | |
| you know that the breakthroughs are,634.32,3.84 | |
| going to follow it's just that simple,636.18,3.779 | |
| and this is one of those kinds of,638.16,5.52 | |
| breakthroughs so what does this mean,639.959,7.261 | |
| put it this way rather than trying to,643.68,6.54 | |
| play Tetris with memory and,647.22,4.98 | |
| trying to fit 10 pounds of stuff,650.22,5.52 | |
| into a five-pound bag now once this,652.2,5.699 | |
| becomes commercially ready which it's,655.74,4.8 | |
| coming it's possible on paper they,657.899,5.041 | |
| did it so even if we don't get a billion,660.54,4.16 | |
| tokens this time next year it's coming,662.94,4.92 | |
| what this allows you to do is let's,664.7,4.96 | |
| say for instance,667.86,5.28 | |
| you are working on a medical research,669.66,5.94 | |
| thing and it's like okay well,673.14,4.319 | |
| we've we've got a literature review of,675.6,4.799 | |
| literally 2000 papers per month to read,677.459,5.401 | |
| put all the papers in this model and,680.399,4.321 | |
| say tell me exactly which papers are,682.86,3.96 | |
| most relevant,684.72,4.799 | |
| so the ability for in-context,686.82,5.94 | |
| learning is incredible and it can,689.519,5.88 | |
| hold more in its brain in its mind than,692.76,5.4 | |
| any 10 humans can,695.399,5.101 | |
| and this is again this is not the limit,698.16,4.44 | |
| imagine a year from now or six months,700.5,4.62 | |
| from now when LongNet 2 comes out,702.6,4.2 | |
| and it's a trillion tokens or 10,705.12,4.32 | |
| trillion tokens and what they say in,706.8,5.46 | |
| this paper is that maybe we're gonna see,709.44,6.78 | |
| a point very soon where,712.26,6.0 | |
| its context window could include,716.22,4.619 | |
| basically the entire internet,718.26,5.28 | |
| this is a step towards super,720.839,4.981 | |
| intelligence make no mistake that the,723.54,4.739 | |
| ability to hold and use that much,725.82,5.519 | |
| information in real time to produce,728.279,6.541 | |
| plans to forecast to anticipate to come,731.339,6.481 | |
| up with insights this is a critical step,734.82,6.18 | |
| towards digital super intelligence I am,737.82,5.22 | |
| not being hyperbolic here and neither is,741.0,3.48 | |
| this paper when they say we could,743.04,3.6 | |
| conceivably build a model that can read,744.48,4.38 | |
| the entire internet in one go,746.64,4.68 | |
| so with all that being said I wanted to,748.86,5.58 | |
| pivot and talk briefly about OpenAI's,751.32,6.0 | |
| announcement also yesterday that they,754.44,5.22 | |
| are introducing super alignment so the,757.32,5.699 | |
| TLDR is that OpenAI is creating a,759.66,5.58 | |
| dedicated team for aligning super,763.019,5.521 | |
| intelligence which again I,765.24,5.399 | |
| am super glad that we are living in the,768.54,3.359 | |
| timeline where someone is doing this,770.639,4.021 | |
| it's about time I've got my,771.899,4.261 | |
| book out there Benevolent by Design,774.66,3.0 | |
| where I talked about aligning super,776.16,3.239 | |
| intelligence and my solution is that you,777.66,3.66 | |
| really can't but one thing that I want,779.399,5.521 | |
| to point out is that whether or not you,781.32,7.86 | |
| can align one model in the lab is,784.92,6.0 | |
| only a necessary part of,789.18,3.12 | |
| the solution I don't want to disparage,790.92,3.06 | |
| the engineers and scientists at openai,792.3,3.839 | |
| and Microsoft and other places working,793.98,4.859 | |
| on this but while it is a necessary,796.139,5.221 | |
| component of the of the solution it is,798.839,4.981 | |
| not a complete solution and this is,801.36,5.099 | |
| where uh researchers like Gary Marcus,803.82,6.12 | |
| and Dr. Rumman Chowdhury have testified,806.459,6.961 | |
| to Congress saying look you know they,809.94,6.78 | |
| expect that open source models will,813.42,5.219 | |
| reach parity with closed Source models,816.72,4.619 | |
| and then overtake them and so when open,818.639,6.121 | |
| source models who anyone can deploy are,821.339,6.12 | |
| aligned any which way that you want you,824.76,5.759 | |
| lose total control so while I,827.459,4.921 | |
| definitely appreciate its value because,830.519,3.601 | |
| we need to know how to align super,832.38,3.54 | |
| intelligent models,834.12,4.32 | |
| the good guys right the aligned,835.92,6.78 | |
| models need to be as powerful as all,838.44,6.959 | |
| the unaligned models because in the AI,842.7,4.98 | |
| arms race it's going to be AI versus AI,845.399,4.68 | |
| in the example of cyber security where,847.68,5.159 | |
| we already have adaptive intelligence,850.079,4.44 | |
| in firewalls and other security,852.839,4.44 | |
| appliances basically you're going to,854.519,5.041 | |
| have an AI agent running in,857.279,4.981 | |
| your firewall versus an AI-based DDoS,859.56,5.16 | |
| attack just as one example you're going,862.26,4.74 | |
| to have AI-based infiltration programs,864.72,4.52 | |
| versus AI Hunter programs on the inside,867.0,5.82 | |
| so it's going to be spy versus spy and,869.24,5.92 | |
| so we need to make sure that,872.82,5.16 | |
| the models that we build that do,875.16,4.44 | |
| remain aligned that we do retain control,877.98,4.62 | |
| over are as smart as possible and also,879.6,5.34 | |
| trustworthy that absolutely needs to,882.6,4.44 | |
| happen basically you fight fire with,884.94,4.259 | |
| fire and I know that that sounds like,887.04,3.9 | |
| mutually assured destruction and it kind,889.199,3.901 | |
| of is which is another reason that the,890.94,4.56 | |
| nuclear arms race metaphor is very apt,893.1,4.56 | |
| for the AI arms race,895.5,4.26 | |
| So This research absolutely needs to,897.66,4.38 | |
| happen but what I want to drive home is,899.76,3.9 | |
| that it is a necessary but not,902.04,4.56 | |
| sufficient set of solutions that there,903.66,4.619 | |
| also needs to be the adoption the,906.6,3.72 | |
| implementation and deployment of,908.279,3.721 | |
| aligned systems and we also need to,910.32,3.6 | |
| make sure that those aligned systems,912.0,3.68 | |
| can communicate and collaborate together,913.92,5.099 | |
| so with all that being said uh big steps,915.68,4.839 | |
| in the right direction but it is coming,919.019,4.32 | |
| faster than anyone realizes and I stand,920.519,6.421 | |
| by my assertion AGI by the end of 2024,923.339,6.901 | |
| actually by,926.94,7.019 | |
| let's see September or October 2024 any,930.24,5.76 | |
| definition that you have of AGI will be,933.959,4.921 | |
| satisfied and then from there it's a,936.0,5.279 | |
| very very very short period of time to,938.88,4.8 | |
| Super intelligence now fortunately for,941.279,4.62 | |
| us right now the only computers,943.68,3.959 | |
| capable of running these,945.899,4.141 | |
| models and researching them are like the,947.639,3.901 | |
| Nvidia supercomputers that they're,950.04,4.739 | |
| building but that barrier that,951.54,5.28 | |
| threshold is going to start going down,954.779,4.381 | |
| because remember Nvidia at,956.82,4.92 | |
| their keynote speech and,959.16,4.08 | |
| for several months they've been saying,961.74,4.02 | |
| hey you know our machines are literally,963.24,4.2 | |
| a million times more powerful in the,965.76,2.879 | |
| last decade and we're going to do it,967.44,4.019 | |
| again in the next decade well when your,968.639,4.981 | |
| desktop computer is as powerful as,971.459,3.901 | |
| today's super computers in 10 years,973.62,3.719 | |
| you're going to be able to run all of,975.36,3.419 | |
| these and then when you combine that,977.339,3.0 | |
| with the ongoing algorithmic,978.779,3.481 | |
| efficiencies everyone is going to be,980.339,4.5 | |
| running their own AGI within five to ten,982.26,4.8 | |
| years mark my words,984.839,3.541 | |
| so,987.06,3.839 | |
| time is of the essence we do need a,988.38,4.5 | |
| sense of urgency and I am really glad,990.899,4.321 | |
| that open AI is doing this,992.88,6.78 | |
| again I would like to,995.22,6.9 | |
| see more governmental participation more,999.66,4.979 | |
| universities I would like to see,1002.12,5.1 | |
| something like a GAIA agency a Global AI,1004.639,5.281 | |
| Agency or an AEGIS agency Alignment,1007.22,4.979 | |
| Enforcement for Global,1009.92,4.38 | |
| Intelligence Systems,1012.199,4.921 | |
| because the thing is corporations,1014.3,5.12 | |
| and governments are not ready for this,1017.12,5.76 | |
| and that to me is the biggest risk,1019.42,6.039 | |
| because from a purely scientific,1022.88,6.059 | |
| standpoint I 100% believe that we can,1025.459,5.401 | |
| align super intelligence I wrote a book,1028.939,4.02 | |
| about it I demonstrated how you can take,1030.86,4.199 | |
| unaligned models and align them to,1032.959,4.021 | |
| Universal principles very very easily,1035.059,4.02 | |
| I've done it plenty of times the data,1036.98,3.54 | |
| sets are out there for free just search,1039.079,3.24 | |
| for heuristic imperatives and core,1040.52,3.84 | |
| objective functions on my GitHub,1042.319,5.821 | |
| but again aligning a single model is not,1044.36,6.12 | |
| the entire solution you also need the,1048.14,4.26 | |
| deployment you need the the security,1050.48,3.72 | |
| models we need to update things like the,1052.4,4.139 | |
| OSI model and defense in depth we need,1054.2,4.2 | |
| to look at the entire technology stack,1056.539,3.901 | |
| but we also need to look at the entire,1058.4,4.62 | |
| economic and governmental stack to make,1060.44,4.2 | |
| sure that companies are aware of this,1063.02,3.899 | |
| and that companies start deploying,1064.64,5.7 | |
| these systems whether it's,1066.919,5.581 | |
| security checkpoints whether it's,1070.34,4.92 | |
| internal policies that sort of thing,1072.5,4.32 | |
| because when you've got a really,1075.26,3.6 | |
| powerful Cannon you have to aim that,1076.82,5.04 | |
| Cannon really really well otherwise it's,1078.86,6.24 | |
| going to kill everybody uh and again you,1081.86,5.04 | |
| all know me I am a very very very,1085.1,3.6 | |
| optimistic person when it comes to,1086.9,3.36 | |
| alignment and the future that we can,1088.7,4.32 | |
| build but at the same time when you're,1090.26,4.799 | |
| playing with fire like you need to make,1093.02,3.659 | |
| sure that you wear the proper safety,1095.059,3.961 | |
| gear uh because the the more energy,1096.679,4.321 | |
| something has the more dangerous it is,1099.02,4.8 | |
| and the level of energy or intelligence,1101.0,3.96 | |
| or however you want to look at it,1103.82,3.06 | |
| whatever metaphor you want to pick is,1104.96,4.2 | |
| going up very quickly so thanks for,1106.88,3.659 | |
| watching I hope you got a lot out of,1109.16,4.5 | |
| this it's the LongNet paper and then of,1110.539,5.121 | |
| course Introducing Superalignment,1113.66,6.259 | |
| but yeah thanks for watching cheers,1115.66,4.259 | |